added (string) | created (string) | id (string) | metadata (dict) | source (string) | text (string) | version (string)
---|---|---|---|---|---|---
2014-10-01T00:00:00.000Z
|
1993-08-01T00:00:00.000
|
1691790
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://europepmc.org/articles/pmc1968545?pdf=render",
"pdf_hash": "5be7498b60763057d4cf64fd990bc8a31ea12432",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43192",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "5be7498b60763057d4cf64fd990bc8a31ea12432",
"year": 1993
}
|
pes2o/s2orc
|
Suppression of anchorage-independent growth after gene transfection.
A novel procedure for isolating anchorage-dependent cells has been developed. It involves negative selection of cells growing in suspension followed by clonal replica screening for anchorage-dependent growth. Cells which have regained anchorage-dependent growth have been isolated from a library of the Chinese hamster ovary cell line, CHO-K1, transfected with pSV2neo and human genomic DNA. One anchorage-dependent clone, 1042AC, has been studied in detail. Anchorage-dependent growth of 1042AC is stable when cultured as adherent monolayers, but revertants appear rapidly when cultured in suspension. Suppression is unlikely to be due to loss or mutation of hamster genes conferring anchorage-independent growth as hybrids between 1042AC and CHO-K1 have the suppressed phenotype of 1042AC. Furthermore, a population of cells obtained from the hybrid by selecting for revertants to anchorage-independent growth showed selective loss of the transgenome derived from 1042AC. The growth suppression was not due to transfection of the human Krev-1 gene, which has previously been shown to restore anchorage-dependent growth, nor was there any evidence of alteration in the endogenous hamster Krev-1 gene. However, evidence for a human gene being responsible for the suppressed phenotype has not been obtained yet.
By comparison with the extensive knowledge of growth factors and their signal transducing pathways, the regulatory mechanisms of growth inhibition are poorly understood. Much of the information on such inhibitory mechanisms has come from studies of the tumour suppressor genes, whose functional loss may occur during neoplastic development (reviewed by Marshall, 1991). An alternative approach is to attempt to identify growth suppressor genes directly by phenotypic selection after gene transfection. Although tumour suppressor genes may have various functions, particularly in control of development, evidence that some can directly inhibit cell growth has been obtained for the retinoblastoma gene product (Huang et al., 1988; Bookstein et al., 1990; Madreperla et al., 1991) and for p53 (Diller et al., 1990; Mercer et al., 1990; Michalovitz et al., 1990). Re-introduction of such genes may result in terminal arrest of cell growth (Huang et al., 1988; Baker et al., 1990; Diller et al., 1990), requiring conditional expression of the transfected gene to allow development of stably transfected cell lines (Mercer et al., 1990; Michalovitz et al., 1990). It is unlikely that such genes, centrally involved in cell cycle control, can be isolated by transfection of unmodified DNA. However, for genes that conditionally arrest growth, this approach should be successful.
Assays for uncontrolled growth provide a simple, direct method of selecting for transformed cells and have been used to isolate activated oncogenes in DNA from human tumours. The converse of this approach, searching for genes that specifically suppress the transformed growth of cells, has been inadequately explored due to the inherent difficulty in isolating cells with a growth disadvantage. Negative selection procedures are usually inefficient, requiring combination with other selection or screening procedures (Noda, 1990). Despite these difficulties, human DNA capable of suppressing transformed phenotypes has been successfully isolated in a few cases (Schafer et al., 1988; Noda et al., 1989; Eiden et al., 1991). The best characterised gene, Krev-1, was present in only one of a series of flat revertants isolated in this way, indicating that a number of genes may be involved in suppression of the transformed state.
We have chosen to investigate the mechanism which restricts cell growth in suspension. Cell transformation has been known for many years to result in loss of the normal requirement for attachment and spreading before cell division can occur (Stoker et al., 1968), and the ability to grow in suspension is observed frequently in malignant cells (Shin et al., 1975). However, despite the widespread use of this culture assay, the relationship between anchorage and growth remains unclear. Although some growth factors can induce anchorage-independent growth, studies with somatic cell hybrids indicate that the phenotype is also regulated by growth suppressor genes (Marshall et al., 1982; Islam et al., 1989; Koi et al., 1989). We have used the Chinese hamster ovary cell line CHO-K1, which has been the subject of extensive genetic analysis and grows efficiently in suspension (Thompson, 1979). This report describes the development of an efficient negative selection procedure for cells whose growth in suspension has been arrested, and a novel clonal screening assay for anchorage-dependent growth. In combination, they allow isolation of cells solely on the basis of anchorage dependency. Using these procedures we have isolated variants of CHO-K1 that have substantially lost the ability to grow in suspension after DNA transfection.
Materials and methods
Cell culture and transfection CHO-K1 cells were routinely cultured in 'complete medium' (αMEM (ICN Flow, High Wycombe, UK) with 10% (v/v) added newborn bovine serum, without antibiotics) on 90 mm tissue culture dishes (Falcon 3003, Becton Dickinson). For negative selection and for precise determination of doubling times in stirred suspension, the CHO-K1 cells were grown in 500 ml culture vessels (Techne MCS stirrer, Techne (Cambridge) Ltd) stirred at 80 rpm in the same medium. Colony forming efficiency in 0.3% (w/v) agarose (Nicolson et al., 1988) in complete medium was determined by counting colonies larger than 90 µm.
Electroporation tests were carried out in the presence of varying concentrations of the human genomic DNA mixed 1:1 with pSV2neo (Southern & Berg, 1982) or plasmids derived from it, using 3 × 24 µs pulses of 2.5 kV cm⁻¹ (Winterbourne et al., 1988b). The cells were maintained at 20°C for 1 h before monitoring for DNA-dependent toxicity (Winterbourne et al., 1988b). On the same day as these tests, cells were electroporated at 2 × 10⁷ cells ml⁻¹ in the concentration of mixed plasmid and genomic DNA that gave 70% DNA-dependent toxicity. After 1 h at 20°C the majority of the cells were plated into 90 mm dishes for the main library, selecting for geneticin resistance as described (Winterbourne et al., 1988b). Four aliquots were also plated in duplicate 60 mm dishes for determination of stable transfection efficiency and survival from the electroporation. The colonies of geneticin-resistant cells (containing on average about 2000 cells per colony) were collected by trypsinisation after thorough washing of the plates to remove non-adherent cells.
Negative selection in stirred suspension culture Isolation of cells that do not grow under defined conditions may be carried out in a variety of ways, usually by killing cells that have replicated their DNA. We have adapted the H33258-enhanced killing by long-wave u.v. light of cells that have incorporated 5-bromodeoxyuridine (Stetten et al., 1977) to the efficient negative selection of cells growing in suspension (Winterbourne et al., 1988a). Cells were suspended at about 10⁵ cells ml⁻¹ in medium containing 10 µM bromodeoxyuridine (Sigma, Poole, UK). After 3 days, H33258 (Sigma) was added to a final concentration of 1 µg ml⁻¹ 3 h before irradiation. The stirred suspension was irradiated for 30 s by a cylindrical arrangement of four lamps (Philips Actinic 09 long-wave u.v. lamps) concentric with the culture vessel, with a gap of 25 mm between the lamps and the wall of the vessel. The lamps have an emission spectrum which closely matches the excitation spectrum of H33258 and is negligible below 300 nm (manufacturer's data). The borosilicate glass of the culture vessel, which is only transparent above 300 nm, provided further protection from u.v. irradiation of unsubstituted DNA. Cells were collected by centrifugation (300 g, 5 min), washed once in phosphate buffered saline and resuspended in complete medium. The cells were cultured for one day in stirred suspension to allow cells to die, before being plated into tissue culture dishes. After 24 h to allow attachment of viable cells, the medium containing the dead cells was discarded, the dishes were washed twice with phosphate buffered saline and fresh medium was added. Colonies of surviving cells grew up within 10 days. One plate was fixed and stained to estimate the survival frequency. The other plates were harvested for subsequent screening for clones having anchorage-dependent growth.
Clonal screening assay for anchorage-dependent growth A replica plating method of screening individual clones for obligate anchorage-dependent growth was developed. Tissue culture treated microtest plates (Cat. No. 3596, Costar, Cambridge, MA) were seeded with one cell per well, on average. After growth to approximately 1000 cells per well, the plates were harvested using a multi-channel pipettor. Three replicas were made: the original master plate, a second tissue culture plastic plate, and a bacteriological grade plastic plate (ICN Flow). The limited degree of attachment and spreading seen on some batches of bacteriological grade plastic was abolished by pre-incubation overnight at 37°C with 100 µl per well of 4 mg ml⁻¹ bovine serum albumin.
After 5-7 days growth, cells were quantified by staining. One tissue culture plastic plate was fixed and stained as described before (Winterbourne, 1986). Cells in the bacteriological grade plate were collected on No. 50 filter paper (Whatman Ltd, Maidstone, UK) using a 96-well dot blot apparatus (BioRad, Richmond, CA), washing the wells with 100 µl PBS to dislodge all cells. The cells were fixed on the filter with 3.7% formaldehyde in PBS before removal from the apparatus. After drying, the filter was stained with Coomassie blue (Winterbourne, 1986). Tests with wild type CHO-K1 and an anchorage-dependent cell line showed that the small number of cells seeded gave negligible signals, as did the anchorage-dependent cell line after 7 days growth in bacteriological grade plates.
The assay was scored by comparing the staining intensity of wells under adhesive conditions (tissue culture treated plastic) with the filter replica from wells under non-adhesive conditions (bacteriological grade plastic): an anchorage-dependent clone gave a signal on the stained plate, but not on the filter paper. Anchorage-independent cells, such as CHO-K1, gave signals under both conditions. No signal under either condition was due either to the chance absence of cells in the original Poisson distribution, or to a cell that grew poorly under both conditions. Anchorage-dependent clones detected in this way were recovered from the master plate for further study.
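Because wells are seeded with one cell on average, Poisson statistics fix the expected mix of empty, clonal, and multi-cell wells, which underlies both the 'chance absence of cells' above and the non-clonal wells discussed below. A minimal sketch of that arithmetic:

```python
from math import exp, factorial

def poisson_pmf(k: int, mean: float) -> float:
    """P(k cells seeded in a well) for a Poisson(mean) distribution."""
    return mean**k * exp(-mean) / factorial(k)

mean = 1.0  # average of one cell per well, as in the screening assay
p_empty = poisson_pmf(0, mean)       # wells with no cells at all
p_clonal = poisson_pmf(1, mean)      # wells of true clonal origin
p_multi = 1.0 - p_empty - p_clonal   # wells founded by more than one cell

print(f"empty: {p_empty:.2f}, clonal: {p_clonal:.2f}, multi-cell: {p_multi:.2f}")
# empty: 0.37, clonal: 0.37, multi-cell: 0.26
```

Roughly 0.26/0.63 ≈ 40% of occupied wells are expected to be non-clonal, which is why the second-round re-screening described below is needed.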
As not all the wells containing cells in the screening assay will be of clonal origin, some anchorage-dependent clones may not be detected easily, due to overgrowth by a contaminating wild type cell or cells. To minimise the loss from this cause, we routinely picked cells from wells that were only marginally positive in the first screen. Such cells were then subjected to a second screen, using half a microtest plate. This resulted in the re-screening of about 30 sub-clones from each potentially positive well.
Preparation and analysis of DNA and RNA Human genomic DNA was prepared by proteinase K (Sigma) digestion of nuclei isolated from various human cell lines including GER, a pancreatic carcinoma line (Grant et al., 1979). DNA was also prepared from white blood cells from a healthy human volunteer. The DNA preparations ran as smears on pulsed field gel electrophoresis with apparent size ranges of 50-800 kb (results not shown). Plasmids were prepared for transfection by alkaline lysis followed by purification on Sephacryl S1000 columns (Pharmacia LKB Biotechnology, Milton Keynes, UK).
Hybridisation analysis of 10 µg aliquots of restriction endonuclease digested DNA, fractionated on 0.7% agarose gels at 0.5 V cm⁻¹, was performed after alkaline transfer to Hybond N (Amersham International plc, UK). The blots were hybridised in a Hybritube 15 (GIBCO BRL, UK) with probes labelled with [³²P]CTP by the technique of Feinberg and Vogelstein (1984), either in 6 x SSC, 5 x Denhardt's solution, 0.5% SDS and 10% dextran sulphate with 100 µg ml⁻¹ sonicated salmon sperm DNA, or in the buffer system of Church and Gilbert (1984). Probes, purified by electrophoresis, were the combined three small PvuII fragments from pSV2neo, the 2 kb BamHI insert of Krev-1, the 1.3 kb insert of pRGAPDH-13 (Fort et al., 1985), or human repetitive DNA prepared as described by Shih and Weinberg (1982). After washing at room temperature, blots were washed for 30 min at 65°C in 2 x SSC containing 0.1% SDS and then for 15 min at 65°C in 0.1 x SSC, 0.1% SDS. For the human DNA probe, hybridisation buffer without Denhardt's solution contained 2 µg ml⁻¹ sonicated CHO-K1 DNA and the final high stringency wash was omitted. Before re-probing, blots were stripped by boiling for 15-30 min in 0.1% SDS. All washing and stripping steps were performed in the Hybritube.
RNA was prepared from sub-confluent plates of cells harvested by trypsinisation, by the method of Chomczynski and Sacchi (1987). Total RNA (20 µg), denatured by heating to 65°C for 15 min in electrophoresis buffer containing 1.8 M deionised formaldehyde and 50% deionised formamide, was electrophoresed in a 1.5% agarose gel containing 0.7 M formaldehyde in 20 mM 3-(N-morpholino)propanesulphonic acid, 5 mM sodium acetate, 0.5 mM EDTA, pH 7. RNA was blotted onto Hybond N+ membranes (Amersham International plc, UK) and fixed by u.v. irradiation before hybridisation analysis as above.
Results
Selection and screening of cells unable to grow in suspension
The Chinese hamster ovary cell line CHO-K1 grows readily both as an adherent monolayer and in stirred suspension culture. When wild type CHO-K1 cells, previously grown in suspension in bromodeoxyuridine under optimised conditions (Winterbourne et al., 1988a), were exposed to light at 320 nm, the cells rapidly acquired lethal defects. Random mutations were not significantly increased by irradiation at this wavelength, as lethal defects occurred only in cells that had incorporated the thymidine analogue and were not seen when cells were grown in the absence of bromodeoxyuridine (Table I).
Irradiation for 30 s was chosen for subsequent experiments.
Despite the highly efficient negative selection, some survivors were seen when large numbers of cells were subjected to the procedure. Randomly chosen survivors were not anchorage-dependent in subsequent tests and appeared to be wild type CHO-K1 that had escaped the negative selection.
Standard assays of anchorage independence in viscous medium do not permit isolation of cells that fail to grow. To overcome this problem we developed a method of screening replicas of surviving clones for anchorage-dependent growth. Combination of the H33258-bromodeoxyuridine selection procedure on about 5 × 10⁷ cells and the two-pass clonal screening of subsequent survivors detected no variants with anchorage-dependent growth from wild type CHO-K1 or a subclone (Table II). Thus, spontaneous appearance of the anchorage-dependent growth phenotype in these cells seems to be very rare (less than 1 in 10 million cells).
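The 'less than 1 in 10 million' bound follows from seeing zero variants among roughly 5 × 10⁷ screened cells; the standard 'rule of three' makes this precise. A sketch of the calculation:

```python
# Upper 95% confidence bound on a frequency when 0 events are seen in n trials:
# solving (1 - p)^n = 0.05 gives p_upper ~= 3/n (the "rule of three").
n_cells = 5e7
p_upper = 3.0 / n_cells
print(f"95% upper bound on spontaneous frequency: {p_upper:.1e}")  # 6.0e-08
```

A bound of about 6 × 10⁻⁸ is comfortably below the 10⁻⁷ ("1 in 10 million") figure quoted in the text.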
Restoration of anchorage-dependent growth following transfection Seven libraries of CHO-K1 cells containing 20 to 100,000 independent clones bearing transfected DNA were prepared. Each library was created by pooling the geneticin-resistant colonies recovered after transfecting 2 × 10⁷ CHO-K1 cells with a neo-containing plasmid and different sources of human DNA, including human tumour cell lines. Although specific tumour suppressor genes may be inactivated in individual tumours, it is probable that other genes will remain functional. The frequency of stable geneticin resistance observed during the preparation of these libraries was between 0.1 and 0.6% of the electroporated cells.

Table II legend: Anchorage-dependent clones, 0 and 0. The number of wild-type CHO-K1 cells (both the original mass culture and a subclone) at the beginning and end of the period of growth in bromodeoxyuridine is shown, as are the numbers of cells surviving the exposure to light at 320 nm and subsequently growing as colonies on plastic. To avoid the possibility of missing positive wells in the screening for anchorage dependency, even weakly positive wells were picked in the first round. About 30 individual cells from each positive well were subsequently re-screened.
Cultures of each library were subjected to the negative selection procedure in stirred suspension. As expected, the number of survivors was similar to that obtained with the untransfected parental cells (Table II). For each selection experiment, 200 to 300 independent survivors were subjected to the microtest screening assay. Cells from positive or indeterminate wells were recovered from the master plates and rescreened using half a plate for each clone (yielding about 30 wells with sub-clones). Only in experiments with two libraries did cells surviving negative selection remain positive in this second clonal anchorage-dependency screen: nine clones were isolated after transfecting with DNA from the GER human pancreatic carcinoma cell line and two with DNA from normal white blood cells. Examples of the second screen with the GER library, which gave clear positive results, are shown in Figure 1.
Growth characteristics of the anchorage-dependent cells Of the nine anchorage-dependent clones isolated after transfecting with GER DNA, the cell line designated 1042AC was selected for subsequent studies. The defective growth of 1042AC in stirred suspension was confirmed (Figure 2). Doubling times, estimated from such experiments, were 100 h for 1042AC compared with 21 h for the wild type CHO-K1. Similarly, colony forming efficiency in 0.3% agarose was 8% for 1042AC compared with 61% for CHO-K1. Despite the five-fold reduction in growth rate in stirred suspension, the cells grew at similar rates when attached to tissue culture plastic. The 1042AC cells had a more elongated fibroblastic morphology with a greater tendency to form lateral alignments than the randomly oriented, compact, parental CHO-K1 (Figure 3a and b).
Anchorage-dependent cells tended to clump during their slow growth in stirred suspension culture. When the cultures were trypsinised and reseeded at a lower density, the subsequent growth in stirred suspension increased dramatically, to a rate similar to that of the wild type cells (20 h doubling time). This reproducible effect could be due to the loss or inactivation of a transfected suppressor gene in a small number of 1042AC cells, with subsequent selection of these revertants by their growth advantage in suspension culture. A good simulation of the observed results was obtained with a simple model of exponential growth, assuming reversion occurred spontaneously at a fixed low frequency (not shown). The population of revertant cells obtained in this way was designated 1052Rev; these cells retained their ability to grow in suspension after many passages as adherent monolayers. The morphology of 1052Rev (Figure 3c) more closely resembled that of the wild type CHO-K1 than the anchorage-dependent 1042AC cells.
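The stepwise acceleration can be reproduced with the simple exponential-growth model the authors describe: a suppressed population (about 100 h doubling time) seeded with a tiny fraction of revertants (about 20 h doubling time) is eventually overtaken. A sketch; the doubling times come from the text above, while the reversion fraction of 10⁻⁵ is an illustrative assumption (the exponent is illegible in the source).

```python
import numpy as np

def takeover(doubling_suppressed=100.0, doubling_revertant=20.0,
             revertant_fraction=1e-5, hours=600, n0=1e5):
    """Exponential growth of a mixed population; returns revertant share vs time."""
    t = np.arange(0, hours + 1, dtype=float)
    k_s = np.log(2) / doubling_suppressed  # growth rate of suppressed cells, 1/h
    k_r = np.log(2) / doubling_revertant   # growth rate of revertants, 1/h
    supp = n0 * (1 - revertant_fraction) * np.exp(k_s * t)
    rev = n0 * revertant_fraction * np.exp(k_r * t)
    return t, rev / (supp + rev)

t, share = takeover()
for hour in (0, 200, 400, 600):
    print(f"t = {hour:3d} h: revertant share = {share[hour]:.2e}")
# Revertants stay negligible for roughly two weeks, then rapidly dominate —
# the stepwise shift to fast suspension growth seen experimentally.
```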
The anchorage-dependent phenotype of 1042AC cells cultured continuously under non-selective conditions as adherent monolayers was moderately stable. When 1042AC, cultured for 28 passages as an adherent monolayer, was assayed in stirred suspension, the doubling time was approximately 64 h, indicating only partial loss of the suppressed anchorage-independent growth phenotype. This stirred suspension culture showed the same stepwise shift to faster growth (26 h doubling time) when trypsinised and reseeded at a lower density in suspension (not shown).
DNA and RNA analysis of transfected cell lines Of the nine anchorage-dependent clones isolated after transfection with GER DNA, six appeared to have the neo gene integrated at the same site (Figure 4a), indicating that these may be independent isolations of cells from a single transfection event. Analysis of randomly selected clones showed restriction fragment length polymorphisms, indicating random integration of the neo vector (not shown). The suppression of anchorage-independent growth was not due to transfection of the human Krev-1 gene, as only the endogenous hamster gene was detected (Figure 4b). Re-probing the blots with human repetitive DNA gave only weak signals for the presence of human DNA in the clones (Figure 4c). These data, and the presence of only single copies of pSV2neo, are consistent with integration and expression of much smaller amounts of DNA when transfection is induced by electroporation rather than by precipitation techniques.
The analysis of DNA isolated from late passage 1042AC, which had partially reverted to anchorage-independent growth, and from the revertant 1052Rev appeared similar to that of early passage 1042AC, although the signals on this blot were weak (Figure 4). In other experiments, no differences were detected on Southern blots of BamHI, EcoRI and HindIII digested DNA isolated from 1042AC and the revertant 1052Rev when probed with neo and Krev-1 (not shown). All blots were also probed with human repetitive DNA, but gave only weak signals which showed no reproducible differences between CHO-K1 and any of the cell lines derived from it. Analysis of RNA showed that the revertant still expressed the neo gene (Figure 5a) and that there was no significant difference in the level of expression of the Krev-1 gene between 1042AC, 1052Rev and the wild type cells (Figure 5b). The amount of RNA in each lane was similar, as shown by the signal for GAPDH (Figure 5c).

Dominance of the anchorage-dependent phenotype CHO-K1 stably transfected with pSV2gpt was fused with 1042AC and a hybrid clone was obtained by selection with mycophenolic acid and geneticin. When cultured in stirred suspension, the hybrid showed the same phenotype as 1042AC, i.e., suppressed growth over the first two weeks followed by a stepwise shift to faster growth after trypsinisation and reseeding at lower density (Figure 6). The subsequent revertant to anchorage-independent growth obtained from this hybrid still retained the mycophenolic acid resistance of the wild type parent, but the majority of the cells from the revertant population had lost the resistance to geneticin conferred by 1042AC (relative colony forming efficiency in geneticin 17%). By contrast, 1052Rev, the anchorage-independent revertant derived directly from 1042AC, retained geneticin resistance (relative colony forming efficiency in geneticin 83%). Probing Southern blots for the drug-resistance marker genes confirmed that the fusion product with suppressed anchorage-independent growth was a hybrid (Figure 7, lanes 4-6). In addition, this analysis showed a selective reduction in the signal for the pSV2neo marker in DNA from the revertant population derived from the hybrid (Figure 7, lanes 6 and 7), in agreement with the loss of geneticin resistance described above. This result suggests that reversion in the suppressed hybrid may occur by simultaneous loss of a cotransfected growth suppressor gene and the pSV2neo marker.
Discussion
The high efficiency of our combined selection and screening methods was illustrated by the low frequency (<10⁻⁸) at which anchorage-dependent clones were observed. This contrasts with the large number of survivors observed by others using a different negative selection protocol and calcium phosphate transfected cells (Padmanabhan et al., 1987). The unconditional inhibition of growth in that study appeared to be due to transfected repetitive DNA (Padmanabhan et al., 1987) and is unlikely to explain the results described here. The development of efficient procedures for isolation of anchorage-dependent cells based only on their growth properties should be of use to others, avoiding the need to link growth with phenotypes such as morphology or lectin agglutinability. The rare cells with restored anchorage-dependent growth isolated after DNA transfection may have arisen by transfection of a human gene that suppresses anchorage-independent growth. However, we have been unable to demonstrate this so far and we cannot rule out the possibility that the low frequency of suppression may have arisen by some other mechanism not specifically involving a human suppressor gene. Thus, transfected DNA may have resulted either in reduced expression of an endogenous hamster gene required for anchorage-independent growth or increased expression of one that suppresses such growth. It is unlikely that anchorage-dependent variants pre-existing in the population of CHO-K1 were isolated, as seven out of nine complete selection and screening experiments yielded no suppressed cells. Five of the seven experiments that failed to yield suppressed cells were carried out with cells that bore transfected DNA (at least the selectable plasmid DNA). Therefore the results are also unlikely to be due to a non-specific effect of transfection or an effect of the selection pressure imposed by growth from low density during the isolation of the geneticin-resistant libraries.
An example in which similar experiments resulted in reduced expression of an endogenous gene (the fos transformation effector gene) was reported by Kho and Zarbl (1992). Re-introduction of the cloned fte-1 gene, the endogenous copy of which had been disrupted by the initial transfection event, restored the transformed phenotype (Kho & Zarbl, 1992). In contrast, the suppressed phenotype of 1042AC cells in the present study was dominant in somatic cell hybrids (Figure 6). Furthermore, revertants that regained the ability to grow in suspension retained the inserted pSV2neo DNA and presumably also the disruption of the endogenous sequence at this site. Although these results indicate that suppression in 1042AC is unlikely to be due to inactivation of hamster genes conferring anchorage-independent growth, they do not exclude the possibility that damage to the recipient genome may be responsible for the suppressed phenotype. However, if this is the case, the plasmid is a marker that may allow cloning of the relevant gene.
The phenotypic dominance and the subsequent loss of suppression under selection pressure for anchorage independence would be consistent with the acquisition and subsequent loss of exogenous genes. However, using repetitive-DNA probes, we have been unable to show convincingly the presence of human DNA in the suppressed cell line. The inability to detect human sequences in cells that were subsequently shown to bear a transfected human gene has been reported previously (Pinney et al., 1988). Although repetitive sequences are widely dispersed throughout the human genome, the distribution is not uniform (Schmid & Jelinek, 1982; McCombie et al., 1992). Also, there is considerable homology between human and rodent Alu sequences and variation between individual members of the human Alu family (Schmid & Jelinek, 1982). The combination of these factors may have contributed to our failure to detect human DNA unequivocally in 1042AC at the present stage.
The suppression of anchorage-independent growth was moderately stable under non-selective conditions and was lost only when suppressed cells were continuously cultured in suspension. Selection of revertants to anchorage-independent growth from the hybrid resulted in concomitant loss of geneticin, but not mycophenolic acid, resistance. As whole chromosome loss is a frequent event in hybrid cells and co-transfected DNA may become physically linked during integration (Perucho et al., 1980) revertants from the suppressed hybrid may have arisen by loss of a chromosome containing both the geneticin resistance marker and a putative suppressing gene. We are attempting to clone the transgenome from a genomic library of 1042AC DNA to test this possibility.
Human genes have been isolated in two studies in which the transformed phenotype induced by activated ras was suppressed by DNA transfection (Schafer et al., 1988; Kitayama et al., 1989; Noda et al., 1989). One of these genes, Krev-1, has been shown to share homology with, but have opposing actions to, ras (Zhang et al., 1990). Our results show that the anchorage dependence induced in the suppressed CHO-K1 cells is not due to a transfected human Krev-1 gene, nor is expression of the endogenous hamster Krev-1 gene modified. We have also found that direct transfection of Krev-1 does not efficiently suppress anchorage-independent growth of CHO-K1 (unpublished observations).
The second ras-transformation suppressor gene was detected as an 18 kb restriction fragment that suppressed anchorage-independent growth of EJ-ras-transformed rat fibroblasts. It is unlikely to be responsible for the results reported here, as this gene was detectable with repetitive human DNA probes (Schafer et al., 1988). In another study, selecting against growth of spontaneously transformed Chinese hamster cells in low serum, Schafer et al. (1991) isolated another human DNA marker indirectly associated with tumour suppression. In neither case has the identity of these suppressor genes yet been reported.
Recently, Eiden et al. (1991) used direct microscopic examination of colony morphology to isolate a cDNA that suppressed the chemically transformed phenotype of BHK cells. The cDNA was found to be the partially processed human vimentin gene. However, no differences in the expression or size of vimentin proteins could be detected between the transformed and suppressed cells, leading the authors to suggest that the original chemical transformation may have resulted from small deletions or point mutations in the BHK vimentin gene (Eiden et al., 1991). Significantly, Chan et al. (1989) have shown that increased phosphorylation of vimentin was one of a small number of alterations in protein phosphorylation that correlated with the reversion of the transformed phenotype of CHO-K1 cells induced by cyclic AMP.
The transient effects of cyclic AMP on CHO-K1 morphology and anchorage-independent growth (Hsie & Puck, 1971; Puck, 1977) are remarkably similar to the stable effects obtained here after DNA transfection into the same cell line.
The ability of cyclic AMP to increase the phosphorylation of vimentin and other proteins (Chan et al., 1989) indicates that any defect in the organisation of the intermediate filament protein in CHO-K1 is unlikely to reside in the vimentin gene itself, as may be the case in the chemically transformed BHK cells (Eiden et al., 1991). Instead, it would appear to reside in the control of cyclic AMP-dependent phosphorylations. These results suggest that one possible mechanism for the suppression of anchorage-independent growth in 1042AC cells may be correction of defects in the regulation of cyclic AMP-dependent phosphorylations. This possibility will be the subject of further investigation.
|
v3-fos-license
|
2023-02-01T16:16:15.921Z
|
2023-01-01T00:00:00.000
|
256454392
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/02/e3sconf_conmechydro2023_01011.pdf",
"pdf_hash": "da33ae69855e046e6590bdefe4b34840784d5727",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43194",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "693d644d081b7233b9e957e192c5806d26bd54c6",
"year": 2023
}
|
pes2o/s2orc
|
Numerical study of flow around flat plate using higher-order accuracy scheme
Analytical methods exist to solve the problems of hydromechanics and heat transfer, but it is not possible to obtain the solution to some inhomogeneous and nonlinear problems of hydromechanics and heat transfer by analytical methods. The solution to such problems is carried out using numerical methods. Currently, there are many textbooks and monographs on numerical methods for solving problems of hydromechanics, thermal conductivity, heat and mass transfer, and others. The article presents the results of a numerical study of the flow structure in the flow around a flat plate. The calculations are based on the numerical solution of a system of nonstationary equations using a two-fluid turbulence model. For the numerical solution of these problems, schemes of the second and fourth order of accuracy were applied. The control volume method was used for the difference approximation of the initial equations, and the relationship between velocities and pressure was found using the SIMPLE procedure. To confirm the correctness of the numerical results, comparisons were made with each other and with experimental data.
Introduction
In recent decades, significant progress has been made in developing high-precision numerical methods for solving the Navier-Stokes equations for modeling turbulent flows of practical interest. Computational methods such as large eddy simulation (LES) and direct numerical simulation (DNS) are increasingly being applied to such flows. However, their use is still limited by grid resolution requirements and, therefore, by available computational resources, so their wide practical application depends on the development of computer technology and, according to experts, may begin only at the end of this century. For the near future, therefore, semi-empirical methods will remain the main working tool for solving applied aerodynamics problems. Most semi-empirical turbulence models are based on the so-called RANS equations. With this approach, after averaging the equations of hydrodynamics over time, Reynolds stresses arise which must be determined; consequently, the resulting system of equations is unclosed. Many different mathematical models have been proposed to close the system, based on the hypotheses of Boussinesq [1], Prandtl [2], Kármán [3], and others. The NASA turbulence database provides a comparative analysis of various semi-empirical models, from which it can be concluded that the most accurate are the Spalart-Allmaras model [4] and the Menter k-ω SST model [5][6][7]. To date, numerical solutions to many important practical problems have been obtained using these models.
Recently, the two-fluid turbulence model has become increasingly popular [8]. This model is based on the dynamics of two fluids and, unlike the Reynolds approach, leads to a closed system of equations. A peculiarity of this model is that it can describe complex anisotropic turbulent flows. The problem under consideration is of great importance for aviation and rocket-space technology. In [9], a new two-fluid model was used to study the flow past a flat plate. In that case, a simplified, parabolized system of equations was used, i.e., the pressure was assumed to be constant. However, the pressure cannot be considered constant in all streamlined flows; for example, flow in many technical devices occurs in confined spaces. Therefore, this work has two main goals: first, to test the two-fluid model for the flow around a flat plate using the full system of turbulence equations; second, to apply finite difference schemes of second and fourth order of accuracy with Runge-Kutta time stepping to the turbulence equations and to compare the results with experimental data. This problem is described in the NASA database [10].
Physical and mathematical statements of the problem
A two-dimensional turbulent flow in a flat channel is considered. The physical picture of the analyzed flow and the configuration of the computational domain are shown in Fig. 1.
Calculated grids
In this study, the mesh was thickened near the wall of the plate, and the leading edge of the plate was positioned within the domain. The calculated grid is shown in Fig. 2. For this purpose, a coordinate transformation (x, y) → (ξ, η) [12] was applied; a stretching parameter close to 1 regulates the degree of near-wall refinement.
For numerical implementation, the system of equations (5) is reduced to dimensionless form by scaling all velocities by the average velocity of the incoming flow and all linear dimensions by the plate length L.
Solution method
The finite volume method is used for the numerical solution of the initial system of nonstationary equations (1). Because of the difficulty of coupling the velocity and pressure fields, a staggered arrangement of the grid nodes for the dependent variables was used to discretize the momentum equations and the continuity equation; the velocity components and the pressure are therefore defined at different nodes. This approach is similar to the SIMPLE methods and gives certain advantages when calculating the pressure field [13].
The implicit coupling of the pressure and velocity terms is achieved by the fractional step method, which consists of two stages [14]. In the first stage, intermediate velocities are predicted without applying the incompressibility constraint, using a two-stage explicit Runge-Kutta method.
In the second stage, the intermediate velocities are projected onto a divergence-free vector field using a Poisson equation, which yields the increment of the pressure field, δp.
where ∇² is the Laplace operator. Equation (8) is solved iteratively using the multigrid method [15], and the velocity field is then updated accordingly.
The Poisson equation is then solved using this new pressure, and an iterative process is established; iteration continues until the divergence of the computed velocity field falls below a specified tolerance.
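A minimal 2D sketch of this two-stage procedure on a uniform collocated grid, with a plain Jacobi iteration standing in for the paper's staggered-grid multigrid solver; the function names and grid layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pressure_poisson(div_u_star, dx, dt, rho=1.0, iters=200):
    """Solve lap(dp) = rho/dt * div(u*) with plain Jacobi iteration
    (the paper uses multigrid; Jacobi keeps the sketch short)."""
    dp = np.zeros_like(div_u_star)
    rhs = rho / dt * div_u_star
    for _ in range(iters):
        dp[1:-1, 1:-1] = 0.25 * (dp[2:, 1:-1] + dp[:-2, 1:-1] +
                                 dp[1:-1, 2:] + dp[1:-1, :-2] -
                                 dx**2 * rhs[1:-1, 1:-1])
    return dp

def project(u_star, v_star, dx, dt, rho=1.0):
    """Second stage: project intermediate velocities onto a divergence-free field."""
    div = np.zeros_like(u_star)
    div[1:-1, 1:-1] = ((u_star[1:-1, 2:] - u_star[1:-1, :-2]) +
                       (v_star[2:, 1:-1] - v_star[:-2, 1:-1])) / (2 * dx)
    dp = pressure_poisson(div, dx, dt, rho)
    u, v = u_star.copy(), v_star.copy()
    u[1:-1, 1:-1] -= dt / rho * (dp[1:-1, 2:] - dp[1:-1, :-2]) / (2 * dx)
    v[1:-1, 1:-1] -= dt / rho * (dp[2:, 1:-1] - dp[:-2, 1:-1]) / (2 * dx)
    return u, v, dp
```

In practice one repeats predict/project each time step, re-solving the Poisson equation until the remaining divergence is within tolerance, exactly as described above.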
Exact second-order approximations of the partial derivatives at the cell centre are used in the continuity equation. Owing to the staggered arrangement, the x- and y-momentum equations are solved on the faces of the cells. In the second-order scheme, the convective terms, the diffusion terms for all velocities, and the Poisson equation are discretised with second-order central differences. In the fourth-order scheme, the corresponding derivatives in the continuity, momentum, and Poisson equations are approximated with standard fourth-order central stencils. The two schemes are summarised in Table 1.

Table 1. Two types of finite difference scheme and their order of accuracy.
The integration was carried out with time steps of Δt = 0.001 for Scheme A and Δt = 0.0001 for Scheme B.
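The practical difference between second- and fourth-order schemes is the rate at which truncation error shrinks with the grid step. A short, self-contained check using the standard central-difference stencils (not the paper's exact finite-volume formulas, which are not reproduced here):

```python
import numpy as np

def d1_second(f, x, h):
    # standard second-order central difference, error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_fourth(f, x, h):
    # standard fourth-order central difference, error O(h^4)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x = 1.0
exact = np.cos(x)  # derivative of sin(x) at x = 1
for h in (0.1, 0.05, 0.025):
    e2 = abs(d1_second(np.sin, x, h) - exact)
    e4 = abs(d1_fourth(np.sin, x, h) - exact)
    print(f"h={h:<6} 2nd-order error={e2:.2e}  4th-order error={e4:.2e}")
# Halving h cuts the 2nd-order error ~4x but the 4th-order error ~16x,
# which is why Scheme B can afford higher accuracy at extra cost per step.
```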
Results and Discussion
The comparisons of the numerical results with the known experimental data are shown below. Figure 3 shows the computed dependence of the momentum-thickness Reynolds number on the dimensionless length of the plate. The momentum-thickness Reynolds number was found by integrating the velocity profile across the boundary layer.
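A sketch of that quadrature, assuming the conventional definition Re_θ = Uθ/ν with θ = ∫(u/U)(1 − u/U) dy (the paper's own formula is not legible in the source); the 1/7-power-law profile and the numerical values are illustrative, not the paper's data.

```python
import numpy as np

def re_theta(u_over_U, y, U=1.0, nu=1e-5):
    """Momentum-thickness Reynolds number from a discrete velocity profile:
    theta = integral of (u/U)(1 - u/U) dy across the boundary layer."""
    theta = np.trapz(u_over_U * (1.0 - u_over_U), y)
    return U * theta / nu

# Illustrative 1/7-power-law turbulent profile, delta = 0.01 (arbitrary units)
delta = 0.01
y = np.linspace(0.0, delta, 401)
u_over_U = (y / delta) ** (1.0 / 7.0)
print(f"Re_theta = {re_theta(u_over_U, y):.0f}")  # theta = (7/72)*delta here, ~97
```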
In Figure 3, the experimental results are shown as diamonds for comparison; the results of the Coles theory are also shown [17,18].
Conclusion
The paper presents numerical solutions for the flow of an incompressible viscous fluid around a flat plate using a new two-fluid turbulence model. The dependence of the momentum-thickness Reynolds number on plate length, the dependence of the friction coefficient on the momentum-thickness Reynolds number, and the transverse distribution of the longitudinal velocity are demonstrated. For the numerical implementation of the turbulence equations, second- and fourth-order accuracy schemes are used. Both schemes agree with the experimental results. However, with the fourth-order method the increase in the algorithm's accuracy has to be paid for by a longer calculation time and a more complicated difference scheme. This should be carefully considered when choosing a method for solving partial differential equations; for most problems of hydrodynamics, sufficient accuracy can be obtained with methods of the second order of accuracy.
Fig. 1. Diagram of the computational domain in a flat channel. The unsteady system of equations of the turbulent two-fluid model in Cartesian coordinates takes the form given in [11].

Fig. 2. Calculated mesh (100 × 200), condensed near the wall. No-slip conditions are set on all fixed solid walls. At the channel outlet, second-order extrapolation conditions are applied to the horizontal, vertical, and relative velocities. At the inlet, a uniform profile of the longitudinal velocity component V_x = U_0 is imposed, and the transverse velocity and pressure are zero, V_y = P = 0. For the numerical implementation of system (5), inlet conditions were also set for the relative velocities.

Fig. 3. The dependence of the momentum-thickness Reynolds number on the length of the plate: 1, results of Scheme A; 2, results of Scheme B.

Figure 4 shows, as a solid line, the dependence of the friction coefficient on the dimensionless momentum-loss thickness according to the proposed model; diamonds show the results of the Karman-Schoenherr theory [16].

Fig. 4. The dependence of the friction coefficient on the momentum-thickness Reynolds number: 1, results of Scheme A; 2, results of Scheme B.

Fig. 5. The transverse distribution of the longitudinal velocity: 1, results of Scheme A; 2, results of Scheme B.
|
v3-fos-license
|
2017-10-11T01:09:15.275Z
|
2008-07-15T00:00:00.000
|
26979114
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://ccsenet.org/journal/index.php/mas/article/download/2453/2303",
"pdf_hash": "1ba53a659b2edaccce8dc512d5d001fdd03407fd",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43195",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "1ba53a659b2edaccce8dc512d5d001fdd03407fd",
"year": 2008
}
|
pes2o/s2orc
|
Determination of Glucose and Fructose from Glucose Isomerization Process by High-performance Liquid Chromatography with UV Detection
Analysis of fructose and glucose from the glucose isomerization process using immobilised glucose isomerase (IGI; Sweetzyme, Novozymes) is often performed by HPLC methods with a refractive index (RI) detector. This study is focused on developing a new method of measuring glucose and fructose using a specific carbohydrate column. The importance of this research lies primarily in the performance of HPLC with ultraviolet (UV) detection as an alternative to the RI detector. The method was carried out under the following conditions: UV detection at 195 nm, column temperature of 30°C, flow rate of 0.6 mL/min and injections of 20 µL. The ratio of acetonitrile to deionised water used was 80:20. The results show that the detection of fructose and glucose by HPLC with acetonitrile and water as solvents can be achieved using UV detection (195 nm) instead of the commonly used RI detector.
Introduction
Enzymatic reaction is a chemical reaction with enzymes acting as biological catalysts. According to Shuler & Kargi (1992), under ambient conditions the presence of enzymes results in much higher reaction rates than chemically catalysed reactions. The role of enzyme catalysis in organic chemistry and bioprocess technology has increased tremendously in the last decade. According to Harmand et al. (2004), two types of biological processes exist: microbiological and enzymatic reactions.
Isomerization of D-glucose to D-fructose by immobilized glucose isomerase is one example of an enzymatic reaction. It is a reversible reaction and an important industrial process for producing high fructose syrup (HFS), with at least 50% conversion of glucose to fructose. The discovery of glucose isomerase dates to 1957, when Marshall & Kooi (1957) carried out enzymatic isomerization in batch reactors with soluble enzymes.
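Because the isomerization is reversible, the attainable fructose fraction is capped by the equilibrium constant K = [F]/[G]. A sketch of that bound; the values of K around unity are illustrative assumptions (K ≈ 1 is commonly cited for glucose-fructose isomerization at typical operating temperatures), not figures from this paper.

```python
def equilibrium_conversion(K: float) -> float:
    """Equilibrium conversion X for a reversible isomerization G <-> F.
    At equilibrium K = [F]/[G] = X/(1 - X), so X = K/(1 + K)."""
    return K / (1.0 + K)

for K in (0.9, 1.0, 1.1):  # assumed illustrative values around unity
    print(f"K = {K}: X_eq = {equilibrium_conversion(K):.2f}")
# K = 1.0 gives 50% conversion, consistent with the ~50% HFS target above
```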
In the present work, we demonstrate the use of HPLC with UV detection to measure glucose and fructose using a carbohydrate column, instead of using an RI detector. This research differs from the work done by Slimestad & Vågen [10] in terms of detector and HPLC procedure. In their study, Slimestad & Vågen (2006) used evaporative light-scattering detection (ELSD), and the solvent gradient consisted of a linear increase in the amount of water in acetonitrile. The ability of the proposed method to analyze fructose and glucose is demonstrated under various operating conditions of the reaction.
Materials and methods
The materials for this study were: D-glucose (G), D-fructose (F) and MgSO₄·7H₂O, obtained from R&M Chemical, UK; 12 g of immobilised glucose isomerase (IGI) from S. murinus (brown cylindrical granules, diameter 0.3-1.0 mm, length 1.0-1.5 mm, activity 350 IGIU/g), from Sweetzyme, Novozymes; deionised water and acetonitrile (HPLC grade). The standard solutions were prepared as follows: 2 g/100 mL each of G and F, diluted with distilled water. All analytical samples were diluted with distilled water and filtered through 0.2 µm nylon filters prior to HPLC analysis.
The HPLC system in this study was an Agilent 1100 with a diode array detector. UV detection was made at 195 nm with a column temperature of 30°C. The flow rate was set to 0.6 mL/min and injections of 20 µL were made. The column used was a Supelco Kromasil NH₂ column (250 mm × 4.5 mm, 5 µm). The ratio of acetonitrile to deionised water used was 80:20. A guard column was attached to the inlet of the Kromasil column to prevent clogging.
Results and discussion
Table 1 shows the retention time tR (min) and the area (mAU·s) of glucose and fructose by HPLC-UV at different concentrations on a Kromasil NH₂ column.
From Table 1 it can be confirmed that fructose (mainly) and glucose can be determined using a UV detector instead of the commonly used RI detector. The average retention time was 14.2 min for fructose and 16.25 min for glucose. Figure 1 shows the standard curve for fructose and glucose over a specific concentration range. The values of R² for both fructose and glucose confirmed that the results were statistically reliable. Figure 3 shows the HPLC result in this study for detection of fructose and glucose using the Kromasil column at a 0.5% concentration of fructose and glucose. The detection of fructose was faster than that of glucose, similar in trend to the result given by the supplier (Kromasil), who suggested using RI as the detector, as seen in Figure 2, and to the research by Slimestad and Vågen (2006). This occurs because fructose has been described as the first step in the hydrothermal degradation of glucose. Fructose and glucose are isomers which have the same molecular weight but differ in the arrangement or configuration of the atoms (Hawley, 2001).
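The standard curve in Figure 1 amounts to a linear least-squares fit of peak area against concentration, with R² measuring reliability. A sketch using hypothetical calibration points (the paper's actual areas are in Table 1 and are not reproduced here):

```python
import numpy as np

def calibration(conc, area):
    """Least-squares line area = m*conc + b, plus R^2."""
    m, b = np.polyfit(conc, area, 1)
    pred = m * np.asarray(conc) + b
    ss_res = np.sum((np.asarray(area) - pred) ** 2)
    ss_tot = np.sum((np.asarray(area) - np.mean(area)) ** 2)
    return m, b, 1.0 - ss_res / ss_tot

# Hypothetical calibration points (% w/v vs peak area in mAU*s)
conc = [0.125, 0.25, 0.5, 1.0, 2.0]
area = [310.0, 640.0, 1250.0, 2480.0, 5010.0]
m, b, r2 = calibration(conc, area)
print(f"slope={m:.1f}, intercept={b:.1f}, R^2={r2:.4f}")

# An unknown sample's concentration follows by inverting the fitted line:
unknown_area = 1800.0
print(f"estimated concentration: {(unknown_area - b) / m:.2f} %")
```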
A comparable analysis for the detection of glucose was made by the DNS method (Miller, 1959) for reducing sugar, using a UV spectrophotometer (Shimadzu) at a wavelength of 550 nm, on a sample obtained from the same operating reaction conditions. The results show that the glucose concentration by HPLC-UV was 12.8 g/L, whereas by UV spectrophotometer (Shimadzu) it was 14.96 g/L. However, the spectrophotometric measurement could not distinguish glucose from fructose, as they exist as isomers in the mixture, hence the higher value.
Conclusion
The results imply that the detection of fructose and glucose with acetonitrile and water as solvents can be achieved by HPLC with UV detection (195 nm) instead of the commonly used RI detector.
Figure 2. HPLC result for detection of fructose and glucose using the Kromasil column with an RI detector.

Table 1. The retention time tR (min) and the area (mAU·s) of glucose and fructose by HPLC-UV at different concentrations on a Kromasil NH₂ column.

Figure 1. Standard curve for the HPLC detection of fructose and glucose using the Kromasil column with a UV detector.
|
v3-fos-license
|
2018-04-03T04:12:08.239Z
|
2014-04-15T00:00:00.000
|
14710266
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/crim/2014/512939.pdf",
"pdf_hash": "b3c6ede08273950a693da9c8b3e3c7f29c43c3d6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43196",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c68f7be769fd6b2f6b242145e95508bcbe9586d7",
"year": 2014
}
|
pes2o/s2orc
|
Multiple Gastrointestinal Complications of Crack Cocaine Abuse
Cocaine and its alkaloid free base "crack cocaine" have long been substances of abuse. Drug abuse of cocaine via oral, inhalation, intravenous, and intranasal intake has famously been associated with a number of medical complications. Intestinal ischemia and perforation remain the most common manifestations of cocaine-associated gastrointestinal disease and have historically been associated with oral intake of cocaine. Here we present a rare case of two relatively uncommon gastrointestinal complications, hemorrhage and pancreatitis, presenting within a single admission in a chronic crack cocaine abuser.
Case
HM is a 53-year-old African American male who presented to the emergency department with a complaint of right-sided abdominal pain. The patient has a past medical history significant for hypertension, controlled with clonidine and amlodipine, as well as polysubstance abuse. He denied alcohol abuse or any significant family history of malignancy. The patient stated that he was in his normal state of health when he experienced rapid onset of intense abdominal pain and nausea which was worsened by eating, with no associated fevers, chills, vomiting, or diarrhea. However, the patient did admit to intermittent heroin use and smoking crack cocaine on a daily basis.
Initial laboratory analysis revealed that the patient had a serum lipase level greater than 2000 U/L, alkaline phosphatase level of 53 IU/L, AST level of 34, ALT level of 25, total bilirubin level of 1.9 mg/dL, hemoglobin level of 12.9, and creatinine level of 4.0 mg/dL. He was admitted to the medical intensive care unit (MICU) with the diagnosis of acute pancreatitis and acute renal failure.
An abdominal ultrasound revealed a normal sized common bile duct of 3.7 mm, without evidence of cholelithiasis or biliary sludge. A lipid panel was done and showed a serum total cholesterol and triglyceride level of 107 mg/dL and 122 mg/dL, respectively. The patient did not have any other risk factors for pancreatitis; therefore it was concluded that the likely etiology of the patient's pancreatitis was secondary to his crack-cocaine abuse.
The patient remained in the MICU hemodynamically stable and afebrile for four days until being transferred to the medical floor with a hemoglobin level of 9.5 gm/dL. One day following transfer the patient had a single large volume melenic bowel movement and was noted to be lethargic and in acute respiratory distress. He was subsequently transferred back to the intensive care unit and found to have a hemoglobin level of 4.8 gm/dL and diagnosed with hypoxic respiratory failure secondary to gastrointestinal bleeding and acute blood loss.
Endoscopic evaluation via esophagogastroduodenoscopy (EGD) revealed several large ulcers of various sizes, ischemic in appearance throughout the stomach and duodenum. Given the fact that the patient did not have any risk factors for peptic ulcer disease and his history of cocaine abuse, it was thought that the patient's ulcers were likely ischemic in nature. At no time prior to the acute blood loss event secondary to gastric ulcer bleeding was the patient ever hypotensive or intubated. Nonetheless, a serum gastrin and H. pylori serology were sent and were negative. The patient was placed on intravenous pantoprazole drip, transfused, and remained in the ICU for further monitoring.
HM was eventually discharged to outpatient follow-up with the gastroenterology team as well as a drug counseling service. He returned in approximately one month, substance-free and devoid of any further complications; however, he failed to return for a follow-up endoscopic exam.
Discussion
Cocaine, benzoylmethylecgonine, is a crystalline tropane alkaloid that comes from the leaves of the Erythroxylum coca plant; a granular crystalline powder, cocaine hydrochloride, that can be smoked, is produced by dissolving the alkaloid in hydrochloric acid. Cocaine acts centrally by inhibiting the presynaptic reuptake of dopamine, norepinephrine, and serotonin, causing stimulation of the central nervous system. It inhibits monoamine oxidases, has a direct anticholinergic effect, and stimulates alpha-adrenergic receptors. This activation of the sympathetic nervous system produces notorious vasoconstriction and ischemia [1].
Smoking cocaine allows for great absorption of the free base form, which does not undergo first-pass hepatic metabolism. Inhaled use allows for extremely rapid rise in plasma concentration to values greater than 900 ng/mL. Peak values (150 to 200 ng/mL) are reached at 30 to 40 minutes after inhalation of 96 mg of crystalline cocaine hydrochloride [2]. While there is a tremendous inconsistency in percent of actual drug product absorbed, owing to varying degrees of diffusion capacity and local vasoconstriction, substance effects can occur within 8-12 seconds, with similar if not more pronounced clinical effects [2][3][4]. This is in contrast to other routes of consumption where clinical effect may take up to one minute. The rapidity of effect accounts for the violent "rush" that is described by crack cocaine abusers and is additionally responsible for the psychopathology of significant addiction associated with its use [3,4].
The clinical, short term effects include decreased appetite, pupil dilatation, vasoconstriction, hypertension, and increased energy, heart rate, temperature, and altered mentation. Cocaine use has been related to myocardial infarction, arrhythmias, cerebral vascular accidents, and convulsions [5,6]. Pulmonary, renal, obstetrical, and psychiatric events have been reported as well as rare incidents of intestinal ischemia, perforation, retroperitoneal fibrosis, and gastric ulcer and sporadic cases of pancreatic involvement [7][8][9][10].
Gastrointestinal hemorrhage and intestinal perforations associated with smoking cocaine have been documented and felt to be due to deep gastric ulcerations from the multisystem toxicity of free based crack [2,6]. Four patients were reported with perforated gastric ulcers due to smoking crack by Kram et al. [11]. Kodali and Gordon reported an upper gastrointestinal bleed secondary to smoking crack cocaine [12]. Cocaine's blockage of norepinephrine reuptake leads to ensuing mesenteric vasoconstriction and focal tissue ischemia which leads more commonly to perforation [6,10].
Vazquez-Rodriquez et al. reported that a 21-year-old male used insufflated cocaine and developed abdominal pain 48 hours later with an elevated LDH. The diagnosis of acute pancreatitis was made with no other etiology [5]. The mechanism by which cocaine induces changes in the pancreas is unclear but involves its effect on the presynaptic nerve endings by blocking the reuptake of norepinephrine. This increases the norepinephrine in the synaptic cleft and postsynaptic stimulation. Cocaine also has a direct vasoconstriction effect mediated by the flux of calcium across the endothelial cell membrane. It also causes thrombus formation and platelet aggregation and decreases fibrinolytic activity by stimulating plasminogen activator inhibitor activity which may likely have played an alternative role in the ischemic pathology of crack cocaine induced pancreatitis.
This patient exhibited both hematemesis, with multiple large ischemic gastric ulcers, and acute pancreatitis. Clinicians must be aware of crack cocaine-induced gastrointestinal symptomatology and the associated pathophysiology in order to manage astutely the multiple, varied, and serious complications associated with substance abuse.
Disclosure
All the authors have seen and agreed to the submitted version of the paper. All the authors certify that the material is original and that it has been neither published elsewhere nor submitted for publication simultaneously. All the authors certify that the paper will not be published elsewhere in the same form, in English or in any other language, without written consent of the copyright holder.
Aberrant autophagosome formation occurs upon small molecule inhibition of ULK1 kinase activity
Pharmacological inhibition of ULK1 with multiple and distinct small molecules does not block autophagosome initiation but does impair autophagic flux.
As you will see, the reviewers appreciate the significance of the findings presented in your manuscript. Although they also raise some points that need to be addressed, given the overall high level of interest in your study we would like to invite you to submit a revised version. When submitting the revision, please include a letter addressing the reviewers' comments point by point. Although we feel that a majority of the points that have been raised can be addressed, we would be happy to discuss individual revision points further with you should this be helpful. Please note that papers are generally considered through only one revision cycle, so strong support from the referees on the revised version is needed for acceptance.
In our view, these revisions should typically be achievable in around 3 months. However, we are aware that many laboratories cannot function fully during the current COVID-19/SARS-CoV-2 pandemic and therefore encourage you to take the time necessary to revise the manuscript to the extent requested above. We will extend our 'scooping protection policy' to the full revision period required. Should you see another paper with related content published elsewhere, please contact me immediately so that we can discuss the best way to proceed.
To upload the revised version of your manuscript, please log in to your account: https://lsa.msubmit.net/cgi-bin/main.plex You will be guided to complete the submission of your revised manuscript and to fill in all necessary information. Please get in touch in case you do not know or remember your login name.
While you are revising your manuscript, please also attend to the below editorial points to help expedite the publication of your manuscript. Please direct any editorial questions to the journal office.
We hope that the comments below will prove constructive as your work progresses.
Thank you for this interesting contribution to Life Science Alliance. We are looking forward to receiving your revised manuscript.

--High-resolution figure, supplementary figure and video files uploaded as individual files: See our detailed guidelines for preparing your production-ready images, http://www.life-science-alliance.org/authors

--Summary blurb (enter in submission system): A short text summarizing the study in a single sentence (max. 200 characters including spaces). This text is used in conjunction with the titles of papers, hence should be informative and complementary to the title and running title. It should describe the context and significance of the findings for a general readership; it should be written in the present tense and refer to the work in the third person. Author names should not be mentioned.
B. MANUSCRIPT ORGANIZATION AND FORMATTING:
Full guidelines are available on our Instructions for Authors page, http://www.life-science-alliance.org/authors

We encourage our authors to provide original source data, particularly uncropped/-processed electrophoretic blots and spreadsheets for the main figures of the manuscript. If you would like to add source data, we would welcome one PDF/Excel-file per figure for this information. These files will be linked online as supplementary "Source Data" files.

***IMPORTANT: It is Life Science Alliance policy that if requested, original data images must be made available. Failure to provide original images upon request will result in unavoidable delays in publication. Please ensure that you have access to all original microscopy and blot data images before submitting your revision.***

Reviewer #1 (Comments to the Authors (Required)):

In the manuscript by Zacharia and Ganley the authors study the role of ULK1 in starvation induced autophagy using small molecule inhibitors of ULK1 kinase activity. They find that inhibition of ULK1 impairs the process of autophagosome formation not only at the nucleation stage but also at later stages. They further find that ULK1 activity is not strictly required for Vps34 activation, PI3P production and WIPI puncta formation. The paper presents interesting insights for scientists working in the field of autophagy and in particular those studying the process of autophagosome formation. The data mostly support the authors' conclusion, although a more quantitative approach would help to better support the author statements. In addition, as detailed below, some of the statements related to the inhibition of autophagic flux and the stage(s) at which autophagosome formation is blocked upon inhibitor treatment should be toned down or backed up with additional experiments.
Major points:

1) In relation to figure 1E the authors state that inhibition of ULK1 blocks autophagic flux (line 212). According to the gel in figure 1E, LC3B-II is still detectable upon ULK1 inhibition, likely suggesting a slower flux, but not a complete blockage. Quantification of LC3B-II levels, perhaps in combination with analysis of p62 levels, would allow a better assessment of autophagy flux.

2) Figure 2: the authors state that in ULK1/2 DKO cells LC3 puncta are vastly diminished (lines 231-233). Quantification of the number of LC3 puncta/cell would better support this statement. Moreover, it is not clear what is the phenotype of the DKO cell lines, in absence of ULK1 inhibitor treatment. This panel should be added to the figure. In addition, the size of the autophagosomes could be measured and quantified.
3) In lines 246-247 it is stated that, upon ULK1 inhibition, the observed autophagosome-like structures do not traffic to the lysosome. This statement is not supported by data. The accumulation of bigger autophagosome-like structures could be due to a slower process of autophagosome formation which leads to the accumulation of more autophagy related proteins, but these structures might ultimately still traffic to lysosomes.

4) In line 269 the authors say that treatment with ULK1 inhibitors inhibits autophagic flux. According to the quantification of LC3-II levels, LC3B accumulation is still visible upon Bafilomycin treatment, although to a lesser extent than in WT cells. What can be concluded from the data is that the flux might be slower or decreased upon inhibition of ULK1, but not inhibited or blocked.
Minor points:

1) The authors should show a characterization of the stable cell lines used in Figure 1. For example, the expression levels of GFP-LC3 and mCherry-DFCP1, in comparison to WT proteins, should be shown.

2) In line 202, when the authors describe the motility of LC3-positive structures, they should refer to the movie rather than the figure.
3) In line 213-214 (related to figure S1) the authors state that autophagosomes appearing upon ULK1 inhibition are positive for ATG2B. To support such a statement colocalization of ATG2B with LC3 or other autophagosomal marker should be shown. Alternatively, the supplementary figure could be removed, since it doesn't contribute to the message of the manuscript.

4) In the paragraph related to Figure 3 the effect on autophagy of different ULK1 inhibitors is compared. Although the effect of the inhibitors is well described towards the end of the section, the statements at the beginning of it (lines 263-265) are not very accurate. For example, the activity of AMPK is actually affected by treatment with SBI0206965.

5) In lines 287-289 the autophagosomal structures obtained upon ULK1 inhibition are characterized in number and size. Their brightness is evaluated by tracing a line across 2 or more puncta in the figure. This quantification of the brightness does not reflect the whole population of puncta. A better way of quantifying this parameter would be to measure the average signal intensities of ULK1 and LC3 puncta. Later in line 290 they say that ULK1 and LC3 decorate most of the enlarged autophagosomes. To better support this statement colocalization analysis of ULK1 and LC3 is needed. (A minimal sketch of such population-level quantification appears after these points.)

6) Line 308 refers to Figure 5C which does not exist.

7) In figure S6 it is curious that upon starvation, treatment with Vps34 inhibitors leads to loss of ULK1 puncta, which then are restored upon treatment with ULK1 inhibitors. Maybe the authors could comment on these data.
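Reviewer 1's point 5 asks for population-level intensity and colocalization measurements rather than line scans. Purely as an illustration, the sketch below shows one way such measurements are often computed, assuming two registered single-channel images (e.g., ULK1 and LC3) and a simple intensity threshold standing in for proper puncta segmentation; the thresholds, test images, and function name are placeholders, not the authors' pipeline.

```python
import numpy as np

def puncta_stats(ch1, ch2, thresh1, thresh2):
    """Population-level intensity and colocalisation summary for two
    registered channels. Thresholding is a placeholder for real
    puncta segmentation."""
    m1, m2 = ch1 > thresh1, ch2 > thresh2
    mean1 = ch1[m1].mean()            # average ch1 (e.g., ULK1) puncta intensity
    mean2 = ch2[m2].mean()            # average ch2 (e.g., LC3) puncta intensity
    sel = m1 | m2
    r = np.corrcoef(ch1[sel], ch2[sel])[0, 1]  # Pearson over puncta pixels
    overlap = (m1 & m2).sum() / m2.sum()       # fraction of ch2 puncta pixels also ch1+
    return mean1, mean2, r, overlap

rng = np.random.default_rng(0)
ulk1 = rng.random((256, 256))
lc3 = 0.8 * ulk1 + 0.2 * rng.random((256, 256))  # partially correlated toy channel
print(puncta_stats(ulk1, lc3, 0.7, 0.6))
```

A per-punctum analysis (segment objects, then average within each object) would be a natural refinement, but the pixel-level summary above already reflects the whole population rather than a hand-picked line scan.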
Reviewer #2 (Comments to the Authors (Required)):

Summary
In this manuscript, Zachari and Ganley examined the induction phase of starvation-induced autophagy in mammalian cells. In particular, the authors sought to consolidate the role of ULK1's kinase activity in the formation of autophagosomes and their precursors using live-cell imaging and confocal microscopy. Remarkably, the authors found that treatment with three distinct ULK1 inhibitors only delayed the appearance of omegasomes and autophagosomes but did not abolish their formation. Consistent with previous findings, the authors showed that under these treatment conditions both otherwise transient autophagic structures persisted. Intriguingly, the authors found that these apparent stalled or incomplete autophagosomes were only present upon pharmacological or genetic inhibition but not upon complete loss of ULK1. Lastly, the authors went on to show that ULK1- and WIPI2-positive pre-autophagic structures formed in a manner dependent on the activity of the lipid kinase hVps34 but independent of ULK1's kinase activity. Together, this work provides several important mechanistic insights on catalytic and non-catalytic functions of ULK1 with potential far-reaching implications for our understanding of the signaling events during autophagosome biogenesis. While this is indeed an elegant, well-controlled and -rationalized study, a few points still need to be addressed:

1) Do these inhibitors disrupt the ULK1 complex?
2) Would reconstitution of ULK1/2 KO cells with an ULK1 variant that lacks the kinase domain phenocopy the expression of catalytically inactive ULK1 in this setting?
3) The authors may want to consider a scenario in which ULK1's activity is required to prime a second kinase which then allows autophagosome formation to proceed. One of the known ULK1 targets might function in this regard. An obvious (and easy) candidate to test would be TBK1.
Reviewer #3 (Comments to the Authors (Required)):
Zachari et al. present interesting findings analysing the effects of a ULK1 inhibitor, MRT68921, on autophagosome initiation and maturation. The data in this manuscript are robust, well quantified and are supported by previous findings from the same lab (Petherick et al, JBC, 2015). The authors extend their analyses to include multiple ULK1 inhibitors and compare those to the effects of inhibiting Vps34 activity on autophagosome biogenesis. They show that the ULK1 inhibitors (and kinase dead ULK1 mutant) do not disrupt LC3 lipidation and phagophore localization of various ATG proteins (including ULK1, WIPI2, ATG2, and LC3). However, later stages of autophagosome biogenesis appear to be disrupted by ULK1 kinase inhibition. Analysing the stages of autophagy disrupted by ULK1 inhibitors, in comparison to the frequently used Vps34 inhibitors, provides important tools to dissect autophagosome biogenesis and distinguish the specific relevance of ULK1 kinase activity.
I only have a few minor comments that mainly involve textual changes and data analyses:

Figure S5: The conclusion on page 10 that the structures forming in the presence of ULK1 inhibitors are stable and still present after 8hrs of starvation is not well supported by this figure and in the absence of control images. Can the authors reanalyse the live imaging data in Figure 1 to measure the lifetime of LC3 puncta in the presence/absence of ULK1 inhibitors? Alternatively, the authors may simply alter the conclusions of this figure.

Figure 3A: Comparing the ratio of LC3-II between starved cells and cells starved in the presence of BafA1 to confirm that lysosome fusion is affected in the presence of the ULK1 inhibitor would strengthen this conclusion. This is a very interesting finding and could be further supported by LC3-LAMP1 colocalisation experiments or cargo degradation (e.g. p62), if possible. (A worked example of this ratio appears after these comments.)
The authors could use arrow heads to mark 1-2 puncta in the different channels of the live imaging movies (supplementary data) to ease their monitoring.
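Reviewer 3's Figure 3A suggestion reduces to a simple ratio: LC3-II under starvation plus Bafilomycin A1 divided by LC3-II under starvation alone. As a hedged illustration of that arithmetic, the sketch below computes such a flux index from densitometry values; every number is a made-up placeholder, not data from the manuscript.

```python
# Flux index from LC3-II densitometry, following the reviewer's +/- BafA1
# comparison. All band intensities below are MADE-UP placeholders.
def flux_index(lc3_minus_baf, lc3_plus_baf, load_minus=1.0, load_plus=1.0):
    # Normalise each band to its loading control, then take the ratio.
    return (lc3_plus_baf / load_plus) / (lc3_minus_baf / load_minus)

print("control       :", flux_index(1.0, 3.2))  # ratio >> 1: active flux
print("ULK1 inhibited:", flux_index(1.4, 1.6))  # ratio near 1: impaired flux
```

A ratio well above 1 indicates lysosomal turnover of LC3-II; a ratio near 1 indicates that little LC3-II is being degraded, the pattern the authors report under ULK1 inhibition.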
1st Authors' Response to Reviewers, October 6, 2020

We appreciate all the Reviewers' work in helping to review our manuscript and we hope to have addressed their comments sufficiently in the revised manuscript version.
Reviewer #1 (Comments to the Authors (Required)):

In the manuscript by Zacharia and Ganley the authors study the role of ULK1 in starvation induced autophagy using small molecule inhibitors of ULK1 kinase activity. They find that inhibition of ULK1 impairs the process of autophagosome formation not only at the nucleation stage but also at later stages. They further find that ULK1 activity is not strictly required for Vps34 activation, PI3P production and WIPI puncta formation. The paper presents interesting insights for scientists working in the field of autophagy and in particular those studying the process of autophagosome formation. The data mostly support the authors' conclusion, although a more quantitative approach would help to better support the author statements. In addition, as detailed below, some of the statements related to the inhibition of autophagic flux and the stage(s) at which autophagosome formation is blocked upon inhibitor treatment should be toned down or backed up with additional experiments.
-We would like to thank the Reviewer for their hard work in going through our manuscript and offering helpful comments. We believe that the manuscript is much more robust now.
Major points:

1) In relation to figure 1E the authors state that inhibition of ULK1 blocks autophagic flux (line 212). According to the gel in figure 1E, LC3B-II is still detectable upon ULK1 inhibition, likely suggesting a slower flux, but not a complete blockage. Quantification of LC3B-II levels, perhaps in combination with analysis of p62 levels, would allow a better assessment of autophagy flux.
-We agree with the Reviewer here and think in general there was perhaps a slight misunderstanding, likely due to a lack of precision in our terminology, which we apologise for. We assume the Reviewer is talking in absolute terms here and it is very rare that pharmacological inhibition results in a complete blockage of activity in cells; thus our terminology of inhibition and blockage was used a bit more loosely, to incorporate a significant amount of inhibition while not necessarily meaning complete inhibition. We have changed the phrasing here slightly (and in other places) to hopefully clarify this: "Inhibition of ULK1 with MRT68921 was confirmed by western blot analysis of phospho-ATG13 (at serine 318), a well characterised substrate of ULK1, and a significant block in LC3-II flux (Fig. 1E and F)."

-We have also quantified LC3 flux by western blot and included the data in the new Fig. 1F - as in our previous publication (Petherick et al., 2015), there is no significant increase in LC3 accumulation -/+ Bafilomycin in the presence of MRT68921. We feel this data shows that LC3 flux is impaired. We also tried to analyse p62 flux, as suggested by the reviewer; however, with the MEFs used in this study, we found very little p62 flux under the time course used (regardless of whether inhibitor was present), implying that autophagy of p62 in these cells occurs at a slower rate compared to that of LC3. As another assay of autophagic flux, we carried out flow cytometry of tandem LC3 expressing cells (new Fig. 3C). As can be seen, all the inhibitors significantly impair autophagy.
- Fig. 1

2) Figure 2: the authors state that in ULK1/2 DKO cells LC3 puncta are vastly diminished (lines 231-233). Quantification of the number of LC3 puncta/cell would better support this statement. Moreover, it is not clear what is the phenotype of the DKO cell lines, in absence of ULK1 inhibitor treatment. This panel should be added to the figure. In addition, the size of the autophagosomes could be measured and quantified.
-We apologise for this omission and have now included the control DKO panel and quantitation for LC3 puncta number and size - the numbers do indeed support our previous conclusions. We do note that the numbers here are slightly different from later quantitation (Fig. 4B); however, the cells here express exogenous ULK1 and quantitation was carried out on a different microscope (due to COVID restrictions we were not able to use the same microscope). This is made clear in the legend of Fig. 4 and the Methods section.

3) In lines 246-247 it is stated that, upon ULK1 inhibition, the observed autophagosome-like structures do not traffic to the lysosome. This statement is not supported by data. The accumulation of bigger autophagosome-like structures could be due to a slower process of autophagosome formation which leads to the accumulation of more autophagy related proteins, but these structures might ultimately still traffic to lysosomes.
-We apologise for the confusion here and have removed the statement that trafficking is blocked - we were referring to data with LC3 flux suggesting the structures are impaired in their lysosomal degradation (from this study and our previous one). Please also see the response to point 1 above: we agree that the formation of autophagosomes is slowed, which in turn significantly impairs their flux. We have altered the text slightly to hopefully clarify this: "In contrast, loss of ULK1 kinase activity (either by genetic modification or pharmacologically) still results in recruitment of the autophagic machinery and appearance of autophagosome-like structures, although these are aberrantly larger in size and display reduced lysosomal flux (Fig. 1E and F, Fig. 2, Fig. 3 and [24]). This is in support of previously published data showing that catalytically dead ULK1 blocks autophagy [29,30]."

4) In line 269 the authors say that treatment with ULK1 inhibitors inhibits autophagic flux. According to the quantification of LC3-II levels, LC3B accumulation is still visible upon Bafilomycin treatment, although to a lesser extent than in WT cells. What can be concluded from the data is that the flux might be slower or decreased upon inhibition of ULK1, but not inhibited or blocked.
-As with the above points, we have changed the phrasing slightly to make it clear we are not implying a complete block/inhibition. Text changed: "Autophagic flux was also impaired by all of the three ULK1 inhibitors"

Minor points:

1) The authors should show a characterization of the stable cell lines used in Figure 1. For example, the expression levels of GFP-LC3 and mCherry-DFCP1, in comparison to WT proteins, should be shown.
-We have now included a western blot of the cell line in a new Fig. S1. Unfortunately, the antibody we have for DFCP1 does not detect endogenous protein. We have included a blot for the exogenous mCherry-DFCP1, but as this is stable retroviral-mediated expression (usually low) and we are not comparing to non mCherry-DFCP1-expressing cells, we feel this does not impact our conclusions.
- Fig. S1

2) In line 202, when the authors describe the motility of LC3-positive structures, they should refer to the movie rather than the figure.
-We have changed the text to reflect this: "Following dissociation, the autophagosomes remained positive for GFP-LC3 and motile in the cytoplasm (Movie S1 and Fig. 1B-D)."

3) In line 213-214 (related to figure S1) the authors state that autophagosomes appearing upon ULK1 inhibition are positive for ATG2B. To support such a statement colocalization of ATG2B with LC3 or other autophagosomal marker should be shown. Alternatively, the supplementary figure could be removed, since it doesn't contribute to the message of the manuscript.
-On the Reviewer's advice, we have removed this figure.

4) In the paragraph related to Figure 3 the effect on autophagy of different ULK1 inhibitors is compared. Although the effect of the inhibitors is well described towards the end of the section, the statements at the beginning of it (lines 263-265) are not very accurate. For example, the activity of AMPK is actually affected by treatment with SBI0206965.
-We apologise for the confusion and have now altered the text to read: "Initially, we performed immunoblotting to identify concentrations and timepoints where ULK1 activity is sufficiently suppressed and LC3 flux is inhibited, as well as to confirm the activity status of upstream autophagy-regulating kinases (mTORC1 and AMPK)." -And: "A non-significant reduction in pS555 ULK1 levels was observed upon SBI0206965 treatment in Fed conditions, implying potential AMPK inhibition (Fig. 3A and S3B), which has also been reported elsewhere [32]. However, given that AMPK phosphorylation of ULK1 (pS555) is dramatically reduced under these autophagy-inducing conditions, we assume that any AMPK inhibitory effects of SBI0206965 will have a negligible impact on ULK1 in this instance."

5) In lines 287-289 the autophagosomal structures obtained upon ULK1 inhibition are characterized in number and size. Their brightness is evaluated by tracing a line across 2 or more puncta in the figure. This quantification of the brightness does not reflect the whole population of puncta. A better way of quantifying this parameter would be to measure the average signal intensities of ULK1 and LC3 puncta. Later in line 290 they say that ULK1 and LC3 decorate most of the enlarged autophagosomes. To better support this statement colocalization analysis of ULK1 and LC3 is needed.
-We have now included quantitation of LC3 and ULK1 co-localisation across the whole population (new Fig. 4B). We have kept the line scans as we feel they offer a qualitative view of colocalization that is complementary to the new quantitative data.
- Fig. 4

6) Line 308 refers to Figure 5C which does not exist.
-We are sorry for this mistake, it has been corrected to 5B.

7) In figure S6 it is curious that upon starvation, treatment with Vps34 inhibitors leads to loss of ULK1 puncta, which then are restored upon treatment with ULK1 inhibitors. Maybe the authors could comment on these data.
-We agree with the Reviewer that this is intriguing data, yet we cannot fully explain this at the moment. We have added the following to the text where the Figure is described: "In the absence of ULK1 inhibition, VPS34-IN1 also prevented ULK1 puncta accumulation, though we cannot rule out that puncta still form but are smaller and harder to distinguish against the high background staining of the primary antibody used to detect ULK1 (Fig. 5A and B). In support of the latter, enlarged ULK1 puncta were still forming in cells treated with both VPS34-IN1 and MRT68921 or ULK-101 (although with ULK-101 there appeared fewer in total), strongly suggesting that ULK1 is upstream of VPS34 (Fig. 5A, B and S6)."

Reviewer #2 (Comments to the Authors (Required)):

Summary

In this manuscript, Zachari and Ganley examined the induction phase of starvation-induced autophagy in mammalian cells. In particular, the authors sought to consolidate the role of ULK1's kinase activity in the formation of autophagosomes and their precursors using live-cell imaging and confocal microscopy. Remarkably, the authors found that treatment with three distinct ULK1 inhibitors only delayed the appearance of omegasomes and autophagosomes but did not abolish their formation. Consistent with previous findings, the authors showed that under these treatment conditions both otherwise transient autophagic structures persisted. Intriguingly, the authors found that these apparent stalled or incomplete autophagosomes were only present upon pharmacological or genetic inhibition but not upon complete loss of ULK1. Lastly, the authors went on to show that ULK1- and WIPI2-positive pre-autophagic structures formed in a manner dependent on the activity of the lipid kinase hVps34 but independent of ULK1's kinase activity. Together, this work provides several important mechanistic insights on catalytic and non-catalytic functions of ULK1 with potential far-reaching implications for our understanding of the signaling events during autophagosome biogenesis. While this is indeed an elegant, well-controlled and -rationalized study, a few points still need to be addressed:

-We thank the Reviewer for their supportive comments and thorough examination of our manuscript.
1) Do these inhibitors disrupt the ULK1 complex?
-We thank the Reviewer for raising this important point. We carried out an endogenous ULK1 co-IP from cells treated with inhibitors and found that complex formation was similar among all treatments. The new data is shown in Fig. S3C and mentioned in the text: "In addition, we found that treatment of cells with the three inhibitors did not appreciably disrupt the ULK1 complex itself, as determined by co-IP of ATG13 and FIP200 with ULK1 (Fig. S3C)."
- Fig. S3

2) Would reconstitution of ULK1/2 KO cells with an ULK1 variant that lacks the kinase domain phenocopy the expression of catalytically inactive ULK1 in this setting?
-This was an interesting suggestion and we tried to express an ULK1 truncation mutant lacking the kinase domain. However, we found it to be very poorly expressed in preliminary work, thus we abandoned this study as it would make interpretation of the results difficult due to inconsistencies in ULK1 protein expression between constructs.
3) The authors may want to consider a scenario in which ULK1's activity is required to prime a second kinase which then allows autophagosome formation to proceed. One of the known ULK1 targets might function in this regard. An obvious (and easy) candidate to test would be TBK1.
-We agree with the Reviewer on this point, which was actually a major part of the first author's PhD project. However, we have been unsuccessful to date in identifying the direct target of ULK1 in this instance. We had previously looked at TBK1 in our first study, but found that ULK1 inhibition still resulted in large puncta in double TBK1/IKKe knockout MEFs, implying TBK1 is not the target (Petherick et al., 2015).
Reviewer #3 (Comments to the Authors (Required)):

Zachari et al. present interesting findings analysing the effects of a ULK1 inhibitor, MRT68921, on autophagosome initiation and maturation. The data in this manuscript are robust, well quantified and are supported by previous findings from the same lab (Petherick et al, JBC, 2015). The authors extend their analyses to include multiple ULK1 inhibitors and compare those to the effects of inhibiting Vps34 activity on autophagosome biogenesis. They show that the ULK1 inhibitors (and kinase dead ULK1 mutant) do not disrupt LC3 lipidation and phagophore localization of various ATG proteins (including ULK1, WIPI2, ATG2, and LC3). However, later stages of autophagosome biogenesis appear to be disrupted by ULK1 kinase inhibition. Analysing the stages of autophagy disrupted by ULK1 inhibitors, in comparison to the frequently used Vps34 inhibitors, provides important tools to dissect autophagosome biogenesis and distinguish the specific relevance of ULK1 kinase activity.
-We thank the Reviewer for all their time and effort in going through our manuscript.
I only have a few minor comments that mainly involve textual changes and data analyses:

Figure S5: The conclusion on page 10 that the structures forming in the presence of ULK1 inhibitors are stable and still present after 8hrs of starvation is not well supported by this figure and in the absence of control images. Can the authors reanalyse the live imaging data in Figure 1 to measure the lifetime of LC3 puncta in the presence/absence of ULK1 inhibitors? Alternatively, the authors may simply alter the conclusions of this figure.
-Unfortunately, we have found it tricky to image these structures over a long period of time - due to their movement, it requires taking frequent images (every 30 seconds) to ensure that we image the same puncta. In our microscopy set-up, this results in phototoxicity over any period of time longer than an hour. In the time course of Fig. 1, the inhibited structures remained stable, but we only imaged these structures for 30 mins on average. We have altered the conclusions of this section to now read: "Large LC3 puncta were still apparent following 8 hours of EBSS starvation with inhibitors, though further work is needed to confirm the actual half-life of these structures (Fig. S5)."

Figure 3A: Comparing the ratio of LC3-II between starved cells and cells starved in the presence of BafA1 to confirm that lysosome fusion is affected in the presence of the ULK1 inhibitor would strengthen this conclusion. This is a very interesting finding and could be further supported by LC3-LAMP1 colocalisation experiments or cargo degradation (e.g. p62), if possible.
-We thank the reviewer for this comment. Though we have not shown that fusion itself is blocked, we now have additional quantitative data showing that LC3-lysosomal flux is impaired upon ULK1 inhibition (Fig. 1E-F, as well as Fig. 3A and B). To further support this, we tried p62 flux assays, but we have found that in MEFs, p62 turnover is slow compared to that of LC3, so in our experiments we found very little change regardless of the presence or absence of inhibitor. To assess autophagic flux in another way, we expressed tandem mCherry-GFP-LC3 in our cells and monitored flux using flow cytometry (new Fig. 3C). As can be seen, autophagy is significantly impaired with all three inhibitors. Given that autophagosomal structures still form, and that with ULK1 inhibition there are more LC3 puncta that are positive for ULK1 (new quantitative data in Fig. 4B), we take this to mean that there is a defect downstream of initiation but upstream of fusion. We do appreciate that further work is needed, beyond this current manuscript, to pinpoint the exact defect (and to identify the key ULK1 substrate) that occurs with ULK1 inhibition. We hope to address this in a future manuscript.

The authors could use arrow heads to mark 1-2 puncta in the different channels of the live imaging movies (supplementary data) to ease their monitoring.
-We have added arrows to the movies to highlight the structures shown in Fig. 1A.

Along with the points listed below, please also attend to the following:

- please upload your supplementary figures as single files

If you are planning a press release on your work, please inform us immediately to allow informing our production team and scheduling a release date.
To upload the final version of your manuscript, please log in to your account: https://lsa.msubmit.net/cgi-bin/main.plex You will be guided to complete the submission of your revised manuscript and to fill in all necessary information. Please get in touch in case you do not know or remember your login name.
To avoid unnecessary delays in the acceptance and publication of your paper, please read the following information carefully.
A. FINAL FILES:
These items are required for acceptance.
--An editable version of the final text (.DOC or .DOCX) is needed for copyediting (no PDFs).
--High-resolution figure, supplementary figure and video files uploaded as individual files: See our detailed guidelines for preparing your production-ready images, https://www.life-science-alliance.org/authors

--Summary blurb (enter in submission system): A short text summarizing the study in a single sentence (max. 200 characters including spaces). This text is used in conjunction with the titles of papers, hence should be informative and complementary to the title. It should describe the context and significance of the findings for a general readership; it should be written in the present tense and refer to the work in the third person. Author names should not be mentioned.
B. MANUSCRIPT ORGANIZATION AND FORMATTING:
Full guidelines are available on our Instructions for Authors page, https://www.life-science-alliance.org/authors

We encourage our authors to provide original source data, particularly uncropped/-processed electrophoretic blots and spreadsheets for the main figures of the manuscript. If you would like to add source data, we would welcome one PDF/Excel-file per figure for this information. These files will be linked online as supplementary "Source Data" files.

**Submission of a paper that does not conform to Life Science Alliance guidelines will delay the acceptance of your manuscript.**

**It is Life Science Alliance policy that if requested, original data images must be made available to the editors. Failure to provide original images upon request will result in unavoidable delays in publication. Please ensure that you have access to all original data images prior to final submission.**

**The license to publish form must be signed before your manuscript can be sent to production. A link to the electronic license to publish form will be sent to the corresponding author only. Please take a moment to check your funder requirements.**

**Reviews, decision letters, and point-by-point responses associated with peer-review at Life Science Alliance will be published online, alongside the manuscript. If you do want to opt out of having the reviewer reports and your point-by-point responses displayed, please let us know immediately.**

Thank you for your attention to these final processing requirements. Please revise and format the manuscript and upload materials within 7 days.
Thank you for this interesting contribution, we look forward to publishing your paper in Life Science Alliance.
Role of Bi-Directional Migration in Two Similar Types of Ecosystems
Migration is a key ecological process that enables connections between spatially separated populations. Previous studies have indicated that migration can stabilize chaotic ecosystems. However, the role of migration for two similar types of ecosystems, one chaotic and the other stable, has not yet been studied properly. In the present paper, we investigate the stability of ecological systems that are spatially separated but connected through migration. We consider two similar types of ecosystems that are coupled through migration, where one system shows chaotic dynamics and the other shows stable dynamics. We also note that the direction of the migration is bi-directional and is regulated by the population densities. We propose and analyze the coupled system. We also apply our proposed scheme to three different models. Our results suggest that bi-directional migration makes the coupled system more regular. We have performed numerical simulations to illustrate the dynamics of the coupled systems.
Introduction
In mathematical biology, population theory plays an important role. Historically, the first model of population dynamics was formulated by Malthus [1] and was later adapted for more realistic situations by Verhulst [2]. Lotka and Volterra [3,4] first modeled oscillations occurring in natural populations. Subsequently, the Lotka-Volterra model was modified by several researchers, and many of them observed chaotic dynamics [5][6][7][8][9]. The occurrence of chaos in a simple ecological system motivated researchers to investigate complex dynamical behaviors of ecological systems, such as bi-stability, bifurcation and chaos. However, in real-world populations, the evidence of chaos is rare. In ecology, until now, many researchers have investigated three-species food chain/web models with the aim of controlling the chaos by incorporating several biological phenomena [10][11][12].
Spatial structure is an important factor in ecological systems. Natural systems are rarely isolated but rather interact among themselves as well as with their natural surroundings, and the dynamics of ecological systems connected by migration are very different from the dynamics of the individual systems. The concept of a metapopulation is a formalism to describe spatially separated interacting populations [13,14]. A metapopulation consists of a group of spatially separated populations living in patches; individuals are allowed to migrate to surrounding patches. Levins (1969) [13] proposed a metapopulation theory and applied it in a pest-control situation. In landscape ecology and conservation biology, the idea of a metapopulation plays an important role [15,16].
In population biology, two systems can be coupled through migration, which is a common biological phenomenon and plays a vital role in the stability of ecosystems. Migration has been studied in a variety of taxa [17][18][19]. In the stability of an ecosystem, migration can have a stabilizing effect [20][21][22][23]. Holt (1985) [23] observed that passive dispersal between sink and source habitats can stabilize an otherwise unstable system. MacCullum observed that immigration could stabilize a chaotic system of the crown-of-thorns starfish Acanthaster planci and its associated larval recruitment patterns [21]. Stone and Hart [22] observed that a discrete-time chaotic system could be stabilized by constant immigration. Silva et al. [24] also synchronized chaotic oscillations of uncoupled populations through migration. Furthermore, it has been established that unstable equilibria of a single-patch predator-prey model cannot be stabilized by coupling with identical patches [25]. The persistence of coupled locally unstable systems depends on asynchronous behaviors between the populations [25][26][27]. Ruxton [28] showed that weak coupling between two chaotic systems exhibited simple cycles or remained at a stable level and reduced populations' extinction probabilities. Recently, Pal et al. [29] investigated the effect of bi-directional migration on the stability of two non-identical ecosystems, which were connected through migration. They observed that an increase in the rate of migration could stabilize the non-identical coupled ecosystem. The above observations clearly indicate that migration has a major role in stabilizing chaotic ecosystems. However, the role of bi-directional migration for two similar types of ecosystems, where one is chaotic and the other is stable in nature, has not yet been investigated properly.
In the present paper, we consider metapopulation dynamics of spatially separated food webs that are connected through bi-directional migrations. The aim of the present study is to investigate the role of migration on the stability of a coupled ecosystem for which one system shows chaotic dynamics and the other system shows stable dynamics. In the next section, we formulate the model and analyze its behavior regarding the interior equilibrium point. In Section 3, we show the applications of the present scheme in three different models. Finally, the paper ends with a brief conclusion.
General Model Formulation and Stability Analysis
Two isolated systems can be coupled via migration. We consider the general case of two coupled ecological systems:

$$\frac{dX}{dt} = f(X) + F(X, Y), \qquad \frac{dY}{dt} = g(Y) + G(X, Y), \tag{1}$$

where X and Y are the variables in vector notation. The individual systems are described by the functions f(X) and g(Y); F(X, Y) and G(X, Y) are coupling functions. The equilibrium solutions of the uncoupled system are given by f(X) = 0 and g(Y) = 0. When coupling occurs, the equilibrium points of the system given by Equation (1) are given by f(X) + F(X, Y) = g(Y) + G(X, Y) = 0.

Now we consider two three-species food-chain ecological systems that are coupled through bi-directional migrations. In bi-directional migration, a population can migrate from one patch to another depending on the population densities. The flow of the migration is from higher to lower density. Therefore, in bi-directional migration, the migration depends on the relative density difference between the two patches. Then Equation (1) with bi-directional migration can be written as

$$\begin{aligned} \frac{dx_1}{dt} &= f_1(x_1, y_1, z_1) + k_1(x_2 - x_1), & \frac{dx_2}{dt} &= g_1(x_2, y_2, z_2) + k_1(x_1 - x_2),\\ \frac{dy_1}{dt} &= f_2(x_1, y_1, z_1) + k_2(y_2 - y_1), & \frac{dy_2}{dt} &= g_2(x_2, y_2, z_2) + k_2(y_1 - y_2),\\ \frac{dz_1}{dt} &= f_3(x_1, y_1, z_1) + k_3(z_2 - z_1), & \frac{dz_2}{dt} &= g_3(x_2, y_2, z_2) + k_3(z_1 - z_2), \end{aligned} \tag{2}$$

where x_1, y_1, and z_1 are the populations of system 1 and x_2, y_2, and z_2 are the populations of system 2; f_i (i = 1, 2, 3) and g_i (i = 1, 2, 3) are the functions describing systems 1 and 2, respectively; and k_1, k_2, and k_3 are the migration coefficients of the three different populations.

To study the stability behavior of the coupled system around the interior equilibrium point, we evaluate the 6 × 6 Jacobian matrix of Equation (2) there; its entries are the partial derivatives of the right-hand sides adjusted by the migration coefficients (for example, M_9 = g_{3z_2} − k_3), where the suffixes denote the partial derivatives with respect to the corresponding variable. The characteristic equation of the above Jacobian matrix is

$$\lambda^6 + \sigma_1 \lambda^5 + \sigma_2 \lambda^4 + \sigma_3 \lambda^3 + \sigma_4 \lambda^2 + \sigma_5 \lambda + \sigma_6 = 0,$$

where the coefficients \sigma_i are combinations of the Jacobian entries. Now, the eigenvalues of the characteristic equation are negative or have negative real parts if all Routh-Hurwitz (RH) determinants (RH_i, i = 1, 2, ..., 6) are positive, where RH_i is the leading principal minor of order i of the Hurwitz matrix with (p, q) entry \sigma_{2p-q} (taking \sigma_0 = 1, and \sigma_j = 0 if j < 0 or j > n).
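As a numerical illustration of the Routh-Hurwitz check described above, the following sketch computes the leading principal minors of the Hurwitz matrix from the characteristic polynomial of a 6 × 6 Jacobian. The example matrix is a stand-in; in practice one would evaluate the Jacobian of Equation (2) at the interior equilibrium.

```python
import numpy as np

def routh_hurwitz_determinants(coeffs):
    """Leading principal minors of the Hurwitz matrix for
    a0*l^n + a1*l^(n-1) + ... + an (a0 > 0). All minors positive
    <=> every root has negative real part."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):          # 1-indexed Hurwitz convention
        for j in range(1, n + 1):
            k = 2 * j - i
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    return [np.linalg.det(H[:m, :m]) for m in range(1, n + 1)]

# Stand-in Jacobian; replace with the Jacobian of Equation (2) at E*.
J = np.diag([-0.97, -0.15, -0.53, -0.11, -0.60, -1.20])
coeffs = np.poly(J)                    # characteristic polynomial coefficients
print(all(d > 0 for d in routh_hurwitz_determinants(coeffs)))  # True here
```

Because the stand-in Jacobian has only negative eigenvalues, all six determinants come out positive, matching the criterion stated above.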
Applications
Migration within a population with spatial subdivision is important in some species and systems. It is observed that, if two identical patches are coupled through migration, then the coupled system acts exactly as a single-patch system. The persistence of coupled locally unstable systems depends on asynchronous behaviors among populations [25][26][27]. It is to be noted that two identical chaotic systems cannot be stabilized by diffusive migration. Here we consider two tri-trophic food-chain systems of the same type with different parameter values, where one system shows chaotic dynamics and the other system shows stable dynamics. We also note that the two tri-trophic food web systems are spatially separated but are connected through bi-directional migrations. In this section, we describe the application of the scheme developed in Section 2 to three different models, namely, the Hastings-Powell (HP) model, the Upadhyay-Rai (UR) model and the Priyadarshi-Gakkhar (PG) model, which are able to produce stable dynamics as well as chaotic dynamics for different sets of parameter values.
Hastings-Powell Model
In 1991, Hastings and Powell [6] proposed and analyzed a three-species food-chain model with a Holling type II functional response. The model is known for exhibiting chaotic dynamics in a continuous-time food chain. The non-dimensional HP model is governed by the following equations:

$$\begin{aligned} \frac{dx}{dt} &= x(1 - x) - \frac{a_1 x y}{1 + b_1 x},\\ \frac{dy}{dt} &= \frac{a_1 x y}{1 + b_1 x} - \frac{a_2 y z}{1 + b_2 y} - d_1 y,\\ \frac{dz}{dt} &= \frac{a_2 y z}{1 + b_2 y} - d_2 z, \end{aligned} \tag{3}$$

where x, y and z are the densities of the prey, middle-predator and top-predator populations, respectively, and a_1, a_2, b_1, b_2, d_1 and d_2 are the non-negative parameters that have the usual meanings [6].
Hastings and Powell [6] studied the model given by Equation (3) and observed switching of the dynamics of the system between stable focus, limit cycle oscillations and chaos by changing the parameter b_1.
Coupling between Chaotic HP Model and Stable HP Model
The HP model shows different dynamical behaviors, including chaos. In the present section, we investigate the dynamics of the coupled ecosystem, for which one HP system shows chaotic dynamics and the other HP system shows stable dynamics. Here, we assume that the two different systems are connected by migration and that the direction of the migration is bi-directional. Further, all populations are free to migrate from one system to another. We denote the chaotic HP system with subscript 1 and the stable HP system with subscript 2. The coupled system is governed by the following equations:

$$\begin{aligned} \frac{dx_1}{dt} &= x_1(1 - x_1) - \frac{a_1 x_1 y_1}{1 + b_{11} x_1} + k_1(x_2 - x_1), & \frac{dx_2}{dt} &= x_2(1 - x_2) - \frac{a_1 x_2 y_2}{1 + b_{21} x_2} + k_1(x_1 - x_2),\\ \frac{dy_1}{dt} &= \frac{a_1 x_1 y_1}{1 + b_{11} x_1} - \frac{a_2 y_1 z_1}{1 + b_2 y_1} - d_1 y_1 + k_2(y_2 - y_1), & \frac{dy_2}{dt} &= \frac{a_1 x_2 y_2}{1 + b_{21} x_2} - \frac{a_2 y_2 z_2}{1 + b_2 y_2} - d_1 y_2 + k_2(y_1 - y_2),\\ \frac{dz_1}{dt} &= \frac{a_2 y_1 z_1}{1 + b_2 y_1} - d_2 z_1 + k_3(z_2 - z_1), & \frac{dz_2}{dt} &= \frac{a_2 y_2 z_2}{1 + b_2 y_2} - d_2 z_2 + k_3(z_1 - z_2), \end{aligned} \tag{4}$$

where k_1, k_2, and k_3 are the migration coefficients of the prey, middle-predator and top-predator populations, respectively. We assume that the two systems differ only in the parameter b_1 in Equation (3); b_11 and b_21 are the parameters corresponding to systems 1 and 2, respectively.
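The coupled system of Equation (4) is straightforward to integrate numerically. A minimal SciPy sketch follows; it assumes the standard Hastings-Powell parameter set a1 = 5, a2 = 0.1, b2 = 2, d1 = 0.4, d2 = 0.01 (taken here to be the values of [6] — treat this as an assumption and check against that reference), with b11 = 3 for the chaotic patch, b21 = 2 for the stable patch, and the initial condition quoted in the text below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed standard (non-dimensional) Hastings-Powell parameters, cf. [6].
a1, a2, b2, d1, d2 = 5.0, 0.1, 2.0, 0.4, 0.01
b11, b21 = 3.0, 2.0                     # patch 1 chaotic, patch 2 stable

def coupled_hp(t, u, k):
    """Right-hand side of Equation (4): two HP patches with symmetric
    density-difference (bi-directional) migration of strength k."""
    x1, y1, z1, x2, y2, z2 = u
    def hp(x, y, z, b1):
        fx = x * (1 - x) - a1 * x * y / (1 + b1 * x)
        fy = a1 * x * y / (1 + b1 * x) - a2 * y * z / (1 + b2 * y) - d1 * y
        fz = a2 * y * z / (1 + b2 * y) - d2 * z
        return fx, fy, fz
    f1, f2 = hp(x1, y1, z1, b11), hp(x2, y2, z2, b21)
    mig = (k * (x2 - x1), k * (y2 - y1), k * (z2 - z1))
    return [f1[i] + mig[i] for i in range(3)] + [f2[i] - mig[i] for i in range(3)]

u0 = [0.7, 0.6, 12, 0.75, 0.5, 11]      # initial condition from the text
sol = solve_ivp(coupled_hp, (0, 5000), u0, args=(0.25,), rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])  # for k = 0.25 this should settle near E*_HP
```

For k = 0 the two patches evolve independently (chaotic and stable, respectively); increasing k toward and beyond the threshold reported below regularizes the joint dynamics.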
Non-Negativity of the Solutions:
We let R^6_+ = [0, ∞)^6 be the non-negative orthant in R^6. Then the interaction functions of the system given by Equation (4) are continuously differentiable and locally satisfy Lipschitz conditions in R^6_+. Thus, any solution of the system given by Equation (4) with non-negative initial conditions satisfies the non-negativity condition and exists uniquely in the interval [0, M) for some M > 0 ([30], Theorem A.4).
Boundedness of the Solutions:
We define the function

$$P(t) = x_1 + y_1 + z_1 + x_2 + y_2 + z_2. \tag{5}$$

Taking the time derivative of Equation (5) along the solutions of Equation (4), one can find positive constants \mu and Q such that

$$\frac{dP}{dt} + \mu P \le Q. \tag{6}$$

Applying the theorem of differential inequality [31], we obtain 0 < P(t) ≤ Q/\mu + (P(0) − Q/\mu)e^{−\mu t}, which implies that P ≤ Q/\mu + \epsilon, for any \epsilon > 0, for all sufficiently large t ≥ t_0. Therefore, all the solutions of the system given by Equation (4) are bounded. Hence, all the solutions of the system given by Equation (4) which are initiated in R^6_+ are positively invariant in the region \Omega = \{(x_1, y_1, z_1, x_2, y_2, z_2) \in R^6_+ : P ≤ Q/\mu + \epsilon\}.

Now we describe the numerical simulations for the system given by Equation (3) and the coupled system given by Equation (4), using the parameter values taken from [6]. Choosing b_11 = b_21 = 3, the coupled system given by Equation (4) remained chaotic for any coupling strength (migration rate). We then chose two different values of b_1 (b_1 = 3 and b_1 = 2), for which the HP model of Equation (3) showed chaotic dynamics and stable dynamics, respectively. For system 1, we set b_11 = 3, so that the system showed chaotic dynamics, and for system 2, we set b_21 = 2, so that the system showed stable dynamics (Figure 1). The initial condition for the simulation of the coupled system given by Equation (4) was (x_1(0), y_1(0), z_1(0), x_2(0), y_2(0), z_2(0)) = (0.7, 0.6, 12, 0.75, 0.5, 11). We could then investigate the effect of bi-directional migration between the two systems. For simplicity, we considered k_1 = k_2 = k_3 = k and drew the bifurcation diagram of the coupled system of Equation (4) with respect to the rate of migration k (Figure 2). It is to be noted that in the absence of migration (k = 0), system 1 showed chaotic dynamics and system 2 showed stable dynamics. When we introduced migration between these two systems, the coupled system became stable through a Hopf bifurcation once the migration rate (k) crossed a threshold value, k*_HP = 0.0145 (Figure 2). We observed that a small migration destabilized the stable system, and the coupled system showed higher periodic and chaotic oscillations, but if the strength of migration was increased gradually, then the coupled system became stable. We also observed that for k = 0.25, the coupled system of Equation (4) had a unique positive interior equilibrium E*_HP (0.837058, 0.0841652, 12.2809, 0.692788, 0.171415, 12.4183). We also obtained the RH determinants RH_1 = 2.4080 > 0, RH_2 = 4.2563 > 0, RH_3 = 2.7726 > 0, RH_4 = 0.3547 > 0, RH_5 = 0.0058 > 0, and RH_6 = 7.4751 × 10^−6 > 0, which satisfied the RH stability criterion of order 6. The eigenvalues of the coupled system given by Equation (4) were (−0.9721, −0.1091 + 0.1388i, −0.1091 − 0.1388i, −0.1544, −0.5317 + 0.0778i, −0.5317 − 0.0778i). Hence, the system given by Equation (4) was stable around the positive interior equilibrium E*_HP (Figure 3).

Further, we performed numerical simulations of the coupled system for the realistic parameter values considered by McCann and Yodzis [32]. McCann and Yodzis [32] considered the modified HP model [6]; they produced a range of more "plausible" parameter values and demonstrated the existence of chaos for a wide range of these values. We considered the following parameter values:

$$x_c = 0.4, \quad y_c = 2.01, \quad x_p = 0.08, \quad y_p = 5, \quad c_0 = 0.5, \tag{7}$$

which were taken from [32]. The model and the meaning of the parameter values are given in [32]. For system 1, we set r_0 = 0.161, so that the system showed chaotic dynamics, and for system 2, we set r_0 = 0.75, so that the system showed stable dynamics (Figure 4). The initial condition for the simulations of the coupled system was (x_1(0), y_1(0), z_1(0), x_2(0), y_2(0), z_2(0)) = (0.35, 0.5, 0.9, 0.35, 0.5, 0.9). When we introduced migration between the two systems (the chaotic system and the stable system), the coupled system showed limit cycle oscillations via period-halving bifurcations (Figure 5). We observed that a gradual increase in migration made the coupled system switch its stability from chaotic dynamics to limit cycle oscillations (Figure 6). Therefore, migration could stabilize the coupled system by producing a stable focus or more regular oscillations.
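A bifurcation diagram such as Figure 2 can be generated by integrating past the transient for each migration rate k and recording the local maxima of one population on the attractor: a single repeated maximum indicates a stable focus or period-one limit cycle, while a spread of distinct maxima indicates higher periodicity or chaos. The sketch below reuses coupled_hp and u0 from the previous code block; the transient and sampling windows are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def attractor_maxima(k, rhs, u0, t_transient=3000.0, t_sample=2000.0):
    """Distinct local maxima of x1 on the attractor for migration rate k."""
    t_eval = np.linspace(t_transient, t_transient + t_sample, 40000)
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), u0, args=(k,),
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    x = sol.y[0]
    peaks = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])   # interior local maxima
    return np.unique(np.round(x[1:-1][peaks], 4))

# Few maxima -> regular dynamics; many -> higher periodicity or chaos.
for k in (0.0, 0.005, 0.0145, 0.25):
    print(k, len(attractor_maxima(k, coupled_hp, u0)))
```

Sweeping k over a fine grid and plotting the returned maxima against k reproduces the structure of a bifurcation diagram, including the stabilization beyond the threshold k*_HP reported above.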
Upadhyay-Rai Model
Upadhyay and Rai [7,33] proposed and analyzed a tri-trophic food-chain model by considering the middle predator as a specialist predator and the top predator as a generalist predator.
The prey-specialist predator-generalist predator system is governed by the following equations [7,33]:

$$\begin{aligned} \frac{dx}{dt} &= m_1 x - n_1 x^2 - \frac{w x y}{x + D},\\ \frac{dy}{dt} &= -n_2 y + \frac{w_1 x y}{x + D_1} - \frac{w_2 y z}{y + D_2},\\ \frac{dz}{dt} &= c z^2 - \frac{w_3 z^2}{y + D_3}, \end{aligned} \tag{8}$$

where x, y and z are the densities of the prey, specialist predator, and generalist predator populations, respectively; m_1, n_1, n_2, w, w_1, w_2, w_3, D, D_1, D_2, D_3 and c are the non-negative parameters that have the usual meanings [7,33]. In the above model, the prey population (x) grows logistically; the specialist predator (y) predates the prey (the only food item available to the specialist predator) via a Holling type II functional response; and the generalist predator (z) reproduces sexually, its population growing quadratically (cz^2) and decaying as a result of intraspecific competition (−w_3 z^2/(y + D_3)). Additionally, males and females in the generalist predator population are assumed to be equal in number, and the mating frequency is directly proportional to the number of males as well as the number of females. The interaction between the generalist predator and the specialist predator follows a modified Leslie-Gower scheme. Here, the specialist middle predator is the favourite food choice of the generalist top predator, and the generalist predator feeds on other food items (alternative food resources) in case of a short supply of the middle predator.
It is to be noted here that the system given by Equation (8) is not always dissipative, and the solutions may blow up in finite time (explosive instability) depending on the parameter values and initial conditions [34]. In the recent literature, a few researchers have investigated different models that show finite-time blow-up in the solutions [34][35][36][37][38][39][40]. However, the above system given by Equation (8) shows very rich dynamics when w_3/(y + D_3) < c < w_3/D_3. Upadhyay and Rai [7,33] explored chaotic dynamics in the system by increasing the intrinsic growth rate m_1.
Coupling between Chaotic UR Model and Stable UR Model
In this section, we denote the chaotic UR system with the subscript 1 and the stable UR system with the subscript 2. The coupled system is governed by the following equations:

$$\begin{aligned} \frac{dx_1}{dt} &= m_{11} x_1 - n_1 x_1^2 - \frac{w x_1 y_1}{x_1 + D} + k_1(x_2 - x_1), & \frac{dx_2}{dt} &= m_{21} x_2 - n_1 x_2^2 - \frac{w x_2 y_2}{x_2 + D} + k_1(x_1 - x_2),\\ \frac{dy_1}{dt} &= -n_2 y_1 + \frac{w_1 x_1 y_1}{x_1 + D_1} - \frac{w_2 y_1 z_1}{y_1 + D_2} + k_2(y_2 - y_1), & \frac{dy_2}{dt} &= -n_2 y_2 + \frac{w_1 x_2 y_2}{x_2 + D_1} - \frac{w_2 y_2 z_2}{y_2 + D_2} + k_2(y_1 - y_2),\\ \frac{dz_1}{dt} &= c z_1^2 - \frac{w_3 z_1^2}{y_1 + D_3} + k_3(z_2 - z_1), & \frac{dz_2}{dt} &= c z_2^2 - \frac{w_3 z_2^2}{y_2 + D_3} + k_3(z_1 - z_2), \end{aligned} \tag{9}$$

where k_1, k_2, and k_3 are the migration coefficients of the prey, specialist predator and generalist predator populations, respectively. We assume that the two systems differ only in the parameter m_1 in Equation (8); m_11 and m_21 are the parameters corresponding to systems 1 and 2, respectively. Now we describe the numerical simulations of the system given by Equation (8) and the coupled system given by Equation (9), using the set of parameter values taken from [33]. Choosing m_11 = m_21 = 1.93, the coupled system given by Equation (9) remained chaotic for any coupling strength (migration rate). We then chose two different values of m_1 (m_1 = 1.93 and m_1 = 1.2), for which the UR model given by Equation (8) showed chaotic dynamics and stable dynamics, respectively (Figure 7). For system 1, we set m_11 = 1.93, so that the system showed chaotic dynamics, and for system 2, we set m_21 = 1.2, so that the system showed stable dynamics. The initial condition for the simulations of the coupled system given by Equation (9) was (x_1(0), y_1(0), z_1(0), x_2(0), y_2(0), z_2(0)) = (0.7, 0.5, 7, 0.7, 0.4, 6). We then investigated the effect of bi-directional migration on the two systems. For simplicity, we considered k_1 = k_2 = k_3 = k and drew the bifurcation diagram of the coupled system given by Equation (9) with respect to the rate of migration k (Figure 8). We observed that the coupled system given by Equation (9) became stable through a Hopf bifurcation when the migration coefficient crossed a threshold value, k*_UR = 0.21. We observed that when the migration was weak (k small), the stable system became unstable and the coupled system showed higher periodic and chaotic oscillations, but if the strength of migration was increased gradually, then the coupled system became stable. Further, we observed that for k = 0.25, the coupled system given by Equation (9) had a unique positive interior equilibrium E*_UR (22.7980, 15.6757, 19.5396, 15.1274, 10.5329, 16.5322). We also obtained the RH determinants RH_1 = 2.9322 > 0, RH_2 = 7.7900 > 0, RH_3 = 9.8214 > 0, RH_4 = 3.1712 > 0, RH_5 = 0.0300 > 0, and RH_6 = 8.2866 × 10^−4 > 0, which satisfied the RH stability criterion of order 6. The eigenvalues of the coupled system given by Equation (9) were (−1.1705, −0.0096 + 0.3009i, −0.0096 − 0.3009i, −0.6209, −0.5608 + 0.3227i, −0.5608 − 0.3227i). Hence, the coupled system given by Equation (9) was stable around the positive interior equilibrium E*_UR (Figure 9).
Priyadarshi-Gakkhar Model
Priyadarshi and Gakkhar [9] proposed and analyzed a tri-trophic food-web model consisting of a Leslie-Gower-type generalist predator, where the middle predator is a specialist predator and the top predator is a generalist predator.
The prey-specialist predator-generalist predator system, Equation (11), consists of three coupled differential equations whose explicit form is given in [9]; x, y and z are the densities of the prey, specialist predator, and generalist predator populations, respectively. The parameters w_1, w_2, w_3, w_4, w_5, w_6, w_7, w_8, w_9 and w_10 are non-negative and have the usual meanings [9]. The formulation of the above model is similar to that of the UR model. However, in this model, the specialist predator predates the prey according to a Holling type II functional response, whereas the generalist predator predates the prey and the specialist predator following a modified Holling type II functional response. It is to be noted that the system given by Equation (11) may not be dissipative and shows the blow-up phenomenon depending on the parameter values and initial conditions [34]. Priyadarshi and Gakkhar [9] explored a "snail-shell" chaotic attractor in the system.
Coupling between Chaotic PG Model and Stable PG Model
In this section, we investigate the dynamics of the coupled ecosystem, where one PG system shows chaotic dynamics and the other PG system shows stable dynamics. Here, we consider that the two different systems are connected by bi-directional migration. We assume that all populations are free to migrate from one system to the other. We denote the chaotic PG system with the subscript 1 and the stable PG system with the subscript 2. The coupled system, Equation (12), is obtained by adding the bi-directional migration terms of Equation (2) to two copies of Equation (11), where k_1, k_2, and k_3 are the migration coefficients of the prey, specialist predator and generalist predator populations, respectively. We assume that the two systems differ only in the parameter w_3 in Equation (11); w_31 and w_32 are the parameters corresponding to systems 1 and 2, respectively.
Conclusions
The persistence of coupled unstable systems depends on the maintenance of the asynchronous behavior among populations. Several types of asynchronous behaviors, such as the existence of refuge [41], biased dispersal [42], fixed differences in parameters [43], and so on, can enhance the stability of predator-prey systems. In the present paper, we considered two ecological systems of the same type that were connected through migration. We also considered different sets of parameter values so that one system (HP-1/UR-1/PG-1) showed chaotic dynamics and the other system (HP-2/UR-2/PG-2) showed stable dynamics. The direction of migration was taken as bi-directional and depended on the density difference of the populations in the two patches. We studied the effect of bi-directional migration on the chaotic ecosystem and stable ecosystem by considering three different types of food webs. We observed that small migration destabilized the stable system, and the coupled system showed higher periodic and chaotic oscillations, but if the strength of the migration was increased gradually, then above a threshold value, all the coupled systems (HP/UR/PG) became stable. Bi-directional migration can replace chaotic oscillations by a stable steady state or stable limit cycle. Therefore, migration makes the system more regular. In the present work, migration was considered as the coupling force; the migration could be both ways depending on the density difference of each population in the two patches. If the migration strength was weak, then we observed that the chaotic system dominated the dynamic properties of the coupled system. For a low migration rate, the population density of each patch changed very slowly. Intuitively, migration has a stabilizing effect. However, if the change in the population densities due to trophic interactions is greater than the change due to migration, then the population dynamics are likely to be dominated by trophic interactions. Therefore, the population dynamics of a coupled system may be unstable. However, if the migration strength is high enough, then the population densities of each patch quickly converge to the average density of the two patches, which may stabilize the coupled system.
Figure 9. Stable dynamics of the coupled system given by Equation (9) for k = 0.25.
Figure 13. Largest Lyapunov exponent of the coupled systems given by Equations (4), (9) and (12) with respect to the parameter k.
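For readers who wish to reproduce a scan such as the one in Figure 13, the sketch below estimates the largest Lyapunov exponent of a two-patch coupled system by the standard two-trajectory (Benettin-type) renormalization method. The local right-hand side `pg_rhs` is only a placeholder, since Equation (11) is not reproduced here; the published PG equations and parameter values must be substituted for the results to be meaningful, and all numerical settings (step size, transient, perturbation size) are illustrative assumptions.

```python
import numpy as np

# Placeholder local dynamics standing in for Eq. (11); the published PG
# right-hand side (Holling type II terms with parameters w1..w10) must be
# substituted here for quantitatively meaningful results.
def pg_rhs(u, w3):
    x, y, z = u
    dx = x * (1.0 - x) - w3 * x * y / (1.0 + x)
    dy = w3 * x * y / (1.0 + x) - 0.4 * y - y * z / (1.0 + y)
    dz = 0.1 * z - 0.05 * z * z / (0.5 + y)
    return np.array([dx, dy, dz])

def coupled_rhs(s, k, w31, w32):
    u1, u2 = s[:3], s[3:]
    mig = k * (u2 - u1)  # bi-directional, density-difference migration
    return np.concatenate([pg_rhs(u1, w31) + mig, pg_rhs(u2, w32) - mig])

def rk4_step(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(k, w31=1.8, w32=1.2, dt=0.01, n=100_000, d0=1e-8):
    f = lambda s: coupled_rhs(s, k, w31, w32)
    s = np.array([0.8, 0.3, 0.2, 0.7, 0.4, 0.3])
    for _ in range(20_000):              # discard the transient
        s = rk4_step(f, s, dt)
    sp = s + d0 * np.random.default_rng(0).standard_normal(6)
    acc = 0.0
    for _ in range(n):
        s, sp = rk4_step(f, s, dt), rk4_step(f, sp, dt)
        d = np.linalg.norm(sp - s)
        acc += np.log(d / d0)
        sp = s + (sp - s) * (d0 / d)     # renormalize the separation
    return acc / (n * dt)                # positive -> chaotic dynamics

# Example scan point; values are illustrative only with the placeholder model.
print(largest_lyapunov(k=0.05))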
Role of Salivary Biomarkers in Cystic Fibrosis: A Systematic Review
Background Salivary biomarkers could be used as a noninvasive alternative tool for diagnosing cystic fibrosis (CF). In this study, the significance of changes in salivary composition in patients with CF was systematically reviewed. Methods An electronic search was used to include studies published in English with a case-control, cohort, or cross-sectional design. The evaluated salivary components were extracted and summarized, and the included studies were assessed using the Strengthening the Reporting of Observational Studies in Epidemiology checklist. Results Out of 498 identified studies, nine met the eligibility criteria. Salivary electrolytes showed substantial alterations in the CF group, especially chloride and sodium. Total protein concentration was higher in patients with CF, whereas SCN− concentration was lower. In addition, a reduction in the salivary flow rate and amylase levels was found in patients with CF. Conclusion Alterations in salivary biomarkers among patients with CF could be used as a promising diagnostic tool for cystic fibrosis.
Introduction
Cystic fibrosis (CF) is a life-limiting, multisystem autosomal recessive genetic disorder with a wide range of clinical and genetic variants [1]. CF most commonly affects Caucasians, with 70,000 people diagnosed worldwide [2]. It is caused by mutations in the CF transmembrane conductance regulator (CFTR) gene on the long arm of chromosome 7, which lead to abnormal chloride and sodium transport across the epithelial cell membrane. As a result, this alteration affects hydration and mucociliary transport within exocrine glands, including the salivary glands [3]. CF is usually diagnosed on the basis of evidence of CFTR dysfunction, demonstrated by an abnormal sweat chloride test or a CFTR gene mutation. Other diagnostic tests may include the immunoreactive trypsinogen test, sputum tests, chest X-ray, CT scans, or pulmonary function tests.
Monitoring of CF has included sampling of numerous biofluids. In addition to genetic testing for CFTR mutations, the gold-standard diagnostic method is the sweat chloride ion concentration (≥60 mEq/L) [4]. Saliva was later introduced as a diagnostic modality [5]. Saliva has been utilized as a diagnostic tool for oral and systemic diseases [6-9], and its use as an early detection approach has attracted special attention. It has been highly regarded because of its noninvasive accessibility, the ease with which modestly trained individuals can perform the sampling, and the simple equipment needed to collect salivary samples. Its cost-effectiveness for screening larger populations is considered an advantage of saliva over serum.
CF respiratory disease has been selected to validate saliva as a diagnostic medium on the basis of well-founded studies of sputum and blood inflammation markers. Many of these publications reported significant differences in the levels of different protein markers between patients with CF and healthy subjects [5]. In addition, salivary electrolytes have exhibited changes depending on various CF-related factors [6].
A careful search of different databases for salivary biomarkers and their association with CF showed that few studies have investigated the changes in salivary components and biomarkers in patients with CF. Therefore, the present study aimed to systematically review the significance and medical uses of the changes in the salivary composition of patients with CF and to evaluate the feasibility of using these biomarkers for the diagnosis and clinical assessment of CF.
Search Strategy and Selection of Studies.
The search strategy was planned in accordance with the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses [10]. The review of the literature was based on the research question "what are the substantial changes that occur in the saliva of patients affected by CF?" and was developed using the Patient, Intervention, Comparison, and Outcome format [10]. This review covered studies published in English between January 2000 and December 2019. Observational studies, including case-control, cohort, or cross-sectional studies, addressing the question of this systematic review were included for analysis. PubMed, Scopus, Web of Science, EBSCO, and the Cochrane Library were searched. The search was accomplished through MeSH indexing, using various combinations of terms, including "cystic fibrosis, saliva, saliva biomarkers, salivary enzymes," joined by the Boolean operators "AND" and "OR", to cover all the relevant studies in the specified publication period. Moreover, the reference lists of the included studies were manually searched for any additional relevant articles.
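As an illustration of how such Boolean combinations can be assembled, the short sketch below builds one possible query string from the terms listed above; the exact strings submitted to each database are not reported in the paper, so this is an assumed reconstruction, not the authors' actual query.

```python
# Hypothetical reconstruction of a Boolean search string from the listed
# terms; the authors' actual database queries are not reported.
condition = '"cystic fibrosis"'
saliva_terms = ['"saliva"', '"saliva biomarkers"', '"salivary enzymes"']

query = f'{condition} AND ({" OR ".join(saliva_terms)})'
print(query)
# "cystic fibrosis" AND ("saliva" OR "saliva biomarkers" OR "salivary enzymes")
```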
Data Extraction and Quality Assessment. The following data were extracted from each included study: author and year, type of study, CF group (number of participants and age), control group (number of participants and age), measured saliva parameters, and primary outcomes of patients with CF compared with those of the control group; these data were analyzed in detail and then summarized in a table. In addition, quality assessment of the included studies was carried out to limit the risk of bias by using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist, and the studies were graded in accordance with the Olmos scale [11-13] as follows: A, if the study fulfilled >80% of the STROBE criteria; B, if 50%-80% of the criteria were met; and C, if <50% of the criteria were met.
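The Olmos grading rule quoted above is a simple threshold on the fraction of STROBE items satisfied. A minimal sketch follows; the function name and inputs are illustrative, not from the paper.

```python
def olmos_grade(items_met: int, items_total: int) -> str:
    """Grade a study from its STROBE checklist compliance:
    A if >80% of criteria are met, B if 50%-80%, C if <50%."""
    fraction = items_met / items_total
    if fraction > 0.80:
        return "A"
    if fraction >= 0.50:
        return "B"
    return "C"

print(olmos_grade(29, 34))  # e.g., 29/34 items met -> 'A'
```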
Results
A total of 498 articles were retrieved from the searched databases. After duplicates were excluded, 303 articles were analyzed. On the basis of the information provided in the study title and abstract, 274 publications were excluded for the following reasons: (1) irrelevant to the focus of this systematic review; (2) language other than English; or (3) experimental in vitro studies, animal models, case reports, and reviews. The full texts of the remaining 24 articles were retrieved and screened for eligibility. Nine publications met the eligibility criteria and were included in this systematic review (Figure 1).
Study Characteristics. The characteristics of the included studies are summarized in Table 1. Of the nine included studies, six had comparable numbers of participants in the experimental group (patients with CF) and the control group [14-19]. However, the number of participants in the control group was significantly higher than the number of patients with CF in one study [20]. By contrast, the number of patients with CF was higher than that of controls in one study [5], while this characteristic was unclear in the study by Minarowski et al. [21]. The mean age of participants was comparable between patients with CF and the control group in seven studies [5,14-18,20], while the mean age was not reported in two studies [4,21]. Male and female participants were comparable in six studies [5,15,16,18-20]. However, the number of females was significantly higher than that of males in the control group in one study [17], and no information regarding sex was reported in two studies [14,21].
The concentrations of salivary parameters in patients with CF were measured and compared with those of healthy controls in all included studies. Diversity was observed in the measured salivary parameters, the methods of measurement and analysis, and the purpose and outcome of each salivary parameter. Moreover, the saliva collection methods differed among the included studies. Aps et al. [14] investigated heterozygote and homozygote patients with CF to explore the effect of genetic heterogeneity on salivary components. Minarowski et al. [21] included healthy smokers in the control group to study salivary thiocyanate (SCN−) levels and compared them with patients with CF and healthy nonsmokers. Patients with non-CF bronchiectasis were included as a control group in the study by Livnat et al. [15]. Malkovskiy et al. [19] investigated the levels of SCN− in patients with CF, including those undergoing treatment with CFTR modulators, and reported their responses to therapy.
Quality Assessment.
Among the included studies, one was graded A [5], seven were graded B [14-20], and one was graded C [21]. Most pitfalls were in the methodology and discussion sections, as most studies did not provide adequate information about sample size calculation and sampling method, and some did not report information about the participants. In some studies, key results, limitations, and generalizability were missing from the discussion section (Table 1).
Discussion
In this review, we summarized the outcomes and significance of each biomarker present in saliva and the validity of using these biomarkers in the diagnosis, clinical assessment, and monitoring of patients with CF. These parameters included electrolytes, proteins, acids (pH and buffering capacity), enzymes, antioxidants, salivary osmolarity, and flow rate.
4.1. Electrolytes. The electrolyte concentrations in the saliva of patients with CF were analyzed using different assessment methods in four studies [14,15,18,20]. Some studies found substantial alterations that may aid CF diagnosis, especially for chloride (Cl) and sodium (Na) [14,18,20]. The first investigation of the salivary electrolytes of CF heterozygotes was conducted by Aps et al. [14]. Although the researchers found that different genotypes of patients with CF have different electrolyte concentrations, the electrolytes were higher in CF homozygotes, especially those with the ΔF508 mutation (the most common mutation in patients with CF) [14]. Elevated Na and Cl were also reported in numerous other excluded studies [22-24], while another study reported the opposite result [25]. Phosphate was also higher in patients with CF in the study by Aps et al. [14], while it was not statistically significant in Livnat et al.'s study [15]. Calcium (Ca) was also not statistically different in three studies [14,15,18]. Iron (Fe) and magnesium (Mg) were measured in only one study [15], which reported that Fe was not statistically significant and that Mg was lower in patients with CF than in the control groups.
In 2013, Gonçalves et al. concluded that Na and Cl are the most reliable electrolytes for comprehensive investigation as a possible diagnostic tool, because these two elements presented the highest values and sensitivity among the electrolytes; the researchers also recommended further studies with a larger population. In addition, a simultaneous comparison of the levels of Na and Cl in saliva and sweat could provide new insights regarding the diagnostic ability of saliva [18]. The authors conducted another study in 2019 and concluded that the saliva chloride (SaCl) and saliva sodium (SaNa) concentrations are candidates for use in CF diagnosis [18,20]. The researchers found a positive correlation between sweat chloride and SaCl and between sweat sodium (SwNa) and SaNa [20]. However, in a narrative review, Pedersen [26] concluded that SaNa levels are of doubtful use for CF diagnosis when saliva is obtained from the submandibular or parotid gland. Nevertheless, the employment of Na-responsive electrodes as a screening tool for CF has shown some potential [26-31].
4.2. Proteins. In the present review, analysis of protein concentration in saliva was reported by four studies [5,15-17]. Total protein concentration was higher in patients with CF in three studies [15-17]. Furthermore, albumin levels and glycoprotein concentrations were found to be not statistically significant [15,16]. Total protein concentration was higher in saliva samples collected before salivary stimulation [16]. Salivary inflammatory cytokines are elevated in patients with inflammatory diseases [32]. Such findings encouraged researchers to investigate these salivary proteins in patients with CF by using promising platforms [33,34]. Another study examined the levels of six proteins (VEGF, IP-10, IL-8, EGF, MMP-9, and IL-1β) at two different time points by using two different platforms; significant elevations in IP-10 and IL-8 were found, while a reduction in MMP-9 was observed in patients with CF compared with the control group. More interestingly, the levels of these proteins correlated with the clinical assessment of patients with CF and showed potential to be used as biomarkers for specific infections; the researchers found a significant correlation of IP-10 levels with FEV1 and disease severity [5]. In general, the studies examined in the present systematic review showed that total salivary protein was higher in patients with CF [15-17]; other studies also reported higher values of proteins and glycoproteins [35,36]. Cathepsin D activity was assessed and found to be higher in patients with CF before saliva stimulation, while glycoproteins were not statistically significant [16]. Cathepsin D is a proteolytic enzyme that becomes abundant in body fluids, including serum and saliva, during physiological wear [37,38]. Cathepsin D in saliva has also been used to diagnose and monitor patients with breast cancer [39]. Moreover, patients with pulmonary fibrosis and inflammation, including those with CF, show increased levels of cathepsin D [38].
4.3. Thiocyanate and Antioxidants.
The concentration of thiocyanate (SCN−) in the saliva of patients with CF is of great interest. Thiocyanate has a role in the host defense system as a substrate for lactoperoxidase, one of the antioxidant systems [21]. One study investigated the mean concentration of SCN− in patients with CF, healthy smokers, and healthy nonsmokers; healthy smokers exhibited the highest levels, followed by healthy nonsmokers and patients with CF [21]. Another study used two different methods to assess SCN− concentration in patients with CF, investigating whether SCN− concentration could be used as a biomarker for CFTR function [19]. The results showed a reduction in salivary SCN− in patients with CF with both techniques; however, the finding was significant only when Raman spectroscopy was used. Raman spectroscopy is considered a promising tool because of its ability to differentiate patients with CF receiving CFTR modulators, those with CF but without modulators, and healthy subjects. Furthermore, Raman spectroscopy was used to measure SCN− in a subject with the G551D mutation before and after administration of ivacaftor, one of the CFTR modulators. The authors concluded that Raman spectroscopy could be used to assess CFTR function through the salivary thiocyanate concentration [19].
Elevated oxidative stress is considered part of the pathogenesis of CF and other inflammatory diseases. As a consequence of this elevation, many harmful effects arise, such as inflammatory injury, loss of control over the inflammation process, organ failure, and dysfunction. These effects increase the importance of antioxidants, including the salivary antioxidant system in the oral cavity, for protection against such harmful effects [4,40]. A reduction in peroxidase activity and elevations in superoxide dismutase activity, uric acid concentration, and total antioxidant status have been observed in patients with CF [15]. Most of the salivary antioxidant enzymes and molecules were altered in patients with CF; this finding reflects a decreased defense against oxidative stress, which may be of clinical importance considering the primary risk of patients with CF [15]. In another study, a 55% reduction in salivary peroxidase was observed in patients with CF compared with the control group [17].
4.4. Amylase, Lactate Dehydrogenase (LDH), Glucose, Lactate, Bicarbonate, and Sialic Acid. The α-amylase digestive enzyme is one of the most abundant components of saliva. It breaks down carbohydrates to aid digestion; moreover, it can bind to some oral bacteria and participate in bacterial clearance [41]. A significant 55% reduction in amylase levels was found in patients with CF compared with the control group [17]. This reduction of amylase and salivary peroxidase could contribute to undesirable effects in the oral cavity of patients with CF [42]. Conversely, another study did not record any statistically significant difference in amylase levels [15]. The same authors evaluated various other changes in salivary composition, including LDH, which showed a significant 55% decrease in the saliva of the CF group compared with the healthy control group; this finding could be responsible for the oral mucosal changes in patients with CF [15]. The investigation of sialic acid showed a reduction of its concentrations in the saliva (total, free, and conjugated to glycoproteins) of patients with CF [17]. This acid is found in mucin and other glycoproteins; it plays an essential role in protecting the oral mucosa by providing lubrication, maintaining mucosal permeability, and preventing the penetration of harmful substances [43]. No significant differences were found in glucose, lactate, or bicarbonate in the saliva of patients with CF [14,18].
4.5. Salivary Flow Rate, pH, Osmolarity, and Buffering Capacity. Salivary flow rate was measured in three studies [15,17,18]. A reduction in the salivary flow rate in patients with CF was observed in two studies [17,18]. By contrast, Livnat et al. reported that salivary flow rate and pH in patients with CF were similar to those in the healthy control group [15]. However, a large-scale study by Gonçalves et al. [18] reported a reduction of salivary pH in patients with CF. Another aspect of this topic is the buffering capacity of saliva, which is essential for neutralizing and maintaining the pH of the oral cavity and is also considered critical for dental remineralization and demineralization [17,18]. da Silva Modesto et al. [17] measured the total pH and buffering capacity of saliva, including the buffering capacity in three different pH ranges (>7, 6.9-6.0, and <5.9). They found no difference in the initial pH or the total buffering capacity in patients with CF compared with the control group; however, a reduction in the buffering capacity was observed in the pH range of 6.9-6.0. Salivary osmolarity was investigated in only one study; it was higher in CF homozygotes due to an increase in the concentration of some organic and inorganic components and/or a reduced water content of saliva [14].
Salivary IgA against Pseudomonas aeruginosa has recently been investigated for diagnostic purposes in patients with CF. Sinus colonization can eventually lead to intermittent lung colonization, which proceeds to chronic infection, and it results in elevated salivary IgA specific against P. aeruginosa. This aids the early detection of the bacteria to prevent further progression and lung colonization, as discussed in several studies [44-47]. This relation motivates further research on salivary IgA and its possible prediction of changes in lung infection in patients with CF.
Notably, the results obtained suggest that salivary biomarkers exhibit changes in CF, indicating their potential as a diagnostic tool. However, several limitations were encountered in the included studies: (i) the methods for assessing salivary parameters differ, which hinders comparisons, and (ii) several studies were performed with small sample sizes or inappropriate age/gender distributions. Such limitations make it necessary to recommend further research with better quality, larger populations, and randomization. Moreover, all other variables (e.g., gender, age, different genotypes, and experimental conditions, including the characteristics of participants, assessment methods, and environmental factors) must be controlled to confirm the findings of this review, further improve the measurement accuracy of saliva parameters in patients affected by CF, and strengthen the clinical uses of saliva.
Conclusion
In conclusion, the saliva profile is altered by CF pathogenesis. These alterations have various effects on antimicrobial, antioxidant, lubricating, and digestive functions. Overall, the results emphasize the potential of using salivary biomarkers in the diagnosis, clinical evaluation, and monitoring of patients with CF. In addition, further controlled studies are highly recommended to confirm these findings.
Data Availability
The data supporting the findings of this review are already included.
Ethical Approval
The local Institutional Review Board deemed the study exempt from review.
Consent
Consent is not applicable.
TSA infrared measurements for stress distribution on car elements
Because of the continuous evolution of the market in terms of quality and performance, the car production industry is subject to ever more pressing technological challenges. In this framework, the use of an advanced measurement technique such as thermoelasticity gives engineers a fast and reliable tool for the experimental investigation, optimization and validation of finite element method (FEM) models of critical parts, such as parts of car-frame tables (Marsili and Garinei, 2013; Ju et al., 1997). In this work it is shown how the thermoelastic measurement technique can be used to optimize a Ferrari car frame, as a method of experimental investigation and as a technique for validating numerical models. The measurement technique developed for this purpose is described together with the calibration method used on the test benches normally employed for fatigue testing and qualification of this car's components. The results obtained show very good agreement with the FEM models, as well as the possibility of experimentally identifying stress concentrations in critical parts with very high spatial resolution and of testing the effective geometry and material structure.
Introduction
In this work, in order to characterize a car frame, we propose a new measurement technique and have designed and built a test bench to reproduce the real conditions of use of the car.
The hydraulic shakers subject the frames to the loads necessary to reproduce, in a few hours, the forces that the car will experience during its life.
The complex frame structure and the presence of notches and welds cause sudden fatigue fractures due to strain concentrations that are not always foreseen by the FEM model (Tomlinson and John, 2015; Brouckaert et al., 2012; Becchetti et al., 2010).
Moreover, it is not possible to increase the material sections indiscriminately in order to raise the safety coefficient: the car would become heavier and its performance on the road would decrease.
Commonly, the mechanical characterization of the car frame is carried out in two steps. The first concerns the development of numerical models; in order to test the numerical solution and to increase the calculation speed, different types of solvers have been used, in both linear and non-linear fields. The second step regards their experimental validation by using strain gauge techniques and accelerometers. Unfortunately, these techniques furnish local information only at discrete points. Moreover, the measurement volume depends on the strain gauge dimensions and is often not smaller than 1 mm (D'Emilia et al., 2015; Speranzini et al., 2016).
The thermoelastic measurement technique has been used to validate the FEM models in terms of stress distribution. The advantage of this technique is that it determines, on the experimental bench and in a very short time, the qualitative and quantitative stress distribution over the whole car frame.
Thermoelastic theory
The phenomenon of a material changing temperature when it is stretched was first noted by Gough in 1805, who performed some simple experiments using a strand of rubber, but the first observation in metals of what is now known as the thermoelastic effect was made by Weber in 1830. He noted that a sudden change in tension applied to a vibrating wire did not cause the fundamental frequency of the wire to change as suddenly as he expected, but that the change took place in a more gradual fashion, and he reasoned that this transitory effect was due to a temporary change in temperature of the wire as the higher stress was applied. In 1974, the Admiralty Research Establishment approached Sira Ltd to determine the relationship between stress and the temperature changes that may be produced by an applied load. Sira confirmed the feasibility and, over the next 4 years, with funding from the English Ministry of Defence, developed a laboratory prototype called SPATE (Stress Pattern Analysis by measurement of Thermal Emissions) for applied research.
The thermoelastic effect is well known in gases, where a temperature variation produces a pressure variation; its scientific development in solid materials is more recent because of the small temperature variations induced (in steel, where the stress level is near the yield point, the temperature increases by about 0.2 °C). The thermoelastic technique for the measurement of stress distribution was developed as soon as a new temperature measurement technique with high sensitivity, based on the emission of infrared radiation, became available. The system consists of a differential thermocamera and software for post-processing of the images. The thermocamera measures the small temperature variation induced in the mechanical component by a dynamically applied load, and the software produces the map of the stress distribution on the surface of the structure. The resolution supplied by thermoelastic measurement systems depends on the material characteristics; it is typically 1 MPa for steel and 0.4 MPa for aluminium. The structure must be loaded dynamically at frequencies sufficiently high that the thermodynamic conditions in the material can be considered adiabatic; under this hypothesis there is a direct relationship between the mechanical energy and the thermal energy of the structure. The minimum frequency of the applied load depends on the thermal characteristics of the material and on the gradient of the stress field. The relationship between the temperature variation ΔT of a homogeneous, isotropic, linear elastic material and the variation of its first stress invariant (σ1 + σ2) is

ΔT = −(α / (ρ c_p)) T Δ(σ1 + σ2),    (1)

where α is the thermal expansion coefficient, T is the absolute temperature of the component, ρ is the density, c_p is the thermal capacity at constant pressure, and (σ1 + σ2) is the sum of the principal stresses (Marsili and Garinei, 2012; Harish et al., 2000; Marsili et al., 2008). With the thermoelastic technique it is also possible to measure the map of the stress distribution in complex geometries (Marsili and Garinei, 2014a; Dulieu-Barton and Stanley, 1999; Lin and Rowlands, 1995).
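As a quick sanity check on the magnitudes quoted above, the short calculation below evaluates Eq. (1) with nominal steel properties; the property values and the stress change are assumptions for illustration, not values from the paper.

```python
# Order-of-magnitude check of the thermoelastic temperature change, Eq. (1),
# using nominal properties for steel (assumed values, not from the paper).
alpha = 1.2e-5      # thermal expansion coefficient, 1/K
rho = 7800.0        # density, kg/m^3
cp = 460.0          # specific heat at constant pressure, J/(kg K)
T = 293.0           # absolute temperature, K

K_m = alpha / (rho * cp)          # thermoelastic constant, 1/Pa
delta_sigma = 250e6               # stress change near yield, Pa (assumed)
delta_T = -K_m * T * delta_sigma  # Eq. (1)

print(f"K_m = {K_m:.2e} 1/Pa, delta_T = {delta_T:.3f} K")
# ~ -0.25 K, consistent with the ~0.2 °C figure quoted for steel near yield
```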
Normally it is necessary to paint the surface of the mechanical component to increase its emissivity and make it uniform. This non-contact technique can have a high spatial resolution, which depends on the optical lenses of the thermocamera.
In order to obtain the stress distribution in terms of quantitative values, a calibration process is required.
The latter can be carried out using a common strain gauge placed in a zone where the stress gradient is as small as possible.
In this case the calibration factor K is calculated using the following equation:

K = E (ε_x + ε_y) / ((1 − ν) S_avg),    (2)

where E is Young's modulus; ε_x and ε_y are the principal strains; S_avg is the mean grey level of the infrared thermal image; and ν is Poisson's ratio.
Generally, in order to perform the calibration, a double-axis strain gauge is used, with the aim of acquiring the sum of the two principal strains.
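A minimal sketch of the calibration step of Eq. (2) follows; the function name, units and the numerical values are illustrative assumptions, not data from the paper.

```python
def tsa_calibration_factor(E, nu, eps_x, eps_y, s_avg):
    """Calibration factor K from Eq. (2): converts the TSA signal
    (mean grey level) into the sum of the principal stresses.

    E            : Young's modulus of the material, Pa
    nu           : Poisson's ratio
    eps_x, eps_y : principal strains measured by the double-axis gauge
    s_avg        : mean grey level (or mV reading) of the infrared signal
    """
    return E * (eps_x + eps_y) / ((1.0 - nu) * s_avg)

# Illustrative numbers only (assumed, not from the paper):
K = tsa_calibration_factor(E=210e9, nu=0.3, eps_x=120e-6, eps_y=40e-6, s_avg=250.0)
point_stress_MPa = K * 180.0 / 1e6  # scale one grey-level reading to MPa
print(f"K = {K/1e6:.3f} MPa per grey level; example point = {point_stress_MPa:.1f} MPa")
```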
Ferrari car frame analysis
In this work we have analysed the mechanical behaviour of some components of a Ferrari car frame that have proven critical in experimental road tests. Figure 1 shows a photo of the frame under study and the FEM model developed. The presence of notches, welds and brazing causes strain concentrations, as highlighted in the numerical analysis (Marsili et al., 2005; Marsili and Garinei, 2014a, b).
At these highly stressed points, the use of classical measurement techniques based on strain gauges or magnetic tests is very difficult because of the non-planarity of the surface, the insufficient surface finish and the small dimensions (Cardelli et al., 2015). The use of thermoelasticity, a non-contact technique for measuring the surface stress distribution, provides very important information (Marsili et al., 2009).
The test bench, equipped with the hydraulic shaker, generates a cyclic load on the frame, which has been painted with a dull black paint in order to make the thermal emissivity of the body uniform (Grigg et al., 2000; Lesniak et al., 2013; Offermann et al., 1997; Speranzini et al., 2014). An electrical strain gauge has been bonded to the frame to convert the thermographic frame into a stress map, as well as to generate the reference signal needed by the TSA system to synchronize the frame grabber with the dynamic load cycle.
In fact the Delta Therm 1550 uses the lock-in amplifier technique to acquire only the temperature change synchronous with the applied load.
At the same time, it allows us to improve the signal-to-noise ratio. Figure 2 presents a typical result that can be obtained using this type of measurement technique; in the same picture it is also possible to identify the stress concentrations near the constraint section and the zones where the shut is present. The thermoelastic stress map has been scaled by means of a calibration factor K, calculated with Eq. (2) using the strain gauge. In this way, through Eq. (1), a differential thermographic frame becomes a stress distribution map.
The previous TSA image is very useful for validating the FEM distributions in terms of the sum of the principal stresses; in this case the correlation between the FEM analysis and the experimental results appears clearly. Drawing an interrogation line as shown in Fig. 2, it is possible to evaluate the stress trend along that line, as reported to the right of the same figure. From the analysis of two typical interrogation lines, the gap between the FEM and thermoelastic analyses was evaluated; the maximum difference found is 3 MPa. The qualitative and quantitative agreement of the experimental and numerical results now allows the model to be used to change the geometry and the sections of the frame, or to insert a bracing, in order to minimize the strain concentrations without repeating the experimental tests, with economic and time advantages. Figure 3 shows an example of a bracing welded on the frame. The numerical analysis shows the strain concentration in correspondence with the bracing, which could cause fatigue failure of the component.
The same strain concentration is also seen in the experimental analyses by the thermoelastic system (Fig. 5). The thermoelastic experimental analysis also highlights an elevated concentration of strains around the screw, which was not predicted by the FEM analysis.
Experimental calibration and uncertainty analysis
Normally, in thermoelastic measurements, the calibration is based on the measurement of the deformation by a strain gauge rosette at a point of the structure. The strain gauge rosette was placed at points with high and uniformly distributed stress. Repeated strain gauge measurements of the principal strains and the corresponding measurements of the infrared radiation intensity allow us to estimate the calibration factor K, varying the applied load, the observation area and the excitation frequency. The best available estimate of the thermoelastic constant is K = 0.14 MPa mV−1, with an experimental standard deviation s(K) = 0.01 MPa mV−1.
The combined uncertainty in the value of the thermoelastic constant K can be determined from Eq. (2) by first-order propagation of uncertainty; in relative terms,

δK/K = √[(δE/E)² + (δ(ε_x + ε_y)/(ε_x + ε_y))² + (δV/V)² + (δν/(1 − ν))²].    (3)

Assuming a relative uncertainty of 2 % in the Poisson's ratio and Young's modulus of the materials, of 2 % in the determination of the principal strains ε, and of 2 % in the rms of the signal V measured by the infrared sensor, the combined standard uncertainty is δK = 0.009 MPa mV−1, and so the relative uncertainty is δK/K ≈ 0.009/0.14 ≈ 6 %.
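The sketch below evaluates this root-sum-square propagation with the 2 % inputs quoted in the text; ν = 0.3 is an assumed nominal value for steel, not stated in the paper. Note that with these inputs alone the propagation yields roughly 3.6 % (δK ≈ 0.005 MPa mV−1), somewhat smaller than the published δK = 0.009, which suggests that the published budget includes further contributions such as the repeatability s(K).

```python
import math

# First-order propagation of uncertainty for K = E*(ex+ey)/((1-nu)*V), Eq. (3).
# Relative input uncertainties are the 2% values quoted in the text;
# nu = 0.3 is an assumed nominal value for steel, not stated in the paper.
def relative_uncertainty_K(rel_E, rel_eps, rel_V, rel_nu, nu=0.3):
    term_nu = (nu / (1.0 - nu)) * rel_nu  # relative sensitivity of 1/(1 - nu)
    return math.sqrt(rel_E**2 + rel_eps**2 + rel_V**2 + term_nu**2)

rel = relative_uncertainty_K(0.02, 0.02, 0.02, 0.02)
K = 0.14                                  # MPa/mV, best estimate from the text
print(f"relative uncertainty ≈ {rel:.1%}, delta_K ≈ {rel * K:.4f} MPa/mV")
```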
Conclusions
In this work a Ferrari car frame has been characterized from the mechanical point of view to single out the areas with higher concentrations of stress. Firstly, a numerical simulation model has been validated using classic measurement techniques based on the use of a strain gauge and through thermoelastic techniques.
This last analysis has confirmed the results obtained from the numerical point of view and in certain cases we have identified areas with tension concentrations not foreseen with the FEM analysis.
The use of a strain gauge, as an instrument of reference, has given us the possibility of calculating the calibration constant and estimating the measurement uncertainty.
Data availability.
No data sets were used in this article.
Competing interests. The authors declare that they have no conflict of interest.
Edited by: Rosario Morello Reviewed by: two anonymous referees
Figure 1. Car frame under analysis and FEM analysis results.
Centrosomal localization of RhoGDIβ and its relevance to mitotic processes in cancer cells
Rho GDP-dissociation inhibitors (RhoGDIs) are regulators of Rho family GTPases. RhoGDIβ has been implicated in cancer progression, but its precise role remains unclear. We determined the subcellular localization of RhoGDIβ and examined the effects of its overexpression and RNAi knockdown in cancer cells. Immunofluorescence staining showed that RhoGDIβ localized to centrosomes in human cancer cells. In HeLa cells, exogenous GFP-tagged RhoGDIβ localized to centrosomes and its overexpression caused prolonged mitosis and aberrant cytokinesis in which the cell shape was distorted. RNAi knockdown of RhoGDIβ led to increased incidence of monopolar spindle mitosis resulting in polyploid cells. These results suggest that RhoGDIβ has mitotic functions, including regulation of cytokinesis and bipolar spindle formation. The dysregulated expression of RhoGDIβ may contribute to cancer progression by disrupting these processes.
Introduction
Rho family proteins function as molecular switches in various cellular processes, including actin cytoskeletal organization, microtubule dynamics, vesicle trafficking, cell cycle progression and cell polarity (1). More than 22 Rho family proteins have been identified in humans (2). There are three classes of regulators of Rho proteins, namely, Rho guanine nucleotide exchange factors (RhoGEFs), Rho GTPase-activating proteins (RhoGAPs) and RhoGDIs. In humans, over 69 RhoGEFs and 59 RhoGAPs have been characterized (3,4), while only three types of RhoGDIs (RhoGDIα/RhoGDI1, RhoGDIβ/RhoGDI2/LyGDI/D4GDI, and RhoGDIγ/RhoGDI3) have been identified (5). The existence of a great many types of RhoGEFs and RhoGAPs enables assignment of individual regulators to specific cellular processes. On the other hand, because there are fewer RhoGDIs, each type must regulate a wide range of cellular processes. Thus, RhoGDIs are considered multifunctional central regulatory molecules for Rho family proteins (5)(6)(7)(8). This multifunctional nature of RhoGDIs makes it difficult to clarify their specific roles in various cellular events. RhoGDIα is a major RhoGDI and is universally expressed. RhoGDIγ is expressed in brain, lung and pancreas (9,10). RhoGDIβ was originally isolated as a RhoGDI that is abundantly expressed in hematopoietic cells (11,12), however, it is also expressed in several other cell types, including keratinocytes, fibroblasts, amnion cells (13), non-hematopoietic tumors (14)(15)(16) and in various normal human tissues (17). Therefore, RhoGDIβ is expected to have a more general cellular role, not specific to the hematopoietic cell lineage.
RhoGDIβ is implicated in cancer progression; however, reports have presented contradictory evidence as to the nature of the correlation between cancer progression and RhoGDIβ expression level. RhoGDIβ was found to be upregulated in ovarian cancer (18), breast cancer (19), gastric cancer (20), and in pancreatic cancer cells that show high perineural invasion (21,22). Full-length RhoGDIβ promotes cancer cell invasion (19) and survival (23) in human breast cancer. In our previous studies, RhoGDIβ lacking the C-terminal region was found to induce metastasis by activating the Rac1 signaling pathway in c-Ha-Ras-transformed fibroblasts (15,24). In other studies, RhoGDIβ has been reported to suppress invasion (14), and its expression is inversely correlated with invasive capacity in human bladder cancer cells (16). Our experiments showed that RhoGDIβ lacking the N-terminal regulatory domain suppresses metastasis by promoting anoikis in v-Src-transformed fibroblasts (25). Metastasis suppression by RhoGDIβ is enhanced by Src-induced RhoGDIβ phosphorylation (26) and correlates with increased Rac1 activity (27). Thus, the results regarding RhoGDIβ expression and cancer progression are inconsistent, indicating yet-undetermined roles for RhoGDIβ in cancer cells.
To clarify the role of RhoGDIβ in cancer progression, in the present study we examined the subcellular localization of RhoGDIβ and the effects of its overexpression and RNAi knockdown in cancer cells. We found that RhoGDIβ localized to centrosomes in human cancer cells. In HeLa cells, exogenous GFP-tagged RhoGDIβ localized to centrosomes and its expression resulted in prolonged mitosis and aberrant cytokinesis. Knockdown of RhoGDIβ increased the incidence of monopolar spindle mitosis and of polyploid cells in HeLa cells. The resulting polyploid cells possibly arose from perturbation of centrosomal function in the absence of RhoGDIβ. Our results provide new insights into the role of RhoGDIβ in cancer progression.
Materials and methods
Cells and cell culture. The human cervical cancer cell line HeLa was provided by the late Professor Masakatsu Horikawa, Faculty of Pharmaceutical Sciences, Kanazawa University (Kanazawa, Japan) (28). The human colon cancer cell lines HT-29, HCT116 and SW48 were purchased from ATCC (Rockville, MD). The human colon cancer cell lines DLD-1 and LoVo were purchased from the Human Science Research Resources Bank (Osaka, Japan). The human colon cancer cell lines SW480 and SW620 (29) were provided by Dr Ryuichi Yatani, Mie University School of Medicine (Mie, Japan). These cells were cultured in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum (FBS), penicillin (50 U/ml), and streptomycin (50 µg/ml) at 37˚C in a humidified atmosphere of 5% CO 2 and 95% air. Human microvascular endothelial cells (HMVEC) (Kurabo Industries Ltd., Osaka, Japan) were maintained in HuMedia-MvG in accordance with the supplier's instructions. Immortal OKF keratinocytes (OKF6/TERT-2) were kindly provided by Dr J.G. Rheinwald (Harvard Medical School, Boston, MA) and were cultured in keratinocyte serum-free medium (K-sfm, BD Biosciences, San Diego, CA) with 30 µg/ml bovine pituitary extract, 0.1 ng/ml EGF and 0.4 mM Ca 2+ (30). This culture condition permits cells to form cadherin-and desmosome-mediated junctions, and for some cells to stratify.
Preparation of mouse organs for immunoblotting. Six-week-old female ICR mice were obtained from Japan SLC Inc. (Shizuoka, Japan) and maintained under pathogen-free conditions. Seven-week-old mice were anesthetized with pentobarbital, then blood was gently drained from the inferior vena cava using a heparinized syringe equipped with a 22-gauge needle. Leukocytes and erythrocytes were isolated from blood using Lympholyte ® -Mammal (Cedarlane Laboratories Ltd., Burlington, Canada) according to the manufacturer's instructions and were lysed in Laemmli buffer (31). After blood collection, organs were quickly removed, washed with ice-cold phosphatebuffered saline (PBS), minced into small pieces and lysed in Laemmli buffer. All experiments using mice were approved by the Committee on Experimental Animals at Kanazawa Medical University and conducted in accordance with their guidelines.
Immunoblotting. Protein concentrations of lysed cells and organs were measured using the Bradford ULTRA kit (Novexin Ltd., Cambridge, UK). Proteins were resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to Immobilon-P membranes (Millipore, Billerica, MA). The membranes were then probed with a primary antibody, followed by incubation with a peroxidase-conjugated secondary antibody. Immunoreactive proteins were visualized using ECL Plus reagents (GE Healthcare UK Ltd., Little Chalfont, UK). In the immunoblot experiments, we used RhoGDIα as a loading control because of its universal expression, and the same amount of protein was applied in each experiment. The samples from organs contain several kinds of extracellular matrices at various levels, and the number of cells contained in each sample was not constant; therefore, the expression levels of RhoGDIα among the various organs were not as constant as those among the cultured cell lines.
Immunofluorescence staining. For immunofluorescence staining, cells were grown on 35-mm culture dishes (BD Biosciences, San Jose, CA) or Lab-Tek II chamber slides (Nalge Nunc International, Naperville, IL). Cells were fixed with freshly prepared 4% paraformaldehyde for 2 min, permeabilized with 0.5% Triton X-100 for 10 min and fixed again with 4% paraformaldehyde for 10 min at room temperature. In some experiments, cells were simply fixed with 99.8% methanol for 30 min. After washing with PBS, cells were incubated with 0.5% bovine serum albumin (BSA) in PBS for 60 min at room temperature and then incubated overnight at 4˚C with primary antibodies diluted in 0.5% BSA in PBS (1:2,000 for anti-γ-tubulin antibody or 1:200 for all other primary antibodies). After three washes with PBS, cells were incubated for 60 min at room temperature with secondary antibodies (Invitrogen, Carlsbad, CA) diluted 1:400 with 0.5% BSA in PBS containing 0.1 µg/ml 4',6-diamidino-2-phenylindole (DAPI). After washing with PBS, cells were mounted with Prolong Gold (Invitrogen). For the blocking experiment, anti-RhoGDIβ antibody (sc-6047) was incubated with 10-fold concentration of the blocking peptide for 60 min at room temperature before use. Images were obtained using an Axiovert 200 inverted fluorescence microscope or LSM710 confocal microscope (Carl Zeiss, Jena, Germany).
Plasmids and transfection. The entire sequence was amplified by PCR using pcDNA3.1-LyGDI, an expression vector for wild-type human RhoGDIβ (15). The product was then subcloned into pAcGFP1-C3 and used as pAcGFP-RhoGDIβ. Cells were transfected with the expression plasmids using Lipofectamine 2000 (Invitrogen). To obtain cell lines that stably expressed the introduced genes, G418-resistant cells were isolated in medium containing 800 µg/ml G418 (Nacalai Tesque Inc., Kyoto, Japan).
Observation of GFP-RhoGDIβ in living cells and time-lapse analysis. HeLa cells that stably expressed GFP-RhoGDIβ were cultured in 35-mm glass-bottomed dishes (Asahi Glass Co. Ltd., Tokyo, Japan). The cells were maintained at 37˚C in a humidified atmosphere of 5% CO2 and 95% air in an enclosed stage incubator (Incubator-XL, Carl Zeiss) built on top of an Axiovert 200 M inverted microscope. Time-lapse images of green fluorescence and differential interference contrast (DIC) were acquired on an Axiovert 200 M controlled by the AxioVision image processing and analysis system 4.4 (Carl Zeiss). The onset of mitosis was considered to be the beginning of cell rounding, the onset of anaphase was defined as the beginning of chromosome segregation, and the end of cytokinesis was recognized when the cleavage furrow maximally contracted. We analyzed the progression of mitosis temporally in cells in which these mitotic processes could be clearly recognized. The entire duration of mitosis was measured as the time from the beginning of cell rounding to the maximal contraction of the cleavage furrow.
Statistical analysis. Differences between values were analyzed by the two-tailed Mann-Whitney U-test using the statistics function of KaleidaGraph (Version 4.1). P<0.05 was considered significant.
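For readers who wish to reproduce this kind of comparison outside KaleidaGraph, the sketch below runs the same two-tailed Mann-Whitney U-test in Python; the duration values are invented placeholder numbers, not data from the paper.

```python
# Illustrative re-analysis sketch: two-tailed Mann-Whitney U-test, as used
# for the mitotic-duration comparisons. The durations below are invented
# placeholder numbers, not data from the paper.
from scipy.stats import mannwhitneyu

minutes_gfp_empty = [52, 48, 55, 50, 47, 53, 51, 49]
minutes_gfp_rhogdib = [68, 75, 62, 90, 71, 66, 84, 73]

u_stat, p_value = mannwhitneyu(minutes_gfp_empty, minutes_gfp_rhogdib,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")  # p < 0.05 -> significant
```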
Results
Protein expression levels of RhoGDIβ. To confirm the previous observations showing the expression of RhoGDIβ in cells other than those of the hematopoietic cell lineage (13-17), we examined the levels of RhoGDIβ protein in various mouse organs (Fig. 1A, upper panel). RhoGDIβ was expressed in mouse brain, lung, trachea, esophagus, adrenals, bladder, blood vessels, and small and large intestine, although the expression levels were much lower than in hematopoietic cells such as those from the thymus, spleen and leukocytes. Blood was thoroughly drained before the isolation of organs, but slight contamination with hematopoietic cells was unavoidable. Therefore, it could not be ruled out that very low expression of RhoGDIβ reflects contamination with hematopoietic cells. In contrast, RhoGDIα was expressed at various levels in all examined tissues (Fig. 1A, lower panel). Consistent with the expression of RhoGDIβ in the large intestine and in blood vessels, RhoGDIβ was expressed in cultured human colon cancer cells and microvascular endothelial cells (HMVEC) (Fig. 1B). RhoGDIβ was also expressed in other types of cultured epithelial cells, such as HeLa cells and OKF6/TERT-2 human keratinocytes (Fig. 1B).
Figure 1 (legend, partially recovered): (A) ... anti-RhoGDIβ antibody (sc-6047). Images of representative immunoblots from two independent experiments are shown. (B) Whole-cell lysates were prepared from the indicated human cell lines and the levels of RhoGDIβ protein were determined by immunoblot analysis using anti-RhoGDIβ antibody (556498). Images of representative immunoblots from at least three independent experiments are shown. RhoGDIα, which is a universally expressed RhoGDI, was stained using anti-RhoGDIα antibody (sc-360) as a loading control.
Subcellular localization of RhoGDIβ. We examined the subcellular localization of RhoGDIβ in cultured colon cancer cells by immunofluorescence staining. Simultaneous staining with anti-RhoGDIβ and anti-γ-tubulin antibodies showed their colocalization in DLD-1 human colon cancer cells (Fig. 2A, upper panels). Similar results were also obtained in the HT-29, HCT116, LoVo, SW48, SW480 and SW620 human colon cancer cell lines. Representative images of simultaneous staining with anti-RhoGDIβ and anti-γ-tubulin antibodies in LoVo, HT-29 and HCT116 human colon cancer cells are shown in Fig. 2B. RhoGDIβ also colocalized with γ-tubulin in OKF6/TERT-2 human keratinocytes (Fig. 2C, upper panels) and HeLa cells (Fig. 2D, upper panels). During interphase, the colocalization of RhoGDIβ with γ-tubulin was not as clear as during metaphase. Such a centrosome staining pattern is likely to be associated with centrosome maturation. The staining with anti-RhoGDIβ antibody was abolished by pretreatment of the antibody with the antigen peptide (Fig. 2A, C and D, lower panels). Unlike RhoGDIβ, RhoGDIα did not colocalize with γ-tubulin in colon cancer cells and HeLa cells (data not shown).
Subcellular localization of GFP-RhoGDIβ in HeLa cells.
We used GFP-RhoGDIβ to confirm the localization of RhoGDIβ to centrosomes. pAcGFP-RhoGDIβ was transfected into HeLa cells and cells stably expressing the introduced genes were selected (Fig. 3A). Expression of GFP-RhoGDIβ did not affect the levels of endogenous RhoGDIβ protein (Fig. 3A, right panel). In living cells, the localization of GFP-RhoGDIβ was distinct from that of GFP alone: GFP-empty localized all over the cell, although it was brighter in the nuclei than in the cytoplasm, while GFP-RhoGDIβ localized predominantly to the cytoplasm (Fig. 3B). It was difficult to observe the centrosomal localization of GFP-RhoGDIβ in living cells because of its granular localization throughout the cytoplasm. To confirm the centrosomal localization of GFP-RhoGDIβ, cells were fixed and stained with both anti-GFP and anti-γ-tubulin antibodies. GFP-RhoGDIβ colocalized with γ-tubulin in both metaphase and interphase cells, while GFP-empty did not (Fig. 3C).
Time-lapse observation of mitosis in HeLa cells stably expressing GFP-RhoGDIβ. Our observations suggested that
RhoGDIβ had a role related to centrosome function. To investigate this, we used time-lapse microscopy to observe mitotic progression in HeLa cells stably expressing GFP-RhoGDIβ. The incidence of morphologically aberrant cytokinesis, in which the cell shape was distorted, increased about 4-fold in GFP-RhoGDIβ-expressing cells (Fig. 4A). Representative images of aberrant cytokinesis of a GFP-RhoGDIβ-expressing cell are shown (Fig. 4B). To examine the defects in cytokinesis in detail, we analyzed the time-lapse images of cells in which the onset of mitosis, the onset of anaphase, and the end of cytokinesis could be clearly distinguished (Fig. 4C). In these cells, morphologically aberrant cytokinesis was observed in 70% (14/20) of cells expressing GFP-RhoGDIβ, but was not observed in cells expressing GFP-empty. The durations from the onset of mitosis to the onset of anaphase and from the onset of anaphase to the end of cytokinesis were significantly increased in cells expressing GFP-RhoGDIβ compared with cells expressing GFP-empty (Fig. 4D). These observations indicated that GFP-RhoGDIβ affected mitotic processes including anaphase and cytokinesis in HeLa cells, suggesting that RhoGDIβ plays a role in these processes.
Effect of knockdown of RhoGDIβ by siRNA. We examined the effect of RhoGDIβ knockdown in HeLa cells using three different siRNAs. All siRNAs decreased the expression of RhoGDIβ by about 90% and did not decrease the expression of RhoGDIα 72 h after transfection (Fig. 5A). Knockdown of RhoGDIβ by all three siRNAs increased the incidence of monopolar spindle mitosis (Fig. 5B). In contrast with the appearance of monopolar mitotic cells, a slightly higher frequency of multiple centrosomes was observed only in cells in which RhoGDIβ was knocked down by the no. 469 siRNA (Fig. 5C). Representative images of monopolar spindle mitosis and multiple-centrosome mitosis are shown (Fig. 5D). Therefore, we concluded that RhoGDIβ knockdown induced inhibition of centrosome separation rather than multipolar mitosis. The increase of polyploid (8c, 16c or more) cells after RhoGDIβ knockdown shown in Fig. 5E was thought to result mainly from monopolar mitosis. Overall, our data show that both upregulation and downregulation of RhoGDIβ constitute key events leading to perturbed mitotic processes in cancer cells.
(Displaced figure legend, recovered): (C) Cells were fixed with 4% paraformaldehyde, permeabilized with 0.5% Triton X-100, fixed again with 4% paraformaldehyde, then stained simultaneously with anti-GFP (green) and anti-γ-tubulin (red) antibodies and with DAPI (blue). Images were acquired on an LSM710 confocal microscope. Arrows indicate centrosomes. Scale bars, 5 µm. Representative images are shown from at least three independent experiments.
Discussion
RhoGDIβ is abundantly expressed in hematopoietic cells (11), but is also expressed in various non-hematopoietic cells (13-17). Using immunoblotting, we confirmed that RhoGDIβ protein is expressed in many human epithelial cell lines as well as in many mouse organs, suggesting a general role for RhoGDIβ that is not specific to hematopoietic cells. We showed by immunofluorescence staining that RhoGDIβ, but not RhoGDIα, localized to centrosomes in human colon cancer cells, human keratinocytes and HeLa cells. Furthermore, we showed that exogenously introduced GFP-RhoGDIβ also localized to centrosomes in HeLa cells. Previously, RhoGDIβ was identified in the purified centrosome fraction from a human lymphoblastic cell line by proteomic analysis; however, it was not confirmed as a genuine centrosomal protein (32). Since RhoGDIβ functions as a chaperone (8), its association with the centrosome would be expected to be transient and less stable than those of the scaffold proteins of centrosomes. Furthermore, RhoGDIβ is abundant in the cytosol. These properties of RhoGDIβ make it difficult to clarify its localization to the centrosome. In the present study, we confirmed that RhoGDIβ localized to centrosomes, suggesting that RhoGDIβ has a role related to centrosome function. Indeed, we showed that the expression of GFP-RhoGDIβ significantly prolonged anaphase and cytokinesis and increased the incidence of morphologically aberrant cytokinesis in HeLa cells. Furthermore, RhoGDIβ knockdown caused defects in centrosome separation. Collectively, and in support of a previous proteomics study (32), our present localization and functional data strongly suggest that RhoGDIβ functions at centrosomes during mitosis.
Rho family proteins are important regulators of both cytokinesis and centrosome positioning (33). RhoA is the central regulator of cytokinesis in animal cells (34). Cdc42 and MgcRacGAP are reported to contribute to the correct formation of mitotic spindles during metaphase in HeLa cells (35). Rac and Tiam1 localize to the centrosomal regions during early mitosis, and bipolar spindle formation is regulated by Tiam1-Rac signaling in MDCK II cells (36). Therefore, our results collectively suggest that RhoGDIβ, which is a regulator of Rho proteins, plays a role in the regulation of cytokinesis and the formation of bipolar spindles. To our knowledge, this is the first report suggesting that RhoGDIβ participates in the regulation of these mitotic processes.
In normal cells, the mitotic checkpoint prevents the transition to anaphase when a monopolar spindle is formed; however, in cancer cells, this checkpoint is compromised and some cells become polyploid through aberrant mitosis (37). Therefore, the increase of polyploid cells upon knockdown of RhoGDIβ could be due to the increased incidence of monopolar spindles. Monopolar spindles can result from defects in many different molecules, such as specific motor proteins, centrosome proteins, and mitotic kinases (38). Which of these molecules is associated with the phenotype observed upon RhoGDIβ knockdown is unknown, but RhoGDIβ should be involved in the formation of normal bipolar spindles when required.
Correct centriole and centrosome positioning is important for many biological processes (39), and centrosomes play important roles in maintaining the polarity axis during asymmetric cell division; disruption of polarity is implicated in cancer development and progression (40-42). Cell motility, invasion, and anoikis are regulated by RhoGDIβ in cancer progression, irrespective of the direction of the correlation with RhoGDIβ expression (14,15,19,21,23,25,27), and these processes are closely associated with cell polarity (43,44). There is crosstalk between Rho family proteins and polarity proteins (44). The roles of RhoGDIβ in cancer progression may therefore be related, at least in part, to its role in the regulation of cell polarity. RhoGDIs are conserved among eukaryotes and are suggested to have a universal role in the regulation of cell polarity across a wide range of eukaryotes (45-51). Many unicellular eukaryotes and lower metazoa have a single RhoGDI (52), while vertebrates, except for bony fish, have three kinds of RhoGDIs. Our results suggest that, among the RhoGDIs, at least RhoGDIβ plays a role in the regulation of cell polarity in mammalian cells.
Modeling and Simulation of Tsunami Impact: A Short Review of Recent Advances and Future Challenges
Tsunami modeling and simulation has changed more in the past few years than it had in decades, especially with respect to coastal inundation. Among other things, this change is supported by the approaching era of exa-scale computing, whether via GPU or, more likely, forms of hybrid computing whose presence is growing across the geosciences. For reasons identified in this review, exa-scale computing efforts will impact the on-shore, highly turbulent régime to a higher degree than the 2D shallow water equations used to model tsunami propagation in the open ocean. This short review describes the different approaches to tsunami modeling from generation to impact and underlines the limits of each model based on the flow régime. Moreover, from the perspective of a future comprehensive multi-scale modeling infrastructure to simulate a full tsunami, we underline the current challenges associated with this approach and review the few efforts that are currently underway to achieve this goal. A table of existing tsunami software packages is provided, along with an open GitHub repository, to allow developers and model users to update the table with additional models as they are published and to help with model discoverability.
Introduction
The failure of a multi-billion dollar wall designed to protect the Tohoku coasts of Japan (Figure 1) from a level-2 tsunami (level-2: infrequent but highly destructive [1]) in 2011 triggered an important debate about alternative approaches to tsunami risk reduction. This debate is ongoing and has attracted broad media attention worldwide (Reuters [2], The Guardian [3], The Economist [4], The New York Times [5], Wired [6]). Doubts about whether a wall is the best solution for tsunami mitigation stem from the significant expense of a structure that does not guarantee protection from big tsunamis. Even when cost is not the main constraint-consider Japan, whose GDP accounts for 4.22% of the world economy, versus Indonesia's (0.93%) or Chile's (0.24%)-relying on traditional concrete-based solutions alone may not be desirable or sustainable, partly because of their potential long-term negative impact on the population [2], coastal ecosystems [7][8][9], and shoreline stability [10,11]. For these reasons, decision-makers and engineers are increasingly considering protection solutions that rely on green designs as sustainable and effective alternatives to seawalls. These designs, three examples of which are shown in Figure 2 along the Ring of Fire, are usually human-made hillscapes erected on the shoreline to protect the communities behind them by partially dissipating and partially reflecting the tsunami energy [12]. The question remains whether these alternatives truly protect the people and properties behind them. Robust computational simulation capabilities are required that can model such diverse types of infrastructure, including the forces upon them.

Figure 1. While each individual concrete section of the wall did not suffer significant damage, the erosion at the foundations of the wall happened so quickly that the concrete barrier simply fell (picture taken from [13]).
Figure 2. Map of the Ring of Fire, a long coastal stretch that is most likely to be impacted by large tsunamis. Some tsunami mitigation parks are being constructed in South Java, Indonesia (image: Indonesia Ministry of Marine Affairs and Fisheries), Miyagi Prefecture, Japan (image: the Morino Project), and Constitución, Chile (image: Architect Magazine). Adapted from the work in [12].
While the numerical capabilities to model tsunamis in the deep ocean are well established and mature, as will be pointed out throughout this review, this is still not the case for the modeling of tsunami-shore interaction once the wave propagates inland. The limit lies in the level of detail necessary to correctly estimate the forces involved, especially on complex structures. In only five years since the review by Behrens and Dias [14], important steps towards high-fidelity tsunami modeling at all scales have been made, especially with respect to the fine-scale simulation of tsunami run-up and tsunami-shore interaction (see, e.g., in [15][16][17][18]), contributing to further advancing the forecast of tsunami impact as envisioned by Bernard et al. [19] ten years earlier.
In light of the development of a comprehensive, multi-scale tsunami modeling infrastructure as proposed in the latest 10-year science plan of the Natural Hazards Engineering Research Infrastructure [20], this article reviews models of tsunami propagation from generation to impact with respect to their régimes of validity and model-to-model interaction. As an aid to this, a basic derivation of the assumptions that lead to the two-dimensional shallow water equations is provided, along with some of the most direct extensions and their limitations. We then connect these assumptions and their limitations to the much more complex three-dimensional and non-hydrostatic models, which demonstrates the complexity the current state of the art faces.
The article is organized as follows. Section 2 reviews the state of the field in tsunami modeling and simulation. The different mathematical models of classical use are described in Section 3, followed by a review of the different numerical methods to solve them given in Section 4. The idea of a future comprehensive and multi-scale simulation framework is discussed in Section 5. Conclusions are given in Section 6. A table of some available tsunami software packages is given in the Appendix A.
State of the Field in Tsunami Forward Modeling
The multi-scale nature of tsunami dynamics makes their study in a laboratory setting challenging [21,22]. For this reason, and supported by ever cheaper computing power and data storage, numerical modeling and simulation has become the most widely utilized tool for large-scale tsunami modeling and analysis [14]. The numerical simulation of tsunamis started in Japan sixty years ago with the work by Aida [23,24] and has been shown to be effective for modeling their generation (see, e.g., in [25][26][27][28][29]), propagation (see, e.g., in [30][31][32]), and inundation [12,16,[33][34][35], although the problem of inundation is still partially open when it comes to its numerical treatment (see, e.g., in [36][37][38] and references therein).
When it comes to tsunami-shore interaction, the effects of isolated components such as bathymetry and vegetation have been studied for idealized one-dimensional (1D) and two-dimensional (2D) settings using numerical simulations for years. For example, in [12,33,[39][40][41] 1D and 2D shallow water models were used to demonstrate that the nonlinearity of tsunami waves has a significant effect on the on-shore flow velocities and propagation. Many inundation modeling efforts aim to obtain an accurate prediction of the water elevation level. While these models typically do a good job at this, their correct prediction of water level seldom translates into the accurate prediction of the forces at play [42,43], possibly indicating the limitations of 1D and 2D models in specific flow régimes.
A tsunami is a naturally multi-scale phenomenon whose characteristic length scales range from the planetary scale spanning oceans to the small scales of turbulence. On the one hand, in the open ocean it is effectively a fast moving two-dimensional long wave that travels undisturbed for thousands of kilometers and can be accurately described by the 2D nonlinear shallow water equations. On the other hand, as the flow approaches the shore and moves inland, its three-dimensionality and turbulent nature become important, with boundary layer dynamics that contribute to an important shear-driven erosion and sediment transport (see Figure 3).
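To make the scale separation concrete: in the long-wave limit the propagation speed is $c = \sqrt{gh}$, so a tsunami crosses an ocean basin in hours but slows dramatically over the shelf. A minimal sketch (the depths and basin width are illustrative values, not figures from any cited study):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

# Representative (illustrative) water depths in meters.
depths = {"open ocean": 4000.0, "continental shelf": 100.0, "near shore": 10.0}

for regime, h in depths.items():
    c = math.sqrt(g * h)  # non-dispersive long-wave phase speed, m/s
    print(f"{regime:18s} h = {h:7.1f} m  ->  c = {c:6.1f} m/s ({c * 3.6:6.1f} km/h)")

# Time for a wave to cross a 5000 km basin at the deep-water speed:
basin = 5.0e6  # m (illustrative)
t_hours = basin / math.sqrt(g * 4000.0) / 3600.0
print(f"5000 km basin crossing: ~{t_hours:.1f} h")
```

For these assumed values the deep-ocean speed is roughly 700 km/h, dropping by more than an order of magnitude near shore, which is exactly where amplitude grows and three-dimensional effects take over.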
As the tsunami advances further inland, sand, dirt, vegetation, structures, and other debris are scoured off of the sea bed, transported, and deposited along its path, potentially leading to important coastal morphological changes (see, e.g., in [44][45][46][47][48][49][50]). By construction, however, the shallow water equations are not apt to model such complex flows, although they have been used or enhanced in the context of erosion and sediment transport in [47,[51][52][53][54], for example.
Towards an understanding of the possible limitations of the shallow water equations for fine-grained tsunami modeling, Qin et al. [15] compared the 3D Reynolds-Averaged Navier-Stokes (RANS) solution of a turbulent propagating bore using OpenFOAM [55] against the 2D shallow water solution of the same problem using GEOCLAW [56]; Qin et al. demonstrated that a 3D model for turbulent flows is necessary to correctly predict tsunami inundation and the fluid forces involved. Among the first numerical studies that attempted to model tsunami run-up as the solution of the Navier-Stokes equations, we find the 2D simulations by Hsiao and Lin [57] from 2011 and by Larsen and Fuhrman [17,18], who recently underlined the growing need for high-fidelity and multi-scale models to study tsunami risk and inland propagation. Furthermore, the RANS equations classically used in engineering were shown to overproduce turbulence beneath surface waves [58], suggesting the need for high-fidelity turbulence models such as large eddy simulation (LES). LES was used in the context of plunging breakers by Christensen [59], who underlined the numerical challenges of models in reproducing experimental results on surf zone dynamics. Much attention has been given to run-up; however, not enough work has addressed the interaction of the tsunami with the built environment and vegetation, or the erosion caused by large tsunamis [18], all of which are critical to improving the prediction of inundation. Modeling erosion with shallow water models can be difficult, as these models generally require some reconstruction of the vertical velocity profile to recover the velocity shear; yet shear is the driving mechanism of erosion and entrainment. Moreover, the modeling of the dynamics of the near-bed region and of erosion is still an open challenge in coastal engineering because it is governed by important particle-particle and particle-fluid interactions [60], which require grain-resolving simulations [61,62]. In a recent inter-comparison of models to study shear and separation driven coastal currents, Lynett et al. [63] concluded that "[. . . ] In general, we find that models of increasing physical complexity provide better accuracy, and that low-order three-dimensional models are superior to high-order two-dimensional models [. . . ]".
Proper fine-grained modeling is important for studying erosion. In the context of tsunami impact mitigation analysis, the next frontier may lie in the detailed simulation of vegetation-tsunami and vegetation-morphology interactions. In light of the pioneering findings by the U.S. Army Corps of Engineers that vegetation may act as a bio-shield against flooding from storm surges [64,65], the designs of tsunami mitigation solutions around the world often incorporate vegetation. Significant experimental and numerical work has been done to analyze the effect of vegetation on wind waves, currents, and, albeit less so, tsunamis (see, e.g., in [66][67][68][69][70][71][72][73][74][75][76]), particularly in the context of steady flows [77]. After the 2004 tsunami in Sumatra, Bayas et al. [78] estimated that vegetation along the west coast of Aceh may have reduced casualties by 5%. The benefits of coastal vegetation are not limited to attenuation. It is well known that bathymetry, topography, and coastal geomorphology (rivers, canals, and barrier islands) have a profound effect on run-up for tsunamis [12] and storm surge [79][80][81][82]. Furthermore, the presence of vegetation alters sediment-transport processes and landform evolution [83][84][85][86][87]. The modeling of the interaction between the tsunami and vegetation is numerically difficult because of the flexible nature of vegetation. Efforts have been made to include the effect of flexible vegetation on the flow: examples of one- and two-way coupling of the flow with the dynamics of vegetation can be found in the recent work by Mattis et al. [88,89] and Mukherjee et al. [90], who, for the first time in the context of inundation, used fluid-structure interaction algorithms to study the effect of deformation of idealized trees on the flow (see Figure 4).
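In depth-averaged models, rigid vegetation is commonly represented as a quadratic drag source term in the momentum equation. The sketch below shows that standard quadratic-drag form for emergent stems; the stem density, diameter, and drag coefficient are illustrative assumptions, not values from the studies cited above:

```python
def vegetation_drag(u, h, n_stems=50.0, d=0.02, cd=1.2, rho=1000.0):
    """Quadratic drag force per unit bed area from rigid emergent stems.

    u: depth-averaged velocity (m/s), h: water depth (m),
    n_stems: stems per m^2, d: stem diameter (m), cd: drag coefficient.
    Assumes stems span the whole water column (emergent vegetation).
    """
    frontal_area = n_stems * d * h  # frontal area per unit bed area (m^2/m^2)
    return 0.5 * rho * cd * frontal_area * u * abs(u)  # N per m^2 of bed

# For these assumed parameters, a 2 m/s flow in 1 m of water:
print(vegetation_drag(u=2.0, h=1.0))  # ~2400 N/m^2
```

Flexible vegetation, as in the fluid-structure interaction studies above, requires replacing the fixed frontal area and drag coefficient with quantities that respond to the flow, which is what makes the problem numerically difficult.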
Mathematical Representations and Assumptions
While the most complete model to describe a tsunami as a highly turbulent free surface flow is given by the Navier-Stokes equations of incompressible flows, simpler models of wave propagation have been successfully adopted for decades to study tsunami propagation in the open ocean and run-up [14,91]. In this section, we introduce these models and underline their properties and limits of validity, with the aim of identifying the assumptions that go into them and where those assumptions may break down, especially in the near-shore. Based on this analysis, we justify the ever increasing use of the 3D Navier-Stokes equations to study the tsunami boundary layer in the near-shore and during run-up, where 1D and 2D models were shown to lead to inaccurate solutions [15,63]. This leads to the idea suggested in Figure 5 of having both the 2D shallow water and 3D Navier-Stokes models working together to provide a full picture of a tsunami's impact.

Figure 5. Representation of a comprehensive off-to-on-shore flow framework. The 3D Navier-Stokes solver is forced at the boundaries by the shallow water model of the tsunami off-shore, which is forced by an earthquake simulator. In fact, the 2D shallow water solver can extend all the way into the shore depending on the needs of the solver and compatibility layer.
The 3D Navier-Stokes Equations
We first start with arguably the best model of water flow, the three-dimensional Navier-Stokes equations. During inundation, the interaction of the flow with the coastal features, erodible terrain, sediments, vegetation, and structures is such that the flow is fully turbulent, and thus three-dimensional and characterized by shear. The correct estimation of the hydrodynamic forces responsible for the damage during run-up requires a model that can capture these flow characteristics [17,63]. The Navier-Stokes equations are the most complete model that can capture these features. Omitting their derivation from first principles, the 3D Navier-Stokes equations of incompressible flows are written as

$$\nabla \cdot \mathbf{u} = 0, \qquad \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho}\nabla P + \nu \nabla^2 \mathbf{u} + \mathbf{g}, \tag{1}$$

where $\mathbf{u} = (u, v, w)$ is the velocity field, $\rho$ the constant density, $P$ the pressure, $\nu$ the kinematic viscosity, and $\mathbf{g}$ the gravitational acceleration.
Depth-Averaged Models
Although (1) are the model equations for tsunamis that we would like to use, they are computationally too costly to use everywhere, and more than likely not needed everywhere in the domain of interest for a tsunami. We therefore need to carefully identify where and when we can approximate (1). The primary types of approximation associated with tsunamis are characterized by averaging through the depth of the water column in the gravity-centered coordinate system (see Figure 6). This precludes modifications that would handle steeper terrain, in favor of reducing complexity, but such modifications should be noted as an alternative for simple topographical features. We will also forgo analysis of layered depth-averaged models as, though they have unique properties, they largely follow the same derivation. Here $h$ is the depth of the water column; $\eta$ the difference between a defined datum, commonly a given sea level, and the sea surface; and $b$ the bathymetry surface as measured from the same datum.
Scaled Equations
We first start with the inviscid version of the two-dimensional Navier-Stokes equations (i.e., the incompressible Euler equations) in one vertical and one horizontal dimension, where we ignore one of the horizontal directions for simplicity of notation and without loss of generality. With this setup, and assuming a free surface and a non-flat bottom boundary, we have the equations

$$u_x + w_z = 0, \qquad u_t + u u_x + w u_z = -\frac{1}{\rho} P_x, \qquad w_t + u w_x + w w_z = -\frac{1}{\rho} P_z - g, \tag{2}$$

with the boundary conditions

$$w = \eta_t + u \eta_x, \quad P = P_a \quad \text{at } z = \eta, \qquad w = u\, b_x \quad \text{at } z = b,$$

where the velocity is represented by $u$ in the horizontal direction and $w$ in the vertical, $\rho$ is the density of water, $P$ its pressure (which includes non-hydrostatic components at this juncture), and $g$ is the gravitational acceleration. The subscripts $t$, $x$, and $z$ indicate partial derivatives. The first step in defining and justifying the depth-averaged equations lies in the non-dimensionalization assumptions made during their definition. Because of this, we will be explicit about these assumptions, as they will play an important role later on. The primary scalings are

$$x = \lambda \tilde{x}, \qquad z = a \tilde{z}, \qquad t = T \tilde{t},$$

where $\lambda$ is a length scale, often taken as the characteristic wavelength of the waves involved; $a$ the characteristic depth that the wave is propagating through; and $T$ the characteristic time scale, or period, of the wave. With these values defined, we can also normalize the velocities and $P$ such that

$$u = U \tilde{u}, \qquad w = W \tilde{w}, \qquad P = P_0 \tilde{P},$$

where $P_0$ is the pressure normalization, usually taken to be atmospheric pressure, and the non-dimensional quantities are those with a tilde. Introducing the traditional shallow water parameter $\varepsilon = a/\lambda$, we can write the velocity and temporal normalizations as $U = \sqrt{ga}$, $W = \varepsilon U$, and $T = \lambda/U$. Applying these scalings to (2) and simplifying, we are led to the system of equations

$$\tilde{u}_{\tilde{x}} + \tilde{w}_{\tilde{z}} = 0, \qquad \tilde{u}_{\tilde{t}} + \tilde{u}\tilde{u}_{\tilde{x}} + \tilde{w}\tilde{u}_{\tilde{z}} = -\tilde{P}_{\tilde{x}}, \qquad \varepsilon^2\left(\tilde{w}_{\tilde{t}} + \tilde{u}\tilde{w}_{\tilde{x}} + \tilde{w}\tilde{w}_{\tilde{z}}\right) = -\tilde{P}_{\tilde{z}} - 1,$$

where we have also assumed that $P_0/\rho U^2 = 1$ without loss of generality. For the boundary conditions we introduce new scalings, $\eta/\delta = \tilde{\eta}$ and $b/\beta = \tilde{b}$, where $\delta$ is the surface amplitude scaling; for the top boundary this results in

$$\frac{\varepsilon \lambda}{\delta}\, \tilde{w} = \tilde{\eta}_{\tilde{t}} + \tilde{u}\, \tilde{\eta}_{\tilde{x}}.$$

For the upper boundary condition, if $\delta/\lambda = O(\varepsilon)$ the boundary condition is all of the same order. This is probably not the case, as $\delta \ll a$ unless in very shallow water, but it should be noted for later.

The lower boundary condition is a similar situation: if $\beta/\lambda = O(\varepsilon)$ the boundary condition is all of the same order. In the case of the bathymetry scale this probably is the case, as the bathymetry will vary on the scale of the depth scale $a$. The final non-dimensionalized equations are then (dropping the tildes)

$$u_x + w_z = 0, \tag{7}$$
$$u_t + u u_x + w u_z = -P_x, \tag{8}$$
$$\varepsilon^2\left(w_t + u w_x + w w_z\right) = -P_z - 1, \tag{9}$$

with the boundary conditions

$$w = \eta_t + u \eta_x, \quad P = P_a \quad \text{at } z = \eta, \qquad w = u\, b_x \quad \text{at } z = b,$$

where $P_a$ is the appropriately scaled atmospheric pressure condition.
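As a quick numerical sanity check of these scalings (all values below are illustrative, not taken from the cited literature), the shallow water parameter and the derived velocity and time scales can be evaluated directly; in the open ocean $\varepsilon$ is small, anticipating the shallow water limit derived below:

```python
import math

g = 9.81        # m/s^2
a = 4000.0      # characteristic depth (m), illustrative open-ocean value
lam = 200.0e3   # characteristic wavelength (m), illustrative tsunami value

eps = a / lam             # shallow water parameter, eps = a / lambda
U = math.sqrt(g * a)      # horizontal velocity scale, U = sqrt(g a)
W = eps * U               # vertical velocity scale, W = eps * U
T = lam / U               # time scale, T = lambda / U

print(f"eps = {eps:.3f}")                                  # ~0.02: eps << 1
print(f"U = {U:.1f} m/s, W = {W:.2f} m/s, T = {T/60:.1f} min")
```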
Depth Integration
The next step in deriving the shallow water and similar equations is to depth-integrate (7)-(9). Here we will skip many of the details other than to point out two important assumptions that are often misrepresented. Continuing on, the first step is to integrate the incompressibility Equation (7), which, using the kinematic boundary conditions, gives

$$h_t + \left(h \bar{u}\right)_x = 0.$$

The next equation to be integrated is the horizontal momentum Equation (8). For this we will first introduce the assumption that the total pressure $P$ can be split up additively into a hydrostatic and a non-hydrostatic component such that

$$P(x, z, t) = P_a + (\eta - z) + p,$$

where the term $(\eta - z)$ represents the depth from the surface $\eta$, and therefore the non-dimensionalized hydrostatic component of the pressure, and $p(x, z, t)$ the non-hydrostatic pressure component. Then, integrating the left-hand side of (8), we find

$$\int_b^{\eta} \left(u_t + (u^2)_x + (uw)_z\right) dz = \left(h\bar{u}\right)_t + \left(h\overline{u^2}\right)_x,$$

where we have simplified the notation using our average values, making sure to maintain the non-commutativity of the averaging and squaring operators. For the right-hand side of (8) we similarly conclude that

$$-\int_b^{\eta} P_x \, dz = -h\eta_x - \left(h\bar{p}\right)_x - p|_{z=b}\, b_x.$$

Finally, we also integrate the vertical momentum Equation (9) through the depth. This is very similar to the horizontal equation, leading to

$$\varepsilon^2\left[\left(h\bar{w}\right)_t + \left(h\overline{uw}\right)_x\right]$$

for the left-hand side and

$$-\int_b^{\eta} \left(\left(P_a + (\eta - z) + p\right)_z + 1\right) dz = p|_{z=b}$$

for the right-hand side of the equation. Putting all this together leads us to the vertically integrated equations, stated as (11) below. At this juncture it is useful to start to introduce the notation that indicates quantities averaged through the depth. We will denote these by

$$\bar{f}(x, t) = \frac{1}{h}\int_b^{\eta} f(x, z, t)\, dz, \qquad h = \eta - b.$$
Remark 1.
Herein lies one of the critical misrepresentations of depth-averaged models: that they assume constant velocity, or even more general quantities, throughout their vertical profiles. In general, however, $\overline{u^2} \neq \bar{u}^2$. We will in fact make some assumptions about the commutativity of the averaging operator with others, but this does not preclude the consideration of non-constant vertical profiles of the velocity, for instance. This leads to the following system of equations,

$$h_t + \left(h\bar{u}\right)_x = 0,$$
$$\left(h\bar{u}\right)_t + \left(h\overline{u^2}\right)_x = -h\eta_x - \left(h\bar{p}\right)_x - p|_{z=b}\, b_x, \tag{11}$$
$$\varepsilon^2\left[\left(h\bar{w}\right)_t + \left(h\overline{uw}\right)_x\right] = p|_{z=b}.$$

One last equation, for the pressure, will be useful as it will give us a means to calculate $p$ depending on the approximations that we will make in the next section. We arrive at this equation by reconsidering the scaled vertical momentum equation and integrating from a vertical level $b + \alpha h$, where $\alpha \in [0, 1]$, up to $\eta$, leading to

$$p(x, b + \alpha h, t) = \varepsilon^2 \int_{b + \alpha h}^{\eta} \left(w_t + u w_x + w w_z\right) dz. \tag{12}$$
Approximations
At this point we can use the equations from the previous section to derive a number of approximations commonly used in tsunami numerical modeling. This will not be exhaustive, but rather suggestive of where this basic framework can be taken to derive and analyze the validity of these approximations.
The first of these approximations will assume that the averaging operator commutes with multiplication, e.g., $\overline{u^2} \approx \bar{u}^2$. The terms that this ignores are often termed differential advection; note, however, that this approximation does not imply that the velocity profiles are constant, but rather that the averages commute. This can be important when comparing boundary layer physics, for instance. Moving forward, this allows us to rewrite (11) and simplify notation so that

$$h_t + \left(hu\right)_x = 0,$$
$$\left(hu\right)_t + \left(hu^2\right)_x = -h\eta_x - \left(h\bar{p}\right)_x - p|_{z=b}\, b_x,$$
$$\varepsilon^2\left[\left(hw\right)_t + \left(huw\right)_x\right] = p|_{z=b},$$

where we have also now dropped the bar notation on the velocities.
Shallow Water
Finally, for the shallow water equations we simply need to assume that the shallow water parameter satisfies $\varepsilon \ll 1$, implying that $0 = p|_{z=b}$. As the non-hydrostatic pressure is defined such that $p(x, \eta, t) = 0$, we can show that $p \equiv 0$, therefore implying that (11) become

$$h_t + \left(hu\right)_x = 0, \qquad \left(hu\right)_t + \left(hu^2\right)_x = -h\eta_x,$$

which are of course the traditional, non-dimensionalized shallow water equations. Note that (12) can be used here to show that $p = 0$ is in fact true.
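For concreteness, the dimensional counterpart of these equations, $h_t + (hu)_x = 0$ and $(hu)_t + \left(hu^2 + \tfrac{1}{2}gh^2\right)_x = -ghb_x$, can be integrated with a first-order finite volume method in a few dozen lines. The sketch below solves a flat-bottom dam-break problem with a Lax-Friedrichs flux; it is a minimal teaching example with illustrative parameters, not a stand-in for the production codes listed in Appendix A:

```python
import numpy as np

g = 9.81
N, L = 400, 100.0            # number of cells, domain length (m)
dx = L / N
x = (np.arange(N) + 0.5) * dx

# Dam-break initial condition: deeper water on the left half.
h = np.where(x < L / 2, 2.0, 1.0)   # water depth (m)
hu = np.zeros(N)                    # momentum h*u

def flux(h, hu):
    """Physical flux of the 1D shallow water equations (flat bottom)."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

t, t_end = 0.0, 5.0
while t < t_end:
    c = np.abs(hu / h) + np.sqrt(g * h)     # local max wave speed |u| + sqrt(gh)
    dt = min(0.9 * dx / c.max(), t_end - t) # CFL-limited time step

    q = np.array([h, hu])
    ql, qr = q[:, :-1], q[:, 1:]            # left/right states at interfaces
    fl, fr = flux(ql[0], ql[1]), flux(qr[0], qr[1])
    a = np.maximum(c[:-1], c[1:])
    f = 0.5 * (fl + fr) - 0.5 * a * (qr - ql)  # Lax-Friedrichs numerical flux

    q[:, 1:-1] -= dt / dx * (f[:, 1:] - f[:, :-1])  # update interior cells
    q[:, 0], q[:, -1] = q[:, 1], q[:, -2]           # zero-gradient boundaries
    h, hu = q[0], q[1]
    t += dt

print(f"final depth range: [{h.min():.3f}, {h.max():.3f}] m")
```

First-order schemes of this kind are robust but diffusive; the adaptive, well-balanced, wetting-and-drying treatments surveyed in Section 4 are what separate such a toy from an operational tsunami code.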
Not So Shallow Equations
The next logical step is perhaps to maintain the assumption that differential advection is still small, and therefore that averages commute, but that vertical momentum is not as ignorable as before, i.e., the $O(\varepsilon^2)$ terms are retained. Instead, we might make some assumptions about the depth profile of $w$ and use that to derive a form of closure expression for the non-hydrostatic pressure. Here, we will give one example of such an analysis, where we assume that $w$ has a linear profile through the depth, along with the equations that arise due to this assumption.
First, if $w$ is linear in $z$, we can use the incompressibility condition to find $w(x, z, t) = w(x, t) - u_x z$, where the first term is independent of depth. The crux of these calculations is then the computation of $\bar{w}$ and $\bar{p}$. Leaving out the tedious details of these calculations but reporting the results, the final equations under these assumptions are

$$h_t + \left(hu\right)_x = 0, \qquad \left(hu\right)_t + \left(hu^2\right)_x = -h\eta_x - \left(h\bar{p}\right)_x - p|_{z=b}\, b_x,$$

where $\bar{p}$ follows from (12) under the linear-profile assumption and involves time and space derivatives of $h$ and $u$. Note that this complexity is a result of the attempt to recreate the 3D structure that we have already integrated out. It may seem counterintuitive that we would want to do this, but in fact schemes of this nature have been quite successful, namely the Boussinesq, Serre-Green-Naghdi, and other semi-2D types of equations.
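The benefit of these weakly dispersive corrections can be seen by comparing phase speeds against full linear (Airy) wave theory, for which $c^2 = (g/k)\tanh(kh)$; the shallow water model gives the non-dispersive limit $c^2 = gh$, and the classical weakly dispersive expansion corrects it to $c^2 \approx gh\left(1 - (kh)^2/3\right)$. A short sketch (standard linear wave theory; the depth and wavenumbers are illustrative):

```python
import numpy as np

g, h = 9.81, 100.0                       # depth chosen for illustration (m)
kh = np.array([0.1, 0.5, 1.0, 1.5])      # non-dimensional wavenumber k*h
k = kh / h

c_airy = np.sqrt(g * np.tanh(kh) / k)        # full linear (Airy) theory
c_sw = np.sqrt(g * h) * np.ones_like(kh)     # shallow water: non-dispersive
c_bsq = np.sqrt(g * h * (1 - kh**2 / 3))     # classical weakly dispersive form

for khi, ca, cs, cb in zip(kh, c_airy, c_sw, c_bsq):
    print(f"kh={khi:4.1f}: Airy {ca:6.2f}  SW {cs:6.2f}  Boussinesq {cb:6.2f} m/s")
```

For small $kh$ all three agree; as $kh$ grows, the shallow water speed overshoots and the low-order dispersive correction eventually degrades too, which is one quantitative way of delimiting the régimes of validity discussed in this section.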
Mathematical Conclusions
As we have seen, (1) can be greatly reduced in complexity by integration and appropriate assumptions. While this means that the 2D equations can easily recreate the extent of flooding, recreating the 3D velocity fields, and therefore the appropriate forces on structures, is difficult even for schemes that have been successfully implemented, such as the Boussinesq and Serre-Green-Naghdi models, leaving 3D Navier-Stokes models as the gold standard for impact questions. Furthermore, as we have shown, the assumptions we have made are often not as stringent as is commonly thought in the literature. The importance of this is that the shallow water 2D field does not preclude a complex horizontal velocity field; rather, its average must have the special property that it commutes, as mentioned above. This critical property prescribes exactly the compatibility condition that we need between a 2D shallow water model and a 3D Navier-Stokes model, whose field does not require this commutativity property.
Numerical Methods of Solution

Each numerical method comes with its advantages and disadvantages. Finite difference (FD) methods are the least costly, but they lack the geometrical flexibility of Galerkin and finite volume (FV) methods. Finite element/discontinuous Galerkin (FE/DG) and FV methods are optimal for grid refinement [32,56,100] and are geometrically flexible enough to handle complex coastlines; furthermore, FE, spectral element (SE), and DG methods are inherently well suited to massive parallelism [101]. Smoothed particle hydrodynamics (SPH) is good at tracking the free surface of the three-dimensional flow, but it is expensive, and it is still unclear how to treat boundary layers with it. Regardless of the numerical method of approximation, the numerical handling of the wet-dry interface to model inundation has improved throughout the years, as demonstrated in, e.g., [36][37][38][102][103][104][105][106]. It should also be noted that new methods in high-performance computing continue to push the envelope and will consequently lead to advances in tsunami science. This advancement will either be hampered by the same issues encountered in general geophysical turbulence modeling, or enjoy the benefits of embarrassingly parallel ensemble modeling, as is the case with probabilistic tsunami hazard assessment (PTHA).
While Equation (1) models turbulent incompressible flows, its solution on a computational grid using any of the methods above (except for SPH) requires some attention with respect to the treatment of the viscous stresses. If we could afford an infinitely fine computational grid, these could be solved directly; this approach is known as direct numerical simulation (DNS). However, even with the approaching era of exa-scale computing, DNS is much too costly to be viable for tsunami simulations (the number of grid points in a computational grid necessary for DNS scales as the (9/4)th power of the Reynolds number) [107]. Instead, by means of either a Reynolds averaging approach that gives rise to the RANS equations, or scale separation filtering for LES, turbulence can be modeled with sufficient precision at a drastically reduced computational cost with respect to DNS. RANS was first used to simulate tsunami run-up in the software COBRAS [108,109]. However, it took approximately one more decade for it to become a popular choice to model tsunami-generated turbulence. Examples of RANS-based tsunami models are described in [17,18,[110][111][112].
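The (9/4)-power scaling quoted above makes the cost of DNS concrete. For a few illustrative Reynolds numbers (chosen for the example, not measured values):

```python
# Grid points needed for DNS scale roughly as Re**(9/4).
for Re in (1e4, 1e6, 1e8):
    n = Re ** 2.25
    print(f"Re = {Re:8.0e}  ->  ~{n:.1e} grid points")
```

Even the modest $Re = 10^4$ case already demands on the order of $10^9$ points, and realistic coastal flows sit many orders of magnitude beyond that, which is why RANS and LES closures are unavoidable.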
In terms of solution cost, LES lies between DNS and RANS. Due to its computational cost, it is not popular among tsunami modelers, although it was recently considered an optimal choice to study breaking waves and run-up in highly unsteady dynamics (see, e.g., in [90]). As the era of exa-scale computing on hybrid CPU-GPU architectures fast approaches, we expect LES to become the model of choice among tsunami inundation modelers interested in the analysis of tsunami impact and coastal propagation, although it would be unreasonable to use either RANS or LES to model off-shore propagation where, instead, the 2D shallow water equations do a perfect job.
The free surface in a model based on the solution of (1) is typically accounted for in either one of two ways: by means of a volume of fluid approach (in, e.g., [112,113]) or by a level-set tracking of the water-air interface in a two-fluid model (in, e.g., [90,114]). Both of these approaches make it natural to handle the wet-dry front as the flow reaches land; when high order methods are utilized, this may not be the case for shallow-water equations whose solution is challenged in wet-dry regions for certain numerical approximations, as demonstrated by the continuous work of, for example, [36][37][38][103][104][105].
In summary, all the numerical methods of solution as well as the mathematical models have their use case and there is no one that can rule them all. Similarly, the adage "using the right tool for the job" holds here as much as it does when doing carpentry. It is important therefore to consider what metrics may lead to the decisions to choose one model over another. The most prominent metrics are listed as follows.
• Flow scale and régime. For example, are we modeling breaking waves in the vicinity of the shore or linear waves propagating in the ocean basin? Is turbulence an important factor?
• Complexity of the physics needed. Here the difference between using a wall boundary condition at the shore and doing true wetting and drying may be significant, as does the representation of true turbulent flow.
• Performance on the computing architecture being considered.
• Overall size of the problem considered. Is one interested in a single simulation or a large ensemble in order to account for uncertainty?
These are of course only some of the issues, but they highlight the difficulties faced and the need for hybrid solver techniques to bridge the gaps between the advantages and disadvantages of different solvers.
Towards a Multi-Scale Framework from Source to Impact
It seems plausible, given the ongoing research on all fronts from tsunami generation and propagation to fine-scale run-up, that tsunami modelers may soon join forces to build a unified, all-scale framework to model a full tsunami event from source to impact (see, e.g., in [27,115]). The all-scale idea is naïvely represented in Figure 5, where the tsunami's domain is subdivided into different zones as a function of the flow régime and complexity. The tsunami assumes different shapes during its lifespan from generation to impact; with time scales that range from seconds (100-1000 s at the source [50]) up to several hours during propagation and run-up/run-down, the flow is driven by both two-dimensional and three-dimensional dynamics across spatial scales from sub-meters to hundreds of kilometers. Due to such enormous scale variability in time and space, not all régimes can or should be described by the same physical model. While it may be or become technically possible, it is not sensible to solve the 3D Navier-Stokes equations for turbulent flows in the far-field where the shallow water equations are appropriate and inexpensive to solve. The wave propagates in the open ocean with a small amplitude, from a few centimeters up to several tens of centimeters (Ref. [50] and citations therein), which is always much smaller than the open ocean depth, which is, in turn, much smaller than the wavelength. Under these circumstances, the tsunami motion is well represented by the shallow water equations [116][117][118] for linear, quasi-linear, and quasi-dispersive waves. As the wave approaches land it decelerates, its amplitude increases, and its wavelength is drastically reduced; furthermore, it loses its linear, non-dispersive nature. In these conditions a non-linear model that describes turbulent breaking waves becomes necessary. The most complete such model is given by the 3D Navier-Stokes equations of incompressible flows. If the only metric of interest during run-up were the inundation level, the 2D shallow water equations would still provide relatively satisfactory results, but their limits become clear when the correct hydrodynamic forces on structures are to be evaluated.
To achieve a complete multi-scale model of the tsunami, different solvers may be coupled, as illustrated in Figure 5, in such a way that the 3D Navier-Stokes solver is forced at the boundaries of the shore-proximity area by the shallow water model of the tsunami moving off-shore [119,120]. At the source, the shallow water model is forced by an earthquake simulator or landslide model. Arguably the most difficult and still unclear part of a modeling infrastructure of this type lies in the coupling across models. Coupling across models is an open field of research of its own (see, e.g., in [121][122][123][124]). Parallel performance, data exchange, and time scale interactions are among the difficulties to be overcome in designing coupling algorithms across software packages. We envision a major effort to optimize software coupling and make it efficient for costly tsunami simulations, towards real-time modeling of a full tsunami event.
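To make the coupling idea concrete, the sketch below outlines a one-way coupling loop in which a shallow water solver forces a 3D near-shore solver through a compatibility layer. All class and method names here are hypothetical placeholders invented for illustration; they are not the API of any package in Appendix A:

```python
import numpy as np

class ShallowWaterModel:
    """Stand-in for a 2D shallow water solver (basin-scale propagation)."""
    def advance_to(self, t):
        self.t = t  # placeholder: a real solver would time-step the SWE here

    def sample_boundary(self, points):
        # Placeholder: return still water (h = 4000 m, u = 0) at each point.
        n = len(points)
        return np.full(n, 4000.0), np.zeros(n)

class NavierStokes3D:
    """Stand-in for a 3D near-shore solver (RANS or LES)."""
    def set_inflow(self, h, u_bar):
        self.bc = (h, u_bar)  # a real solver would rebuild a 3D inflow profile

    def advance_to(self, t):
        self.t = t  # placeholder for the expensive 3D time integration

def run_coupled(sw, ns3d, boundary_points, t_end, dt_exchange):
    """One-way coupling: the SW model forces the 3D model at fixed intervals."""
    t = 0.0
    while t < t_end:
        t = min(t + dt_exchange, t_end)
        sw.advance_to(t)                               # cheap global model first
        h, u_bar = sw.sample_boundary(boundary_points)
        ns3d.set_inflow(h, u_bar)                      # compatibility layer
        ns3d.advance_to(t)                             # expensive local model

run_coupled(ShallowWaterModel(), NavierStokes3D(),
            boundary_points=[(0.0, y) for y in range(10)],
            t_end=60.0, dt_exchange=5.0)
```

Two-way coupling, disparate stable time steps, and parallel data exchange are precisely where this simple loop breaks down, which is why coupling remains an open research problem.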
Conclusions
This article gives a concise review of the different approaches to tsunami modeling from generation to impact. From the perspective of a future comprehensive multi-scale modeling infrastructure to simulate a full tsunami, the article underlines the challenges associated with this approach and reviews the efforts that are underway to achieve this goal.
The current state of the field seems to indicate that the numerical tsunami modeling community is moving towards the high-fidelity modeling of tsunamis across scales, either by means of grid refinement or high order methods in the open ocean, or by means of a complete three-dimensional model of the flow in the near-shore region. We expect an increasing effort to couple different models and allow the interaction of an earthquake simulator with the large scale dynamics in the open sea and the fine scale dynamics in the coastal region, all within the same software infrastructure. While improved earthquake-tsunami modeling at the source, adaptive grid refinement, and ever faster and inexpensive hybrid high-performance computing (i.e., graphics processing units) have contributed to the advancement of tsunami modeling at large scales, we are now witnessing the very beginning of a new tsunami modeling era; the introduction of high-fidelity simulations of inundation and enhanced coupling of software packages across all scales is finally leading the way towards full tsunami forecast as envisioned by Synolakis et al. [91] fifteen years ago.
While it is difficult to estimate how long it will be before a full scale tsunami simulation can be run in real time (or faster) on a laptop, the pace at which the necessary tools are being developed is fast. At the risk of being too optimistic, the advent of exa-scale computing within the next decade and the ever-increasing presence of general purpose GPUs mounted on personal computers by default lead us to think that such a tool may be available within the next two decades.
Acknowledgments:
The authors would like to thank two anonymous reviewers whose comments and suggestions helped improve the final version of the manuscript.
Conflicts of Interest:
The authors declare no conflicts of interest.
Appendix A
Some of the most common tsunami simulators are listed in Table A1. The table is maintained in an open repository hosted at https://github.com/mandli/tsunami-models to allow developers and scientists to add their models.
IsdG and IsdI, Heme-degrading Enzymes in the Cytoplasm of Staphylococcus aureus*
Staphylococcus aureus requires iron for growth and utilizes heme as a source of iron during infection. Staphylococcal surface proteins capture hemoglobin, release heme from hemoglobin, and transport this compound across the cell wall envelope and plasma membrane into the bacterial cytoplasm. Here we show that Staphylococcus aureus isdG and isdI encode cytoplasmic proteins with heme-binding properties. IsdG and IsdI cleave the tetrapyrrol ring structure of heme in the presence of NADPH-cytochrome P450 reductase, thereby releasing iron. Further, IsdI complements the heme utilization deficiency of a Corynebacterium ulcerans heme oxygenase mutant, demonstrating in vivo activity of this enzyme. Although Staphylococcus epidermidis, Listeria monocytogenes, and Bacillus anthracis encode homologues of IsdG and IsdI, these proteins are not found in other bacteria or mammals. Thus, it appears that bacterial pathogens evolved different strategies to retrieve iron from scavenged heme molecules and that staphylococcal IsdG and IsdI represent examples of bacterial heme oxygenases.
In order to successfully colonize the mammalian host, bacterial cells must overcome a number of host defense systems, both innate and acquired. A primary obstacle to colonization is the lack of available iron. Most living microorganisms require iron in the range of 0.4-4.0 μM (1); however, in mammals, the concentration of free ionic iron is maintained at a level of about 10⁻⁹ M (2,3). Iron sequestration in host tissues is caused by a combination of factors, including the low solubility of iron at physiological pH (4), the intracellular location of iron (99.9% of total body iron is found within mammalian cells) (4), and the sequestration of this ion within the iron-binding glycoproteins transferrin and lactoferrin or heme-containing proteins such as hemoglobin. Utilization of hemoglobin as an iron source presumably requires bacterial binding of the hemoglobin polypeptide, removal and transport of the heme molecule, and opening of the heme porphyrin ring to remove the single iron atom. All identified enzymes capable of heme degradation are monooxygenases known as heme oxygenases. Heme oxygenases are ubiquitous in nature and are responsible for the oxidative degradation of heme to biliverdin, CO, and free iron (5)(6)(7). In mammals, this process is believed to be important in protecting the organism against oxidative damage (8-11); however, in bacteria heme degradation is essential in order to access the iron atom for use as a nutrient source. Bacterial pathogens representing more than fifteen genera are capable of utilizing heme as a sole iron source; however, a comparatively smaller number of bacterial heme-degrading enzymes have been identified (12)(13)(14)(15). Examining the currently available complete or incomplete bacterial genome sequences with BLAST homology searches, using known bacterial heme oxygenase enzymes as queries, does not reveal additional candidates for enzymes capable of oxidatively degrading heme (data not shown). This paucity of identifiable heme-degrading enzymes in bacteria that utilize heme as an iron source allows for speculation as to how these bacteria are accessing the iron atom of the heme porphyrin ring.
We have previously described a heme-uptake system in the pathogenic bacterium Staphylococcus aureus (16), which is encoded by a cluster of genes known as isd (iron-regulated surface determinants). This cluster encompasses three transcriptional units, isdA, isdB, and isdCDEFsrtBisdG, specifying three cell wall-anchored proteins capable of binding hemin: IsdA, IsdB, and IsdC. In addition, IsdB binds hemoglobin with characteristics resembling receptor-ligand interactions. IsdD (a membrane protein), IsdE (a lipoprotein ATPase), and IsdF (a polytopic transmembrane protein) each display homology to Gram-positive and Gram-negative heme-iron transporters, whereas SrtB functions as a sortase and is responsible for anchoring IsdC to the cell wall (17). The product of the last gene within the polycistronic isdCDEFsrtBisdG transcript, IsdG, is hypothesized to reside in the cytoplasm. In this report we describe the purification of IsdG and its intrachromosomal paralogue IsdI. We show that sequence signatures place these proteins in a family of known monooxygenases, and we demonstrate that IsdG and IsdI bind hemin with characteristics consistent with known heme-degrading enzymes. In addition, we show that both IsdG and IsdI are capable of degrading the heme macrocyclic porphyrin ring, with subsequent release of free iron for use by the pathogen as a nutrient source. Analysis of the reaction products of IsdG- and IsdI-mediated heme degradation reveals compounds that are chromatographically similar to those produced by the mammalian HO-1 heme oxygenase. Finally, IsdI is shown to complement the heme utilization defect of a C. ulcerans heme oxygenase mutant (18).
EXPERIMENTAL PROCEDURES
General Methods-Western blotting, transformations, plasmid purification, and subcloning were carried out as described previously (19). Deionized double-distilled water was used for all experiments. Oligonucleotides were synthesized by Integrated DNA Technologies. Antibodies were created by injection of purified protein emulsified in complete Freund's adjuvant (day 7 injection) or incomplete Freund's adjuvant (two subsequent injections) into female New Zealand rabbits.
Bacterial Strains and Growth Conditions-Escherichia coli strain DH5α [F⁻ ara Δ(lac-proAB) rpsL Φ80dlacZΔM15 hsdR17] was used for DNA manipulation, and E. coli strain BL21(DE3) [F⁻ ompT hsdSB (rB⁻ mB⁻) gal dcm (DE3)] was used for the expression of isdG and isdI. isdG and isdI were amplified with PCR using S. aureus strain Newman chromosomal DNA as a template. S. aureus strains RN6390 and Newman were used as parent strains for the inactivation of isdG. C. ulcerans strains CU712 (18) and CU29 were obtained from the strain collection of Michael P. Schmitt. C. ulcerans strains were grown in heart infusion broth (Difco, Detroit, MI) with 0.2% Tween 80 (HIBTW). Ampicillin (100 μg/ml), kanamycin (50 μg/ml), and chloramphenicol (30 μg/ml for E. coli and 2 μg/ml for C. ulcerans) were added to the media as required. All strains were incubated at 37°C.
Construction of Vectors-To create vectors for the expression of isdG and isdI, the complete isdG and isdI coding sequences were PCR-amplified. The amplified DNA fragments were cloned into pCR2.1 (Invitrogen), and successful transformants were sequenced for accuracy. Inserts containing the correct sequence were subcloned into pET15b (Novagen). To create vectors for use in the heme utilization assay, orfXisdI was amplified by PCR using IsdI5BglIIB (AAAGATCTCGTAAAATGCGTTAATGGGACAAG) and IsdI3BamHI (AAGGATCCTCAGACAAGCCGGATGAATTC), and cloned into pCR2.1 (Invitrogen). Inserts were excised from pCR2.1 with BglII and BamHI for directional cloning into pCGL0243 (20), creating pEPS10. The corynebacterial expression vector pCM2.6, and pCD293 (pCM2.6 containing the cloned C. diphtheriae heme oxygenase gene hmuO), were generous gifts of Michael P. Schmitt.
Expression and Purification of IsdG and IsdI-E. coli BL21(DE3) strains carrying either pET15BisdG or pET15BisdI were grown overnight at 37°C in Luria-Bertani medium containing 100 μg/ml ampicillin. The cells were subcultured into fresh medium and grown at 37°C to mid-log phase. At this time, target gene expression was induced using 1 mM isopropyl-1-thio-β-D-galactopyranoside. Cell growth was continued for three hours at 30°C, and the cells were harvested by centrifugation (10,000 × g for 15 min). Cells were lysed using a French press in 50 mM Tris-HCl (pH 7.5), 150 mM NaCl containing 100 μM phenylmethylsulfonyl fluoride. The cell suspension was centrifuged at 100,000 × g for 60 min, and the soluble fraction was applied to a Ni-NTA column pre-equilibrated with 50 mM Tris-HCl (pH 7.5), 150 mM NaCl. The column was washed with 20 volumes of 50 mM Tris-HCl (pH 7.5), 150 mM NaCl, followed by a second washing with 30 volumes of 50 mM Tris-HCl (pH 7.5), 150 mM NaCl containing 10% glycerol and 10 mM imidazole. The protein was then eluted in 50 mM Tris-HCl (pH 7.5), 150 mM NaCl containing 50 mM imidazole, and fractions were dialyzed against 50 mM Tris-HCl (pH 7.5), 150 mM NaCl. The purified proteins were stored at −20°C.
Fractionation of Staphylococci-S. aureus were grown to mid-log phase and sedimented by centrifugation at 13,000 × g for 5 min. The supernatant was collected, precipitated with ice-cold trichloroacetic acid, and washed with ice-cold acetone (supernatant fraction). The resulting pellet was suspended in TSM (100 mM Tris-HCl (pH 7.0), 500 mM sucrose, 10 mM MgCl₂) and incubated in the presence of 100 μg of lysostaphin for 60 min at 37°C. The resulting protoplasts were sedimented at 13,000 × g for 15 min, and the supernatant was collected and precipitated in trichloroacetic acid/acetone (cell wall fraction). The pellet was suspended in membrane buffer (50 mM Tris-HCl (pH 7.0), 10 mM MgCl₂, 60 mM KCl) and subjected to five rounds of freeze-thaw in a dry ice/ethanol bath to lyse the protoplasts. The membranes were sedimented by ultracentrifugation at 200,000 × g for 30 min, and the cytoplasm and membrane fractions were precipitated with trichloroacetic acid and acetone. Acetone-washed precipitates were suspended in sample buffer, separated by 15% SDS-PAGE, and analyzed by immunoblotting.
Reconstitution of IsdG and IsdI with Hemin-The heme-IsdG and heme-IsdI complexes were prepared as described previously for heme-heme oxygenase complexes (13,22). Hemin was added to purified protein at a 3:1 heme:protein ratio. The sample was applied to a nickel-nitrilotriacetic acid-agarose column pre-equilibrated with 50 mM Tris-HCl (pH 7.5), 100 mM NaCl. The column was then washed with the same buffer (30 volumes), and the protein was eluted in 500 mM imidazole. The fractions containing the heme-protein complexes were pooled and dialyzed against 50 mM Tris-HCl (pH 7.5), 100 mM NaCl.
Absorption Spectroscopy-All absorption spectra were obtained using a Varian Cary 50BIO. Hemin binding studies were carried out by difference absorption spectroscopy in the Soret region. Aliquots of hemin (0.1-25 μM) were added to both the sample cuvette (10 μM of either IsdG or IsdI) and the reference cuvette at 25°C. Spectra were recorded 5 min after the addition of hemin. The millimolar extinction coefficient was determined using the pyridine hemochrome method (23).
Hemin Degradation Assays-Iron release assay: to measure the release of free iron from hemin, 10 μM of IsdG or IsdI and a [⁵⁵Fe]hemin mixture (0.4 nM [⁵⁵Fe]hemin (RI Consultants) per 50 μl of 2% bovine serum albumin) were placed into 50 mM HEPES (pH 7.4), 1 mM EDTA to a final volume of 1 ml, and incubated at 30°C for 30 min. 10 μl of 1 mM unlabeled hemin in 2% bovine serum albumin was added to stop the reaction, and the sample was vortexed. Trichloroacetic acid was added to the sample at a concentration of 7.5%, followed by incubation on ice for thirty minutes. The resulting precipitate was sedimented by centrifugation at 13,000 × g at 4°C for 15 min. 750 μl of the supernatant was withdrawn and added to 5 ml of scintillation fluid. The amount of ⁵⁵Fe was determined using a scintillation counter (Beckman LS600K). Reaction with ascorbate: ascorbic acid-dependent degradation of heme was monitored spectrophotometrically as previously described (14). IsdG-hemin (10 μM) and IsdI-hemin (10 μM) in 50 mM Tris-HCl (pH 8.0) were incubated with ascorbic acid at a final concentration of 10 mM. The spectral changes between 300 and 800 nm were recorded every 5 min. The products of the reaction were extracted and subjected to HPLC as described below. Reaction with NADPH P450 reductase: the reaction of IsdG-hemin and IsdI-hemin in the presence of human NADPH-cytochrome P450 oxidoreductase (recombinant enzyme from Spodoptera frugiperda, Calbiochem) was similar to that previously described (13,14). Human cytochrome P450 oxidoreductase was added to the IsdG-heme and IsdI-heme complexes (10 μM) at a reductase:Isd ratio of 0.3:1 in a final volume of 1 ml of 50 mM Tris-HCl (pH 8.0). The reaction was initiated by the addition of NADPH in 10 μM increments to a final concentration of 100 μM. The spectral changes between 300 and 800 nm were monitored. Following completion of the reaction, the reaction products were extracted and subjected to HPLC as described below. Reaction in the presence of catalase: purified recombinant catalase from Aspergillus niger (Sigma) was added to all reaction cuvettes at a catalase:hemoprotein ratio of 0.5:1 immediately before the addition of either reductant or reductase.
HPLC of the IsdG and IsdI Reaction Products-Following the reaction of the heme-Isd complexes with NADPH-cytochrome P450 oxidoreductase or ascorbate, 200 μl of glacial acetic acid and 200 μl of 3 M HCl were added to quench the cleavage reaction. Subsequently, the reaction mixture was extracted with 1.5 ml of chloroform. The organic layer was washed three times with 1 ml of distilled water, and the chloroform layer was removed under a stream of nitrogen. The resultant residue was dissolved in 800 μl of 85:15 (v/v) methanol:water prior to HPLC analysis. The samples were analyzed by reverse-phase HPLC on a Thermo Hypersil C18 Aquasil column (Keystone Scientific Operations), using a Beckman Coulter System Gold HPLC machine, eluted with 85:15 (v/v) methanol:water at a flow rate of 0.2 ml/min.
In Vivo Heme Utilization Assay-To determine the ability of C. ulcerans strains to utilize heme, a plate assay modified from that developed by Michael P. Schmitt (18) was employed. Briefly, C. ulcerans strains (CU712(pCM2.6), CU29(pCM2.6), CU29(pCD293), CU29(pEPS10)) were grown in HIBTW at 37°C overnight under the appropriate antibiotic selection. The following day, ~10⁷ bacteria were plated onto the surface of HIBTW agar medium, HIBTW agar medium containing 200 μg/ml EDDA, or HIBTW agar medium containing 200 μg/ml EDDA and 1 μM heme. In the absence of added heme, 200 μg/ml EDDA completely inhibits the growth of all strains tested. After 28 h of incubation at 37°C, the number of colony-forming units on HIBTW plates containing EDDA and heme was divided by the number of colony-forming units on HIBTW plates alone, and is presented as "hemin growth efficiency."
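Because the metric is a simple ratio of colony-forming unit counts, it reduces to a one-line computation; the counts below are hypothetical values for illustration, not data from this study:

```python
def hemin_growth_efficiency(cfu_edda_heme, cfu_plain):
    """Ratio of CFU on EDDA+heme plates to CFU on plain HIBTW plates."""
    return cfu_edda_heme / cfu_plain

# Hypothetical plate counts for two strains (illustrative only):
print(hemin_growth_efficiency(8.2e6, 1.0e7))  # ~0.82: heme rescues growth
print(hemin_growth_efficiency(3.0e4, 1.0e7))  # ~0.003: heme utilization defect
```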
Identification and Genomic Context of IsdG and IsdI-The iron atom of heme is used as a nutrient source by pathogenic bacteria capable of infecting vertebrates (15). Analysis of all available complete and incomplete bacterial genomes using previously identified bacterial heme oxygenase sequences (12)(13)(14)(18) reveals that putative heme oxygenase enzymes are identifiable in only five genera: Corynebacterium, Neisseria, Pseudomonas, Agrobacterium, and Streptomyces (data not shown). We have recently identified a cluster of three operons in S. aureus containing numerous iron-regulated genes, encoding a heme uptake apparatus (16) (Fig. 1A). The genes involved in the transport of the porphyrin ring of heme into the bacterial cytoplasm have been identified; however, the mechanism whereby staphylococci access the iron atom contained within the porphyrin macrocycle has yet to be determined. We hypothesized that a gene within the isd gene cluster predicted to encode a cytoplasmic protein, isdG, is involved in the degradation of heme (Fig. 1A) (the amino acid sequence of this protein can be accessed through the NCBI Protein Database under NCBI Accession NP_371660). To establish the subcellular location of IsdG, staphylococcal cultures were fractionated into four compartments (extracellular medium, cell wall envelope, plasma membrane, and cytoplasm), and specific polypeptides were detected by immunoblotting (Fig. 1B). As expected, IsdG was found only in the staphylococcal cytoplasm. As a control, the lipoprotein IsdE was observed in the plasma membrane, whereas IsdB, the sortase A-anchored surface protein, was located in the cell wall fraction. Small amounts of a faster-migrating IsdB species were detected in the medium, suggesting that IsdB degradation products may be released from the staphylococcal surface.
After deletion of the isdG gene by allelic replacement, the Δ(isdG) mutant strain S. aureus EPS1 failed to produce anti-IsdG immunoreactive species (Fig. 1C), indicating that the cytoplasmic protein identified in Fig. 1 indeed represents IsdG. S. aureus encodes an isdG paralogue, isdI, located outside of the isd locus (2.756 Mb of the genome) in a bicistronic transcriptional unit at 1.834 Mb (Fig. 1A) (the amino acid sequence of this protein can be accessed through the NCBI Protein Database under NCBI Accession NP_370689). The deduced IsdG protein is a 107-amino acid molecule with a calculated molecular weight of 12,545 Da and a calculated pI of 6.61, while the deduced IsdI protein is a 109-amino acid protein with a calculated molecular weight of 12,790 Da and a calculated pI of 4.8. IsdG and IsdI are 78% similar at the amino acid level, and both proteins demonstrate little identity to the previously identified heme oxygenases, a group of enzymes capable of degrading heme (data not shown). IsdG has a histidine residue at position 27, a potential candidate corresponding to the established proximal ligand His-25 in mammalian HO-1 and His-20 in bacterial HmuO (24,25). Furthermore, Pfam analysis (26) associated IsdG and IsdI with the ABM family of monooxygenases, enzymes from Streptomyces that catalyze the oxidation of aromatic polyketides. This enzyme family is unique in that its members catalyze the oxygenation of various compounds without the need for prosthetic groups or cofactors typically associated with the activation of molecular oxygen (27). Upstream of the first gene in the isdG-containing transcriptional unit, and immediately upstream of isdI, are canonical Fur boxes (Fig. 1A), the consensus nucleotide sequence to which the Fur repressor is capable of binding (28), implying that these genes are iron-regulated. Iron regulation of IsdG was verified experimentally by immunoblot (data not shown). The genomic context surrounding these two genes reveals isdG localized at the end of a heme uptake operon, while isdI is immediately downstream of a gene of unknown function. The genetic linkage to a heme uptake system, confirmed iron regulation, cytoplasmic localization, and inclusion in a monooxygenase family of enzymes led us to hypothesize that IsdG and IsdI might represent a new family of heme-degrading monooxygenases in pathogenic bacteria.
Expression of IsdG and IsdI-Both isdG and isdI were expressed as six-histidyl-tagged proteins under the control of the T7 polymerase promoter in E. coli and purified by affinity chromatography on nickel-nitrilotriacetic acid-agarose, yielding distinct protein bands migrating at ~12.5 kDa on SDS-PAGE (Fig. 2, A and B). This size corresponds well with the predicted sizes of IsdG and IsdI. Typical purifications produced ~35 mg of protein per liter of E. coli BL21(DE3). Notably, E. coli cultures overexpressing IsdG or IsdI exhibited a bright yellow color as compared with the darker yellow color of cultures containing the pET15b expression vector alone (data not shown).
FIG. 1. Sequence analysis and subcellular localization of IsdG and IsdI.
A, genomic organization of the isd locus with three transcriptional units, isdA, isdB, and isdCDEFsrtBisdG, that are controlled by Fur through conserved DNA sites called Fur boxes. A bicistronic operon encoding isdI is also controlled by Fur but is located elsewhere on the chromosome of S. aureus. The nucleotide sequences of the predicted Fur boxes of isdG and isdI are shown in comparison with the S. aureus Fur box consensus sequence. Immunoblotting of subcellular fractions of S. aureus strains (B) Newman and (C) EPS1 [Δ(isdG)] grown under iron-starved conditions. Culture medium (MD), cell wall (CW), membrane (M), and cytoplasmic (C) fractions are shown. Antibodies raised against purified IsdB, IsdE, and IsdG were used for chemiluminescent detection.
Properties of the Heme-IsdG and Heme-IsdI Complexes-Spectral analysis of purified IsdG and IsdI in the range of 400-600 nm did not reveal absorption signals suggestive of hemin binding (Fig. 3). Reconstitution of both IsdG and IsdI with heme at pH 8.0 generated optical absorption spectra containing a Soret band at ~412 nm and α/β bands at ~567 and ~532 nm (Fig. 3, A and B). These spectral properties are consistent with those of known heme-binding proteins, albeit that the Soret band is at a slightly higher wavelength than in previously identified heme oxygenases in which the proximal ligand is a histidine residue (13, 29-31). The presence of the α/β bands at pH 8.0 implies that the heme-IsdG and heme-IsdI complexes exist as a six-coordinate, low-spin system at alkaline pH, which is consistent with the mammalian and bacterial heme oxygenases (32). Incremental addition of hemin to both IsdG and IsdI allows for the visualization of the stoichiometric complex of these proteins with hemin. Because the Soret peak of the heme-protein complexes differs from that of free heme alone at neutral pH, spectrophotometric titration of IsdG and IsdI was carried out utilizing this difference. Incremental addition of hemin to both IsdG (10 μM) and IsdI (10 μM) produced a distinct inflection point at 10 μM hemin, revealing a 1:1 stoichiometric relationship between protein and heme (Fig. 3). These data were used to calculate molecular affinities using Michaelis-Menten kinetics. IsdG bound hemin with a Kd of 5.0 ± 1.5 μM, and IsdI bound hemin with a Kd of 3.5 ± 1.4 μM. The pyridine hemochrome method (23) was used to determine the millimolar extinction coefficients: 131 mM⁻¹ cm⁻¹ for IsdG and 126 mM⁻¹ cm⁻¹ for IsdI. These values for the extinction coefficient and dissociation constant are in a range consistent with known heme degradation enzymes (Table I). Taken together, these results reveal that IsdG and IsdI bind heme in a manner resembling known heme-degrading enzymes.
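Titrations of this kind, in which the dissociation constant is comparable to the protein concentration, are typically fit with the exact quadratic 1:1 binding isotherm rather than a simple hyperbola. The sketch below fits synthetic difference-absorbance data to illustrate the procedure; the data points, noise level, and fitted values are invented for illustration and are not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

P = 10.0  # total protein concentration (uM), as in the titration described

def binding(L, kd, dA_max):
    """Exact 1:1 binding isotherm (quadratic solution), appropriate when
    ligand depletion matters, i.e. Kd is comparable to [protein]."""
    b = P + L + kd
    PL = (b - np.sqrt(b * b - 4.0 * P * L)) / 2.0  # bound complex (uM)
    return dA_max * PL / P

# Synthetic titration data: hemin added (uM) -> difference absorbance.
L = np.array([0.5, 1, 2, 4, 6, 8, 10, 15, 20, 25], dtype=float)
rng = np.random.default_rng(0)
dA = binding(L, 4.0, 0.12) + rng.normal(0, 0.002, L.size)

popt, _ = curve_fit(binding, L, dA, p0=(1.0, 0.1),
                    bounds=([0.0, 0.0], [np.inf, np.inf]))
print(f"fitted Kd = {popt[0]:.1f} uM (synthetic truth: 4.0 uM)")
```

The inflection point reported in the text is the complementary, model-free observation: when binding is tight relative to the concentrations used, the difference absorbance rises nearly linearly and breaks at a ligand:protein ratio equal to the stoichiometry.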
IsdG- and IsdI-mediated Heme Degradation and Iron Release-Initially, to determine if IsdG and IsdI release iron from heme via cleavage of the macrocyclic porphyrin, we measured the ability of these enzymes to liberate radiolabeled [55Fe]iron from heme (Table I) (16). To determine if iron release from heme occurred concomitantly with opening of the porphyrin macrocycle, we utilized optical absorption spectroscopy to monitor the IsdG- and IsdI-mediated degradation of heme. In the presence of a suitable electron donor such as ascorbate or NADPH-cytochrome P450 reductase (22), heme oxygenases catalyze the oxidative degradation of heme first to α-meso-hydroxyheme, followed by verdoheme, and finally biliverdin, carbon monoxide (CO), and iron (5, 6). Incubation of the IsdG-heme complex with NADPH-cytochrome P450 reductase in the presence of NADPH produced a UV spectrum of hemin cleavage products indistinguishable from those produced by the heme oxygenase from rat (HO-1) (Fig. 4, A and B). To characterize this reaction further, IsdG-heme and IsdI-heme were incubated with NADPH-cytochrome P450 reductase, and upon the addition of NADPH, the degradation of heme was monitored spectrophotometrically every 5 min over the course of 1 h. This reaction led to a slight increase in wavelength of the Soret peak from 412 to 414 nm, and an almost complete elimination of the Soret and α/β peaks of the protein-heme complex, consistent with degradation of the heme tetrapyrrole (Fig. 4, C and E). The 340-nm band of NADPH increases upon addition of 10 μM increments of NADPH and subsequently decreases in proportion to the decrease of the Soret band. Over time, the Soret maximum decreased further and shifted back toward 400 nm with complete disappearance of the α/β bands. Furthermore, visualization of the reaction-containing cuvette revealed a change in color from brown-red to bright yellow-green over time. Substitution of ascorbate for NADPH-cytochrome P450 reductase as an electron donor led to similar color changes in the cuvette, and a more pronounced decrease in the Soret band of the protein-heme complex (Fig. 4, D and F). In addition, optical absorption spectroscopy demonstrated the formation of different reaction products upon ascorbate-catalyzed heme degradation as compared with the reaction performed with reductase. More specifically, a shoulder at 395 nm appears early in the reaction, likely indicative of verdoheme formation. At the later time points, broad bands centered near 380 nm and 600-700 nm appear, consistent with biliverdin formation. The discontinuity of the tracings at the early time points suggests that the ascorbate-driven reaction initiates quickly, with later steps in the reaction occurring less rapidly.
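The spectral time course described above (scans every 5 min over 1 h) can be summarized by fitting the Soret decay to a first-order model. A sketch with synthetic data follows; the rate constant and absorbances are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order decay of the Soret band during heme degradation.
def soret_decay(t_min, A0, k, offset):
    return A0 * np.exp(-k * t_min) + offset

t = np.arange(0, 65, 5)  # minutes, matching the 5-min scan interval
# Synthetic absorbance trace at ~412 nm with a little noise (not real data).
A412 = 0.50 * np.exp(-0.06 * t) + 0.05
A412 += np.random.default_rng(0).normal(0, 0.005, t.size)

(A0, k, c), _ = curve_fit(soret_decay, t, A412, p0=[0.5, 0.05, 0.05])
print(f"apparent first-order rate: {k:.3f} per min (illustrative)")
```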
Non-enzymatic oxidation of heme has been reported for certain heme-binding proteins such as myoglobin, cytochrome b5, and cytochrome b562 (33-35). This coupled binding and oxidation of heme requires exogenous hydrogen peroxide, and therefore can be inhibited by catalase (34, 36). To distinguish between coupled oxidation of heme and IsdG- and IsdI-mediated enzymatic degradation of heme, we repeated the above heme degradation reactions in the presence of purified catalase. Purified catalase did not inhibit the ability of IsdG or IsdI to degrade heme using either NADPH-cytochrome P450 reductase or ascorbate as an electron donor (Fig. 4). To further distinguish IsdG- and IsdI-mediated heme degradation from coupled oxidation, pyridine was added to the reaction products. Coupled oxidation of heme produces a verdoheme end product, whose regioisomers have distinct peaks between 640-680 nm in the presence of pyridine (37, 38). Pyridine extraction of the reaction products from all IsdG- and IsdI-mediated heme degradation reactions did not produce a spectrum typical of a pyridine-verdohemochrome (data not shown). Thus, IsdG and IsdI appear to catalyze enzymatic cleavage of the heme tetrapyrrole ring in the presence of NADPH-cytochrome P450 reductase. If so, IsdG, IsdI, and rat heme oxygenase should generate similar reaction products. Upon completion of the NADPH-cytochrome P450 reductase-catalyzed degradation of heme, the reaction products were extracted with chloroform and subjected to high-performance liquid chromatography on a C18 column. For chromatography calibration, hemin eluted from the column at 20 min (85:15 MeOH:H2O), whereas biliverdin, the reaction product of rat heme oxygenase-mediated heme degradation, eluted at 10 min. The reaction products of both IsdG- and IsdI-mediated cleavage of hemin also eluted at 10 min upon HPLC analysis (Fig. 5), suggesting that the staphylococcal heme oxygenases indeed generate reaction products similar to those of the mammalian enzyme.
In Vivo Activity of IsdI-Complementation experiments using isdG are complicated by the location of this gene at the end of a large transcriptional unit (Fig. 1A); therefore, isdI was chosen for complementation experiments. To demonstrate in vivo heme degradation by IsdI, we measured the ability of IsdI expressed in trans to complement the heme utilization deficiency of the previously characterized C. ulcerans heme oxygenase mutant, CU29 (18). To ensure appropriate expression of isdI, orfXisdI and ~100 bp upstream of orfX were cloned into a corynebacterial expression vector, creating pEPS10. Expression of pEPS10-encoded IsdI in C. ulcerans led to a decrease in colony size as compared with wild-type (data not shown) on HIBTW medium, implying that expression of orfXisdI in C. ulcerans causes low-level toxicity. However, pEPS10 restores the ability of CU29 to utilize heme as a sole iron source in the presence of iron chelator (Fig. 6, A and B). This restoration of growth reaches levels similar to those of the previously characterized C. ulcerans heme oxygenase mutant complemented with the Corynebacterium diphtheriae heme oxygenase gene, hmuO [CU29(pCD293)] (18). These results demonstrate that IsdI has an in vivo role in heme degradation and subsequent iron release for use by the bacterium as a nutrient source.
DISCUSSION
Work in Gram-negative bacteria has revealed numerous ways whereby these organisms capture heme and heme-containing molecules and transport these compounds across the double-membrane envelope. Receptors for heme (39-41) and heme-containing proteins such as hemoglobin (42, 43), haptoglobin (44), and hemopexin (45) have been identified. Additionally, numerous Gram-negative microbes are capable of producing extracellular binding proteins that shuttle heme-containing molecules to outer membrane receptors. These molecules, also known as hemophores, are capable of extracting heme from heme-containing proteins such as hemoglobin, with subsequent delivery to specific outer membrane receptors (39, 46). Furthermore, extracellular proteases that degrade heme-containing host proteins are capable of releasing heme and making it available to the bacteria (47, 48). Little is known about how and to what extent Gram-positive pathogens utilize heme and how this compound is transported into bacterial cells. Recently, the first Gram-positive heme transport system involving the utilization of hemoglobin as an iron source was identified in C. diphtheriae (49). Additionally, a Streptococcus pyogenes cell surface protein that associates with heme has been identified (50). We have recently described a system of cell wall-anchored proteins involved in the binding and transport of heme in S. aureus (16). Although systems involved in heme uptake in Gram-positive bacteria are beginning to be identified, little characterization has been performed on the mechanism and proteins involved in this process. In addition, the mechanism by which most bacteria capable of utilizing heme as a sole iron source access the iron atom of the heme porphyrin ring remains a mystery. Published reports describe heme oxygenases in C. diphtheriae (13), Pseudomonas aeruginosa (12), and the pathogenic Neisseria (14); however, searches of all available finished and unfinished microbial genomes reveal potential homologues to these enzymes in only two other genera, Agrobacterium and Streptomyces (data not shown).
This is surprising, since over 20 different pathogenic species of bacteria are reportedly capable of utilizing heme as a sole iron source (15). In this study we describe two proteins from S. aureus that have sequence signatures placing them in a family of monooxygenases involved in oxidation of aromatic intermediates (27), a process consistent with heme degradation. Purified IsdG and IsdI are both able to bind heme in a 1:1 ratio, exhibiting binding characteristics consistent with known heme oxygenases. IsdG was shown to be cytoplasmically localized, suggesting that heme degradation in S. aureus occurs in the cytoplasmic compartment. Both enzymes are capable of degrading the heme macrocyclic ring in the presence of a suitable electron donor, as observed spectrophotometrically. This degradation has characteristics similar to the heme degradation reaction carried out by mammalian heme oxygenases. Furthermore, the reaction products produced upon IsdG- and IsdI-mediated heme oxidation appear to include biliverdin, the reaction product of the paradigmatic heme oxygenase reaction (5, 7). We have shown that incubation with IsdG or IsdI leads to a significant increase in the amount of free iron released from heme, likely providing S. aureus with a source of iron during infection. Finally, expression of isdI in trans complements the heme utilization defect of a C. ulcerans heme oxygenase mutant.
It has previously been reported that some heme is degraded when bound to certain hemoproteins through a process known as coupled oxidation (33, 34, 51). This process is a non-enzymatic degradation of the heme molecule, whose biological relevance is unknown. Coupled oxidation of heme by myoglobin utilizes an exogenous peroxide source, and as such can be inhibited by the addition of catalase. When added at a molar ratio at or greater than one-tenth of an equivalent of hemoprotein, catalase was shown to inhibit myoglobin-mediated coupled oxidation (34). IsdG- or IsdI-mediated heme degradation in the presence of NADPH-cytochrome P450 reductase or ascorbate was not inhibited by catalase at a ratio of 0.5:1.0 (catalase to hemoprotein). Furthermore, coupled oxidation of heme typically leads to the formation of verdoheme, which remains associated with the hemoprotein (35). Upon addition of pyridine to the hemoprotein-verdoheme complex, verdoheme becomes dislodged from the hemoprotein and associates with pyridine, forming a strong pyridine-verdohemochrome spectrum. This spectrum was not observed upon pyridine extraction of the reaction products from the IsdG- or IsdI-mediated heme degradation. Taken together, these results imply that the observed degradation of heme in our study is enzymatic, and not due to coupled oxidation of the heme molecule.
Previously, we presented the first description of a heme uptake system identified in S. aureus, known as the iron-regulated surface determinant (Isd) system (16). These genes are found clustered together in the S. aureus genome in three iron-regulated transcriptional units. IsdA and IsdB are individually transcribed and sorted to the cell wall in a sortase A (srtA)-dependent manner. In a third operon is IsdC, a protein sorted to a distinct portion of the cell wall by sortase B (srtB). Downstream from IsdC in the same transcriptional unit is a membrane transport system consisting of IsdD (a membrane protein), IsdE (a hemin-binding lipoprotein), and IsdF (a heme permease), followed by SrtB and IsdG. The gene encoding IsdI, a paralogue of IsdG, exists at an alternate location in the chromosome of S. aureus. Finally, IsdH, a separate iron-regulated cell wall-sorted protein with significant identity to IsdB, has also been identified in a region outside of the isd cluster (16). IsdH, also known as HarA, has recently been reported to be a haptoglobin-hemoglobin receptor (44). Our model envisions an elaborate iron acquisition system in S. aureus involving numerous gene products. Initially, upon infection, S. aureus produces a number of toxins, including hemolysins capable of lysing red blood cells. This leads to the release of hemoglobin from erythrocytes for use as a potential iron source. The lack of available iron in humans leads to transcription of the isd genes upon liberation of Fur-mediated repression. IsdB and IsdA are then sorted to the cell wall by sortase A, concomitant with IsdC sorting to a different portion of the cell wall by SrtB. Free hemoglobin binds to IsdB, with subsequent removal of the heme molecule in an IsdB- and IsdA-dependent manner. The heme molecule is then passed to the IsdC cell wall transport protein, with subsequent movement through the membrane transport system composed of IsdDEF. Upon entry into the cytoplasm, IsdG and IsdI carry out oxidative degradation of the heme molecule, leading to the release of free iron for use as a nutrient. As inactivation of individual components of the Isd heme transport system does not inhibit growth on heme as a sole iron source as efficiently as a cell wall sorting mutant, it is likely that additional heme utilization proteins remain to be identified in the genome of S. aureus. This finding underlines the redundant nature of iron acquisition in S. aureus and reveals the ability of this organism to acquire iron from various sources.
Numerous bacterial pathogens are capable of utilizing heme as a sole iron source, and many of the uptake systems responsible for acquiring heme have been described (39). Comparatively fewer descriptions exist of enzymes responsible for the removal of the iron atom from the porphyrin ring of heme (12-14, 18). In fact, there is a marked absence of identifiable heme degradation enzymes in the sequenced genomes of bacteria. The identification of IsdG and IsdI from S. aureus adds to the list of heme-degrading enzymes present in bacterial pathogens. The lack of identifiable sequence identity between IsdG or IsdI and other bacterial heme-degrading enzymes implies that IsdG and IsdI are members of a novel family of heme-degrading enzymes. The presence of homologues of IsdG and IsdI in the genomes of Bacillus anthracis, Staphylococcus epidermidis, and Listeria monocytogenes suggests a conserved method of heme degradation in these organisms. Furthermore, the unique nature of these enzymes, combined with their presence in important human pathogens and the vital role of iron acquisition in successful infection, implies that this new family of heme-degrading enzymes may represent a novel target for antimicrobial therapy. This idea is further supported by the antimicrobial potential of porphyrin compounds against staphylococci (52, 53).
Molecular Chaperones as Prognostic Markers of Neuroblastoma
Introduction
Neuroblastoma (NB) is a childhood tumor derived from the sympathoadrenal lineage of neural crest progenitor cells, and is the most common malignant disease of infancy, with 96% of cases occurring before the age of 10 (Gurney et al., 1995; Maris and Matthay, 1999). Neuroblastoma cells exhibit characteristics of undifferentiated cells and often metastasize to distant organs (Maris and Matthay, 1999; Maris et al., 2007). Approximately 60% of patients diagnosed with NB display stage IV disease and a very poor prognosis. The 5-year survival rate of NB patients is no more than 30%, even with aggressive therapy (Nishihira et al., 2000). As a result, 50% of NB patients die from this disease, which continues to be one of the most difficult challenges among pediatric tumors. NB is quite a heterogeneous tumor and presents a broad clinical and biologic spectrum, ranging from highly undifferentiated tumors with very poor outcomes to the most differentiated benign ganglioneuroma, or NBs with a high probability of spontaneous regression and hence favorable prognosis. The clinical presentation of NB can be categorized into three distinct patterns based on the tumor histology: (i) life-threatening progression; (ii) maturation to ganglioneuroblastoma (GNB) or ganglioneuroma (GN); and (iii) spontaneous regression (Pritchard and Hickman, 1994). Taking other biological variables into account, NBs can be categorized into two groups in terms of prognosis (Brodeur, 2003; Woods et al., 1992). One, the favorable NB, is associated with young age and
The tumorigenesis of neuroblastoma
Aberrant embryonic development of the sympathetic nervous system has been suggested to underlie the tumorigenesis of NB. The molecular characterization of clinically relevant prognostic markers is likely to shed light on the molecular mechanisms governing neuroblast development and lead to the identification of novel therapeutic targets for NB. Since NB exhibits a great tendency to differentiate, intensive induction therapy of NB has been widely attempted to improve outcomes. Recent evidence suggests that NB cells exhibit the capacity to differentiate into mature cells and can be forced to differentiate upon treatment with retinoic acid, butyric acid, or cisplatin (Ijiri et al., 2000; Tonini, 1993). A number of molecules normally expressed during embryonic development, including HNK-1, neuropeptide Y, tyrosine hydroxylase, TrkA, and CD44, are found in NB (Hoehner et al., 1996; Israel, 1993), suggesting that the tumorigenesis of NB could be a divergence of the embryonic development of the sympathetic nervous system. On the other hand, NB cells with better prognosis are often found to express markers indicative of cell differentiation, such as HNK-1 and TrkA (Cooper et al., 1992; Nakagawara et al., 1993). It is thus plausible that the tumorigenesis of NB might result from a defect in the differentiation of embryonic NB cells (Tonini, 1993). Interestingly, NB can regress spontaneously by apoptosis (Ijiri et al., 2000; Pritchard and Hickman, 1994). The expression of pro-apoptotic genes is evident in NB and correlates with favorable prognosis, and the survival rate of NB patients is proportional to the expression levels of these genes (Hoehner et al., 1997). Inducers of differentiation, including retinoic acid and cisplatin, can promote apoptosis in NB cells (Tonini, 1993), and NB cells expressing TrkA may undergo cell death when deprived of nerve growth factor (NGF) (Nakagawara et al., 1993), suggesting that deficient apoptosis of embryonic NB cells could lead to the tumorigenesis of NB. However, what factors may contribute to the regulation of NB cell differentiation or apoptosis is still unclear. Accumulated evidence has suggested that apoptosis and differentiation of NB cells might occur simultaneously (Ijiri et al., 2000; Tonini, 1993). Consistently, NB cells expressing TrkA can be induced to differentiate in the presence of NGF, while they undergo apoptosis upon withdrawal of NGF (Nakagawara et al., 1993). It is thus conceivable that factors mediating the tumorigenesis of NB would affect the differentiation and apoptosis of NB cells simultaneously. NB tumors are highly vascular and autonomously produce a variety of angiogenic factors, such as VEGF, bFGF, Ang-2, TGF-β, and PDGF-A, that are commonly found in advanced-stage tumors (Eggert et al., 2000). Although it is still debatable (Canete et al., 2000), the vascular index, expressed as the number of vessels per square millimeter of tissue area, has been shown to correlate with adverse prognosis of NB patients (Meitar et al., 1996). Furthermore, the obstruction of angiogenesis may induce differentiation and apoptosis in NB (Wassberg et al., 1999), suggesting that the angiogenic factors produced by NB could also play an important role in the differentiation and apoptosis of NB cells. Together, these data suggest that failure of either differentiation or regression by apoptotic death of NB cells is critical for the development of NB. This notion can be further supported by
these findings: (i) that a high frequency of spontaneous differentiation and regression can be observed in 4S tumors as well as those detected by mass screening; (ii) that neuroblastic tumors (NTs) in adrenal glands obtained from non-afflicted infants at autopsy indicate a high incidence of unrecognized spontaneous resolution; (iii) that expression of apoptosis-related genes has been demonstrated in NB; and (iv) that NB patients with a higher apoptotic index have better prognosis. Along these lines, we will describe the functions of three chaperone proteins that are implicated in the differentiation of NB. The alteration of their functions, either individually or combinatorially, could result in the propensity for neuroblastic cells to transform and initiate the tumorigenesis of NB. Here, we summarize the roles in the pathogenesis of NB of three newly identified favorable prognostic markers whose functionality as chaperones is well established. Although these biomarkers, including calreticulin, glucose-regulated protein 78, and glucose-regulated protein 75, localize to different intracellular organelles, our recent studies have provided compelling evidence demonstrating that they share an emerging function in actively governing the neuronal differentiation of neuroblastic cells. These new findings thus propose a model in which some chaperone proteins might not simply be protein guardians securing the normal folding of cellular proteins but could vigorously engage in crucial cellular functions by themselves.
Molecular chaperones in tumorigenesis
Enhanced expression of molecular chaperones has been found in a variety of tumors, and is often associated with an unfavorable prognosis and resistance to therapy (Calderwood et al., 2006). These molecular chaperones, at high levels, can promote tumorigenesis by facilitating the accumulation of overexpressed and mutated oncogenes and inhibiting apoptosis of tumor cells. According to Hanahan and Weinberg (Hanahan and Weinberg, 2000), tumorigenesis can be organized into six phenotypic changes in cellular functions: (i) autonomy in growth signaling; (ii) resistance to growth inhibition; (iii) evasion of apoptosis; (iv) unlimited proliferative potential; (v) persistent angiogenesis; and (vi) tissue invasion and metastasis. Increased expression of molecular chaperones thus could not only allow tumor cells to acquire malignant capabilities, but also actively play a role in most stages of tumor development and the acquisition of drug resistance.
Calreticulin
Calreticulin (CRT) is a molecular chaperone primarily localized to the endoplasmic reticulum, and has emerged as an early-stage marker of NB. Although CRT is best known for its critical role in securing the correct folding and maturation of nascent proteins (Ellgaard and Helenius, 2003), it is also involved in the regulation of Ca2+ homeostasis, the modulation of integrin-dependent adhesion, the alteration of Ca2+-elicited signaling, and the inhibition of the transcriptional activities of steroid receptors (Coppolino et al., 1997; Dedhar et al., 1994; Michalak et al., 1999). The expression of CRT can be up-regulated under stress conditions and apoptosis, suggesting that CRT is a stress protein (Nakamura et al., 2000). Consistent with these findings, mice deficient in CRT exhibit significant brain defects (Rauch et al., 2000), suggesting an essential role of CRT in the embryonic development of the nervous system. The role of CRT in tumorigenesis has just begun to be elucidated, evidenced by the differential expression and localization of CRT in malignant versus non-malignant tissues. The nuclear localization of CRT in hepatocellular carcinoma and various carcinomas, but not in non-malignant liver tissue, suggests that the interaction of calreticulin and the nuclear matrix could be critical for the uncontrolled proliferation of carcinomas (Yoon et al., 2000). Furthermore, up-regulated expression of CRT can be observed in breast cancers, suggesting that CRT is pivotal for the malignant progression of carcinomas (Bini et al., 1997; Franzen et al., 1997). Interestingly, vasostatin, the N-terminal fragment of CRT, and the full-length CRT have been shown to suppress tumor growth by directly targeting endothelial cells to inhibit angiogenesis (Pike et al., 1998; Pike et al., 1999). A recent report further demonstrates that CRT can serve as a recognition ligand for LDL receptor-related protein (LRP) and signal for the removal of apoptotic cells in a CRT/LRP-dependent manner (Gardai et al., 2005). These findings depict dual functions of intracellular versus extracellular CRT in tumorigenesis. The former is likely to promote tumor growth by entering nuclei to alter the function of the transcriptional machinery, while the latter could target specific surface receptors to hinder the growth of malignant cells. The essential role of CRT in the differentiation of NB cells has recently been established. The up-regulated expression of CRT in NB cells, coincident with the alteration of the integrin profile on the cell surface, is particularly prominent upon differentiation (Combaret et al., 1994; Gladson et al., 1997; Rozzo et al., 1993; Coppolino et al., 1997), substantiating an essential function of CRT in mediating integrin-dependent calcium signaling. CRT in differentiating NB cells is localized to the plasma membrane and could play an essential role in neurite outgrowth (Xiao et al., 1999a; Xiao et al., 1999b). These results suggest that CRT in NB, unlike in other carcinomas, can be redistributed to the cell surface to antagonize tumor growth upon induced differentiation. To verify this hypothesis, we evaluated the association of clinicopathologic factors and patient survival with the expression of CRT in patients with NB to determine whether CRT could affect the tumor behavior of NB (Hsu et al., 2005a). Our data show that positive CRT expression is strongly correlated with differentiated histologies in sixty-eight NBs. Its expression is also closely associated with known favorable prognostic factors such as detection by mass screening, younger age (≤1
year) at diagnosis, and early clinical stages, but is inversely correlated with MYCN amplification. Overall, NB patients with higher levels of CRT fare significantly better in long-term survival, substantiating CRT as an independent prognostic factor. Moreover, CRT expression also predicted better survival in patients with advanced-stage NB, and its absence predicted poor survival in patients whose tumors had no MYCN amplification. Altogether, CRT could actively play a part in the differentiation, apoptosis, and angiogenesis of NB as well as in the pathogenesis of NB.
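The kind of survival comparison described above is typically performed with Kaplan-Meier estimates and a log-rank test. Below is a minimal sketch using the lifelines package on a small synthetic cohort; the follow-up times and CRT labels are invented for illustration and are not the published data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic cohort: follow-up (months), event (1 = death), crt (IHC status).
df = pd.DataFrame({
    "months": [12, 30, 45, 60, 22, 8, 50, 36, 15, 60, 28, 40],
    "event":  [1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "crt":    [0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1],
})

km = KaplanMeierFitter()
for status, label in [(1, "CRT-positive"), (0, "CRT-negative")]:
    grp = df[df.crt == status]
    km.fit(grp.months, grp.event, label=label)
    print(label, "median survival (months):", km.median_survival_time_)

pos, neg = df[df.crt == 1], df[df.crt == 0]
res = logrank_test(pos.months, neg.months, pos.event, neg.event)
print(f"log-rank p = {res.p_value:.3f}")
```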
Glucose-regulated protein 78
Glucose-regulated protein 78 (GRP78) is a member of the heat shock protein 70 (HSP70) family that is localized to the endoplasmic reticulum (ER) (Gething, 1999). Like other ER-resident chaperones, GRP78 is essential for the correct folding and translocation of newly synthesized secretory proteins across the ER membrane, and is also required for the retrotranslocation of aberrant and misfolded polypeptides destined for degradation in the proteasome (Gething, 1999). In addition to being a constituent of the quality control system in the ER, GRP78 also contributes to the maintenance of Ca2+ homeostasis (Chevet et al., 1999). GRP78 expression in normal adult organs is generally maintained at low levels and can become escalated in tumors (Dong et al., 2004), suggesting that GRP78 is required for the propagation of cancers. Consistent with this finding, tumor progression in GRP78 heterozygous mice is significantly attenuated, accompanied by a longer latency period, reduced tumor size, and increased tumor apoptosis (Lee, 2007). Accumulated evidence also suggests that GRP78 overexpression could render various cancers resistant to chemotherapy (Li and Lee, 2006). These findings provide the rationale for targeting GRP78 as an anticancer approach that could be used in conjunction with standard therapeutic agents to improve the prognosis. Like other HSP70 family members, GRP78 is constitutively expressed at high levels in neuroepithelial cells of the neural tube, suggesting that GRP78 and other HSP70 proteins could play significant roles in the development and differentiation of neural tissue (Barnes and Smoak, 2000; Walsh et al., 1997). Using the rat pheochromocytoma cell line PC12 as a cellular model of neuroblastoma, the levels of GRP78 protein were found to be significantly enhanced in PC12 cells induced by nerve growth factor (NGF) to differentiate (Satoh et al., 2000). Overexpression of exogenous GRP78 can further augment the neurite outgrowth induced by NGF, while down-regulation of GRP78 blocks NGF-induced neurite outgrowth (Satoh et al., 2000), suggesting a functional synergism between NGF signaling and GRP78 function with respect to neuronal differentiation. Consistently, the inhibition of cell death in NGF-deprived neuronal cells reduces the levels of GRP78 transcripts, suggesting a functional role of GRP78 in neuronal cell death (Aoki et al., 1997). The possibility thus exists that GRP78 could affect the differentiation and apoptosis of NB and may have a role in the tumor behavior of this cancer. In support of this view, data from our lab have confirmed the clinical importance of GRP78 in NB. In a cohort of 68 neuroblastic tumors, forty (58.8%) displayed positive GRP78 expression by immunohistochemistry, and positive GRP78 immunostaining was tightly correlated with differentiated tumor histology and early clinical stages, but inversely correlated with MYCN amplification (Hsu et al., 2005b). Our findings also suggest that GRP78 expression could be an independent prognostic biomarker for favorable outcome in NB patients. Given the differential roles of GRP78 in NB versus other solid tumors, it becomes increasingly critical to assess GRP78 expression levels for the proper management of patients with NB versus other types of cancers.
Glucose-regulated protein 75
Glucose-regulated protein 75 (GRP75) is a member of the heat shock protein 70 family and was first cloned from the cytoplasmic fraction of normal mouse fibroblasts (Wadhwa et al., 1993). GRP75, also known as mortalin-2, is a mitochondrial molecular chaperone, but can also reside in other organelles, such as the ER, plasma membrane, cytoplasmic vesicles, and cytosol (Kaul et al., 2002). GRP75 carries out multiple cellular functions ranging from stress response, intracellular trafficking, and antigen processing to control of cell proliferation, differentiation, and tumorigenesis (Wadhwa et al., 2002b). It has been shown that GRP75 is distributed in a pancytoplasmic pattern in normal cells but can be redistributed into a perinuclear mode in transformed cells (Wadhwa et al., 1995). The versatility of GRP75's functions can also be exemplified by its interactions with many cellular proteins, including metabolic enzymes (e.g., diphosphomevalonate decarboxylase), mitochondrial proteins (e.g., voltage-dependent anion channel 1), and proteins involved in proliferation and differentiation (e.g., FGF-1, MKK7, and p53) (Wadhwa et al., 2003; Schwarzer et al., 2002; Wadhwa et al., 1998). The tumorigenic role of GRP75 is shown by its colocalization with p53 in the perinuclear region of various cancers, possibly through taking part in the suppression of p53 expression (Wadhwa et al., 2002a; Wadhwa et al., 1998). GRP75 can thus serve as a functional chelator of p53 by sequestering it in the cytoplasm to suppress p53-dependent gene expression. Consistent with these data, overexpression of GRP75 is found to be crucial for the change from immortal to malignant phenotypes, leading to aggressive proliferative potential (Czarnecka et al., 2006). The expression of GRP75 is evidently up-regulated in a large number of tumorigenic human cell lines, implicating its overexpression as a marker of cell transformation (Wadhwa et al., 2006). In acute myeloid leukemia HL-60 cells, the level of GRP75 is down-regulated upon differentiation, while overexpression of GRP75 is able to attenuate RA-induced differentiation and prevent apoptosis (Xu et al., 1999). Furthermore, GRP75 has been shown to be critical for the malignancy of breast cancer cells, and cells with higher levels of GRP75 are prone to exhibit an anchorage-independent phenotype and form tumors in nude mice (Wadhwa et al., 2006). Together, GRP75 is actively involved in the molecular mechanisms governing the carcinogenesis of various tumors and could represent an ideal candidate for gene therapy. The exact role of GRP75 in the tumorigenesis of neuroblastoma is still unclear. We have employed two-dimensional differential gel electrophoresis (2-D DIGE) to identify GRP75 as one of the most dramatically up-regulated proteins in differentiated NB cells. Immunohistochemical analyses of NB tissues further reveal that positive GRP75 immunostaining is strongly correlated with differentiated histologies, mass-screened tumors, and early clinical stages, but inversely correlated with MYCN amplification. Consistent with these data, univariate and multivariate survival analyses demonstrate that GRP75 expression is an independent favorable prognostic factor. Our data substantiate an essential role of GRP75 in the differentiation of neuroblastoma and establish a novel function of GRP75 in promoting the differentiation of NB cells. Whether GRP75 localized to different intracellular compartments can play distinctive cellular functions is not clear. Nevertheless, our data demonstrate for the first time that the
change in the intracellular distribution of GRP75 coincides with the development of neuronal phenotypes of differentiated NB cells, strongly suggesting a functional role of GRP75 in neuronal differentiation.
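The univariate and multivariate survival analyses mentioned above are commonly implemented as Cox proportional-hazards regressions. The sketch below uses the lifelines package on a small synthetic dataset (values invented for illustration, not the published cohort); a hazard ratio below 1 for the grp75 covariate, with stage and MYCN status in the model, would correspond to an independent favorable prognostic factor.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort: follow-up (months), event (1 = death), plus covariates
# for GRP75 status, advanced stage, and MYCN amplification. Not real data.
df = pd.DataFrame({
    "months": [10, 55, 60, 18, 42, 60, 9, 35, 60, 25, 48, 14],
    "event":  [1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1],
    "grp75":  [0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
    "stage4": [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1],
    "mycn":   [1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1],
})

# A small ridge penalty keeps the fit stable on this tiny example.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # inspect exp(coef), i.e. hazard ratios, per covariate
```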
Conclusion
Current data clearly suggest that the tumorigenesis of NB is controlled by a complex mechanism and is distinct from that of other cancers. This process could be driven by the intricate interactions among many gene products in multiple pathways. The best-known examples of molecular chaperones involved in the regulation of neuronal differentiation, such as CRT, GRP78, and GRP75, also turn out to be favorable prognostic markers of NB, paving the way for us to unveil the functional roles of molecular chaperones in the tumorigenesis of NB. A recent study has shown that GRP75 and GRP78, another favorable prognostic marker of NB (Hsu et al., 2005b), can bind to RHAMM with an associated down-regulation of RHAMM in Jurkat cells (Kuwabara et al., 2006). The GRP75/78-RHAMM complex could then bind to microtubules, stabilizing them during interphase and preventing the depolymerization of microtubules required for the progression of mitosis (Kuwabara et al., 2006). An essential role of RHAMM in neurite extension has been suggested (Nagy et al., 1995), and the expression of RHAMM has been linked to the progression and metastasis of a variety of cancers (Maxwell et al., 2005). The possibility thus exists that the pancytoplasmic GRP75 in differentiating NB cells, along with GRP78, may prevent these cells from entering mitosis by binding to and down-regulating RHAMM while simultaneously promoting neurite formation. These findings thus suggest that the non-chaperone effects of these molecular chaperones might play an even bigger role in the tumorigenesis of NB and other cancers. A number of molecular chaperones of the ER and mitochondria, such as CRT, GRP78, GRP75, and GRP94, whose expression is affected by tumorigenic pathways, could be redistributed outside their primary resident organelles, to sites such as the plasma membrane, neurites, and nuclei, upon differentiation. Data from other labs and our own have clearly demonstrated the "off-site" localization of these molecular chaperones. There may also be other mechanisms for the relocalization of chaperones to the nuclei and neurites that are associated with cellular transformation. In contrast, molecular chaperones such as calnexin in the ER, which are constitutively expressed despite oncogenic transformation, would mostly remain immobile during differentiation of NB. It remains to be determined whether the off-site expression of molecular chaperones is restricted to specific types of cancer and what fractions of these chaperones are presented at different cellular locations in tumor cells. Nonetheless, in certain cancers, surface-localized GRP78 has been utilized as a beacon to deliver therapeutic agents specifically into cancer cells (Fu and Lee, 2006). Biologic factors that predict a favorable outcome for neuroblastoma patients are usually associated with differentiation or regression of neuroblastoma cells and early clinical stages. It remains to be investigated whether the expression of these differentiation-associated molecular chaperones, including HSP45, GRP78, GRP75, and calreticulin, in neuroblastic tumors would be sufficient to counteract MYCN-elicited tumorigenesis of NB. In summary, molecular chaperones that are expressed in increased amounts in NB during differentiation could play an essential role in NB by slowing down its autonomous growth through promoting neuronal differentiation. The increased abundance of molecular chaperones in differentiated NB cells also offers tempting targets for the development of gene therapy that can attenuate the
malignant phenotype of NB.
Modular nonlinear hybrid plasmonic circuit
Photonic integrated circuits (PICs) are revolutionizing nanotechnology, with far-reaching applications in telecommunications, molecular sensing, and quantum information. PIC designs rely on mature nanofabrication processes and readily available and optimised photonic components (gratings, splitters, couplers). Hybrid plasmonic elements can enhance PIC functionality (e.g., wavelength-scale polarization rotation, nanoscale optical volumes, and enhanced nonlinearities), but most PIC-compatible designs use single plasmonic elements, with more complex circuits typically requiring ab initio designs. Here we demonstrate a modular approach to post-processing off-the-shelf silicon-on-insulator (SOI) waveguides into hybrid plasmonic integrated circuits. These consist of a plasmonic rotator and a nanofocusser, which together generate the second harmonic frequency of the incoming light. We characterize each component's performance on the SOI waveguide, experimentally demonstrating intensity enhancements of more than 200 in an inferred mode area of 100 nm², at a pump wavelength of 1320 nm. This modular approach to plasmonic circuitry makes the applications of this technology more practical. Chip-scale modular components are important for nanophotonic applications. Here, the authors demonstrate post-processing of a silicon-on-insulator waveguide into an integrated hybrid plasmonic circuit, consisting of a plasmonic rotator and a nanofocusser module, which together result in nanoscale nonlinear wavelength conversion.
Chip-based nanophotonic waveguides that incorporate photonic and electronic functionality on a compact, monolithic platform 1 promise to revolutionize communications, sensing, and metrology 2-4. The most promising approach being pursued relies on expanding existing silicon-on-insulator (SOI) technologies from the electronic to the optical domain, to produce photonic integrated circuits (PICs) exhibiting superior performance in terms of bandwidth and speed 5,6. The quest for optical miniaturization is ultimately limited by diffraction, which in silicon corresponds to a maximum achievable spatial confinement of approximately 200 nm at telecommunication wavelengths. One of the most promising approaches for overcoming the diffraction limit by several orders of magnitude relies on nano-plasmonic structures 7, which harness metals to compress light down to the molecular, and even atomic, scale 8,9. Moreover, the giant intensity enhancement provided by plasmonic nanofocusing, typically ~100-2000 times 10, has attracted interest for ultrafast, high-bandwidth, low-power nonlinear optics applications 11,12, e.g., for nano-scale sensing 13 and all-optical wavelength conversion 14. Plasmonics can be harnessed for nanoscale second- and third-harmonic generation, relying respectively on either the large surface χ(2) or bulk χ(3) of the metal itself 15-17, or on the large intensity enhancement within a dielectric at a plasmonic nanofocus 14. This has mainly been demonstrated in planar structures that cannot be efficiently interfaced with PICs 18.
Interfacing waveguide-based PICs with plasmonic nanostructures is challenging: typically, the latter is hindered by large losses (due to metallic absorption) and low coupling efficiency (due to extreme differences in the participating mode profiles). PICs and plasmonics can be married using hybrid plasmonic waveguides (HPWGs) containing a low-index buffer layer between the metal and the high-index waveguide, enabling relatively low propagation loss without sacrificing plasmonic confinement, and providing a convenient intermediate interface for coupling between photonic and plasmonic waveguides 19,20. Whereas the efficient energy transfer between PIC-compatible photonic and plasmonic structures has been under intense experimental investigation with a diverse range of functionalities 21-27, including HPWG experiments demonstrating tight confinement and low propagation losses 28-30, nonlinear experiments using this platform have been limited 31.
While a number of simple HPWGs have been reported, the next challenge is to incorporate them into a more complex circuit with multiple modular, functional elements 32, analogously to conventional PICs 1. Ideally, such structures would be entirely chip-based and accessible using standard, industry-norm photonic components, thus simplifying the integration with more conventional technologies. Here we present the design, fabrication, and characterization of such a circuit, operating at λ = 1.32 μm. It consists of two modules: a mode converter that efficiently transforms an incoming photonic transverse electric (TE) mode into a hybrid-plasmonic transverse magnetic (TM) mode, followed by a plasmonic nanofocuser that functions as a nonlinear wavelength converter. We note that standard solutions exist for coupling light into the TE photonic waveguide, which here is achieved by using a grating with an incident free-space Gaussian beam. In this way, our device represents a fully integrated chip by which a free-space Gaussian beam is focused to a cross-section almost two orders of magnitude below the diffraction limit in silicon, with a concomitant increase in intensity. To demonstrate that this increased intensity is due to the focuser, we fabricate and characterize two similar devices: one with a partial focuser and one with no focusing element at all. Note that while preliminary reports of both a TE-to-TM rotator 33 and a directional-coupling-based TM nano-focuser 30 have appeared separately, this work proposes and demonstrates the combination of these two modular elements into a monolithic, PIC-compatible plasmonic integrated circuit. This approach has clear advantages in terms of both design flexibility (enabling an industry-standard TE-waveguide input to achieve plasmonic nano-focusing) and wider bandwidth (enabled by the quasi-adiabatic modal evolution).
Results
Circuit design. Our on-chip hybrid plasmonic integrated circuit (HPIC) is formed by two in-series plasmonic elements on a SOI waveguide (WG): a mode converter and a focuser. The latter combines a taper and a sharp tip, which functions as a nonlinear nanoscale light source. In our particular demonstration, we probe second harmonic generation (SHG) in the visible from a near-infrared pump. Figure 1a shows a schematic of the HPIC. The first component (i) is formed by a polarization rotator 33 (also operating as a TE-photonic to TM-plasmonic mode converter 34); the second (ii) is a nanofocusing gold tip 10 resulting in SHG due to the intense nanoscale localization of the optical field, combined with the large surface χ(2) of gold 18. Figure 1c shows an electron micrograph of a fabricated HPIC on a SOI waveguide, highlighting the ~10 nm tip sharpness, which is limited only by the gold grains generated during the evaporation process 35.
To analyze our circuit we first consider the relevant HPIC modes during propagation. Figure 2a shows the result of 2D finite element (FE) simulations (COMSOL) of the modal evolution along the HPIC. Figure 2a also shows a top-view schematic of Fig. 1 for clarity. In the first instance, a gold film 36 (t_Au = 50 nm) with a SiO2 spacer underneath 37 (t_spacer = 20 nm) gradually extends over a silicon waveguide (350 nm × 220 nm, n_Si = 3.5) until complete coverage (here, ℓ_strip = 30-300 nm, as defined in Fig. 2). The red line in Fig. 2a shows how the hybrid-TE (HTE) mode evolves within the waveguide, in terms of the real effective index and loss. The input is the fundamental TE-SOI mode of the bare waveguide, which excites the HTE mode (i) that rotates into a hybrid-TM mode (HTM) (ii). The HTM mode is then converted to a deep-subwavelength HTM plasmonic mode (iii) 38 by reducing the gold strip width (w_strip = 300-10 nm, as defined in Fig. 2). The z-component of the time-averaged Poynting vector S_z associated with each participating mode is shown in Fig. 2b, and presents the salient features of the evolution of the TE-SOI mode after it couples to the HTE mode. The modal evolution of the equivalent HTM mode is shown as the blue curve in Fig. 2a for completeness. The TE-SOI waveguide mode excites both the HTE and HTM hybrid plasmonic eigenmodes at location (i), each evolving in a non-trivial way along the device.
We next calculate the performance of the full device using full 3D FE simulations. Due to the many parameters, materials, and functionalities involved, the optimization of the complete device is challenging: first, a suitable compromise between adiabaticity (requiring a slow modal transition, i.e., a long device length) and loss (requiring short device lengths) is needed; second, small changes in geometric parameters, alignment, and surface roughness can have a significant impact on the conversion efficiency. However, this process can be significantly simplified by exploiting modularity, which enables us to consider each element separately.
We model the fabricated structure shown in Fig. 1c. The cross-sections of the E_x and E_y field components in the middle of the Si-WG are shown in Fig. 3a. Note in particular the polarization rotation in the spacer, manifesting as a vanishing E_x component and an emerging E_y component. A detailed plot of the electric field intensity |E|² within the spacer near the tip is shown in Fig. 3b, showing a strong local enhancement at the tip apex. We calculate a ~1200× intensity enhancement at the gold surface with respect to the peak intensity in the silicon for the TE-SOI input. Figure 3c shows S_z in each xy cross-section as indicated by the dashed lines in Fig. 3a(i)-(iv). We calculate the conversion efficiency between the incoming TE-SOI mode and each of the participating modes in the full device by performing overlap integrals between the calculated 3D fields of Fig. 3c and the 2D modes of Fig. 2, as outlined in ref. 34.
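For readers who want to reproduce this kind of analysis, the discretized overlap integral has a compact numerical form. The sketch below uses a simplified scalar overlap on toy Gaussian fields; the full vectorial definition used here (ref. 34) involves cross products of E and H, so this is an illustration of the procedure, not the exact formula.

```python
import numpy as np

# Simplified scalar mode overlap between a 2D eigenmode and a 2D slice of a
# 3D simulated field; returns the power fraction carried by the mode.
def overlap_efficiency(field, mode, dx, dy):
    dA = dx * dy
    num = abs(np.sum(np.conj(mode) * field) * dA) ** 2
    den = (np.sum(abs(mode) ** 2) * dA) * (np.sum(abs(field) ** 2) * dA)
    return num / den

# Toy fields on a 200 x 200 grid: a Gaussian mode vs a laterally shifted copy.
x = np.linspace(-1e-6, 1e-6, 200)
X, Y = np.meshgrid(x, x)
w = 0.3e-6
mode = np.exp(-(X**2 + Y**2) / w**2)
field = np.exp(-(((X - 0.1e-6) ** 2) + Y**2) / w**2)

dx = x[1] - x[0]
print(f"overlap efficiency = {overlap_efficiency(field, mode, dx, dx):.2f}")
```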
The mechanism that converts the TE mode at input (Fig. 3a (input)) to the TM mode at the end of the rotator (Fig. 3a(ii)) is complicated by the fact that the waveguide evolves continuously over wavelength-scale propagation lengths, and that this is a lossy structure. In the rotator section, the gold-nanofilm overlayer tapers sideways relative to the underlying silicon waveguide, beginning at one corner on top of the silicon waveguide. Since the beginning of the rotator is formed by a sharp gold nanotip off-axis (Fig. 3a(i)), energy is distributed between the HTE and HTM modes, with most of the energy being coupled into the HTE mode for the TE SOI WG input considered. Our calculations using the method presented in ref. 34 indicate that the HTE and HTM modes at the start of the rotator are excited with coupling efficiencies of 70% and 30%, respectively (Fig. 3c(i)). As the gold film gradually tapers sideways to cover the waveguide, these two orthogonal modes evolve by rotating their polarization. In a quasi-adiabatic treatment 33, the rotation mechanism can be interpreted as originating from the dominant electric field remaining orthogonal to the metal surface. Due to the asymmetry at input, the input HTE mode of the waveguide rotates into the HTM mode. A pioneering experimental study identified three possible regimes, depending on the rotator length chosen 33: a non-adiabatic regime (short coupler, low power transfer); an adiabatic regime (long coupler, strong absorption); and a quasi-adiabatic regime with good power transfer to the desired mode at an intermediate length, which is the region where we operate. We obtain a TE-to-HTM (rotator) conversion efficiency of 41%, comparable to previous reports 33, and a TE-to-HTM (nanofocus) conversion efficiency of 12%, also comparable to the state of the art for plasmonic nanofocusing 14. Note that 9% of the TE mode remains in the WG at output, which can be improved, for example, by more sophisticated multi-section rotator designs 39.
Fabrication and linear experiments. With an eye on the potential of a modular approach to enhance off-the-shelf photonic waveguides with tailored plasmonic functionality, we purposefully choose to integrate our HPICs on previously fabricated SOI-WGs made with standard electron-beam lithography. The HPICs, shown in Fig. 1c, were deposited on the WG in a subsequent step via a combination of electron-beam lithography, SiO2/gold evaporation, and lift-off; note in particular the excellent quality of the gold film, the sharp tip obtained, and the high alignment precision (<10 nm resolution). The details of the HPIC fabrication procedure are presented in the "Methods" and in Supplementary Fig. 2. Preliminary experimental waveguide characterization in the near-infrared (NIR) was performed by coupling light from free space (λ = 1320 nm) onto the waveguide input grating coupler using a 100× near-infrared microscope objective (NA = 0.85, Olympus) and observing the light scattered by the device using an InGaAs camera (NIRvana, Princeton Instruments) (see "Methods" and Supplementary Fig. 3). The resulting measurement is shown in Fig. 4b. The field emerging from each nanoscale tip appears as a diffraction-limited spot, since all tips have physical dimensions well below the diffraction limit.
We observe a diffraction-limited spot at the expected location of the gold nanotip, as well as residual TE light contained within the waveguide (in agreement with the simulations, see Fig. 3a(iv)), originating from the output grating. Figure 4c shows the same measurement when inserting a polarizer between the sample and camera at different orientations: we measure that the diffraction-limited spot is longitudinally (TM) polarized 40, confirming polarization rotation, and that light exiting the grating is TE polarized. As further confirmation, Fig. 4d shows a direct comparison of the amount of light exiting the grating in the presence of the HPIC with respect to an adjacent control sample without the HPIC. From the ratio of the total power scattered by each TE grating under comparable input conditions (see Supplementary Fig. 4), we conclude that the residual light in the TE waveguide in the presence of the HPIC, relative to the bare SOI waveguide, is (13 ± 1)%, in agreement with 3D simulations (see Fig. 3c(iv)).
Nanofocusing and nonlinear enhancement. Plasmonic nanofocusing leads to spot sizes that are well below the diffraction limit, so that far-field linear optical experiments are inherently incapable of characterizing the focusing performance of our HPIC.
Here we harness the high field intensities at the apex of the gold tip to estimate the field enhancement via nonlinear SHG experiments, in which the surface nonlinear susceptibility χ(2) of gold dominates over all other surface and bulk sources of the constituent materials 41. Ultrashort pump pulses (λ_p = 1320 nm, 200 fs, 80 MHz 31) are coupled into the TE mode of the photonic waveguide via a grating coupler. They then enter one of three HPIC-enhanced WGs, each possessing an incrementally sharper tip: the three HPICs considered here are shown in Fig. 5a, with the corresponding NIR and visible-light images shown in Fig. 5b, c, respectively. While nonlinear generation/scattering occurs during propagation across the entire HPIC 15, due to the large absorption of silicon (approximately 12 dB over 10 μm at 660 nm 42), the absence of phase matching, and the wavelength-scale propagation lengths considered, we can attribute the measured nonlinear signal only to the localized intensity at the edge of the gold tip from which the NIR light emerges. The spectra of the NIR pump and the visible radiation are shown in the inset of Fig. 6a. The figure confirms that the visible radiation is indeed the second harmonic of the pump, since λ_SHG = λ_p/2 = 660 nm. We observe that the sharpest tip causes the least amount of NIR scattering, consistent with 3D simulations (see Supplementary Fig. 5). In contrast, this tip also causes the strongest visible light emission (see Fig. 5b, c, magenta), even though the incident power is an order of magnitude smaller than in the other two cases, a preliminary indicator of nonlinear enhancement. In this case, the input power was reduced by 10 times in order to avoid damaging the sharp tip due to the high field strength.
To quantify the nonlinear response of each tip, we measure the raw spectral yield versus incident power at the SHG wavelength, as shown in Fig. 6a (circles). The linear relationship between the square root of the yield and the average power incident on the sample P_in (corresponding to a quadratic input power dependence, I_SHG^{1/2} ∝ P_in) further confirms the mechanism of SHG. As a first conclusion, we note the dramatic increase in SHG intensity for the sharpest tip, which indicates that nano-focusing was achieved. We compare the slopes of the three curves quantitatively via a linear fit to the experiment, as shown by the dashed lines in Fig. 6a, and infer the relative intensity enhancement with respect to the strip. The results are summarized in Fig. 6b, which shows the intensity enhancement as a function of the tip width obtained using different approaches. Black crosses show the measured enhancement, obtained by taking the square of the slopes in Fig. 6a, normalized to w_strip = 300 nm. We experimentally observe a maximum intensity enhancement of a factor of 216 ± 16 for the sharpest gold tip relative to the gold strip (uncertainties are obtained from the confidence intervals of the straight-line slopes in Fig. 6a). The predicted range of theoretical enhancement at the tip is shown in the dark blue shaded region of Fig. 6b (left axis), and was calculated using an Eikonal approach 38, in excellent agreement with both the experiment and the range of intensity enhancements at the tip predicted by full 3D simulations (light blue region; see Supplementary Fig. 5 for further details). Note that the enhancement predicted by the Eikonal approach is inversely proportional to the effective mode area A_eff, the definition of which can vary 12,38,43. For completeness, Fig. 6b shows the range of inferred effective mode areas for w_strip = 10 nm, i.e., A_eff ~ 50-200 nm² 42 (black circles, in agreement with published calculations 38).
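The slope-based analysis above is a simple linear fit. A sketch of the procedure with illustrative numbers (not the measured data) shows how the relative enhancement follows from the squared slope ratio:

```python
import numpy as np

# Quadratic SHG: I_SHG ~ P_in^2, so sqrt(yield) is linear in P_in and the
# relative intensity enhancement is the squared ratio of fitted slopes.
P_in = np.array([0.05, 0.10, 0.15, 0.20])  # mW (illustrative)

sqrt_yield_tip = np.array([1.5, 3.0, 4.4, 6.0])        # sharpest tip (a.u.)
sqrt_yield_strip = np.array([0.10, 0.20, 0.31, 0.41])  # 300 nm strip (a.u.)

slope_tip, _ = np.polyfit(P_in, sqrt_yield_tip, 1)
slope_strip, _ = np.polyfit(P_in, sqrt_yield_strip, 1)

enhancement = (slope_tip / slope_strip) ** 2
print(f"relative intensity enhancement ~ {enhancement:.0f}")
# The experiment reports 216 +/- 16 for the sharpest tip vs the strip.
```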
Finally, we estimate the SHG conversion efficiency. After taking into account the effect of all optical elements, we conclude that the maximum SHG power emitted by the sample, obtained for the sharpest nanotip (Fig. 6a, magenta), is 2.3 fW for an incident power of 0.22 mW, corresponding to a net conversion efficiency of 10⁻¹¹. Taking into account the coupling efficiency into the waveguide (14%, see Supplementary Fig. 1d), this corresponds to ~0.7 × 10⁻¹⁰ of the power in the waveguide before the plasmonic rotator, and ~0.6 × 10⁻⁹ of the inferred power in the TM mode at the tip (cf. Fig. 2d(ii)). Though these values are comparable to optimized nonlinear plasmonic SHG geometries 18,44, our geometry has the significant advantage of being on a PIC-compatible platform. It is worth noting that only ~0.06% of the power generated by a TM point source on the surface of a silicon waveguide radiates upwards, whereas the great majority of the SHG light is scattered into (and absorbed by) the silicon waveguide (see Supplementary Fig. 6). Future work will focus on new strategies to make use of the generated SHG, e.g., using hydrogenated amorphous silicon with low absorption at visible wavelengths, which would enable measurements of the SHG signal captured by the photonic waveguide 45.
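The efficiency chain quoted above follows from dividing the emitted SHG power by the pump power at each reference plane; a sketch reproducing the stated numbers (the TM-mode figure is omitted because the inferred TM power at the tip is not quoted numerically in the text):

```python
p_shg = 2.3e-15        # maximum emitted SHG power, W
p_incident = 0.22e-3   # incident pump power, W
eta_coupling = 0.14    # grating coupling efficiency (Supplementary Fig. 1d)

eta_net = p_shg / p_incident                     # ~1e-11, net efficiency
eta_in_wg = p_shg / (p_incident * eta_coupling)  # ~0.7e-10, vs. in-waveguide power
# the ~0.6e-9 figure additionally uses the inferred TM-mode power at the tip
print(f"{eta_net:.1e}, {eta_in_wg:.1e}")
```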
Discussion
The conversion efficiency could be further improved by optimizing the individual modular elements. Separate calculations for each module predict a peak rotator conversion efficiency of 58% for a rotator length of 4 μm, and of 34% for a focuser length of 1 μm (keeping all other parameters constant), resulting in a compound conversion efficiency of 20%. This is in good agreement with equivalent calculations for the full device, which predict a maximum conversion efficiency of 24% for the same rotator and focuser lengths of 4 and 1 μm, respectively. Thus, we estimate that through modest changes to the device parameters (e.g., increasing the gold thickness, or using multi-section tapers 39 with up to 95% conversion efficiency), the pump TE-to-TM efficiency could be improved by approximately 9×, which would lead to a ~80-fold increase in nonlinear conversion efficiency. Further improvements may be achieved by incorporating 2D materials on the waveguide surface, which possess a χ(2) that is at least one order of magnitude greater than that of gold surfaces 45. Further enhancement may be achieved with additional plasmonic modules, such as a bowtie nanoantenna 46 adjacent to the tip, or additional focuser and rotator modules that couple light back into the photonic waveguide. This experiment represents a PIC-compatible, integrated nonlinear-plasmonic SHG nanoscale light source that makes use of two in-series hybrid-plasmonic circuit elements. The design, fabrication, and characterization presented here demonstrate a TM plasmonic nano-focuser that is monolithically interfaced with an industry-standard TE-input SOI waveguide and that can be addressed by a conventional grating coupler. This work opens the door to the development of modular plasmonic circuit elements that can be seamlessly integrated on off-the-shelf photonic waveguides. Note that there has been recent discussion of CMOS-compatible hybrid plasmonic waveguides 47,48, which require the use of aluminum or copper as metals. We believe that this PIC can be readily fabricated using CMOS-compatible metals such as Cu and Al, which would result in rotator modules with comparable TE-to-TM conversion efficiencies 34, as well as focuser modules with similar enhancement 38 (see Supplementary Fig. 7). Our approach unifies the emerging modular nanophotonic-circuit paradigm 32 with hybrid integration of plasmonic nano-elements on industry-standard waveguides 26,27, extending the range of accessible structures to efficient hybrid plasmonic waveguides culminating in deep-subwavelength mode volumes, by performing three difficult-to-access optical functions (namely rotation, nano-focusing, and nonlinear conversion) back-to-back and on an integrated platform. This approach will facilitate access to efficient, PIC-compatible, deep-subwavelength field enhancements for on-chip quantum photonics and spectroscopy 49, nonlinear 13 and atomic-scale 9 sensing, and nanoscale terahertz sources and detectors 50.
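The figures quoted above are internally consistent; as a worked restatement (not an additional result), the compound efficiency is the product of the module efficiencies, and the ~80-fold projection follows from the quadratic dependence of SHG on the pump intensity delivered to the tip:

$$\eta_{\mathrm{rot}}\,\eta_{\mathrm{foc}} = 0.58 \times 0.34 \approx 0.20, \qquad \frac{P'_{\mathrm{SHG}}}{P_{\mathrm{SHG}}} = \left(\frac{I'}{I}\right)^{2} = 9^{2} \approx 80.$$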
Methods
Photonic waveguide grating design and characterization. The waveguide gratings were designed in-house using the 2D solver CAMFR 51, with an infinite air cladding and silicon substrate layer, a 2-μm-thick box layer, and a 220-nm silicon waveguide layer, presenting grooves with an etching depth h_e and a period Λ. Here, h_e = 80 nm and Λ = 440 nm, resulting in a high coupling efficiency (T_up = 51%), a wide bandwidth centered at λ = 1320 nm, low reflection (R = 3.5%), and a selective in-coupling angle (−11°). From images of the optimized coupling to the waveguide, referenced to a mirror, we obtain a grating coupling efficiency of 14%, assuming that the loss due to each grating is equal. Waveguide losses without the HPIC were measured to be 0.12 dB μm⁻¹ using waveguides of different lengths. See Supplementary Fig. 1 for further details of the calculations, the calculated bandwidth, and experimental measurements of coupling and propagation losses.
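The loss measurement described above (transmission of waveguides of different lengths) amounts to a linear fit of transmission in dB versus length, with the slope giving the propagation loss and the intercept the total coupling loss; a minimal sketch with illustrative data:

```python
import numpy as np

# Illustrative data only (length in um, measured transmission in dB)
lengths = np.array([50.0, 100.0, 200.0, 400.0])
t_db = np.array([-14.0, -20.1, -32.0, -56.1])

# Slope of the fit = -propagation loss (dB/um); intercept = total grating loss (dB)
slope, intercept = np.polyfit(lengths, t_db, 1)
print(f"propagation loss: {-slope:.2f} dB/um, coupling loss: {-intercept:.1f} dB")
```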
Hybrid plasmonic integrated circuit fabrication. The HPICs are integrated on the SOI waveguides as follows. First, the silicon waveguides are spin-coated with polymethyl methacrylate resist, and the HPIC structures are written with standard electron-beam lithography and developed with methyl isobutyl ketone. Then, 20 nm of silica and 50 nm of gold are deposited by electron-beam evaporation. Finally, a lift-off step (in methyl isobutyl ketone) removes the resist. The alignment precision (~10 nm) is obtained using local gold markers placed in the immediate vicinity of our off-the-shelf waveguides. See Supplementary Fig. 2 for a schematic of the fabrication procedure and the alignment markers used.
Experimental setup. A detailed schematic of the experimental setup is shown in Supplementary Fig. 3. The source is an optical parametric oscillator (λ_p = 1320 nm; FWHM: 200 fs; repetition rate: 80 MHz). The power incident on the sample is controlled via a motorized half-waveplate placed before a polarizer. The beam is spatially shaped using a beam expander, telescope, and elliptical lens so that its profile matches that of the input waveguide grating. A beamsplitter (BS_PM) and powermeter (PM) monitor the input power. A microscope holds the WGs and HPICs. Light is delivered to and collected from the sample via a 100× NIR microscope objective (Olympus, NA = 0.85) and a BS. A short-pass filter (850 nm) is included in SHG experiments to filter out the NIR light. The scattered light is measured with an imaging spectrometer, using NIR (NIRvana) and VIS (PIXIS) cameras. An additional NIR camera at a second output monitors alignment. The laser power drift is <±0.5%. Sample-to-sample waveguide coupling conditions fluctuate by ±4%, as obtained from the standard deviation of the total power emitted by the output grating of 10 nominally identical bare waveguide samples under optimized coupling conditions.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
v3-fos-license
|
2021-12-03T16:17:07.163Z
|
2021-11-30T00:00:00.000
|
244814471
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/26/23/7283/pdf",
"pdf_hash": "1eaea922f989d28fa0f3235d10b7bec9160bd8a6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43212",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "8bd4c9ab1f86f6a74cba0a3f258b1510db3f3a13",
"year": 2021
}
|
pes2o/s2orc
|
Volatolomics of Three South African Helichrysum Species Grown in Pot under Protected Environment
Helichrysum decorum DC, Helichrysum lepidissimum S. Moore, and Helichrysum umbraculigerum are three species traditionally used in South African medicine. The present work investigates the spontaneous volatile emissions and the essential oils obtained from these plants cultivated in pots under uniform conditions. The volatile organic compound fractions of the three species were rich in monoterpene hydrocarbons, representing more than 70% of the total composition. Pinene isomers were the most representative compounds: β-pinene in H. decorum (53.0%), and α-pinene in H. lepidissimum (67.9%) and H. umbraculigerum (54.8%). The latter two species also showed substantial amounts of sesquiterpene hydrocarbons (SH), especially γ-curcumene (H. lepidissimum) and α- and β-selinene (H. umbraculigerum). In the essential oils (EOs), by contrast, sesquiterpene compounds prevailed, representing more than 64% of the identified fraction and reaching more than 82% and 87% in H. umbraculigerum and H. lepidissimum, respectively. Although the chemical classes and their relative abundances were comparable among the three species, the individual compounds of the EOs showed large differences. In fact, caryophyllene oxide (26.7%) and γ-curcumene (17.4%) were the main constituents in H. decorum and H. lepidissimum, respectively, while neo-intermedeol (11.2%) and viridiflorol (10.6%) characterized H. umbraculigerum.
Introduction
The genus Helichrysum, belonging to the family Asteraceae, comprises more than 500 species, of which almost half are indigenous to South Africa [1][2][3]. Different species of Helichrysum are widely used in traditional local medicine, thanks to the variety of secondary metabolites that plants of this genus can produce [3]. Their aerial parts are employed as herbal teas for the treatment of respiratory and digestive problems, as diuretic and anti-inflammatory agents, and for other purposes [4,5]. Several Helichrysum species are appreciated for their aroma profile, which is strictly connected to the presence of essential oils, produced and stored in the glandular trichomes located in almost all the vegetative epigeal parts of the plant [4]. The essential oil plays an important role in the taxonomic attribution of these species [6], as well as in their biological activities [7]. Helichrysum species are, indeed, characterized by huge genetic variability, due to their polymorphism and as a consequence of different environmental and growing conditions. It has been observed that both the morphological characters of the plants and the chemotypes of their EOs are attributable to the genetic heritage, and therefore the chemical composition can be used for taxonomic identification [8]. It is thus no coincidence that, in recent decades, the essential oils obtained from different species of this genus have received increasing interest, for both their chemical composition and their biological activities [2,7,9].
Continuing our research on the utilization of Helichrysum spp. indigenous to South Africa, in collaboration with the Centro di Ricerca Orticoltura e Florovivaismo (CREA-OF) located in Sanremo (Italy), three new Helichrysum species were investigated. Helichrysum lepidissimum S. Moore and Helichrysum umbraculigerum Less. are biennial or perennial herb shrubs, while Helichrysum decorum DC is a plant growing in sandy grassland or open woodland from sea level to 900 m. African Zulu diviners smoked/inhaled or burned unspecified parts of the latter plant, which resulted in a trance state [3]. Growing on rocky ground in submontane areas, H. lepidissimum is a perennial shrub [10] from which Mkhize (2015) isolated lepidissipyrone [11]. This compound shows a structure similar to arzanol, isolated from H. italicum ssp. microphyllum and known for its antioxidant, anti-inflammatory and anti-HIV activities. According to Lourens et al. (2008), the powder and ointment prepared from this species are applied to the body in traditional usage [3]. H. umbraculigerum, instead, is a perennial erect plant reported in several studies as the main natural source of cannabigerol [12].
Despite their important traditional uses, investigations on these Helichrysum species are lacking: the studies reported in the literature only cite them, without further research on their biological activity or secondary metabolite content. This work aims to evaluate the chemical composition of both the spontaneous volatile emissions and the essential oils of the three South African Helichrysum species cultivated at CREA-OF (Italy). To the best of our knowledge, these investigations have never been reported in the literature before.
Volatile Organic Compounds (VOCs)
Thirty-six compounds were detected by GC-MS in the spontaneous volatile emissions, with a percentage of identification ranging between 99.5% and 100% of the whole volatilome (Table 1). H. umbraculigerum emitted the greatest variety of compounds (21), compared to H. decorum (16) and H. lepidissimum (15). Interestingly, only four constituents were shared by the samples, and two of them (β-pinene and α-pinene) were major ones.
Essential Oil Chemical Composition and Yield
The complete chemical composition and the hydrodistillation yields of the essential oils (EOs) obtained from the dried aerial parts of H. decorum, H. lepidissimum and H. umbraculigerum are reported in Table 2. Altogether, 112 compounds were identified, representing 92.5 to 97.1% of the total chemical composition. H. umbraculigerum presented the highest number of constituents (47 vs. 41 in both H. decorum and H. lepidissimum), as in the VOC analysis. Remarkably, apart from three minor constituents, these oils had no compounds in common. Moreover, the essential oil yield of H. lepidissimum was 0.6% w/w, while for the other two species it was so low that it could not be determined. In general, the species of this genus are known to produce low amounts of essential oil [7]. Concerning the chemical composition, sesquiterpenes were the most represented class of compounds in all the EOs. The oxygenated forms prevailed, accounting for 48.5, 60.4 and 55.2% in H. decorum, H. lepidissimum and H. umbraculigerum, respectively, while the hydrocarbon forms ranged from 17.5% in H. decorum to 26.5 and 26.9% in H. lepidissimum and H. umbraculigerum, respectively. The three EO samples showed great differences in their compositions. Caryophyllene oxide was the main constituent in H. decorum (26.7%), followed by β-caryophyllene (8.4%). Although oxygenated sesquiterpenes (OS) dominated the H. lepidissimum EO, γ-curcumene, a sesquiterpene hydrocarbon, was the main constituent of this oil (17.4%), followed by β-bisabolol (12.5%), epi-globulol (7.4%), and rosifoliol (7.2%). The H. umbraculigerum essential oil, instead, was characterized by a predominance of OS, i.e., neo-intermedeol (11.2%) and viridiflorol (10.6%), followed by the sesquiterpene hydrocarbons (SH) α-selinene (9.2%) and β-selinene (6.2%).
The presence of high percentages of caryophyllene oxide is not very common in the genus Helichrysum, even though this compound was reported by Rabehaja et al. for the Malagasy H. benthamii Viguier & Humbert (4.0%) [17]. The same authors also observed a similar predominance of sesquiterpenes, with a prevalence of the oxygenated ones (73.5%), in H. hirtum Humbert. Caryophyllene oxide was also detected in other South African species, such as H. cymosum (L.) D. Don subsp. cymosum, studied by Giovanelli et al. [4]. γ-Curcumene, a sesquiterpene hydrocarbon typical of the EO of H. italicum (Roth) G. Don, was reported as the main component of Serbian samples [18]. This compound, together with rosifoliol, was detected in appreciable percentages in the EO of H. italicum Don. subsp. microphyllum (Willd.) Cambess, also known as H. italicum subsp. tyrrhenicum, which is widely employed in aromatherapy [19]. Aćimović et al. [18], in fact, showed that Helichrysum chemotypes containing γ-curcumene as the main component could be used in the perfumery industry thanks to their appreciated fragrances, as well as in the food and pharmaceutical industries as natural preservatives.
Noteworthy is the appreciable percentage of oxygenated diterpenes in the H. lepidissimum EO (5.6%), with geranyl linalool as the only identified compound of this class. This class of constituents was also found in the EOs of other Helichrysum species, although in higher percentages [7,25].
Plant Material
The South African Helichrysum plants studied in the present work (see Table 3) belong to the collection of the Centro di Ricerca Orticoltura e Florovivaismo (CREA-OF), located in Sanremo, Italy. The seeds were purchased from companies specialized in selling seeds of African plant species (Silver Hill, PO Box 53108, Kenilworth, 7745 Cape Town, South Africa, and B&T World Seeds, Paguignan, 34210 Aigues Vives, Gard, France). The plants were grown in the same edaphic substrate (perlite, 2:1 v/v, added with 4 g/L slow-release fertilizer) and climatic conditions (Csa in the Köppen-Geiger climate classification, with an average annual temperature of 16 °C and an annual rainfall of about 700 mm; frosts are light and very rare). After clonal propagation, the plants grew in pots in the open air and were watered periodically. Flowering took place after one year. A voucher sample of each plant was deposited at the herbarium of the Hanbury Botanical Gardens (La Mortola, Ventimiglia, Imperia, Italy) (Table 3). Table 3. Botanical description of the three analyzed South African Helichrysum species.
H. decorum DC
Voucher: HMGBH.e/9006.2020.002
• Biennial or perennial herb up to 1.3 m tall; grows in rough grassland or scrub, often on forest margins or in damp gullies and along streambanks.
• Stem stout, usually simple, thinly greyish-white woolly, leafy.
• Radical leaves rosetted in the first year of growth, wanting at flowering, elliptic, narrowed to a broad clasping base, apex obtuse or subacute, apiculate, both surfaces thinly greyish-white woolly. Cauline leaves diminishing in size upwards, oblong-lanceolate or elliptic-lanceolate, apex usually acute, base clasping, upper surface glandular-setose, thinly cobwebby, lower thinly greyish-white woolly.
Spontaneous Emission Analysis and EO Extraction
Living fresh plant material (approximately 1 g) was subjected to HS-SPME (headspace solid-phase microextraction) analyses, performed using a 100 µm polydimethylsiloxane (PDMS) fiber manufactured by Supelco Ltd. (Bellefonte, PA, USA). As recommended by the manufacturer's instructions, prior to the analyses the fiber was conditioned at 250 °C for 30 min in the injector of a gas chromatograph. The plant material was placed in a 50 mL glass vial, covered with aluminum foil, and left for 60 min (equilibration time). The fiber was then exposed to the headspace of the samples for 15 min at a temperature of 23 °C. Subsequently, the fiber was transferred to the injector of the gas chromatograph (temperature 250 °C), where the analytes were thermally desorbed [27]. The composition of the compounds desorbed from the SPME fiber was examined using GC-MS.
Essential Oil Hydrodistillation
The essential oil was obtained from the dried aerial parts of the three Helichrysum species by hydrodistillation with a Clevenger-type apparatus, performed for 2 h at 100 °C according to the method reported in the European Pharmacopoeia [28]. The hydrodistillation was carried out in triplicate on 50 g of plant material, and the collected essential oil was refrigerated at 4 °C and kept away from light sources until analysis.
Gas Chromatography-Mass Spectrometry Analyses
The essential oils were diluted to 0.5% in HPLC-grade n-hexane before injection into the GC-MS apparatus. The GC/EI-MS analyses were performed with an Agilent 7890B gas chromatograph (Agilent Technologies Inc., Santa Clara, CA, USA) equipped with an Agilent HP-5MS capillary column (30 m × 0.25 mm; coating thickness 0.25 µm) and an Agilent 5977B single quadrupole mass detector.
The analytical conditions were set as follows: oven temperature ramp from 60 to 240 °C at 3 °C/min; injector temperature, 220 °C; transfer line temperature, 240 °C; carrier gas, helium at 1 mL/min. The injection volume was 1 µL, with a split ratio of 1:25. The acquisition parameters were: full scan; scan range, 30-300 m/z; scan time, 1.0 s.
The identification of the constituents was based on a comparison of their retention times with those of authentic samples, comparing their linear retention indices relative to the series of n-hydrocarbons. Computer matching was also used against commercial (NIST 14 and ADAMS 2007) and laboratory-developed mass spectra libraries, built up from pure substances and components of commercial essential oils of known composition, and MS literature data [14,[29][30][31][32][33].
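Linear retention indices for temperature-programmed GC are conventionally computed with the van den Dool-Kratz formula; a minimal sketch of this standard calculation (our assumption of the usual formula, with illustrative function names and values, not details stated in the paper):

```python
def linear_retention_index(rt, alkane_rts):
    """van den Dool-Kratz LRI for temperature-programmed GC.

    rt         : retention time of the unknown compound
    alkane_rts : {carbon_number: retention_time} for the n-alkane series
    """
    carbons = sorted(alkane_rts)
    for n, n1 in zip(carbons, carbons[1:]):
        t_n, t_n1 = alkane_rts[n], alkane_rts[n1]
        if t_n <= rt <= t_n1:
            # linear interpolation between the bracketing n-alkanes
            return 100 * (n + (rt - t_n) / (t_n1 - t_n))
    raise ValueError("retention time outside the alkane series")

# Illustrative: a peak at 24.1 min bracketed by C14 (23.0 min) and C15 (25.5 min)
print(linear_retention_index(24.1, {14: 23.0, 15: 25.5}))  # ~1444
```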
Conclusions
The present study contributes to increasing the knowledge of the chemical composition of the headspace volatiles and the essential oils of three South African Helichrysum species that had not been studied before. It should be a starting point for future investigations, which can lead to a more informed employment of these plants, as they are already used in traditional local medicine. The studied species showed huge differences in the chemical composition of both their spontaneous emissions and their EOs.
The chemical differences in the aroma profiles of the studied samples, together with their habitus, can be exploited for the ornamental use of these plants.
|
v3-fos-license
|
2020-07-30T02:07:32.727Z
|
2020-07-25T00:00:00.000
|
220857969
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/17/15/5366/pdf",
"pdf_hash": "e572e9d31fc723ae8f1fa3c90ee9345d3aabd41a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43214",
"s2fieldsofstudy": [
"Psychology",
"Engineering"
],
"sha1": "8f0aca6f0dc99991dd25e24f2718277a6c8cdbfc",
"year": 2020
}
|
pes2o/s2orc
|
Drivers’ Visual Attention Characteristics under Different Cognitive Workloads: An On-Road Driving Behavior Study
In this study, an on-road driving experiment was designed to investigate the visual attention fixation and transition characteristics of drivers under different cognitive workloads. First, visual attention was analyzed macroscopically through the entropy method. Second, Markov one- and two-step glance transition probability matrices were constructed, allowing the visual transition characteristics under different conditions to be studied from a microscopic perspective. Results indicate that the fixation entropy value of male drivers is 23.08% higher than that of female drivers. In the normal driving state, drivers' fixation on in-vehicle systems is not continuous and usually shifts quickly to the front and left areas. Under cognitive workload, drivers' visual transitions are concentrated only in the front and right areas. Under mild cognitive workload, drivers' gaze trajectory is mainly focused on the distant front area. As the workload level increases, the transition trajectory shifts to the junction between the near and far front areas. The current study also finds a difference between on-road testing and driving simulation: during on-road driving, drivers pay twice as much attention to the front area as they do in a driving simulator. The research provides practical guidance for the improvement of traffic safety.
Introduction
With the improvement of vehicle automation technology, human-machine co-driving and partially automated systems can be expected in the foreseeable future. The world's major automobile manufacturers will continue to develop driver assistance systems, which improve the safety and comfort of drivers and, moreover, provide better protection. However, with the assistance of vehicle automation, drivers may feel freer to become distracted, or bold enough to disengage from driving. These automated systems can interfere with drivers' ability to make safe decisions during important moments and can result in misjudged safety risks [1]. As in other fields, driving safety increasingly depends on the comprehensive performance of human-machine interaction and automation technology and requires in-depth knowledge of the driver. Designers should not assume that automation can seamlessly replace human drivers or that drivers can safely adapt to the limitations of automation [2]. A driver's concentration level directly affects their driving safety. Novice drivers cannot perceive the driving environment like experienced drivers and are more dangerous in the event of an emergency. The appearance of other road users on trajectories that may cause collisions attracts the attention of all drivers, but new drivers are more susceptible to this kind of harm than experienced drivers [20]. Dynamic billboards significantly widen drivers' scanning scope and lengthen their attention span on the billboard, greatly increasing driving risk [21]. Simulator studies of drivers' visual characteristics when overtaking on the highway showed that gaze duration, saccade duration, and saccade angle follow a normal distribution pattern, and identified the primary visual area [22]. Other studies conducted a driving simulation test with five-digit (simple) or 11-digit (complex) visual digit judgment tasks and recorded eye movements with an eye tracker: when the in-vehicle task requires intensive information processing, drivers' off-road fixations are frequent [23]. When performing single and multiple secondary tasks under naturalistic driving, older people are less likely to complete the tasks than young drivers [24].
At present, driving behavior research mainly adopts various technologies to evaluate drivers' eye movement and physiological information, such as bioelectric signals (ECG, EMG, and EEG), cameras, head-mounted infrared sensors, and eye trackers. Some studies have compared eye movements and hazard reaction times between simulated driving tasks and similar but video-based passive hazard perception tasks, finding that during active driving, participants scanned the road less and fixed their gaze near the front of the vehicle [25]. When sending and receiving emails on a tablet in the vehicle, sending emails greatly weakened drivers' attention and reduced their ability to control the vehicle, while the time spent looking at the display screen greatly increased [26]. Changes in drivers' gaze were observed while adjusting the car radio in different ways on the highway, with drivers classified through a hidden Markov model; the voice input adjustment method was found to be safe [27]. Based on eye movement data reported from 100 vehicles, drivers' gaze changes during fatigue, radio adjustment, and conversation were revealed, and such changes were predicted within six seconds through an HSMM model and an RNN [28]. The impact of hands-free phones on eye movement patterns while driving is that the eyes gaze less at road signs, other vehicles, and the speedometer, and the spatial distribution of eye gaze is wide during hands-free calling. When hands-free phones are not used, the line-of-sight distribution depends to a certain extent on changes in driving tasks [29].
In summary, previous works in this domain have produced a series of achievements in the study of driving workload and of visual attention distribution and transition. Regarding visual attention, previous studies have focused on changes of eye movements related to the in-vehicle radio, billboards outside the vehicle, in-vehicle gestures, and so on. However, existing studies have only looked at macroscopic cognitive distraction, and in-depth exploration of visual attention under different degrees of cognitive distraction is lacking. Moreover, most tests have been conducted in a driving simulation environment, and few have performed on-road driving experiments. In-depth studies on the characteristics of drivers' gaze fixation and transition under cognitive workload conditions are lacking, and the relationship between visual and cognitive attention has not been fully explored. In view of these research gaps, participants in this study were induced into three different levels of cognitive workload through the mental calculation of math problems of different difficulties. We designed an on-road driving experiment in a real driving environment, exploring drivers' visual attention fixation characteristics through visual entropy theory and their visual transition characteristics through Markov one- and two-step transition matrices under different cognitive workloads. The main research ideas are illustrated in Figure 1.
Participants
Many studies have proved that, subject to the constraints of experimental conditions, small samples can also be used in driver characteristic tests; usually, a sample size greater than 6 is considered effective [30][31][32][33]. Fourteen drivers were selected for the on-road driving test and 10 of them completed the experiment, including six male drivers and four female drivers. A questionnaire was used to obtain information and to assess the basic situation of the participants before the test. The age distribution ranged from 25 to 55 years old (Mean = 34; Standard Deviation (SD) = 8.65), with different driving experiences (Mean = 7.0; SD = 6.96) and driving mileages (Mean = 5.72; SD = 6.37). The test required all the drivers to have good driving manners; good vision after correction; good health; no visual or hearing impairment; sufficient rest the day before the test; no intake of stimulating substances, such as alcohol and caffeine; and to be non-smokers. The demographic information is shown in Table 1.

Table 1. Demographic information of the participants (n = 10; percentages in parentheses).
Accidents in the past three years: None, 6 (60%); Once, 3 (30%); More than once, 1 (10%).
Educational background: Bachelor, 2 (20%); Master, 6 (60%); PhD, 2 (20%).
Using mobile phone during driving: Hardly ever, 3 (30%); Sometimes, 5 (50%); Always, 2 (20%).

Figure 1. Research framework.
Apparatus
The test vehicle was a 2017 Sagitar 1.6T (FAW-Volkswagen, Changchun, China), equipped with an advanced data acquisition system to capture data related to drivers' actions and status in real time. The data acquisition system includes a laser distance sensor (INSIGHT-200A model, with a range of 0.5-200 m, ranging accuracy of ±0.5-1 m, measuring frequency of 10-50 Hz, input voltage of 6-24 V, and a laser wavelength of 905 nm); a radar rangefinder (Fluke 419D, with a range of 0-80 m, ranging accuracy of ±1 mm, and laser wavelength of 635 nm, which can memorize 20 results); a Tobii Glasses 2 wearable eye tracker (sampling frequency of 50 Hz/100 Hz, four eye movement cameras, automatic parallax correction, 82° horizontal angle of view with a vertical angle of 52°); and a BIOPAC MP160 16-channel physiological recorder (a data acquisition and analysis device for ECG, HRV, EEG, EMG, EGG, etc.; maximum sampling rate of 400 kHz and accuracy of ±0.003%). The four synchronized camera outputs include the front and rear of the vehicle, drivers' face and in-vehicle operation video, drivers' recorded eye movement, vehicle running characteristics, and traffic environment information for the whole process. The drivers operated in accordance with their own daily driving behaviors and habits, so their natural driving behavior was collected in a real environment, as shown in Figure 2. During the driving process, drivers fix their vision on a certain area and transfer it between different areas. The visual attention fixation characteristics are analyzed by recording drivers' gaze durations, and the visual attention transfer characteristics are studied by recording the horizontal and vertical coordinates of the fixation targets.
Driving Tasks
Participants were required to follow the instructions of the research staff strictly during the test. Under normal driving circumstances, drivers performed the main driving task; that is, they controlled the vehicle to drive at the speed limits of the different sections and carried out the prescribed operations as prompted. In this experiment, the method of listening to and performing mental mathematical calculations was selected. All the participants had a bachelor's degree or above and good mathematical calculation ability. According to the difficulty level of the questions, the tasks were divided into three levels: simple, general, and complex. Correspondingly, three levels of cognitive workload were defined, namely, mild, moderate, and deep. The duration of each cognitively distracted driving test was approximately 2 min, and each participant had to complete the tasks twice. During the mathematical calculation at each level, the experimenter flashed the camera and turned off the light, which was used for the calibration of the post-analysis data, and kept records of performance. The interval between the two groups depended on the drivers' degree of fatigue. See Table 2 for details.
The cognitive workload task was set up to allow drivers to experience different levels of cognitive distraction during the driving process. All drivers were told before the experiment to maximize their calculation accuracy. The calculation accuracy rate exceeded 90% for the simple tasks and 80% for the general tasks, but was less than 60% for the complex tasks; for driving safety, some participants even gave up and provided a random answer. The purpose of setting math problems of different difficulties was thus to induce different levels of cognitive distraction. This research focuses on the characteristics of eye movement behavior, and the influence of calculation accuracy is not considered in the analysis.
Experimental Routes
The road tests were conducted during off-peak hours on weekdays (usually 9:30-11:00 in the morning and 2:30-4:00 in the afternoon). The experiments analyzed in this paper were conducted under sunny weather conditions, when the temperature was suitable for driving. All the test drivers performed the on-road test on the same road section to ensure that they encountered similar traffic conditions; weather and other factors remained mostly consistent during the tests. The test road section was Zhongyuan Avenue in Harbin: the starting point was the intersection of Songpu Bridge and Zhongyuan Avenue, and the ending point was the intersection of Zhongyuan Avenue and Xiang'an North Street. The whole journey was 10 km long, and the route is depicted in Figure 3. The test road had six lanes in both directions, separated from the opposing traffic by a central separation belt. The traffic flow in this section was stable, the road alignment was good, drivers' field of view was clear, and the cognitive workload test was reasonably safe. No collision accident occurred in any of the experiments.
Experimental Process
(1) The experimental staff prepared the test notification. The test subject was informed of the whole process and filled in the personal information form. Then, the staff placed the equipment on the test subject.
(2) The staff guided the drivers into the test vehicle, switched on all the test equipment, and completed the calibration and synchronization of the equipment. Once the drivers were ready, they drove around the safety area for approximately 15 min to become accustomed to the setting.
(3) The participants entered the designated test section to start driving, and the staff recorded data from the start time of the test. The participants performed the prescribed operations under the guidance of the staff unless safety hazards arose. Data continued to be collected during the test to capture the driving environment and drivers' behavior under different conditions.
(4) On a specific road section, the drivers performed the cognitive workload task while maintaining the main driving task. They completed the corresponding mathematical calculations according to the experimental design. In case hazardous situations occurred, the drivers had the option to terminate the task.
(5) After the test, the staff turned off the test equipment and assisted the test subject in taking off the equipment. The participants completed the questionnaire and received the test compensation (200 RMB). The staff compiled, copied, and archived the test data.
Area of Interest (AOI) Division of Fixation
Peter et al. [34] divided the line of sight into nine equal non-overlapping fixation areas. Falkmer et al. [35] divided the view area into three categories: distant, front, and near-body areas. To cover special fixation points (left and right mirrors, steering wheel, etc.), the drivers' fixation interest area was divided into nine categories, as displayed in Figure 4. According to Tobii's default settings, two or more AOI glances separated by a blink interval of less than 75 ms were merged in the analysis. Considering that head movement affects the result, we manually encoded the fixations based on the fixation targets in Table 3; for example, a glance to the right side is coded as AOI-H and a glance to the left side as AOI-B regardless of head direction, and the rest may be deduced by analogy. This research adopts the Identification by Velocity Threshold (I-VT) algorithm, which can readily detect fixations [36,37]. The fixation targets of each AOI are described in Table 3.
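The I-VT classifier referenced above thresholds the point-to-point gaze velocity; a minimal sketch of the idea (the threshold, sampling rate, and data below are illustrative assumptions, not values from the study):

```python
import numpy as np

def ivt_fixations(x, y, fs=50.0, v_thresh=30.0):
    """Classify gaze samples as fixation (True) or saccade (False).

    x, y     : gaze angles in degrees, one sample every 1/fs seconds
    v_thresh : I-VT velocity threshold in deg/s (30 deg/s is a common choice)
    """
    vx = np.diff(x) * fs                 # angular velocity components, deg/s
    vy = np.diff(y) * fs
    speed = np.hypot(vx, vy)
    is_fix = speed < v_thresh
    # pad so the label array matches the number of samples
    return np.append(is_fix, is_fix[-1] if len(is_fix) else True)

# Illustrative trace: steady gaze, one fast shift, steady gaze again
x = np.array([0.0, 0.02, 0.01, 5.0, 10.0, 10.02, 10.01])
y = np.zeros_like(x)
print(ivt_fixations(x, y))  # [True True False False True True True]
```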
Research Method of Driver Visual Attention Fixation Characteristics
The information entropy method was used to measure visual attention across the different regions; the randomness of drivers' glances is reflected by the magnitude of the entropy rate, which bridges the gap between the qualitative methods currently used for evaluating vision in driving research and the quantitative methods that are actually desired [38]. The information entropy of a discrete variable is E, as shown in Equation (1); its maximum value is given in Equation (2), and the fixation entropy value E_n in Equation (3).
Here, D is the number of fixation areas (D = 9); P_{x_i} is the fixation probability for area i; T_{x_i} is the average fixation time of the drivers in area i; and i is the serial number of the region (from 1 to 9).
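Equations (1)-(3) did not survive text extraction. A standard formulation consistent with the variable definitions above is the Shannon entropy and its maximum (a hedged reconstruction on our part; the exact form of Equation (3), which presumably also weights by the fixation times T_{x_i}, cannot be recovered from the text):

$$E = -\sum_{i=1}^{D} P_{x_i}\,\log_{2} P_{x_i} \qquad (1)$$
$$E_{\max} = \log_{2} D \qquad (2)$$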
Research Method of Driver Visual Attention Transition Characteristics
The fixation process is continuous and random: the fixation point moves continuously and periodically, forming the eye movement. To study the characteristics of fixation transition, it is usually necessary to consider the correlation among the previous, current, and next gaze behaviors.
Assume that {X(t), t ∈ T} is a random process. For any positive integer n and continuous times t_1, ..., t_n, with P(X(t_1) = x_1, ..., X(t_{n−1}) = x_{n−1}) > 0, if the conditional distribution satisfies Equation (4), then X(t) is called a Markov process:

P{X(t_n) ≤ x_n | X(t_1) = x_1, ..., X(t_{n−1}) = x_{n−1}} = P{X(t_n) ≤ x_n | X(t_{n−1}) = x_{n−1}}    (4)

The drivers' visual transition characteristics are studied by constructing a Markov transition matrix. When the steady state is reached, the stationary distribution of fixations reflects the probability of each area being fixated during driving. By solving for the stationary distribution of the matrix, the degree of drivers' attention to each area over a sufficiently long time can be determined.
A one-step transition is defined as the transition of the gaze from the previous fixation point to the current one, or from the current fixation point to the next; its probability is called the one-step transition probability. The process in which the gaze is transferred from the previous fixation point to the next through the current fixation point is defined as a two-step transition, and its probability is called the two-step transition probability. The k-step transition probability matrix is represented by P(k).
Here, p_ij is the element in the i-th row and j-th column of the transition probability matrix, and the transition probabilities of each row sum to 1 (Σ_j p_ij = 1).
The Markov chain is ergodic: after a period, the system reaches a stable state, and after a sufficient transition duration the probability of reaching state j from the initial state i tends to a constant. If a state probability vector X = (x_1, x_2, ..., x_n) exists such that XP(k) = X, then X is called the stationary distribution of P(k), with

x_1 + x_2 + ... + x_n = 1    (7)
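A minimal sketch of solving XP = X under the normalization of Equation (7), via the left eigenvector of the transition matrix with eigenvalue 1 (the matrix below is illustrative, not the study's data):

```python
import numpy as np

def stationary_distribution(P):
    """Return x with xP = x and sum(x) = 1 for a row-stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)        # left eigenvectors of P
    k = np.argmin(np.abs(vals - 1.0))      # eigenvalue closest to 1
    x = np.real(vecs[:, k])
    return x / x.sum()                     # enforce x1 + ... + xn = 1

# Illustrative 3-state glance transition matrix (rows sum to 1)
P = np.array([[0.90, 0.05, 0.05],
              [0.30, 0.60, 0.10],
              [0.40, 0.10, 0.50]])
print(stationary_distribution(P))          # long-run share of attention per area
```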
Analysis of Drivers' Visual Attention Fixation Characteristics
Visual entropy is a subjective measure of the image information received by the human eye, which can reflect the driver's visual attention. When E = 0 and P_{x_i} = 1, the entropy rate value E_n takes its minimum value of 0, indicating that during the driving process the driver's fixation is concentrated on one area while other areas are ignored. If the focus is always on the central area, then visual attention is concentrated only on the road ahead; awareness of the overall traffic environment is weak, although the safety hazard is small under simple traffic conditions. If the focus stays on the road ahead for a long time without observation of other areas, identifying unexpected situations is difficult, and drivers should properly look away from the forward road scene to decrease the likelihood of safety hazards from unexpected events. If, on the other hand, the vision is focused on other areas, dangerous situations and traffic accidents may also occur.
In this study, the visual entropy of the entire driving process is statistically analyzed at a macro level. After data processing and analysis, the fixation duration of each region for the 10 test subjects is shown in Table 4 (abnormal values, much lower than the others, are marked in gray), and the fixation probabilities are presented in Table 5. The fixation entropy value of each driver is illustrated in Figure 5.
The average fixation duration can reflect drivers' attention to each region. Table 4 shows that most drivers reasonably allocate their fixation duration across areas. The anomalous data reveal that a small number of drivers dwell only briefly on the left-view mirror and the far-left and far-right areas; these drivers therefore do not give enough attention to targets farther away from them. Table 5 shows that drivers have the largest fixation probability on the area near the front. The gaze probability over the overall front area exceeds 90%, indicating that each driver keeps a clear view ahead most of the time. Drivers No. 1-6 are male, whereas No. 7-10 are female. As displayed in Figure 6, the average entropy rate of male drivers exceeds 3.2, whereas that of female drivers is just above 2.6, a difference of 23.08%.
The normality test results for the sample are shown in Table 6 and Figure 7. The average E_n value of the sample is 3.04016, the median is 3.0027, and the sample size is 10. Based on the Kolmogorov-Smirnov test results (sig. = 0.2 > 0.05), the sample follows a normal distribution. The Q-Q test in Figure 7 reveals that all points are located near the straight line, confirming normality. The fixation entropy rate of driver No. 5 is closest to the mean and median values; since the sample is normal, the visual transition characteristics of driver No. 5 are selected to reflect the regularity of the overall sample.
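A minimal sketch of the normality check described above (the E_n values below are illustrative placeholders, not the study's measurements; note that estimating the normal parameters from the sample itself, as SPSS-style tests do, technically calls for the Lilliefors correction):

```python
import numpy as np
from scipy import stats

# Illustrative fixation entropy values for 10 drivers
en = np.array([3.31, 3.25, 3.18, 3.30, 3.04, 3.12, 2.71, 2.60, 2.65, 2.84])

# KS test against a normal with the sample's own mean and standard deviation
stat, p = stats.kstest(en, 'norm', args=(en.mean(), en.std(ddof=1)))
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")  # p > 0.05 -> consistent with normality
```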
Visual Transition Trajectory Characteristics
The fixation trajectories of drivers under mild, moderate, and deep cognitive workloads, along with the normal driving state, are displayed in Figure 8. The horizontal and vertical coordinates indicate the range of the fixation points recorded by the eye tracker. In terms of the areas covered, drivers' vision is mainly concentrated on the front during the whole test. In normal driving, drivers have the widest coverage of the current driving lane, covering almost the entire front and far areas, although they may simply have been bored by the normal driving task and looked around more. Under mild cognitive workload, fixation transitions mainly focus on the distant area directly in front. As the workload level increases, the fixation transition area gradually shifts to the junction between the near and distant areas directly in front. Based on previous work [39] and the outcome of this study, the rules by which vision and cognition occupy drivers' attention in different states are obtained. As illustrated in Figure 9, the cognitive and visual effects on drivers' total attention can be derived: in normal driving conditions, drivers can pay close attention to all areas because visual resources are sufficient. When mild cognitive workload occurs and occupies attention resources, the workload remains lower than the visual resources, so drivers can still look ahead and ensure safe driving. As the degree of cognitive workload increases, cognitive resources gradually exceed visual resources; to drive safely, drivers must focus on the junction between the near and far front areas while maintaining a wide range of line-of-sight movements. As the workload deepens, the hidden dangers to driving safety increase.
Visual Attention Transition Characteristics in Various Areas
Areas A and B belong to the left side of the current driving lane; areas D and E belong to the front of the current driving lane; and areas G and H belong to the right side of the current driving lane. These areas are grouped in this way for analysis of the statistical data, and the one- and two-step transition probability matrices are computed for them. The one-step transition probability matrix can reflect the position change of drivers' vision at consecutive moments, whereas the two-step transition probability matrix can reflect drivers' looking back: whether the line of sight returns to the same area after two shifts is observed, that is, whether a certain area requires continuous gaze.
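As a concrete illustration (a minimal sketch rather than the authors' implementation; the AOI grouping and fixation sequence are hypothetical), the one-step matrix can be estimated from consecutive AOI labels, and the two-step matrix is its square:

```python
import numpy as np

AREAS = ["L", "F", "R", "IN"]  # left, front, right, in-vehicle (hypothetical grouping)

def one_step_matrix(seq):
    """Row-stochastic one-step transition matrix from a fixation sequence."""
    idx = {a: i for i, a in enumerate(AREAS)}
    counts = np.zeros((len(AREAS), len(AREAS)))
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

seq = ["F", "F", "R", "F", "IN", "F", "F", "L", "F", "F", "R", "R", "F"]
P1 = one_step_matrix(seq)
P2 = P1 @ P1  # two-step transition probabilities
print(np.round(P1, 2))
print(np.round(P2, 2))
```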
By comparing Figures 10 and 11, certain similarities can be noted. For example, drivers' vision has the highest probability of shifting from the front to the front (Area D+E to D+E) in all states, and this probability is above 0.9. In the normal driving state, drivers can visually search and observe the traffic environment well. When performing cognitive workload tasks, drivers show obvious rigidity in their visual search, and the fixation point mainly shifts from the front fixation area to the right. As the degree of cognitive workload increases, the number of times the drivers' vision stays in the fixation area in front of the current lane decreases. When the cognitive workload task is performed, the drivers' main task is lane keeping and driving safely with the vehicle. The reduced attention to the front fixation area undoubtedly increases the risk of driving.
Further comparison of Figures 10 and 11 reveals certain differences. Tables 7 and 8 show the areas and differences in the one- and two-step transition probabilities of the drivers' vision under different states, respectively. During normal driving, the probability that the drivers' vision shifts from the in-vehicle area to the left side and to the front increases by 49.97% and 40.02%, respectively, while the proportion of transitions remaining inside the vehicle decreases by 60.01%. The large one-step transition probability indicates that the information in the vehicle is complex and that multiple consecutive gaze points are necessary to obtain it. The two-step transition probability drops drastically, indicating that drivers' fixation on the interior of the vehicle is not a continuous behavior; even after a short stay, the vision is transferred to the left and front areas in time. The proportion of attention to the right-view mirror also doubles, indicating that continuous attention is required when observing the right-view mirror. In mild and moderate workloads, the proportion of drivers' vision transitions from the right side to the front area increases. By contrast, the proportion of continuous stays in the right area is significantly reduced, indicating that under the influence of mild and moderate cognitive workloads, drivers' eyes still shift to the right front area, but the proportion of improvement decreases as workload increases. In the case of deep workload, no significant change is observed in the one- and two-step transition probabilities of the drivers' vision.
Visual Attention Characteristics in Steady State
Assuming that drivers' visual attention undergoes a sufficiently large number of transitions, a steady state is reached in which their visual attention can be analyzed. The steady distribution of fixation is shown in Equation (8). Note that X1, X2, X3, and X4 represent the steady distribution values under mild, moderate, and deep cognitive workloads and under the normal driving state, respectively.
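A minimal way to obtain such a steady (stationary) distribution, assuming the fixation process behaves as a regular Markov chain with an estimated one-step matrix (the matrix values below are hypothetical), is to solve πP = π:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi with pi @ P = pi and sum(pi) = 1.

    Found as the left eigenvector of P for eigenvalue 1.
    """
    eigvals, eigvecs = np.linalg.eig(P.T)
    i = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, i])
    return pi / pi.sum()

# Hypothetical one-step matrix over (left, front, right) areas.
P = np.array([[0.10, 0.85, 0.05],
              [0.02, 0.93, 0.05],
              [0.01, 0.90, 0.09]])
print(np.round(stationary_distribution(P), 3))
```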
From the perspective of the steady distribution values, in the normal driving state the fixation points of drivers are scattered across all fixation areas, with the largest probability of occurrence in the front of the current lane, followed by the right of the current lane. Under mild and moderate cognitive workloads, drivers' fixation points appear only in the front and right areas of the current lane. Under deep cognitive workload, drivers' gaze also has a small probability of focusing on the left lane.
Falkmer's research shows that both urban and rural roads increase the driver's cognitive demands, but after drivers enter the city, they pay more attention to the outside world and require a higher level of cognitive attention [40]. Figure 12 illustrates a comparison of various works: rural roads [41], Chinese urban trunk roads [42], and Chinese highways [43]. Because the AOI definitions differ across the literature, the AOIs are grouped into the left, front, right, and other areas of fixation. References [42] and [43] conducted real-vehicle tests, whereas Bao [41] performed a driving simulation test. Comparing the situations in China and the United States, the attention of Chinese drivers to the front is, in percentage points, more than twice that of American drivers. When driving on urban roads, attention to the front is higher than on highways. In a real road driving environment, the driver's visual attention bears more mental workload than in a simulated environment [44]. A significant difference also exists between the results from driving simulation and the actual driving environment, and some researchers have confirmed this view [18,21,45]. In actual driving, the driver bears a greater risk than in a driving simulator, with drivers focusing more on forward transitions in the real driving environment.
Limitations and Future Work
The study has certain limitations, and the following problems will be addressed in future research: (1) At present, we only studied the overall accuracy of the mathematical calculations under different cognitive workload conditions, but did not consider the relationship between calculation accuracy and the driver's visual attention. Future research will take calculation accuracy as an indicator to analyze the influence of different cognitive workloads on such accuracy. (2) At present, visual entropy has only been analyzed for the whole driving process, not for the different cognitive levels. The next step will be to explore its changes under different levels of cognition. (3) Other comprehensive eye movement indicators, such as pupil diameter change and blink frequency, can be considered to analyze the visual attention of drivers under cognitive workload conditions. Dynamic AOI division will also be adopted for research. (4) More participants will be recruited to explore the visual attention characteristics of drivers under cognitive workload, and we will add driving simulation experiments for supplementary comparative analysis. (5) At present, we generated three different degrees of cognitive workload for participants through the calculation of mathematical problems. In future work, we will quantify the indicators and give specific definitions for the three levels.
Conclusions
Our findings are summarized into the following observations: (1) Drivers' fixation entropy rate values for each area are calculated from the experimental data, and their visual attention is measured by a quantitative method. The comparison between male and female drivers shows that the mean fixation entropy rate of males is 23.08% higher than that of females. (2) Under normal driving conditions, drivers' fixation covers almost the entire front fixation area. In the case of mild workload, drivers' eyes are mainly focused on the distant area directly in front.
As cognitive workload increases, the area where the eyes are shifted moves toward the junction between the near and far areas directly in front. The relationship between cognitive and visual occupation of attention resources is also analyzed. When driving normally, visual resources are sufficient. When cognitive workload occurs, visual and cognitive resources are in a competitive relationship. As the degree of cognitive workload increases, cognitive attention occupies more resources than the visual. (3) During normal driving, drivers' fixation on the in-vehicle area is not a continuous behavior; even after a short stay, the vision is transferred to the left and the front area very quickly. (4) In mild and moderate workloads, the proportion of drivers' vision transitions from the right side to the front area is increased, whereas the proportion of continuous stay in the right area is significantly reduced. Therefore, under the influence of mild and moderate cognitive workloads, drivers' eyes still shift to the right and front areas. However, the proportion of improvement decreases as workload increases. In the case of deep workload, no significant change is observed in the one- and two-step transition probabilities of the drivers' vision. (5) The geometry of the road is not considered, even though the driver's line of sight on curves or tangents of the road affects dynamic vision. This issue will be further considered in future work.
Assessing the Impact of Engineering Measures and Vegetation Restoration on Soil Erosion: A Case Study in Osmancık, Türkiye
The prioritization of preventing soil loss in Türkiye's watersheds has become a pressing concern for planners. Numerous mathematical models are presently utilized on a global scale for soil erosion prediction. One such model is the Revised Universal Soil Loss Equation (RUSLE), commonly used to estimate average soil loss. Recently, there has been an increased emphasis on utilizing USLE/RUSLE in conjunction with Geographic Information System (GIS) technology, enabling grid-based analysis for predicting soil erosion and facilitating control measures. This study evaluates the effectiveness of erosion and flood control initiatives started in the 1970s within the Emine Creek watershed and its tributary rivers in Osmancık, Türkiye, utilizing RUSLE/GIS technologies. Two maps illustrating potential erosion risk were produced for two temporal intervals, and a comparative analysis was conducted to evaluate the changes that occurred. The implementation of measures such as terracing, afforestation, and rehabilitation in the watershed led to a notable predicted decrease in soil loss. From 1970 to 2020, the rate of estimated soil loss was reduced from 417 to 256 metric tons per hectare per year, demonstrating the effectiveness of soil conservation measures in a semi-arid and weakly vegetated area at reducing potential soil loss.
Introduction
Water erosion, which leads to soil loss, poses a significant threat to natural resources and agricultural productivity [1]. Erosion harms soil health, water quality, hydrological systems, crop yields, habitats, and ecosystem services [2,3]. Soil erosion is impacted by various factors, including wind, precipitation, associated runoff processes, soil erosion susceptibility, land cover, and management characteristics [4,5]. The impacts have led to an estimated average annual soil loss of 12 to 15 tons per hectare on erodible lands worldwide [6]. In the countries of the European Union, this figure stands at 2.46 tons per hectare per year [5], while in Türkiye, it reaches 8.24 tons per hectare per year [7].
Various engineering measures and soil conservation techniques are globally used to combat water erosion and restore degraded lands. When lands are degraded by deforestation, overgrazing, inappropriate land use, etc., and subjected to heavy rainfall, they can experience extensive erosion. In many parts of the world, terracing and engineering measures are taken to control water erosion on sloped lands [8-11]. Diversely constructed terraces act as a barrier that slows the water flow rate and increases water retention. In addition, the method of contouring is used for comparable purposes in various regions of Europe and China [12-15]. Vegetation on degraded lands can aid in forming plant root systems that prevent wind and water erosion by holding the soil together. In countries with wastelands and hilly areas, such as in South America and Nepal, this technique (bioengineering) is used [16-18]. In addition to mitigating slope erosion, engineering interventions, such as the implementation of check dams, are employed to manage gully erosion [19]. With these structures, it is essential to reduce the water's flow rate and hold sediment. These engineering measures aid in preventing water erosion while also enhancing water quality in the watershed [20-22]. To combat water erosion and prevent ecosystem deterioration, riverbanks are protected so that gully erosion damage does not increase [23].
Monitoring soil erosion is an essential component of soil conservation planning. Even in experimental plots, determining soil losses is expensive and time-consuming [24]. Understanding how these erosion processes occur and identifying areas susceptible to soil loss can significantly enhance land management. Several empirical water erosion prediction models have been developed to assess regions with high erosion intensity and to predict regions with limited data [4,25]. The Revised Universal Soil Loss Equation (RUSLE), proposed by Wischmeier and Smith [26] and Renard et al. [27], is the most widely used empirical model for soil loss estimation. The RUSLE model, which can be evaluated for watershed protection, is adaptable, time- and cost-efficient, and practical for estimating soil losses in areas with insufficient data [28]. By considering relationships between land use and cover, topography, soil type, and precipitation, RUSLE can provide estimates of long-term annual soil loss. The RUSLE model determines soil loss through the multiplication of six parameters: (1) the erosivity factor (R), (2) the soil erodibility factor (K), (3) the slope length factor (L), (4) the slope steepness factor (S), (5) the cover management factor (C), and (6) the support practice factor (P). These parameter values are determined through field and laboratory research [26]. Factors C and P are associated with soil conservation and land use, whereas factors R, K, and LS are linked to the ecological characteristics of the study area [29].
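For illustration, a minimal per-cell implementation of this six-factor product is sketched below (not the authors' GIS workflow; the factor grids are hypothetical values in plausible ranges):

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (t ha^-1 yr^-1) per grid cell,
    A = R * K * LS * C * P, with LS combining slope length and steepness."""
    return R * K * LS * C * P

# Hypothetical 2x2 factor grids for a tiny raster.
R = np.full((2, 2), 745.0)                   # rainfall erosivity
K = np.array([[0.02, 0.03], [0.03, 0.04]])   # soil erodibility
LS = np.array([[5.0, 74.0], [30.0, 120.0]])  # topographic factor
C = np.array([[0.05, 0.10], [0.50, 0.15]])   # cover management
P = np.array([[0.2, 1.0], [0.75, 0.2]])      # support practice
A = rusle_soil_loss(R, K, LS, C, P)
print(np.round(A, 1))
```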
The cover management factor (C factor) represents the most easily manageable conditions and is a variable that land planners can readily influence to reduce soil loss rates [30-32]. C factor values range from 0 to 1 and are determined by the weighted average of soil loss ratios (SLR) [27]. The most significant impact on the C factor arises from changes in land use, particularly deforestation caused by the expansion of agricultural land. Additionally, in other management-related applications, the P factor is also considered. The P factor encompasses mitigation practices that reduce the erosive capacity of water flow by modifying drainage patterns, flow concentration, flow velocity, and hydraulic forces [27]. Globally, soil erosion is effectively controlled by soil and water conservation measures. Even on steep slopes (25°), soil erosion can be reduced by as much as 70% by applying soil conservation measures [33].
Nevertheless, modifying the support practice factor (P) typically requires increased monetary investments and soil conservation subsidies [13,31,34,35]. In addition, the C and P factors can be used in scenario analysis to evaluate the impact of various soil conservation practices on soil loss, determining whether they mitigate or exacerbate the problem [36]. Ozcan and Aytas [37] simulated various soil conservation practices using the RUSLE model to estimate the impact of those measures on soil loss and sedimentation in the Bakkal Dolin Lake (Çankırı/Turkey).
This study aims to determine the variation in the amount and severity of potential soil loss due to engineering measures implemented in a semiarid watershed with sparse vegetation. In addition, it evaluates the effectiveness of slope (terracing) and gully (check dam) improvement measures in combating erosion. We aimed to predict the spatial influence of integrating afforestation and support practices on soil loss flux using RUSLE/GIS. The study area chosen was the Emine Creek watershed in Osmancık, known for its sensitive climate and topography prone to erosion. The selection of this watershed is based on its potential for addressing erosion issues in comparable geographical areas. Situated within the Mediterranean basin, where erosion is widespread, this watershed allows the relevant institution to address and mitigate erosion actively. Additionally, this watershed is a valuable research site for studying erosion processes.
As intended, erosion control measures were applied in the study area in 1970. The LS, K, and R factor values were assumed to be stable, allowing us to observe the effects of the measures. This study represents the first examination of the impact of erosion control measures in Türkiye. However, more information is needed regarding the effects of land cover and management techniques. Thus, planning the implementation and evaluation of erosion mitigation measures will be simplified.
Study Site
The research site is situated in the northern part of the Central Anatolia Region, positioned at the interface between the humid climate region of the Black Sea and the semi-arid climate region of Inner Anatolia. During the evaluation period of this study, approximately 63.5% of the project area consisted of fertile and degraded forest lands. Residential and agricultural areas accounted for 24.3% of the watershed (Figure 1). The elevation of the watershed ranges from 410 to 1541 m, with an average elevation of 815 m, and exhibits a very rough and faulted topographic structure. The soil belongs to the order of Inceptisols (Typic Xerochrepts) [38], with a depth varying between 20 and 120 cm. Severe surface erosion occurred in areas with high elevation and low vegetation cover, leading to the loss of the A horizon and exposing the bedrock. The average annual rainfall in the area is 355 mm, with insufficient rainfall and irregular seasonal distribution exacerbating the effect of drought. According to the Thornthwaite method, the climate type is classified as "semi-arid, mesothermal, no excess water or slight excess water, close to maritime climate" [39]. The common forest stands in Central Anatolia consist of oak (Quercus pubescens Willd., Q. cerris L., Q. infectoria G. Olivier), pine (Pinus nigra J.F. Arnold subsp. pallasiana (Lamb.) Holmboe var. pallasiana, P. sylvestris L.), and juniper (Juniperus oxycedrus L. subsp. oxycedrus, J. excelsa M. Bieb., J. foetidissima Willd.). The dominant tree species in the study area's forests are pine, oak, and juniper, depending on elevation and aspect.
Soil loss is estimated with the RUSLE model (Equation (1)): A = R × K × L × S × C × P, where A represents the average annual soil loss (t ha−1 year−1); R is the rainfall erosivity factor (MJ mm ha−1 h−1 y−1); K signifies the soil erodibility factor (t ha h ha−1 MJ−1 mm−1); L corresponds to the slope length factor; S represents the slope steepness factor; C indicates the cover management factor; and P denotes the support practice factor.
The R factor, also known as the rainfall erosivity factor, is determined by multiplying the total rainfall energy (E) by the highest 30-min rainfall intensity (I30), giving the product EI30 [26,40]. In this study, the R factor was obtained directly for the study area by Erpul et al. [41] (Table 1). Due to the limited number of meteorology stations in the study area for kriging, it is necessary to consider the influence of elevation classes on precipitation levels [42], and the Digital Elevation Model (DEM) of the study area was utilized to form the spatial R surface. Adjustments were made to account for the variation of rainfall with elevation within the watershed area [42] using Equation (2), where Ry refers to the corrected R factor value of the unknown unit, Py represents the average annual precipitation (mm) of the unknown unit, and Pr represents the average annual precipitation (mm) of the known reference station. Table 1. The RUSLE-R values (MJ mm ha−1 h−1 year−1) and their annual distribution at the study site [43].
The study area spans an elevation range of 410 to 1541 m. The Osmancık meteorology station, selected as the reference station, is at an elevation of 419 m. R values for the unknown units were computed using the DEM and ArcMap 10.6.1, based on Equation (2), which reflects that precipitation increases by 50 mm for every 300-m difference in altitude.
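Equation (2) itself is not reproduced in the extracted text. The sketch below assumes a simple proportional scaling Ry = R_ref × (Py/Pr), combined with the stated lapse of 50 mm of precipitation per 300 m of elevation; both the functional form and the example inputs are assumptions for illustration:

```python
def corrected_r(r_ref, p_ref, elev_ref, elev_cell, lapse=50.0 / 300.0):
    """Elevation-corrected RUSLE R value for one grid cell.

    Assumes Py = Pr + lapse * (elevation difference) and a proportional
    scaling Ry = R_ref * (Py / Pr); the proportional form is an assumption,
    not a transcription of the paper's Equation (2).
    """
    p_cell = p_ref + lapse * (elev_cell - elev_ref)
    return r_ref * (p_cell / p_ref)

# Reference station: Osmancik at 419 m with 355 mm annual precipitation;
# the R reference value and the cell elevation are assumed examples.
print(round(corrected_r(r_ref=683.5, p_ref=355.0, elev_ref=419.0, elev_cell=1019.0), 1))
```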
The study used a 1:25,000 scale K factor map, representing the soil's resistance to erosion, obtained from the Turkish Erosion Database [44]. K factors were digitized for soil unit measurements using Equation (3), as suggested by Torri et al. [45]. For the K factor calculation and mapping, 23,000 profile data points from 0-30 cm were utilized [46]. The digitization of soil units involved using the equation suggested by Romkens et al. [47] and revised by Renard et al. [27]. This was done by associating the vector layers containing polygons of soil classes, erosion, and texture combinations, separating different K classes. Furthermore, the results were compared with K factor values from the Türkiye Big Soil Groups [48].
In Equation (3), K represents the RUSLE soil erodibility (t ha h ha−1 MJ−1 mm−1); DG represents the geometric mean diameter of the principal soil components (mm); OM denotes organic matter; and C is the percentage of clay.
Topography is the most influential factor controlling soil erosion risk. To calculate the LS factor, which accounts for slope length and steepness, the DEM of the watershed was used in conjunction with Equation (4) and the ArcMap 10.6.1 Hydraulic Accumulation tool. Hydraulic Accumulation has the advantage of considering the area contributing to the slope and the slope's characteristics, allowing for a more comprehensive assessment of the intricate topography [49,50].
In this context, LS represents the RUSLE topographical factor, X denotes the surface flow accumulation number, n signifies the size of the cells in which calculations are performed, and θ represents the slope steepness (°).
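Equation (4) is likewise not reproduced here. The sketch below uses a widely cited flow-accumulation form of the LS factor (after Moore and Burch); the constants and exponents are assumptions standing in for the paper's Equation (4):

```python
import numpy as np

def ls_factor(flow_acc, cell_size, slope_deg):
    """LS factor from flow accumulation (assumed Moore-and-Burch form):
    LS = (flow_acc * cell_size / 22.13)**0.4 * (sin(slope) / 0.0896)**1.3
    """
    slope_rad = np.deg2rad(slope_deg)
    return (flow_acc * cell_size / 22.13) ** 0.4 * (np.sin(slope_rad) / 0.0896) ** 1.3

# Hypothetical cells: 10 m resolution DEM, varying accumulation and slope.
print(np.round(ls_factor(np.array([1, 50, 500]), 10.0, np.array([5.0, 15.0, 30.0])), 2))
```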
The C factor corresponds to a cover management factor that varies depending on the type and coverage of vegetation. Generally, vegetation reduces the kinetic energy of raindrops before they impact the soil surface, significantly influencing erosion. Compared with the other factors, the C factor has a substantial impact on the increase or decrease of soil loss over a brief period. The C factor, which ranges from zero (a well-protected land cover) to one (barren areas), consists of the following five subfactors: prior land use (PLU), soil moisture (SM), surface cover (SC), surface roughness (SR), and canopy cover (CC). Consequently, while C factor values for forest and grassland uses are low, C factor values for settlement areas are high. This study identified land use in 1970 and 2020 using CORINE maps, afforestation project maps, and forest maps generated from aerial photos and land surveys. The C factor can frequently be calculated using remote sensing techniques [37], but C factor mapping for 1970 could not be performed this way due to a lack of satellite images. Forest maps were prepared with aerial photos and field measurements, and their accuracy was verified using ground control points every 400 m, resulting in high spatial accuracy and pixel resolution of the stand information. The land use in the study area was classified into seven categories: forest, grassland, settlement, agriculture, water, sandy area, and rocky area. In addition, forest was subdivided into three classes according to forest cover. Table 2 lists the C factor values identified in the literature for each land use. Referring to studies reporting values for similar land cover, or to studies conducted in the same area or region, is an easier way to determine the C factor [50]. The C factor values for forest and grassland uses were adopted from Ozcan et al. [51], Saygin et al. [52], and Ozcan and Aytas [37], as these studies were conducted in the same forest region (Pinus nigra subsp. pallasiana var. pallasiana, Quercus pubescens); values for agricultural land uses were adopted from Wischmeier and Smith [26], Gabriels et al. [53], Panagos et al. [31], and Ebabu et al. [13]. The C factor value of agricultural areas was found to be 0.5, chiefly because the agricultural lands were located in marginal areas.
Within the scope of the watershed improvement project, slope improvement (terraces) and gully improvement (threshold and reverse dams, etc.) measures were implemented for the P factor. A combined length of 4646 km of terraces was constructed on a land area spanning 2194 ha to mitigate slope erosion. Additionally, 13,464 check dams were built to address erosion in gullies. In this study, the 1970 P factor, calculated based on engineering measures, was set to 1 (RUSLE-P = 1). The P factor for 2020 was calculated using Wenner's method [54,55] (Equation (5)).
In this context, S represents the percentage of slope steepness.
Several RUSLE studies have extensively used this equation [55-58]. At the scale of a large watershed, it is very difficult to represent conservation measures such as terracing, tillage, and others on the land use map [56]. In these instances, using empirical equations becomes a viable method for calculating the P factor [55]. Wenner's method presupposes that the P factor relates to topographical features and the inclination angle [34,57,59].
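Equation (5) is not reproduced in the extracted text. A form of Wenner's method often quoted in RUSLE studies is P = 0.2 + 0.03·S, with S the slope steepness in percent and P capped at 1; treat these coefficients as an assumption for illustration rather than a confirmed transcription of the paper's equation:

```python
def wenner_p(slope_percent):
    """P factor from slope steepness via Wenner's method (assumed form
    P = 0.2 + 0.03 * S, clipped to a maximum of 1)."""
    return min(0.2 + 0.03 * slope_percent, 1.0)

for s in (0, 10, 20, 30):  # slope steepness, %
    print(s, round(wenner_p(s), 2))
```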
Erosion of a slope occurs when runoff transports soil particles and nutrients down a sloped surface. The primary causes of slope erosion are raindrop impact, sheet erosion, rill erosion, and gully erosion. Widespread erosion control techniques for slope erosion include terracing, mulching, cover crops, and contour plowing. When concentrated water flow in a well-defined channel creates deep trenches in the soil, this is called gully erosion. The mechanisms that contribute to gully erosion are headcut erosion and bank erosion. The most common erosion control methods for gully erosion are vegetation restoration, rock check dams, grading and fill placement, diversion channels, gabions, and riprap [60].
In our study, slope erosion rehabilitation engineering measures include terracing with mini excavators, geosynthetic terraces, stone cordon terraces, and afforestation. To prevent gully erosion, masonry check dams, gabion check dams, wicker check dams, stone check dams, and silt fence check dams are utilized.
Results
Annual soil losses over the research site, according to the RUSLE model, were estimated using RS/GIS. All parameters were converted into 10 × 10 m grids and multiplied, and the spatial distribution of predicted soil loss from the study area was then obtained. The R and K factors were calculated independently, considering that the LS, R, and K factors would not change over the short period from 1970 to 2020 in which engineering measures were taken, whereas the C and P factors would change. The RUSLE-R factor layer was calculated by Erpul et al. [41] using the DEM of the watershed. The RUSLE-R factor at the research site varied between 683.5 and 919.42 MJ ha−1 mm h−1 year−1, with a mean value of 745.02 MJ ha−1 mm h−1 year−1. The LS factor was calculated by Equation (4) with the DEM of the study area, accounting for the interactions between flow accumulation and topography. LS factor values ranged from 0 to 430.64, with a mean value of 73.94. Most of the study area consists of brown forest soils (85%), and their K factor values, depending on texture and soil depth (horizons), varied from 0.02 to 0.04 t ha h ha−1 MJ−1 mm−1, with a mean value of 0.03 t ha h ha−1 MJ−1 mm−1 (Table 3) (Figure 3). The C factor values are given in Table 4, taking land use and cover into account according to [26,29,51,52,61]. The change in land use type/land cover (LUT/LC) gives the change in the C factor; for that reason, 1:25,000-scale forest maps from 1970 and 2020 were used. Forest areas expanded significantly between 1970 and 2020, increasing by 4193.23 ha (approximately 35% of the watershed) to reach a total of 7517.35 ha (61.9%). When this expansion was evaluated according to the classes of forest cover, there was an increase of approximately 29% in forest with 0-30% cover (C factor 0.05) and about 6% in forest with 30-100% cover (C factors 0.10 and 0.15). The greatest loss of cover occurred in grassland, which decreased from approximately 5314.62 ha in 1970 to approximately 1703.58 ha in 2020. The residential area within the watershed experienced a notable expansion from 92.96 ha in 1970 to 520 ha in 2020, primarily attributed to population growth (Figure 4). For 2020, the P factor value pertaining to the terraces in the study area was determined to be 0.2; consequently, predicted soil erosion in these regions is reduced irrespective of any additional factors. In areas where afforestation is viable, either through machinery or human labor, the P factor may assume a value of 0.2. However, in regions with limited suitability for afforestation, the P factor remains at 1. Upon comprehensive assessment, the P factor for the entirety of the watershed was determined to be 0.75 (Figure 3).
All factors in the study area were multiplied for 1970 and 2020 to determine potential soil losses. The average value decreased from 417 t ha−1 year−1 in 1970 to 256 t ha−1 year−1 in 2020. Thus, about 160 t ha−1 year−1 of potential soil loss has been prevented because of the watershed improvement projects (Figure 5). In addition, the improvement in the P and C factors has prevented soil erosion in the area by an average of 1057 t ha−1 year−1 (71%) over fifty years (Table 5). Although the residential area expanded to 520 ha, the potential soil loss experienced a reduction of 158 t ha−1 year−1 (Table 6).
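As a quick arithmetic cross-check (not part of the paper; it merely re-derives the headline percentages from the values quoted above and in the Conclusions):

```python
# Values quoted in the text (t ha^-1 yr^-1).
loss_1970, loss_2020 = 417.0, 256.0       # with conservation measures
loss_2020_no_measures = 1474.0            # projected without measures

prevented = loss_1970 - loss_2020                        # ~161, reported as ~160
reduction = (loss_1970 - loss_2020) / loss_1970          # ~0.386, reported as 38%
ratio_no_measures = loss_2020_no_measures / loss_1970    # ~3.53, cf. the 353% figure
print(round(prevented), round(100 * reduction, 1), round(ratio_no_measures, 2))
```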
Discussion
The mean value of the RUSLE-R factor at the research site is 745 MJ ha−1 mm h−1 year−1. Due to the elevated altitude of the study area, reaching a maximum of 1541 m, the R factor values exhibited a notable increase. Compared with the European R factor values ranging from 0 to 900 MJ ha−1 mm h−1 year−1 [62] and the observed R factor value of approximately 100 MJ ha−1 mm h−1 year−1 in Saudi Arabia [63], the value here indicates a notable erosion potential at the research site. Bayramin et al. [64] showed that the R factor is very remarkable for the semi-arid region of Central Anatolia. In these semi-arid regions of Central Anatolia, climatic inconsistency is a significant indicator of potential risk, as extreme weather events occur, and rainy and growing seasons rarely coincide.
The spatial analysis of the LS factor revealed that the study area's topography mainly promotes erosion: steeper slopes collecting more runoff dominate, and only a small portion of the study area is topographically less prone to erosion. Although the resolution of the digital elevation model (DEM) employed to compute the LS factor within the study region is deemed sufficient at 10 × 10 m, it is worth noting that it could potentially yield elevated values in regions characterized by slopes exceeding 20 degrees [65]. The average value of the LS factor (73.94) calculated for the study area is approximately ten times higher than that of Austria, which has the highest average LS factor (6.95) in the European Union [66]. The LS factor alone is thus sufficient to produce greater erosion in the study area compared to Europe, irrespective of other influencing factors.
The K factor values indicate that the study area has high soil erodibility. Sandy soils with high infiltration rates have low K factor values, as their sediment is less easily transported. Clay soils have low K factor values due to their high resistance to soil detachment. Silt soils have high K factor values compared to the others because of their high runoff rates [67]. Regarding soil erosion, inefficient agricultural and grassland practices in the watershed pose a moderate to high risk [68]. In addition, due to the semi-arid climate of the research area, the organic matter content of the soil is even lower (0.2%) than that of dry tropical forests [69]. Thus, the low organic matter content is reflected in the high RUSLE K factor values calculated for the region.
The analysis of landscape change revealed that forest and settlement areas increased while agricultural and grassland areas decreased. In 1970, engineering and afforestation measures were implemented in the study area, particularly on slopes where labor was employed to construct terraces. The significant increase in forest coverage is mainly due to reforestation efforts in the 1980s, which resulted in a substantial expansion of forest areas and their canopy cover. Examining the LUT/LC distributions for 2020 reveals the success of the watershed improvement initiatives. Particularly by controlling grazing, the degraded forest areas were rehabilitated. As a result of the successful afforestation studies, there have been increases in forest areas and their canopy closure.
Moreover, the control and reduction of grazing allowed vegetation to become more diverse, and the spacing between plants increased. As a result, the canopy closure of recovering oak and juniper forests has slowed sheet and gully erosion. Vegetation can now be observed growing within gullies and cracks, which decreases gully erosion and the amount of transported sediment.
The C factor reflects the effects of all associated cover and management variables [30]. The values of C vary from nearly zero, indicating well-protected soils, to 1.5, representing surfaces with finely tilled ridges that are highly susceptible to soil erosion. On local scales in Türkiye, Ozcan et al. [29,37,51] and Ozhan et al. [61] studied the C factor only for forest and grassland. In Türkiye, Hacisalihoglu [70] determined the C factor to be 0.01 for coniferous forests and 0.07 for pastures in semi-arid regions, while Ozhan et al. [61] determined values between 0.001 and 0.021 for deciduous forest and 0.13 for inner-forest glade in humid regions. Mati [71] used surface cover and canopy cover to calculate the C factor and obtained a value of 0.007 for Kenyan forests, and according to Mahamud et al. [72], the C factors in the Malaysian study area range from 0.01 to 1.00. Our results are comparable to those of Hacisalihoglu [70] for forests and grasslands, while Ozhan et al. [61] found lower C values for grasslands. This difference in C factor values could be due to variations in topography, vegetation density, and climate. Rehabilitation works such as afforestation and planting carried out in the area can reduce soil erosion by at least 6%, and the protection of forested regions and grasslands can aid in reducing soil erosion [73].
The P factor represents the ratio between soil loss with the support practice and soil loss with upslope and downslope tillage. The P factor, along with the C factor, is the primary driving factor among the erosion factors [74]. By modifying the flow pattern, gradient, or direction of surface runoff and decreasing runoff volume and velocity, these practices have a proportional impact on erosion [70]. P factor values range from approximately 0.5 for reverse-slope bench terraces to 1.0 without erosion control measures [26].
Implementing afforestation initiatives aimed at converting low-slope grasslands into forested regions has resulted in a notable rise in the average erosion rate experienced by these grasslands. The study conducted by Waseem et al. [75] in India, a country with topographical similarities to Türkiye, revealed that despite two-thirds of the watershed being cultivated and vegetated, approximately one-third of the watershed experiences a significant erosion rate due to its slope exceeding 30%.
While there was an annual average loss of 417 t ha−1 year−1 in the research area as of 1970, this loss was calculated to have decreased to 256 t ha−1 year−1 in 2020 with the watershed improvement works. The implementation of terracing and afforestation activities as part of the improvement project has primarily targeted unproductive pastures and agricultural lands. Consequently, the erosion-induced soil loss in the area has been mitigated by an estimated reduction of approximately 40%. Without the implementation of engineering studies, which are essential for addressing erosion in the region, the estimated average soil loss would have increased significantly from 417 to 1474 t ha−1 year−1 within 50 years. This value is equivalent to the amount of soil carried away by erosion after a fire in Greece, which has a similar ecosystem [76].
The RUSLE method is specifically designed to forecast soil erosion in agricultural regions [77]. Therefore, its applicability may be limited when predicting erosion in other ecosystems, such as forests, scrublands, and urban areas [78,79]. Also, in some regions it may be difficult to obtain the necessary data, making it challenging to produce precise estimates [80]. Uncertainties in meteorological data, such as precipitation amount and intensity, can impact the R factor and, consequently, the accuracy of forecasts [81,82]. In addition, factors affecting erosion processes, such as soil characteristics (calculating the correct K factor) [83] and incorrect identification of vegetation, may also influence the results [84,85]. Certain parameters can be directly measured, while others are approximated. Insufficient, constrained, or untrustworthy input data may hinder the ability to make precise predictions. In regions with limited and incomplete data, the imprecise estimation of extreme values can substantially influence soil loss estimates [83,86]. Field-measured data may be insufficient or unavailable to validate the RUSLE results.
Consequently, there may be issues with the verification and dependability of RUSLE estimates. The application of RUSLE at various scales can lead to varying outcomes [87]. Despite the possibility that different results can be calculated in some local areas of the study area, this difference was not significant enough to affect the dependability of the potential soil loss results throughout the entire watershed.
Conclusions
Soil erosion is a worldwide environmental issue that falls under Goal 15 of the United Nations' Sustainable Development Goals, and it holds significant importance in the context of a changing and growing global population. The Revised Universal Soil Loss Equation (RUSLE) provides an interactive and dynamic model for assessing vegetation-erosion interactions, as it incorporates physical subfactors representing different vegetation properties to calculate soil loss ratios (SLR). Recently, RUSLE and GIS have gained widespread usage in calculating erosion losses in small valleys and identifying the contributing factors. This study demonstrates the significance of factors C and P in combating soil erosion, with the objective of stabilizing, halting, and reversing land degradation while promoting the sustainable use of land resources. Land use and land cover (LUT/LC) have changed in the Emine Creek watershed because of soil conservation measures (P factor). Based on the RUSLE model, these alterations resulted in a notable decrease of 38% in the potential soil erosion within the watershed region over 50 years. If the engineering measures had not been implemented, there would have been a 353% increase in potential soil loss over the same 50-year period.
Consequently, due to enhancements in the P and C factors, the potential soil loss of the watershed after 50 years has been diminished approximately fourfold. This study shows how effectively applying soil conservation measures in a semi-arid and weakly vegetated area reduces potential soil loss. Further investigation is warranted to examine the reliability of projected soil erosion levels in relation to different precipitation patterns, severity conditions, and support practice factor activities. In addition to focusing on the complex dynamics and interactions between the R, K, LS, C, and P factors, and considering cost-benefit analyses related to engineering activities carried out in marginal areas, realistic solutions can be attained.
Notes to Table 1: a Flux of total energy; b Flux of average energy; c Flux of average monthly percentage of energy; d Flux of maximum energy.
Figure 1. Map of the Research Area.
Figure 2. A diagram illustrating the application of the RUSLE model for estimating soil loss within the research area.
Figure 4. Change of factor C between 1970 and 2020.
Figure 5. The soil loss map of the study area.
Table 4. LUT/LC areas and rates of change within the study area.
Table 5. Predicted soil loss differences according to the RUSLE. Notes: Grassland decreased by 3611 ha, and total potential soil loss in this class increased by 66.37 t ha−1 year−1; agricultural land decreased by 1102 ha over the past fifty years, and potential soil loss decreased by 97 t ha−1 year−1.
Reversible Experiments: Putting Geological Disposal to the Test
Conceiving of nuclear energy as a social experiment gives rise to the question of what to do when the experiment is no longer responsible or desirable. To be able to appropriately respond to such a situation, the nuclear energy technology in question should be reversible, i.e. it must be possible to stop its further development and implementation in society, and it must be possible to undo its undesirable consequences. This paper explores these two conditions by applying them to geological disposal of high-level radioactive waste (GD). Despite the fact that considerations of reversibility and retrievability have received increased attention in GD, the analysis in this paper concludes that GD cannot be considered reversible. Firstly, it would be difficult to stop its further development and implementation, since its historical development has led to a point where GD is significantly locked-in. Secondly, the strategy it employs for undoing undesirable consequences is less-than-ideal: it relies on containment of severely radiotoxic waste rather than attempting to eliminate this waste or its radioactivity. And while it may currently be technologically impossible to turn high-level waste into benign substances, GD’s containment strategy makes it difficult to eliminate this waste’s radioactivity when the possibility would arise. In all, GD should be critically reconsidered if the inclusion of reversibility considerations in radioactive waste management has indeed become as important as is sometimes claimed.
Introduction
Ever since nuclear energy technologies were developed after the Second World War (WW2), we have been learning about the risks of nuclear energy production and how to deal with them. 1 However, more than 50 years after nuclear power plants first supplied electricity to the grid, 2 the Fukushima nuclear disaster made it excruciatingly clear that we are nowhere near done learning. Not only are there residual uncertainties about the risks of already widely deployed nuclear energy technologies, but new technologies are being developed (e.g., Generation IV reactors), while older ones have not seen widespread introduction even after decades of effort (e.g., geological disposal of radioactive waste).
However, how is this learning to be organized? The uncertainties and risks connected to nuclear power plant operation and radioactive waste management (RWM) have led van de Poel (2011) to propose that we should consider nuclear energy as a social experiment. This would mean that specific decisions on the acceptability of a technology, which now often occur before its actual introduction into society, would be replaced by an ongoing and conscious process of learning about its risks and benefits, as well as what is to be considered acceptable. So, understanding a nuclear energy technology as a social experiment would allow us to learn more about that technology's risks and benefits as the experiment unfolds. Nonetheless, it also means that we might at one point learn what we, in a sense, would rather not, i.e., that continuing the experiment is no longer responsible or even that it is simply no longer desirable. What is an experimenter to do then? At the very least, she should be able to stop the experiment, and hazards should be contained as far as possible (van de Poel 2011). 3 In earlier work (Bergen, in press) I contended that these two conditions, the ability to stop further development and implementation of a technology (the experiment) and undoing its undesirable consequences (e.g., hazards), are constitutive of technological reversibility. In other words, the technology experimented with should be reversible if the experimenter wants to be prepared for the experiment taking a turn for the worst.
This paper further explores what it means for a technology to be reversible by applying the abovementioned conditions for technological reversibility to a technology in which reversibility is already a salient consideration: the geological disposal of radioactive waste (GD). In doing so, the paper also provides an answer to whether or not GD can be considered reversible in the way required for responsible social experimentation.
Reversibility as an Issue in Radioactive Waste Management
A quick exploration of publications by major nuclear organisations 4 revealed five broad uses of the concepts of reversibility and irreversibility in the field of nuclear energy. The first three uses describe basic processes and consequences that are implicated in the production of nuclear energy:
• (Ir)reversible mechanical/chemical/thermodynamic processes during the production of nuclear energy or radioactive waste management (e.g., spent fuel reprocessing, drilling damage to repository host rock, and nuclear fission)
• A specific but important sub-category of the above: (ir)reversibility of flows and migrations, mostly of radioactive isotopes (e.g., in technical, environmental or geological systems). This aspect is often connected to standards for radioactive waste management facilities
• (Ir)reversibility of consequences, e.g.:
• Mutations and cell damage in living tissue due to irradiation
• Damage to the environment and its ecosystems
While these uses are useful for describing (ir)reversible aspects connected to nuclear energy, they are not actually oriented towards making a nuclear energy technology more reversible. However, the last two uses are oriented as such, since they provide specific design goals or strategies for reversible radioactive waste management (RWM) technology and its implementation:
• Retrievability of radioactive waste from a waste storage or geological disposal facility
• Reversibility of (consequences of) decisions during the implementation process of a waste storage or geological disposal facility (e.g., Interagency Review Group on Nuclear Waste Management 1978; OECD Nuclear Energy Agency 2011; U.S. Department of Energy 1991)
Different RWM technologies differ in their plans for reversibility or retrievability of radioactive wastes, broadly determined by two factors. First, the type of radioactive waste is relevant. Generally, three categories of radioactive waste are distinguished based on their lifetime and radioactivity: low, intermediate, and high-level waste (IAEA 2009). 5 High-level waste from nuclear energy production can be further divided based on the nuclear fuel cycle from which it results: it usually consists of either unprocessed spent nuclear fuel (SNF) or the still highly radioactive rest products of SNF reprocessing (HLW). 6 The second relevant factor is the specific stage of RWM. For example, an interim storage facility has different ambitions for retrieving SNF or radioactive wastes than a final disposal site. 7 For much low and intermediate-level waste, disposal and monitored storage (on- or near-surface) are considered realistic solutions until radioactive decay has rendered the wastes sufficiently unhazardous. Interim storage (on-surface, near-surface or otherwise) for high-level waste 8 is employed for (a) letting it decay and cool down to a point at which it becomes eligible for emplacement in a disposal facility, and/or (b) storing it until disposal facilities are available (Bonin 2010). In both cases, retrievability is an essential design feature. After such storage, however, a more permanent solution is generally deemed necessary for the further management of SNF and HLW, given the immense span of time that these materials remain potentially radiotoxic: geological disposal.
A geological disposal facility, or repository, combines the protection offered by stable geological layers deep below the earth's surface with multiple engineered barriers (e.g., overpack, clay, bentonite) around waste packages that contain either solid SNF or liquid HLW from reprocessing that has been stabilized in a confinement matrix (e.g., glass or concrete). All this is supposed to prevent radionuclides from reaching the human living environment until they have reached a safe level of decay (Bonin 2010). Given the time it takes for this level of decay to be reached, emplacement of SNF or HLW in a repository is, for all intents and purposes, meant to be indefinite. 9 This solution supposedly allows the current generation to take responsibility for the radioactive wastes it produces, while not burdening future generations with it, nor counting on the longevity of institutions to maintain waste management practices for thousands of years.
Despite its ultimate goal of indefinite disposal of SNF and HLW for the reasons specified above, reversibility is increasingly recognized as a possibly important aspect of GD (e.g., Aparicio 2010; Weiss et al. 2013; OECD Nuclear Energy Agency 2011; Swedish National Council for Nuclear Waste 2010). Arguably, the most systematic proposal that describes how reversibility is supposed to feature in geological disposal has been put forward by the OECD Nuclear Energy Agency (NEA) as a result of their Reversibility & Retrievability project, in which it explored the role of reversibility considerations in GD. According to the NEA (2011):
• Reversibility "describes the ability in principle to change or reverse decisions taken during the progressive implementation of a disposal system […] The implementation of a reversible decision-making approach implies the willingness to question previous decisions in the light of new information, possibly leading to reversing or modifying them, and a decision-making culture that encourages such a questioning attitude" (p. 23; emphasis in the original).
• Retrievability, on the other hand, is "the ability in principle to recover waste or entire waste packages once they have been emplaced in a repository. Retrievability is the final element of a fully-applied reversibility strategy" (p. 24; emphasis in the original). Note that this does not mean that all high-level waste will also be practically accessible: past actions such as HLW vitrification might still exclude this possibility.
6 In a fuel cycle without reprocessing of SNF (the 'open' fuel cycle, e.g., in Canada, Sweden, and the USA), SNF is considered high-level waste when it is accepted for disposal. In a fuel cycle with reprocessing of SNF to extract uranium and plutonium for recycling (the 'closed' fuel cycle, e.g., in France, India, and Japan), high-level waste from nuclear energy production consists mainly of the fission products left over from this reprocessing (IAEA 2006), which are normally solidified before disposal. This distinction between HLW from reprocessing and SNF without reprocessing is highly significant: while unprocessed SNF has a waste lifetime of about 200,000 years, reprocessing can reduce high-level waste lifetime to about 5000 years (Taebi and Kloosterman 2008).
7 Per definition, in the case of storage, retrieval of the waste is envisioned for some point in the future, whereas disposal implies emplacement of waste without the intent of eventual retrieval.
8 In line with the distinction given above, I use the formulation 'high-level waste' to mean the category as defined by the IAEA (2009), which in the case of nuclear energy production includes both SNF and HLW as presented in footnote 6.
9 Note that this does not mean that actual confinement of radionuclides is guaranteed indefinitely (which is technically impossible), just that the timescales involved prescribe extremely long-term emplacement.
Both reversibility and retrievability apply here to the period before final closure of the repository, possibly up to 100 years after initial emplacement. Reversibility refers to a step-wise decision-making process in which previous decisions can be undone. However, reversibility diminishes over time, as actions based on these decisions are partly cumulative and increase the costs and effort involved in undoing past decisions. Retrievability likewise becomes more and more difficult as waste packages are sealed in place and the repository is backfilled over time (OECD Nuclear Energy Agency 2011). Thus, final closure of the repository also means the end of any realistic possibility of reversibility and retrievability. Indeed, reversibility and retrievability are not considered to be "design goals" for GD. Rather, they are seen by the NEA as "attributes of the decision-making and design processes that can facilitate the journey towards the final destination of safe, socially accepted geological disposal" (OECD Nuclear Energy Agency 2012 p. 22). In other words, they are only instrumental in achieving the ultimate (design) goal of GD that has been set forward since its origins in the 1950s (e.g., National Research Council 1957) 10 : passive safety, or safety without human intervention. Still, a number of reasons are put forward as justifying the importance of reversibility and retrievability for GD:

• Reversibility would allow future generations to use the emplaced materials as a resource, especially since SNF contains plutonium and uranium which might have value as a future source of energy.
• Further technical advances might make it possible to render radioactive wastes (more) harmless.
• If a repository performs worse than expected, remedial action would be facilitated by reversibility provisions.
• Finally, reversibility can help foster public acceptance of waste disposal facilities, or help adapt waste management if public or policy attitudes change over time (OECD Nuclear Energy Agency 2011).
However, as Barthe (2010) points out, the goal of final disposal of wastes a century after initial emplacement, as well as the regressive nature of reversibility and retrievability, seems contradictory to these reasons for adopting reversibility in the first place. First of all, it will probably take a significant amount of time to develop technology for using a repository's contents as resources or for making high-level waste less harmful. If this is the case, why would one want reversibility and retrievability to diminish and possibly disappear before such technology can be developed and implemented on a sufficient scale? Secondly, repository performance becomes significantly more difficult to assure with increasing extrapolation into the long-term future. As such, reversibility and retrievability as a response to worse-than-expected repository performance have a higher chance of becoming useful as time goes by. These considerations cast doubt on the extent to which GD could live up to the NEA's own reasons for reversibility given above. On top of all this, it is clear that the choice of technology is a foregone conclusion in the NEA's framework. It is concerned with how to implement a specific technology: GD. Yet the recognition that changing public and/or policy attitudes towards RWM should be able to influence RWM strategies, as shown in the fourth reason for reversibility, is of importance here. What if, for whatever reason, GD does not turn out to be the apt solution the technical community takes it to be (OECD Nuclear Energy Agency 1995), 11 and/or democratic considerations would point us towards other technologies? Should our past decision for GD not also be reversible?
If GD reversibility provisions were, analytically speaking, fully in line with the reasons given for them, these discrepancies should not exist. And yet they do, and they warrant our attention. In this paper I propose an outlook on technological reversibility that could (a) provide some insight into how technologies like GD become irreversible, (b) help explain why the discrepancies above exist as they do, and (c) provide input concerning the way technologies like GD could be made more reversible. In so doing, I explore whether GD can actually be considered a reversible technology. If it can, then the above criticisms might be moot. However, it might also turn out that despite the efforts visible in the NEA's Reversibility & Retrievability project, GD cannot be considered properly reversible. If so, we might need to reconsider either GD as the dominant high-level waste management technology or whether, why and to what extent we want reversibility in the first place.
To answer the question whether GD can be considered properly reversible, it is necessary to have an idea of what constitutes a reversible RWM technology.
Elsewhere, I have argued that for a nuclear energy technology to be considered reversible, two conditions both need to be met:

• The ability to stop the further development and deployment of said technology in a society
• The ability to undo the undesirable outcomes of the development and deployment of the technology when so desired (Bergen, in press).
While arguably adequate as abstract descriptions of what constitutes 'ideal' reversibility, these conditions are not yet sufficiently operationalized to be useful in considering practical cases such as the one presented here. For one, their form does not yet invite either questioning or qualified answers. Secondly, they are not yet case-specific. As such, I would like to rephrase the conditions as two GD-specific questions that, if both answered affirmatively, would show that GD is reversible. These questions are:

1. Would an authorized body be reasonably able to switch from GD to an alternative solution if problems with GD were to arise? If not, the first of the conditions for reversible technology would not be met, and GD cannot be considered fully reversible. 12
2. Does GD exhaust the possibilities of undoing the consequences connected to high-level waste, and the hazards that could come about due to the use of GD for managing this waste? Again, a negative answer to this question would disqualify GD as a reversible technology.
In what follows, I deal with these two questions in turn. In "On the Ability to Stop Further Development and Deployment of Geological Disposal", the first question is examined by taking a closer look at the historical development of GD through the lens of path dependence and lock-in. I answer the second question in "On Geological Disposal's Capacity for Undoing Consequences", where I propose that the ability to undo undesirable consequences of GD is connected to the choice between different design strategies, and that GD's chosen strategy is less-than-ideal.
On the Ability to Stop Further Development and Deployment of Geological Disposal
In this section, the following question is considered: would an authorized body be reasonably able to switch from GD to an alternative solution if problems with GD were to arise? To answer this question, it is important to understand why switching to an alternative could become difficult or impossible in the first place.
According to the theory of path dependence and lock-in, such difficulties can arise as a result of a historical process of technological development that leads to a situation in which switching to another solution becomes increasingly difficult: the technology becomes locked-in. As such, investigating the development of GD through the lens of path dependence and lock-in could help answer the question at hand. Before discussing GD's historical development and whether it is locked-in or not, the theory behind path dependence and lock-in is briefly introduced below.
Path Dependence and Lock-in
We call the development and implementation process of a specific technology path-dependent if that process is determined by its own history (David 2007). That is, due to their specific characteristics, such processes can become inflexible in terms of the practical possibility of changing their course, being unable to "shake free from their histories" (David 2001 p. 19). Path-dependent processes contain two main elements (Arthur 1989):

• A contingent starting period. This period is contingent in the sense that it does not originate in a smooth and predictable historical sequence of events but rather that a new element (e.g., the introduction of a new technology) sets history off on a novel path.
• A period exhibiting 'increasing returns'. Arthur identified four major types of increasing returns: scale economies, learning effects, adaptive expectations and network economies (Arthur 1994). While increasing returns can be conceived of quite narrowly as increasing efficiency, David (2007) considers it more appropriate to conceive of them as "self-reinforcing, positive feedback mechanisms governing decisions such as the choice among alternative production techniques, or consumer goods, or geographical locations for production activities". This self-reinforcement consists of both "positive and negative mechanisms that decrease the likelihood that alternative paths will be selected" (Vergne and Durand 2011). Positive mechanisms directly support the path (e.g., economies of scale or learning effects), while negative mechanisms operate by rendering alternative paths less interesting. As such, these mechanisms sustain the path that was contingently selected.
In some cases, this self-reinforcement can be so efficacious that it leads to an irreversible outcome, i.e., lock-in (Mahoney 2007; Vergne and Durand 2011). While initially options are open and multiple outcomes are possible, path dependence and self-reinforcement lead to ever greater irreversibility that, without an exogenous shock, can be incredibly persistent. If so, the potential for endogenous change becomes rather low (Mahoney 2007).
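To make the mechanics of increasing returns and lock-in concrete, consider the following minimal simulation sketch. It is my own illustration, loosely in the spirit of Arthur's adoption models rather than a reproduction of them; the function name and parameter values are arbitrary assumptions. Two technologies start from the same position, each new adopter favours the technology with the larger installed base, and when this feedback is superlinear (gamma > 1) nearly every run ends with one technology holding a near-monopoly, with the winner determined by early, contingent choices.

import random

def simulate_adoption(steps=10000, gamma=1.5, seed=None):
    """Sequential technology adoption under increasing returns.

    Technologies A and B start with one adopter each (the contingent
    starting period). Each new adopter picks A with a probability that
    grows superlinearly (gamma > 1) with A's installed base: a crude
    stand-in for scale economies, learning effects, adaptive
    expectations and network economies.
    """
    rng = random.Random(seed)
    a, b = 1, 1  # initial installed bases
    for _ in range(steps):
        weight_a = a ** gamma  # attractiveness grows faster than the base
        weight_b = b ** gamma
        if rng.random() < weight_a / (weight_a + weight_b):
            a += 1
        else:
            b += 1
    return a / (a + b)  # final market share of technology A

if __name__ == "__main__":
    # Identical starting conditions, different random histories:
    # runs typically lock in to a near-monopoly for A or for B.
    for seed in range(5):
        print(f"run {seed}: final share of A = {simulate_adoption(seed=seed):.3f}")

Late in such a run, 'undoing' the outcome would mean overturning thousands of interdependent adoption decisions, which is the aggregate counterpart of the micro-irreversibilities discussed next.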
According to David (2007), one fundamental aspect of these self-reinforcing dynamics or increasing returns is the presence of micro-level irreversibilities, which occur when "a finite and possibly substantial cost must be incurred to undo the effects of the resource allocation decision in question" (David 2007 p. 101). These micro-irreversibilities make agents favour certain options for action while disfavouring others, due to the relative opportunities and costs involved in pursuing them. This effect is further strengthened if the micro-irreversibilities are interdependent, since it becomes less favourable to undo specific micro-irreversibilities if this requires undoing others as well. As such, they are constitutive of the self-reinforcement of dominant structures by guiding agents' behaviour towards adherence to the most dominant (technological) solution, eventually strengthening its lock-in and increasing its irreversibility. Three notes about these micro-irreversibilities are in order. First, different types of lock-in seem to correspond to different sorts of micro-irreversibilities driving path-dependent processes. Indeed, different types of drivers of a technology's lock-in can be found in the literature. For example, there is political (e.g., Walker 1999, 2000), institutional (e.g., Foxon and Pearson 2008; Walker 1999, 2000), economic (e.g., Arthur 1989; Liebowitz and Margolis 1995), and infrastructural (e.g., Frantzeskaki and Loorbach 2010; Scrase and Smith 2009) lock-in of a specific technology or technological project or system. While these differ in their emphasis on what is most determinative of the lock-in in question, they all refer to sets of symbolic, institutional and/or material micro-irreversibilities that underlie the reinforcing dynamics. For high-level waste management, such micro-irreversibilities cover a spectrum of elements, from stabilization and packaging of HLW, test sites for GD and nuclear reactors producing SNF (material) to preferred methods of risk evaluation, nuclear regulations and policy prescriptions and practices, and institutional commitment (institutional), as well as underlying narratives, themes and values (symbolic). Secondly, it is interesting that non-material micro-irreversibilities can drive path-dependent processes. As such, even if a process of technology development results mainly in institutional or symbolic elements, a technology could (in theory at least) become locked-in without significant deployment of said technology in the real world, as long as increasing returns are sufficient to keep actors committed to that technology. Finally, micro-irreversibilities lie at the basis of the positive and negative mechanisms that can lead to lock-in, i.e., make a technology practically irreversible. As such, this framework seems to encompass both the (ir)reversibilities within GD as micro-irreversibilities (which include the matters the NEA's concepts of retrievability and reversibility are meant to address) and the (ir)reversibility of GD itself as a technology for radioactive waste management.
In what follows I will use the history of civil nuclear energy and high-level waste management in the USA between 1944 and 1987 as an example to show how GD's development can exhibit the characteristics of a path-dependent process. While certainly not an exhaustive example [GD is held to be the appropriate solution in most nuclear energy-producing countries (U.K. Nuclear Decommissioning Authority 2008, 2013)], I hope it is sufficiently powerful for showcasing the type of historical process that lies at the basis of GD's dominance. First, I present a sketch of GD's contingent genesis in the years after WW2, after which I elaborate on its history from the late 1950s onwards.
Geological Disposal's Contingent Starting Period: Nuclear Development Between 1944 and 1957
The contingent starting period that set the stage for our current situation in which GD is the dominant solution for civil high-level waste management in the USA can be situated in the period between 1944 and 1957.
During WW2, nuclear development was dominated by military applications, both in weapons technology (developing the atomic bomb in the Manhattan project) and in reactor design (producing plutonium for the weapons program). This dominance continued in the years after the war, one result of which was the development of the pressurized water reactor [PWR] for use in submarines 13 (Cowan 1990). Given the circumstances of WW2, this initial focus on developing nuclear applications was rather straightforward. However, these developments were prioritized over the careful and necessary management of the wastes they produced. While in 1944 the first HLW facility was constructed at the Hanford site in the State of Washington to store liquid HLW from the military nuclear program, 14 many low- and intermediate-level wastes were dealt with through 'dilute and disperse' strategies (Mckinley, Alexander and Blaser 2007; Miller, Fahnoe and Peterson 1954). This early focus on applications rather than proper waste management was further exacerbated by how the Atomic Energy Commission (AEC) was set up in 1946, being responsible for both the promotion and the regulation of nuclear development. Its focus was much more on promoting the development of nuclear applications than on strict regulation, and secrecy ensured its control over nuclear matters (Clarfield and Wiecek 1984). In combination with relatively small waste volumes and the isolated location of the facilities at which this waste was produced, this led to RWM being basic (if not haphazard) until the early 1950s. In at least one sense, this was surprising: WW2 had graphically shown both the potency and the destructive capabilities of splitting the atom. In 1953, however, nuclear safety found strong political expression in Eisenhower's 'Atoms for Peace' speech, 15 which was to set the stage for the development of a peaceful civil nuclear energy program, separate from the military one. As Jasanoff and Kim (2009) argue, the speech was aimed at symbolically containing the atom's destructive potential so graphically illustrated in Japan only a few years before. It was also aimed at containing international fear of the USA as a nuclear superpower, and was to open the way towards the exploitation of the atom's peaceful applications. With 'Atoms for Peace', then, a strong theme of containment of the dangers connected to the atom lay at the basis of the nuclear energy industry. The speech also called for a private nuclear energy industry. 16 This meant limited government influence in the new industry, and a need to make investment in it attractive to private investors. With the 1954 Atomic Energy Act, the patenting of nuclear energy technologies was opened up, and secrecy was partly lifted so that private parties could use previously confidential technical knowledge to develop nuclear energy applications.
After 'Atoms for Peace', the nuclear energy industry indeed started to develop. Since the nuclear energy program was pressured by Eisenhower's intentions into a mode of urgency, a reactor type was chosen with which significant experience had already been accumulated in the military program: the PWR (Cowan 1990). However, a final solution for the disposal of HLW had not yet been settled upon. An important step towards that goal was taken when, at the request of the AEC, the Committee on Waste Disposal of the National Research Council produced a report (National Research Council 1957) that would prove to be foundational for the development of GD and the values or aspirations it embodies. It argued that (after additional research) deep geological disposal could be both a safe and feasible option for HLW disposal, and called for more research into the solidification of HLW, which mostly took a liquid form at the time. 17 On top of this, in the case of GD, the HLW was to be "disposed of without concern for its recovery" (p. 86). As such, confidence that HLW could be safely contained and disposed of in the near future was established by the Committee's research.
1957-Present: The Path to Lock-in
This promise of GD as a passively safe future solution for HLW disposal provided the nascent nuclear energy industry with the reasonable assumption of manageable long-term safety, which was important given the risks involved. On top of this, the dominant assumption from the late 1950s until the mid-1970s was that SNF from the civil nuclear energy program would be reprocessed to extract fissionable uranium and plutonium, which would be reused for further energy production 18 (Walker 2009). As such, the future development of both a civil reprocessing industry and GD facilities was considered a sufficient and realistic HLW management strategy. Nuclear authorities stood by the idea that the problem of radioactive waste was technically soluble (U.S. Atomic Energy Commission 1962 p. 55). Also, based on a series of hearings by the Joint Committee on Atomic Energy in 1959, the authorities were convinced that the radioactive waste problem should not slow down the development of the nuclear energy industry and that it would be possible to protect the public during this development (Metlay 1985 p. 236). This confirmed the AEC's confidence in the possibility of safe radioactive waste management and its prioritisation of industrial promotion over HLW management. This attitude endured for over a decade despite a number of incidents at early above-ground HLW storage sites between 1959 and the mid-1970s (Metlay 1985), which nonetheless spurred the adoption of additional safety features in the 1960s and 1970s such as multi-layered storage casks for HLW and the solidification of HLW where it had been liquid before 19 (Metlay 1985). As such, additional steps were taken towards the greater capacity to contain HLW and its risks. While the 1960s saw an exponential increase in orders for nuclear power plants, serious practical research into GD was also being undertaken by the AEC, and in 1966 a follow-up committee reaffirmed the conclusions of the 1957 report that GD was the most promising solution for the disposal of HLW (National Research Council 1966). Moreover, the civil reprocessing industry saw its humble beginnings (heavily promoted by the AEC) with the start-up of the reprocessing facility at West Valley, New York in 1966.

16 What was also contained was the nuclear influence of the USSR. The USA's nuclear industry development had to be privatized in order to be ideologically in line with the American liberal ideal so different from the USSR's statist communism.
17 The report mentions that the Commission was "convinced that radioactive waste can be disposed of safely in a variety of ways and at a large number of sites in the United States" (p. 3), adding that the "most promising method of disposal of high level waste […] seems to be in salt deposits" (p. 4). Moreover, it promotes the "stabilization of the waste in a slag or ceramic material" (p. 6) as another promising method, away from the predominantly liquid HLW at the time.
18 There was already significant experience with reprocessing technology in the military program. Moreover, the AEC promoted reprocessing out of concern for uranium supply shortages for the nuclear energy industry. Together with the breeder reactors the AEC was looking into, reprocessing would substantially increase the sustainability of uranium resources (Stewart and Stewart 2011).
With a second plant at Morris, Illinois and a third at Barnwell, South Carolina receiving construction permits in 1967 and 1970 respectively, the development of the civil reprocessing industry had apparently been kick-started (Metlay 1985).
In 1970, Lyons, Kansas was proposed as the site for the very first full-scale GD demonstration project. 20 This decision was supported by further research by the National Research Council that confirmed Lyons' adequacy as a pilot facility site and again stressed GD's appropriateness for HLW disposal (National Research Council 1970). However, not everyone shared the AEC's optimism about the safety and appropriateness of the site [there were numerous boreholes present due to earlier explorations for oil and gas, and some water migration could not be properly accounted for (Metlay 1985)], and the proposal was dropped two years later for technical and political reasons. Despite this setback, the AEC still pushed for an expansion of the geological disposal program, extending the search for other possible sites for GD. Nonetheless, in the wake of the difficulties with the Lyons site, and as public opposition to nuclear energy was picking up in the early 1970s, other possibilities for HLW management were considered (Vandenbosch and Vandenbosch 2007). Firstly, an attempt was made by the AEC to implement Retrievable Surface Storage Facilities as a possible medium-term solution for HLW. This proposal was rejected by opponents, including the public, politicians and the Environmental Protection Agency (EPA, set up in 1970), partly out of fear that these facilities would become low-budget permanent solutions (U.S. Congress Office of Technology Assessment 1985). It was subsequently dropped in 1975. Secondly, several options for the final disposal of HLW were further investigated and compared, like extra-terrestrial disposal, disposal of waste in the seabed or in or under ice sheets in the Arctic, transmutation of certain waste types and, indeed, GD (e.g., U.S. Atomic Energy Commission 1974). 21 On top of these difficulties for the GD program, the reprocessing industry was not at all thriving in the way the AEC had hoped. The West Valley plant stopped operation in 1972, when modifications to solve operational and environmental regulatory issues were deemed uneconomical. The Morris plant, finished in 1974, never came into full operation due to technical problems and equipment failures and was abandoned in the same year. Finally, the Barnwell facility was meant to start operation in 1974, but construction delays and licensing issues prevented that deadline from being met (Stewart and Stewart 2011). In short, the AEC's plans for HLW management were not running smoothly.
HLW management was not the only thing in trouble around this time: the nuclear energy industry had to learn the hard way that the optimism about atomic energy "too cheap to meter" 22 was sorely misplaced, especially as the AEC was obliged to enforce stricter regulations on the industry under growing pressure from environmental groups and the EPA. As such, orders for power plants dropped significantly. This same pressure laid bare the conflict of interest the AEC operated upon (promoting as well as regulating the nuclear energy industry), which led to the AEC being disbanded by the Energy Reorganization Act in 1974, its responsibilities split between the Energy Research and Development Administration (ERDA; promotion) and the Nuclear Regulatory Commission (NRC; regulation, licensing, materials management and setting of safety standards) (Stewart and Stewart 2011). This led to even stricter regulation, which increased costs and made it even more difficult to get licenses for nuclear power plants (Clarfield and Wiecek 1984). As the expansion of nuclear energy production capacity was slowly grinding to a halt in the latter half of the 1970s, the societal pressure that had previously led to the disbanding of the AEC rekindled critical attention to, as well as urgency for, HLW management. So despite other options at least being investigated, ERDA continued the AEC's quest for the expansion of GD with the National Waste Terminal Storage (NWTS) Program in the latter half of the 1970s, aiming to build six repositories by 2000. In light of these developments, however, that period also saw increased critical input from geologists and physicists on GD's feasibility. The optimism that generally governed the AEC's attitude towards GD now met with more critical inquiry, which was reflected in the Interagency Review Group on Nuclear Waste Management's (1978) report to the US president. The report acknowledged that knowledge, experience and predictive capability on repository operation were lacking. And while it still strongly recommended proceeding with GD, it also advised using a "technically conservative" approach (e.g., p. 46), which includes reversibility of waste emplacement decisions (p. 18) and temporary retrievability of emplaced high-level waste during an initial period of repository operation (e.g., p. 46). 23 Other developments helped increase the USA's dependence on GD, as the closed fuel cycle that the AEC had pushed for two decades was plagued with even more difficulties. While the newly founded NRC was investigating the proliferation concerns connected to plutonium recycling and the safeguards necessary to make it work in 1975-76 (which worried a nuclear energy industry that still favoured reprocessing), reprocessing received increased public attention (Walker 2009). This escalated when reprocessing became a prominent theme in the presidential race between President Ford and Jimmy Carter, in which both eventually expressed reservations with regard to the appropriateness of reprocessing SNF.

21 Although some of these options had at times been considered, this was the first time they were as officially and systematically compared.
22 This phrase was coined by the chairman of the AEC, Lewis Strauss, in a 1954 speech to the National Association of Science Writers (Strauss 1954). While it has become iconic of the economic optimism at the time concerning nuclear power, it is not to be taken as what was actually considered a realistic cost estimate.
After Carter became president, he issued a statement (Carter 1977) that the USA would "defer indefinitely the commercial reprocessing and recycling of the plutonium produced in the U.S. nuclear power programs", and that "a viable and economic nuclear power program can be sustained without such reprocessing and recycling". Official policy turned against reprocessing and the Barnwell reprocessing facility was mothballed and never came online, which effectively meant the end of the civil reprocessing industry. 24 So not only was GD the only technology of the AEC's old program that had any promise of becoming a reality, but without reprocessing of SNF the U.S. nuclear fuel cycle would generate larger quantities of high-level waste that remain radioactive for significantly longer than in a fuel cycle with such reprocessing (Taebi and Kloosterman 2008), since it would have to dispose of unprocessed SNF. As such, it became even more critical to look for high-level waste management technologies focused on maximal long-term safety, something GD was already known for. From this point on, there was little question as to which technology would be best for the management of high-level waste [as there had been in the mid-1970s (e.g., U.S. Atomic Energy Commission 1974)], despite the fact that not reprocessing SNF put more severe demands on repository design and siting. Implementation of GD still proved difficult, though, as the search for possible sites in light of the NWTS met with many negative reactions from state executives and lacked permissions for exploration. Combined with federal budget cuts, this forced the geological disposal program to forego the desired expansion. Nevertheless, efforts to operationalize GD continued. Shortly after the publication of the abovementioned IRG report, the DOE (formerly ERDA) published its Generic Environmental Impact Statement on Commercial Radioactive Waste Management in 1980, which was intended to support a programmatic decision to focus efforts on mined GD (Metlay 1985). Around the same time, the NRC was working on its proposal for the technical criteria that should govern repository licensing, 25 also focussing on GD as the standard solution (Walker 2009). This coalescence of institutional efforts towards the implementation of GD was subsequently expressed in the 1982 Nuclear Waste Policy Act (NWPA), which followed the DOE's and the NRC's commitment to mined geological disposal. Moreover, the act added even more urgency to the equation by aiming for repositories to be operational by 1998 (and capable of taking unprocessed SNF), and by shifting some focus away from Monitored Retrievable Storage 26 (MRS, similar to the AEC's Retrievable Surface Storage Facilities), saying it was not a complete alternative to GD (Vandenbosch and Vandenbosch 2007).

23 Note that the NEA's R-scale (NEA 2011) provides a specific timeline and a more gradual decline of retrievability than does the IRG, and is more operationalized.
24 Although the Reagan administration withdrew the ban on reprocessing in 1981 (U.S. Congress Office of Technology Assessment 1985), it never became part of official U.S. radioactive waste policy again.
25 These criteria included many concepts still visible in the NEA's proposal today, like multiple barriers, the validation of models, geological uncertainties, and the problem of human intrusion (Metlay 1985).
On top of all this, the government would provide only limited support for temporary storage, as it could be perceived as a reason to delay final disposal efforts. Following the establishment of the 1982 NWPA, nine sites were selected as possible candidates for repository construction. In the following years, a complex process of negotiations narrowed this list down to three: Hanford, Washington; Deaf Smith County, Texas; and Yucca Mountain, Nevada. However, partly driven by political and cost considerations, the search was narrowed down even further in the 1987 Nuclear Waste Policy Amendments Act, limiting site characterization efforts to Yucca Mountain, Nevada only.
Yucca Mountain's history is interesting in its own right, 27 as it has been central to decades of struggle to construct a working GD facility. However, I think it unnecessary to elaborate on it here, for two reasons. Firstly, the analysis as presented above contains the necessary elements for explaining GD's rise to dominance and why it could be difficult to do otherwise (see "Is Geological Disposal Locked-in?"). Further describing the case of Yucca Mountain and the policy-making around it would not take the analysis in a significantly different direction. Secondly, the case of Yucca Mountain and the adherence to GD even after Yucca's failure arguably serves better as evidence for GD's tenacity than as an explanation for it (aside from increased commitment and added urgency factors, which were certainly not absent before). Indeed, due to significant technical as well as social and political hurdles, Yucca Mountain never became the USA's first non-military GD site. In 2011, the Obama administration even gave up further efforts to make it into a working disposal site for SNF, as such eliminating hope of having an operational repository in the near future. However, in spite of a history riddled with difficulties (of which three decades revolved around Yucca Mountain), GD remains the go-to option for high-level waste management in the USA (e.g., Blue Ribbon Commission on America's Nuclear Future 2012).
Is Geological Disposal Locked-in?
After all this, is GD locked-in in the USA? Let me start by discussing two objections to the idea that this is even possible, or that we can know that it is so.
First, one could question how it could be possible for GD to be locked-in if it seems incapable of actual implementation, even after decades of effort. However, as argued in "Path Dependence and Lock-in", if symbolic and institutional micro-irreversibilities are sufficient to drive actors to continuously commit to a specific technology, this could be all that is necessary for that technology to be locked-in. At least, it could be enough to make the process of technology development and implementation inflexible in terms of the practical possibility of changing its course, i.e., path-dependent. In other words, having many material manifestations does not make a technology irreversible; having the relevant actors repeatedly orienting their actions towards making that technology (even more of) a reality does. 28 The way this worked out in the case of GD is summarized below. Second, can we know if GD is locked-in if no 'realistic' alternatives are currently available to which one could switch? After all, in many famous (albeit not uncontroversial) cases of path dependence and lock-in, equally good or even better alternatives were available but not selected, for example VHS tapes (Arthur 1990), the QWERTY keyboard (David 1985), or PWR reactors (Cowan 1990). Would most actors still commit to GD if a better solution were available? Unfortunately, this is a counterfactual that is impossible to prove. As such, it would seem at first glance that any claim that GD is irreversible can only be trivially true, i.e., it is impossible to switch to an alternative as long as there are none. This, however, neglects three factors. One, what counts as an equally good or better alternative is not set in stone. That safety and containment have long been leading in the judgment that GD is the only realistic path to follow is to some extent historically and politically contingent. Two, it is possible to add plausibility to the claim that GD is locked-in by showing that its history exhibits characteristics of a path-dependent process, i.e., micro-irreversibilities driving increasing returns in favour of GD, leading up to a point at which it is difficult to do something other than GD. Three, the fact that no realistic alternatives are available at this point in time partially follows from the very historical developments that led to GD's dominance. All three factors are discussed in this section.

26 While industry favoured MRS as a temporary solution, environmental groups again protested it out of fear of MRS facilities becoming de facto permanent disposal sites. The NWPA only foresaw inquiry into the need for and feasibility of MRS, but did not order any concrete construction (Vandenbosch and Vandenbosch 2007).
27 For a comprehensive overview of the policy and technical difficulties in SNF management in this period, see Vandenbosch and Vandenbosch (2007) and Macfarlane and Ewing (2006).
Already gaining salience during GD's genesis before 1957 and inspired by the post-WW2 period, the themes of safety and containment have since guided the management of HLW and SNF. As such, these themes have been increasingly embodied materially (e.g., solidification of liquid HLW, multi-layered storage containers, and of course the technology that is GD) and institutionally (e.g., the separation of the military and civil nuclear energy programs, Carter's decision to refrain from reprocessing to contain the atom's proliferation risks, the urgency in the NWTS and NWPA for curtailing above-ground SNF build-up, and continuous institutional commitment to GD as a way of doing so). In turn, these embodiments have helped reinforce and operationalize safety and containment as leading values. As such, the adoption and continuous reaffirmation of these values functioned as symbolic micro-irreversibilities that supported the path of GD as an appropriate solution for HLW and, later, SNF. 29 As GD's story unfolded after its contingent starting period (1944-1957), an accumulation of micro-irreversibilities occurred favouring GD. These, combined with broader societal developments, have repeatedly helped drive actors to adhere to GD as the final solution for HLW and SNF. Indeed, after the themes of safety and containment gained prominence and the 1957 Committee on Waste Disposal report proposed GD as the most promising method for making them a technological reality, GD received the institutional commitment of both the AEC and the industry (albeit in combination with reprocessing of SNF). GD was now embedded as an essential part of policy for future HLW management. During the 1960s, serious research into GD (including small-scale test sites) acknowledged its feasibility and increased its lead compared to alternatives, which were not systematically looked into since optimism concerning GD's appropriateness and feasibility was wide-spread. However, as GD came closer to real implementation it ran into difficulties (exemplified by the failure at Lyons, Kansas), as did the organisation responsible for it: the AEC. The AEC was disbanded out of worry about the conflict of interest it operated upon, and alternatives to GD were more systematically investigated. However, several factors kept GD on its dominant course. Firstly, while actors were more critical of GD during this time, the value system behind its selection was not under similar scrutiny. Secondly, the pressure on the nuclear energy program to urgently provide solutions was significantly increased by a number of factors: the end of reprocessing and the fact that now SNF needed to be disposed of, Carter's strong political stance on the dangers of proliferation combined with increasing SNF build-up, increased societal displeasure with the nuclear energy industry, and the failure to implement a temporary arrangement in the form of the Retrievable Surface Storage Facility. It is unsurprising, then, that the response to critical inquiry into GD in the late 1970s was actually greater commitment to GD under an increased sense of urgency. As when PWRs were selected for power generation (Cowan 1990), urgency can be an important driver of conservatism in technology selection. What was needed was a technology with which there was considerable experience, even if there may have been alternative technologies for the job eligible for (further) development.
Thirdly, ERDA continued the AEC's quest for expansion of the GD program, ensuring continuity of institutional commitment. As a result of all this, GD survived its minor 1970s crisis. After this point, GD's practicability (with increased knowledge, experience, and increasingly structured institutional frameworks) and political legitimacy (with the explicit commitment to GD in the 1982 NWPA) further increased, making it even more into the 'realistic' solution it is still taken to be.
In addition to these mechanisms supporting GD, there were also reasons why alternative paths were specifically not selected. For example, in a situation of limited resources available for organizing high-level waste management (especially at a time when the focus was on developing the energy industry rather than on ways to manage its wastes properly), it is clear that commitment to GD would mean even more limited resources for the development of possible alternatives, especially when it is assumed that there is little reason to develop them. Indeed, until the mid-1970s, the AEC and the industry saw little need to systematically look into and develop alternatives to reprocessing and GD. Some possible alternatives, like disposal in the seabed or under Arctic ice sheets, would also have been unpopular both with an increasingly environmentally aware public in the 1970s and with other countries across the world. Also, further development of more advanced fuel cycles that would reduce waste lifetime (and, as such, lessen demands on disposal technologies) was incompatible with the ban on reprocessing in 1977, as such cycles were judged to give rise to unacceptable proliferation concerns.
After all this, the case of Yucca Mountain, its failure, and the subsequent retention of GD as the most favourable solution for high-level waste disposal attest to the fact that a point has been reached at which switching to an alternative solution for high-level waste management has become difficult (not least because possible alternatives, other than temporary storage, are underdeveloped). Still, this is quite peculiar given the lack of working civil GD sites in the USA. 30 Apparently, it can become extremely difficult to change course on the choice for a specific technological solution despite extremely few actual working instances of the technology itself.
Finally, allow me to briefly elaborate on the evolution of reversibility considerations in GD over the course of its history. It is interesting that while the National Research Council's 1957 report contends that HLW should be emplaced in geological repositories without concern for its retrieval, the IRG report discussed above features provisions for limited retrievability on the basis of epistemic and prudential considerations. This was both politically salient and in line with the critical appraisal of GD in the late 1970s. And while the NEA's reasons for retrievability presented in "Reversibility as an issue in radioactive waste management" have significantly expanded in scope to considerations of justice when compared to the IRG's, the practical side of reversibility and retrievability does not seem to have followed suit. Indeed, while the reasons for reversibility considerations have significantly evolved, our choice and design of the technology meant to fulfil them have not sufficiently done so, as evidenced by the discrepancies presented above. On the one hand, if GD is locked-in, this could help explain why these discrepancies exist between the NEA's reasons for reversibility and retrievability and the extent to which GD seems to be an appropriate means of achieving them, since it would be extremely difficult to change to a solution more in line with new reasons for wanting reversibility. On the other hand, the inclusion of reversibility and retrievability considerations in GD does not seem to have lessened its dominance. Au contraire: making GD compatible with increased demands on high-level waste management [be they epistemic demands (IRG) and/or demands for justice (NEA)] makes it less pressing to work towards alternatives. So the inclusion of reversibility considerations, while lowering the probability of problems with GD arising, has not alleviated GD's lock-in.
The history of GD sketched above contains ample micro-irreversibilities that would lead GD to become locked-in by making it more likely that agents favour GD. By the same token, and partly due to the same developments that led to GD's dominance, alternatives have not been extensively pursued. So, in addition to GD being locked-in in a trivial sense (no 'realistic' alternatives are currently available), these factors lend plausibility to the idea that GD is locked-in due to being unable to shake free from its own history. As such, consider the first question put forward in "Reversibility as an issue in radioactive waste management":

• Would an authorized body be reasonably able to switch from GD to an alternative solution if problems with GD were to arise?
It seems that, at least for the USA, one would have to conclude that it would be at best difficult and at worst impossible for an authorized agency to step down from GD as the dominant high-level waste management technology, at least within a reasonable timeframe. Given that GD is the preferred solution to the high-level waste problem in most nuclear energy-producing countries, 31 and that other countries do not have access to more alternatives to GD than the USA does, I think it not unreasonable to expect that in some of these countries GD might be similarly locked-in. 32 If all the above holds true, GD at least partly fails to meet one of the conditions and can thus not be considered a truly reversible technology (in those specific cases). However, one could ask whether GD's lock-in is really problematic, given (a) that scientific confidence in the capacity of engineered barriers and geology to contain high-level waste is significant, and (b) that no technology is readily available on a satisfactory scale to turn high-level waste into benign substances. That is, is it not a good strategy for 'undoing' the morally undesirable consequences of nuclear energy technologies? This question relates directly to the second question put forward in "Reversibility as an issue in radioactive waste management":

• Does GD exhaust the possibilities of undoing the consequences connected to high-level waste, and the hazards that could come about due to the use of GD for managing this waste?

31 One should not forget the impact of international organisation and cooperation. For example, given that the IAEA was set up in 1957 (pushed by the Eisenhower administration after the 1953 'Atoms for Peace' speech), one can imagine the subsequent international spread of the themes of containment and safety (e.g., IAEA 1956).
32 However, even if this expectation is reasonable, any claim that a specific country has GD as a locked-in technology would have to be backed up by the necessary socio-historical analysis.
In the following section, I contend that there are different general strategies for undoing such consequences that one can follow in developing a technology, and that some are preferable to others, at least qua reversibility. GD is principally focused on one of these strategies, albeit not the most preferable one.
On Geological Disposal's Capacity for Undoing Consequences
What does it mean to 'undo the consequences connected to high-level waste'? An 'ideal' undoing of consequences is, practically speaking, impossible: one simply cannot go back in time and start over. Nevertheless, what sorts of action could one still undertake towards the undoing of consequences, limited as they may be? In what follows, I present four practical strategies for 'undoing consequences' in order of decreasing similarity to 'ideal' undoing:

1. Remediation: bringing (parts of) the system under consideration back to a previous state by eliminating the problem source and using (part of) the system's internal dynamics to undo the unwanted effects of the technology's development and implementation. This seems to require the least invasive effort, and leaves a solid basis for other developments.
2. (Re)construction: bringing (parts of) the system under consideration back to a previous state by eliminating the problem source and actively reconfiguring system parts to reconstruct that state, so as to undo the unwanted effects of the technology's development and implementation.
Note that the previous two strategies imply elimination of the problem source. In the case of RWM in general and of GD in particular, high-level waste must be considered the most important 'problem source', and this is what the rest of this section will focus on. Other possible problem sources might be specific institutional arrangements or possibly outdated value systems (i.e., institutional or symbolic elements mentioned as micro-irreversibilities above). Given this possibility, undoing certain consequences may be as 'simple' as reverting to a state in which multiple possible paths were open, i.e., getting rid of lock-in. However, there are two more strategies for undoing consequences, ones in which the problem source is not eliminated:

3. Containment: containing the problem source without eliminating it, shielding potential victims from its harmful effects.
4. Compensation: compensating victims for the undesirable consequences of the technology development project when even containment is not possible.
One important point to make about these strategies is that if one wants to reasonably ensure that these options are available when the need arises, the technology in question needs to be designed according to them. Another is that these strategies are not mutually exclusive, and will most likely have to be used in conjunction. In doing so, there is a preferable order to these approaches: what cannot be solved by remediation should be tackled by reconstruction, and so on. In this way, the potential for undoing unwanted consequences is exhausted to the greatest possible extent. These insights do have their implications, though, the most important of which is probably the following: already during the development of a technology, one should aim for remediable and reconstructible solutions rather than ones dependent on containment or compensation. From the point of view of reversibility, the latter are little more than 'end-of-pipe' solutions necessitated by our inability to construct more reversible technologies by eliminating problem sources. The question is: which of these strategies does GD exemplify?
One could argue that GD is a technology based on remediation. After all, the internal dynamics of the system (radioactive decay) will eventually undo the unwanted effects connected to high-level waste. When, after thousands of years, the waste reaches the radiation level of natural uranium ore, would the situation not be remediated? Well, at least not in the way that remediation is meant here as a strategy for undoing consequences: remediation would have to include the elimination of the problem source, no active steps towards which are actually undertaken in GD. Charitably to GD, however, one could argue that our actions implementing GD now do eliminate high-level wastes eventually. But can we then really say that our actions eliminate these wastes? High-level waste and the risks connected to it (while diminished through multiple engineered and natural barriers) exist as possibly problematic for an extended amount of time, one that far surpasses any example of institutionalized practice or organized action. As such, even on this charitable reading GD fails to eliminate the problem source within a timeframe that is relevant for a practical conception of remediation as a strategy for undoing consequences. We therefore cannot claim that GD is a remediation-based technology.
Despite appearances (it requires very specialized and scientifically advanced construction, after all), GD is also not reconstruction-based, for the same reason mentioned above: the high-level waste is just not eliminated quickly (or actively) enough. At most, it could be said that retrievability considerations in GD's design do allow for some reconstructive action in case unwanted effects do occur, whether these effects are connected to the dangers of radiotoxicity or to intergenerational injustice. However, the limited timespan for which retrievability is envisioned, combined with its diminishing nature and the fact that retrieval would remove the problem source from its location but not eliminate it entirely, leaves GD's potential for reconstruction rather limited.
In the end, GD corresponds largely to the containment strategy: despite limited retrievability provisions, containment is indeed the design goal of GD. Rather than actively eliminating the problem source, it is contained behind multiple barriers, e.g., a vitrification matrix, multi-layer canisters, the repository with its multiple engineered barriers, and even stable geological layers. But this is not all. The containment strategy is so pervasive in GD that even its institutional and symbolic elements were oriented towards containment, at least for a large part of GD's history. For example, technocratic elites have generally left little room for public participation in how high-level waste was to be handled, 33 especially during the early decades of nuclear energy development. Additionally, viewing this waste in terms of difficult-to-control and largely irreversible risks assured the legitimation of technical and passively safe solutions (especially when combined with a general distrust of social solutions). In short: GD works according to containment all the way down, from the top echelons of nuclear policy making to hundreds of meters below the earth's surface.
In GD's defence, however, one might rightly bring up the point that eliminating high-level waste is currently practically impossible. No technology is actually available to turn such waste into benign substances. Given this fact, is GD not the best technology available for taking our responsibility towards future generations? Two points need to be made in response. First, while it is true that no technology is currently able to 'eliminate' high-level waste, this does not mean such technologies are unrealistic. For example, a process called partitioning and transmutation (P&T) is being developed which could theoretically decrease total high-level waste volume as well as limit its lifetime to as little as 500-1000 years (Condé et al. 2004). This would constitute at least a partial elimination of the problem source and, as such, could be part of a reconstruction strategy to undo the unwanted effects of high-level waste. While a fuel cycle including P&T would be more expensive than a more traditional fuel cycle, and comes with its own security and safety concerns, it would at least be a step up in terms of undoing unwanted consequences in the form of long-term risks of high-level waste radiotoxicity (Taebi and Kadak 2010). The second point concerns the manner in which containment of high-level waste is achieved in GD: a lack of retrievability prohibits, or at least makes incredibly arduous, switching to a more reversible strategy in the future. For example, by the time P&T would actually be available on a large enough scale to make a significant difference, repositories could be largely or completely closed. And even if retrievability were fully implemented and maintained, reprocessing, recycling or destroying some high-level waste would still prove difficult, e.g., because it has been stabilized in glass or concrete. Indeed, it would seem that the epitome of containment entails closure, not only of repositories and institutional orders, but also of different options for switching strategies for undoing (the effects of) high-level waste.
Let me make two qualifying notes. First, a reconstruction-based technology like P&T is not likely to become a complete replacement for GD, or for the strategy it represents. With HLW that remains radioactive for 'only' a couple of thousand years, decent containment would still be necessary. As such, the containment strategy still has a place in the management of high-level waste, but only insofar as reconstruction's potential has been exhausted first. Secondly, these new circumstances might open up options for the form a containment strategy or technology may take, possibly loosening the lock-in of GD as the dominant final solution for high-level waste management. However, P&T's infrastructural and institutional demands may institute their own path-dependent processes and possibly locked-in technologies, and have their own negative consequences other than long-term risks of radiotoxicity. In other words, optimizing one of the conditions for reversible technologies might entail losing out on the other. Moreover, since the operationalizations of the two conditions do not allow for a comparison on one similar measuring scale, balancing the two conditions would likely require a careful exercise in practical and/or political reason.
There is currently some attention to the role of compensation in RWM (e.g., Kojo et al. 2013). While not usually linked to the reversibility debate outside of a public demand for retrievability provisions, I hope that the four strategies presented provide a clue as to how compensation features in reversible GD. It is a possible strategy for achieving more capacity for undoing unwanted consequences, but one to be applied only when the other three are sufficiently exhausted. Although there may be other principled and practical reasons, such as justice or social acceptance, to resort to compensation outside of reversibility considerations, any claim to increased reversibility directly because of compensation should be treated with caution.
To conclude, it seems that the second reason (next to being significantly locked-in) that GD, even with reversibility provisions, apparently fails to meet the NEA's own justification of reversibility is that its strategy for undoing unwanted consequences is less-than-ideal, for two reasons. Firstly, a remediation- or reconstruction-based technology would, at least in principle, be more able to live up to the NEA's justification of reversibility. Secondly, the way GD embodies containment is so severe that it disallows remediation or reconstruction of geologically disposed high-level waste at a point in the future at which these options would become viable.
Conclusion
At the start of this paper, the way reversibility features in GD was explored, and a number of critical discrepancies were noted between the reasons given for the inclusion of reversibility provisions in GD and GD's ability to live up to these reasons. This prompted the question whether GD could be considered a reversible technology, since such a reversible technology would arguably be able to fulfil the reasons given in "Reversibility as an issue in radioactive waste management". It was then put forward that for GD to be considered reversible, two questions need to be answered affirmatively:

• Would an authorized body be reasonably able to switch from GD to an alternative solution if problems with GD were to arise?

• Does GD exhaust the possibilities of undoing the consequences connected to high-level waste, and the hazards that could come about due to the use of GD for managing this waste?
Considering the second question, it was found that GD's strategy for undoing high-level waste-related consequences was less-than-ideal, since it relies mainly on a strategy of containment rather than on eliminating the waste through reconstruction or remediation. And while it is true that no technology exists that embodies these more ideal strategies, the way GD's containment works makes switching to these strategies for existing wastes rather difficult. Still, if and when such a technology eventually becomes available, could we not simply switch to it? The answer to the first question gives us reason to worry about this possibility. It would appear that GD is currently locked-in in the USA and quite possibly in other countries that espouse GD as well. The historical process of GD's development and operationalization has brought us to a point where these societies' symbolic, institutional and material investment in GD makes it difficult and thus unlikely that GD will be replaced with an alternative any time soon. With at least one of the questions answered negatively the conclusion follows that despite laudable efforts to the contrary in the past decades, GD is currently a practically irreversible technology for RWM. Taking measures both to avoid lock-in situations as well as exhausting the (future) possibilities for reconstruction and remediation for undoing high-level waste-related consequences could be fruitful strategies for increasing RWM's reversibility. However, optimizing both of these conditions of technological reversibility might prove difficult with some technologies, since scenarios are imaginable in which solutions in favour of one of the conditions decrease the potential of the other. For example, the future introduction of P&T could improve the ability to undo high-level waste-related consequences (since it is a reconstruction-based technology) and even lessen the severity of GD's lock-in (if applicable). On the other hand, P&T relies on extensive and complex infrastructures which could themselves become locked-in and have their own undesirable consequences. This creates an additional difficulty for finding truly reversible technologies or for balancing the two aspects in a way that is satisfactory, since an affirmative answer to both questions is necessary to truly speak of technological reversibility.
Of course, it must be remembered that this paper is focussed specifically on reversibility. However, when deciding on the specific form an RWM technology is supposed to take, or even which one(s) to select, more values are bound to be eligible for serious consideration. Indeed, issues of safety, justice, feasibility, efficiency, etc. also need to be considered, and might turn out to be partly incommensurable with a technology's reversibility. As such, this paper is not meant as a plea for the sole consideration of reversibility in the GD debate. Rather, it provides a clarification of what technological reversibility entails and how it is to be achieved, which is essential if reversibility is to be considered next to other important values.
What does all this mean for the hypothetical experimenter the paper opened with? After all, if she is to be prepared for learning that the experiment is to be stopped, the technology she is experimenting with should in principle be reversible. If the analysis presented in this paper is correct, reversibility can only be ensured by its proactive consideration, both in designing the nuclear energy technology in question (according to strategies for undoing undesirable consequences) as well as keeping alternative solutions viable and avoiding disproportionate institutional and symbolic commitment (avoiding lock-in). This would mean that GD's lock-in as well as its less-than-ideal prioritization of the containment strategy require careful revision. As it stands, however, the inclusion of reversibility in GD by the NEA is hardly adequate to the reasons provided for it, let alone to the standards of properly reversible experiments with RWM technology.
Lastly, to what extent do the results of this case-specific analysis carry over to the general framework of social experimentation with new technologies? While the recommendations for avoiding lock-in could work for other technologies due to their generality, the strategies for undoing consequences might need reconsideration. That is, some technologies may have different options for undoing consequences which invite different strategies for doing so, although this would have to be determined on a case-by-case basis. The original phrase by van de Poel ''containment of hazards as far as reasonably possible'' (2011 p. 289) turns out to be overly specific in this regard, since containment is just one possible strategy for undoing undesirable consequences. It must be noted, however, that responsible social experimentation demands more of a responsible experiment than it simply being reversible (van de Poel 2011). So, while (some) reversibility might be a necessary condition of responsible experimentation, it is by no means sufficient.
|
v3-fos-license
|
2019-05-06T14:05:37.504Z
|
2013-11-10T00:00:00.000
|
145074079
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://estudogeral.sib.uc.pt/bitstream/10316/89005/1/Drivers%20of%20in%20group.pdf",
"pdf_hash": "d0ea0ba241a4f34482533f068c725664d184dd79",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43219",
"s2fieldsofstudy": [
"Business"
],
"sha1": "0aa7abd03ce1083b11dfd1881f213bff758091b5",
"year": 2013
}
|
pes2o/s2orc
|
Drivers of in‐group and out‐of‐group electronic word‐of‐mouth (eWOM)
Purpose – The purpose of this study is to address a recent call for additional research on electronic word‐of‐mouth (eWOM). In response to this call, this study draws on the social network paradigm and the uses and gratification theory (UGT) to propose and empirically test a conceptual framework of key drivers of two types of eWOM, namely in‐group and out‐of‐group.Design/methodology/approach – The proposed model, which examines the impact of usage motivations on eWOM in‐group and eWOM out‐of‐group, is tested in a sample of 302 internet users in Portugal.Findings – Results from the survey show that the different drivers (i.e. mood‐enhancement, escapism, experiential learning and social interaction) vary in terms of their impact on the two different types of eWOM. Surprisingly, while results show a positive relationship between experiential learning and eWOM out‐of‐group, no relationship is found between experiential learning and eWOM in‐group.Research limitations/implications – This is the first study inve...
Introduction
[While] word of mouth has always been the most effective form of communication, [nowadays] there is a lost generation of marketeers. . . who do not understand the web and social networks (Simon Clift, Unilever Head of Marketing, Financial Times, April 6, 2010).
Social networks are a defining feature of today's electronic landscape (Bruyn and Lilien, 2008). Within these social networks, it is common for individuals to provide and receive information and informal advice on products and services. This is usually referred to as electronic word-of-mouth (eWOM), which is conceptualised as "any positive or negative statement made by . . . [an individual] . . . which is made available to a multitude of people and institutions via Internet" (Hennig-Thurau et al., 2004, p. 39).
In contrast, word-of-mouth (WOM), the precursor to eWOM, may be defined as person-to-person, oral communication between a receiver and a sender (Lee and Youn, 2009). In this communication, the message is perceived as non-commercial and relates to a brand, product or service (Alon and Brunel, 2006; Arndt, 1967). WOM has been recognised as a key force in the marketplace as it influences overall consumers' attitudes, beliefs and behaviour patterns (Bansal and Voyer, 2000; see Sweeney et al., 2011; Mazzarol et al., 2007), and specifically consumers' product judgements (Bone, 1995; Summers, 1972) and purchase decisions (Lampert and Rosenberg, 1975; Lau and Ng, 2001).
While most traditional WOM occurs among individuals who know and trust each other (Gupta and Harris, 2010), the internet facilitates not only communication with family, friends, and co-workers but also unknown people (Kavanaugh et al., 2005). Indeed, most eWOM occurs with individuals who are strangers (Gupta and Harris, 2010). Given the dissimilar tie strengths among individuals, two different types of eWOM develop, namely eWOM in-group (eWOM with close friends or family), and eWOM out-of-group (eWOM with individuals beyond a person's social, familial and collegial circles) (see Brown and Reingen, 1987;Matsumoto, 2000). This study aims to investigate these two types of eWOM.
Given the "ease of eWOM generation and dissemination" (Gupta andHarris, 2010, p. 1042) and its impact on consumer buying behaviour , researchers have been calling for more research into eWOM for a number of years (Gupta and Harris, 2010;Valck, 2006;Zhang et al., 2010). Thus far, scholars have examined a wide range of eWOM issues, including the value of eWOM to organisations (e.g. Liu, 2006), its links with purchase decisions and purchase intentions (e.g. , its ability to persuade consumers (e.g. Zhang et al., 2010), its antecedents (e.g. Jayawardhena and Wright, 2009;Gruen et al., 2006;Mazzarol et al., 2007;Sweeney et al., 2008), and its consequences (e.g. Park and Lee, 2008;Huang et al., 2011;Wangenheim and Bayón, 2004). Despite the considerable volume of studies on eWOM, it is important to acknowledge that eWOM still remains a very under-researched area (Zhang et al., 2010). Specifically, what drives individuals to engage in different types of eWOM characterised by diverse tie strengths remains underexplored.
Accordingly, this study's objective is to address this gap in the eWOM literature by investigating the impact of usage motivations on eWOM in-group and out-of-group. This distinction is important because information circulated through weak ties is more novel than information that flows through strong ties (Granovetter, 2005; see Weenig and Midden, 1991), and, therefore, the impact of usage motivations on eWOM might differ for in-group and out-of-group. Although some studies distinguish between in-group and out-of-group for traditional WOM (see Brown and Reingen, 1987; Granovetter, 1973), to the best of our knowledge, no study has investigated the determinants of these two types of eWOM.
From a managerial perspective, understanding the drivers of eWOM in-group and out-of-group can help the company as a whole benefit from consumer-generated eWOM and, in particular, can help marketing managers implement strategic decisions on website design and product positioning aligned with our results.
This study draws on the social network paradigm and the uses and gratification theory (UGT) to propose a conceptual framework of the motivational drivers of eWOM in-group and out-of-group. In the next section, the theoretical background that underpins the relationships in this study is presented, and the research hypotheses are developed. In the following sections, the research methodology is discussed followed by the analysis and the results. A discussion of the results and their implications for academics and practitioners is presented. The paper concludes with the study's limitations and future research directions.
Model development and hypotheses
The conceptual framework postulates that motivations to use the Internet are positively related to eWOM. Figure 1 shows the conceptual framework.
Social network paradigm
It is our contention that the social network paradigm provides a strong theoretical basis for explaining eWOM. A social network can be defined as a social structure representation in which people are points, connected by lines that represent relationships (Granovetter, 1976). This paradigm assumes these ties link "social actors" (Freeman, 2004, p. 3) in a network formed by one or more "nodes" of individuals in social networks or using websites (Wellman, 2008). Information is exchanged among people who have interpersonal ties that differ in strength. The ties' strength results from a "combination of the amount of time, the emotional intensity, the intimacy [. . .] and the reciprocal services which characterise the tie" (Granovetter, 1973, p. 1361). Depending on the strength of the ties, these can be classified as weak or strong ties. Weak ties, also called secondary ties, are those established with people with whom one rarely has contact; strong or primary ties are those connections with family members, close friends and colleagues (Granovetter, 1973; see Brown and Reingen, 1987). Therefore, the social network paradigm is important in an eWOM context, since weak ties tend to connect members of different groups, and therefore out-of-group communication emerges. On the other hand, strong ties tend to be established in specific groups in which in-group communication takes place (Matsumoto, 2000; Granovetter, 1973). Both strong and weak ties are important to promote eWOM because, in combination, they allow widespread information diffusion from one tightly knit group to a bigger, cohesive social segment (Brown and Reingen, 1987; Granovetter, 1973).
Uses and gratification theory: internet usage drivers
Much of the research on internet usage (e.g. Cuillier and Piotrowski, 2009; Grant, 2005) suggests that internet usage is driven by a range of different motives. An underlying theory that supports this notion is the uses and gratification theory (UGT) (Blumler and Katz, 1974). Employing the UGT in an internet context is not new. In fact, from its early days, researchers have applied UGT to explain internet usage (Morris and Ogan, 1996; Newhagen and Rafaeli, 1996; Charney and Greenberg, 2001; Flanagin and Metzger, 2001). The UGT builds upon three basic principles (Blumler, 1979): first, individuals are goal directed in their behaviour; second, they are active media users; and third, these active users are aware of their needs and select media to gratify them. Scholars have long recognised the importance of individual differences in determining behaviours. Furthermore, it has been shown that individual desires influenced by personality affect how a person seeks gratification (Conway and Rubin, 1991). An individual's values, beliefs, needs, and motives affect his or her behaviours, such as media usage and selection, in order to satisfy a set of psychological needs. As such, the use of a medium such as the internet is aligned with the three principles of the UGT.
We rely on both the UGT and the social network paradigm in our conceptual framework's hypotheses development.
Mood enhancement and escapism
Moods are attached to all human activities, and influence a wide range of cognitive processes and explicit behaviours (Bagozzi et al., 1999; Cohen and Andrade, 2004; Schwarz, 1998). In fact, researchers have long focused on the question of how moods influence behaviour in shopping, information search, and channel selection (Puccinelli et al., 2009). Evidence suggests that mood enhancement is based on the pleasure-seeking principle, according to which individuals are thought to constantly search for feel-good activities to attain a good mood (Cohen and Andrade, 2004). Indeed, mood enhancement has been found to be one of the strongest motivations for internet usage, especially among young people (Grant, 2005).
Escapism, on the other hand, is "a classic motivation associated with most types of media" and particularly with the internet amongst young people (Grant, 2005, p. 612). Escapism has been defined as a state of psychological immersion and absorption (Mathwick and Rigdon, 2004) in which people escape from their everyday concerns and responsibilities for a period of time. Several internet activities are suited to escapism, including surfing the news, weblogs, social networking sites, participating in forums and chat room discussions, as well as spontaneous and constant e-mailing (Charney and Greenberg, 2001;Grant, 2005).
It is possible to identify motives that reflect such needs and personal goals. Using UGT, previous research identified escapism (Abelman and Atkin, 1997) as a motive for using that media. Given that mood enhancement is one of the strongest motivations for internet usage, and because it encourages individuals to think in a broader, more abstract fashion (Labroo and Patrick, 2009) thus facilitating an individual's immersion and absorption, we postulate that: H1. The internet's use for mood enhancement is positively related to the internet's use for escapism.
Mood enhancement and experiential learning
Experiential learning is related to becoming familiar with a certain subject through some type of exposure (Braunsberger and Munch, 1998). Muthukrishnan and Kardes (2001) postulate that individuals often feel that they are learning from experiences when these experiences are enjoyable. Furthermore, research indicates that visual elements, such as pictures (McQuarrie and Mick, 2003), colours (Gorn et al., 2004;Mandel and Johnson, 2002) and aesthetic designs (Veryzer and Hutchinson, 1998) greatly influence information search and elaborative processing (Loken, 2006). Hence, if the use of the internet involves websites that contain features such as those described above (pictures, aesthetic design, etc.), the individual's mood might be enhanced and therefore (enjoyable) learning might take place. Grant (2005, p. 611) argues that while mood enhancement is "a more powerful motivator in absolute terms [. . .], information searching for learning purposes", i.e. experiential learning, "may ultimately be the internet's real point of difference." From a theoretical perspective, we observed that UGT postulates that an individual uses the internet not only because he/she is goal directed but because they also seek to gratify their needs. Based on the UGT framework, Abelman and Atkin (1997) observed the information seeking behaviour of internet users. It is plausible that internet users find that while experiential learning takes place, a pleasurable experience is also occurring. This is because, "the primary use of computer-mediated forms of communication and the Web involves entertainment" (Eighmey and McCord, 1998, p. 189). Additionally, gratification (such as mood enhancement) can be sought in an electronic communication medium, such as the internet, through informational learning and socialisation ( James et al., 1995). Finally, research has demonstrated that a strong correlation exists between moods and learning (Bagozzi et al., 1999). Given that individuals can experience experiential learning through the use of the internet and they also find the use of the internet a pleasurable experience, it is conceivable that: H2. The internet's use for mood enhancement is positively related to the internet's use for experiential learning.
Escapism and social interaction. Internet activities motivated by escapism are generally associated with positive social outcomes, namely social connectivity. This is because online connectivity offers new opportunities to individuals for social interaction by enabling them to interact with large numbers of others. If not for the internet, such interactions and resulting relationships would have been unlikely, if not impossible, to emerge (Bargh and Mckenna, 2004). The internet offers different forms of social interaction. On one hand, it enables one-to-one relationships with a high level of privacy and personalisation (Kang, 2000). For instance, the internet supports long-distance relationships, across regional boundaries and the globe (Wellman et al., 2001), and facilitates nearly cost-free, continual communication among family members, friends, colleagues and acquaintances, long-lost friends and co-workers who are physically distant. Such social interaction is supported by software programmes such as Skype which allow individuals to communicate across the world, not only with text but with real-time voices and images, thus resembling actual, in-person communication. On the other hand, individuals can send and receive a great deal of information via social networks, e-mails and blogs, across socially integrated online communities (Lee and Zaichkowsky, 2006), and consequently achieve escapism.
Research on internet developments, such as Second Life, also confirms the importance of social interactions (Chesney et al., 2009). Nevertheless, the strength and quality of online relationships can vary. Some researchers argue that they are very similar to those developed in person (Parks and Floyd, 1996) while others indicate that online relationships are less valuable than offline ones, with their benefits dependent on whether they supplement or substitute offline social relationships. What is not disputed is that the internet allows users to escape reality. This escapism does not threaten social life and in fact allows users to enlarge their social networks (DiMaggio et al., 2001; Howard et al., 2001) and is aligned with the principles of the social network paradigm. Overall, online tools may promote escapism and will probably expand social contacts (Wellman et al., 2001). Thus, we propose the following hypothesis: H3. The internet's use for escapism is positively related to the internet's use for social interaction.
Social interaction and eWOM. WOM in virtual communities is a key marketing issue, because within these groups information can reach millions of individuals (Brown et al., 2007). Community is defined as a set of interlinked relationships that meets members' needs (Kalyanam and McIntyre, 2002). Virtual communities can resemble traditional primary reference groups, such as friends and family members (Jepsen, 2006), as well as secondary reference groups, such as colleagues and co-workers. Virtual community members consider those communities as "places" for contact with people who share their interests (Maignan and Lukas, 1997; Wellman and Gulia, 1999). These virtual communities offer many opportunities for developing friendships and nurturing close relationships, as a consequence of shared interests, values and beliefs (McKenna et al., 2002). Membership and participation in a relevant virtual group may indeed become a central part of an individual's social life (Bargh and Mckenna, 2004). The fact that virtual community members tend to engage in substantial WOM exchanges (Alon et al., 2002) justifies eWOM's importance from a marketing perspective. Based on the social network paradigm, following Brown and Reingen (1987) and Matsumoto (2000), we can observe that eWOM in-group occurs in groups characterised by close relationships or strong ties, such as family and close friends; while eWOM out-of-group generally occurs between people with weaker ties, such as in social networking groups aimed at reaching the mass public. Since eWOM is a social phenomenon that occurs in group settings (see Alon and Brunel, 2006; Brown and Reingen, 1987), the more consumers interact in a group, the more likely they will be to use eWOM to reflect their knowledge and enhance their reputation as experts about specific products (see Wojnicki, 2006). Hence, it can be postulated that: H4a. The internet's use for social interaction is positively related to eWOM in-group.
H4b. The internet's use for social interaction is positively related to eWOM out-of-group.
Experiential learning and eWOM. E-communication enables people to share information and opinions with others more easily than ever before. The internet has extended consumers' options for gathering assumedly unbiased product information from their peers. Furthermore, the internet provides consumers the opportunity to offer their unique consumption-related advice by engaging in eWOM in message boards, internet forums, chat rooms and social networking sites. In particular, internet forums give consumers the opportunity and ability to share experiences, opinions and knowledge with other consumers (Bickart and Schindler, 2002). When consumers generate information based on their personal experiences, this information tends to exert more impact on others' attitudes and holds more credibility than if it were generated by advertising companies and corporate marketing departments (Walsh et al., 2009; Bickart and Schindler, 2002; Kempf and Smith, 1998). Moreover, eWOM's credibility is justified by the fact that other "consumers are perceived to have no vested interest in the product and no intentions to manipulate the reader" (Bickart and Schindler, 2002, p. 428). Hence, consumers find the information exchanged on internet social networks more relevant and trustworthy, as the information reflects product consumption in real-world settings by other consumers and is free from marketeers' interests (Bickart and Schindler, 2002; Jepsen, 2006). As Granovetter (1973) noted in his expounding of the social network paradigm, this information exchange may depend on a combination of the amount of time, the emotional intensity, and the intimacy of the networks. Based on the UGT framework, it was argued earlier that internet users use the internet medium for experiential learning, as this was likely to be positively related to mood enhancement. Consumers who become familiar with a service or product through experiential learning are therefore likely to engage in eWOM about that experience with other consumers, as it is a positive experience. Hence, we expect that: H5a. The internet's use for experiential learning is positively related to eWOM in-group.
H5b. The internet's use for experiential learning is positively related to eWOM out-of-group.
Research method
To test the hypotheses, we conducted a survey of internet users in Portugal. We used a convenience sample of internet users. The individuals in the sampling frame were university undergraduate students from one faculty within a university who were invited to participate in the study through an e-mail. In the subsequent lectures students were made aware of the importance of this study. Three hundred and ten e-mails were sent to students and 302 students agreed to participate. This study's questionnaire was initially developed in English and then translated into Portuguese. To avoid translation errors, the questionnaire was back-translated into English by a different researcher (see Douglas and Craig, 1989). The questionnaire was then given to a pre-test sample of thirty young adults who use the internet regularly before being distributed to the 302 respondents.
The respondents' ages ranged from 18 to 35 years old, 25 per cent of the students were 21 years old or younger, 50 per cent of the students were 22 or 23 years old, and the remaining students were 24 years old or older. Most students were female (58.9 per cent). With regard to the internet usage behaviours, 33.1 per cent of our respondents use the internet on a daily basis for up to 29 minutes, 27.5 per cent use it from 30 to 59 minutes, 22.8 per cent use it from 1 h to 1 h 59 m and the remaining (16.6 per cent) use it for more than 2 hours daily. These results are in line with the fact that an estimated 97.3 per cent of Portuguese young adults use the internet on a regular basis (Marktest, 2009).
Measures for the constructs were adapted from existing studies (Grant, 2005; Lam and Mizerski, 2005). The six constructs were mood enhancement, escapism, experiential learning, social interaction, eWOM in-group and eWOM out-of-group. Respondents were asked to assess all the items using a seven-point Likert scale, ranging from "1 - strongly disagree" to "7 - strongly agree". A complete listing of the questionnaire items can be found in Table I. The scales' internal reliability (Cronbach, 1951) is satisfactory, with an average alpha of 0.81 (see Lages et al., 2008). Although the constructs generally present Cronbach's alphas above the recommended value of 0.70 (Nunnally, 1978), the construct "Social Interaction" presents an α of 0.67, which may be considered questionable (Cronbach and Shavelson, 2004). We decided to retain this construct because this value is near the recommended level of 0.70, considering that the construct comprises only two variables. Other studies in many contexts present α values between 0.60 and 0.70 (see Lages and Lages, 2005; Ntoumanis, 2001).
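To make the reliability computation concrete, the following minimal Python sketch computes Cronbach's alpha for a multi-item scale from a respondents-by-items matrix; the simulated responses are hypothetical placeholders, not the study's data.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical 7-point Likert responses for a two-item construct
    # (e.g. "Social Interaction"; the actual item wordings are in Table I).
    rng = np.random.default_rng(0)
    base = rng.integers(1, 8, size=(302, 1))
    items = np.clip(base + rng.integers(-1, 2, size=(302, 2)), 1, 7).astype(float)

    print(round(cronbach_alpha(items), 2))  # values near 0.70 suggest acceptable reliability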
Measurement analysis
To assess the measures' validity, the items were subjected to a confirmatory factor analysis (CFA), using LISREL 8.72 (Jöreskog and Sorbom, 1996). In this model, each item is restricted to load on its pre-specified factor. Despite the fact that the chi-square for this model is significant (χ² = 648.43, df = 174, p < 0.001), fit indices reveal an acceptable fit: the comparative fit index (CFI) is 0.93, the incremental fit index (IFI) is 0.93 and the Tucker-Lewis fit index (TLI) is 0.92. Since fit indices can be improved by allowing more terms to be freely estimated, we also assessed the root mean square error of approximation (RMSEA), which assesses fit and assigns a penalty for lack of parsimony (Holbert and Stephenson, 2002). The RMSEA of this measurement model is 0.095, which indicates a satisfactory fit to the population (Chen et al., 2008). We also assessed the standardised root mean square residual (SRMR), which has a value of 0.069 and thus indicates a good fit (Hu and Bentler, 1999).
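As a sanity check, the RMSEA can be recomputed from the reported chi-square, degrees of freedom, and sample size using the standard formula RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))); a minimal sketch, using N = 302 as reported in the sample description:

    import math

    def rmsea(chi2: float, df: int, n: int) -> float:
        """Root mean square error of approximation from a chi-square statistic."""
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    print(round(rmsea(648.43, 174, 302), 3))  # 0.095, matching the reported value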
All six constructs have acceptable levels of composite reliability, namely 0.7 or higher (Bagozzi, 1980). Also, Fornell and Larcker's (1981) variance extracted values are above the recommended level of 0.50 for all six constructs (see Table II). Discriminant validity is evidenced by the shared variance between each pair of constructs (i.e. the square of their intercorrelation) being less than the average variance explained in the items by the construct (Fornell and Larcker, 1981). The correlations among all constructs and the average variance extracted for each construct are presented in Table II. Convergent validity is evidenced by each item's large and significant standardised loadings on its intended construct, with an average loading size of 0.76 (see Table I). Hence, none of the correlations in the final model were sufficiently high to jeopardise discriminant validity (Anderson and Gerbing, 1988).
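For readers less familiar with these indices, both composite reliability and average variance extracted follow directly from the standardized loadings; the sketch below uses hypothetical loadings, not the paper's estimates (only the average loading of 0.76 is reported above).

    import numpy as np

    def composite_reliability(loadings) -> float:
        """Composite reliability (Bagozzi, 1980) from standardized loadings."""
        lam = np.asarray(loadings, dtype=float)
        error_var = 1.0 - lam**2                    # item error variances
        return lam.sum()**2 / (lam.sum()**2 + error_var.sum())

    def average_variance_extracted(loadings) -> float:
        """AVE (Fornell and Larcker, 1981): mean squared standardized loading."""
        lam = np.asarray(loadings, dtype=float)
        return float((lam**2).mean())

    loadings = [0.78, 0.74, 0.81, 0.70]             # hypothetical four-item construct
    print(round(composite_reliability(loadings), 2))       # 0.84 (above the 0.7 threshold)
    print(round(average_variance_extracted(loadings), 2))  # 0.58 (above the 0.5 threshold)

In the Fornell-Larcker sense, discriminant validity then amounts to checking that each construct's AVE exceeds its squared correlation with every other construct.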
Structural model estimation
In line with recent research (Cinite et al., 2009; Walsh et al., 2009), we estimated the structural equation model (see Figure 2), using the maximum likelihood (ML) estimation procedure in LISREL 8.72. The model contains six constructs, which correspond to 21 observable variables (see Table I). Where covariance-based structural equation modelling is employed, Nunnally (1978) suggests an ad hoc rule of thumb that requires ten observations per indicator. The fit indices reveal that the final model reproduces the population covariance structure, and that the observed and predicted covariance matrices have an acceptable discrepancy between them. Because the reduced chi-square statistic (χ²/df = 3.88) is more than the recommended threshold of 3 (Hair et al., 2006), we proceeded to examine Mardia's coefficient and found that its value is greater than 3, which suggests that the data might not be normally distributed. When faced with such a distribution, Satorra and Bentler (2001) argue that it may be more appropriate to correct the test statistic rather than to use different estimation methods. The Satorra-Bentler (SB) chi-square statistic (which incorporates a scaling correction for the chi-square statistic when distributional assumptions are violated), corrected for non-normality, is calculated at 406.33 (χ²/df = 2.34). Since this study comprises a large sample size (200 or more), the "detrimental effects of nonnormality" are reduced and may even be negligible (Hair et al., 2006, p. 80). Also, Tabachnick and Fidell (2001) highlight that for large samples, variables with statistically significant kurtosis do not usually have a big impact on the analysis. Table III contains the estimation of direct, indirect and total effects for the structural model.
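The Satorra-Bentler correction divides the normal-theory chi-square by a scaling constant estimated from the data's multivariate kurtosis. As a back-of-envelope illustration only, and assuming the structural model's degrees of freedom are close to the measurement model's 174 (the text does not report them), the implied scaling constant can be recovered from the two reduced chi-squares given above:

    # Assumed: df = 174 (not reported for the structural model in the text).
    df = 174
    uncorrected_chi2 = 3.88 * df      # implied by the reported chi2/df of 3.88
    sb_chi2 = 406.33                  # reported SB-corrected chi-square

    scaling_constant = uncorrected_chi2 / sb_chi2
    print(round(uncorrected_chi2, 1))   # ~675.1
    print(round(scaling_constant, 2))   # ~1.66, consistent with moderate non-normality
    print(round(sb_chi2 / df, 2))       # 2.34, matching the reported corrected chi2/df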
Our results indicate that mood enhancement has a highly positive direct impact on escapism, as well as on experiential learning, which provides support for H1 and H2. Mood enhancement explains 29 per cent of escapism's variance and 32 per cent of experiential learning's variance (see Figure 2). H3 was also confirmed, as escapism has a highly positive impact on social interaction. The percentage of variance in social interaction, explained by its antecedents, is 49 per cent. Surprisingly, we found that while experiential learning has a non-significant impact on eWOM in-group (thus failing to support H5a), it has a highly positive impact on eWOM out-of-group (supporting H5b). Finally, we proposed that social interaction has a positive impact on eWOM in-group (H4a) and on eWOM out-of-group (H4b); our results support both H4a and H4b. Overall, the variance in eWOM in-group and eWOM out-of-group, explained by their respective antecedents, is 67 per cent and 51 per cent, respectively.
With the use of path models, we estimated not only the direct, but also the indirect and total effects among latent variables (Bollen, 1989). Table III shows that all five indirect effects are highly significant and positive. Mood enhancement has a positive indirect effect on eWOM in-group (0.35, p < 0.01) and eWOM out-of-group (0.37, p < 0.01). The total and indirect effect of escapism on eWOM in-group is highly significant and positive (0.56, p < 0.01); likewise, the indirect effect of escapism on eWOM out-of-group is positive (0.43, p < 0.01).
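The decomposition into direct, indirect, and total effects follows standard path-model algebra (Bollen, 1989): for a recursive model with path matrix B, total effects among the latent variables equal (I - B)^-1 - I, and indirect effects are total minus direct. The sketch below uses hypothetical path coefficients, not the paper's estimates, purely to show the mechanics.

    import numpy as np

    # Path matrix B among the six latent variables: B[i, j] is the direct
    # effect of variable j on variable i. All coefficients are hypothetical.
    labels = ["ME", "Esc", "EL", "SI", "eWOM_in", "eWOM_out"]
    B = np.zeros((6, 6))
    B[1, 0] = 0.54   # mood enhancement -> escapism
    B[2, 0] = 0.57   # mood enhancement -> experiential learning
    B[3, 1] = 0.70   # escapism -> social interaction
    B[4, 3] = 0.80   # social interaction -> eWOM in-group
    B[5, 3] = 0.62   # social interaction -> eWOM out-of-group
    B[5, 2] = 0.30   # experiential learning -> eWOM out-of-group

    I = np.eye(6)
    total = np.linalg.inv(I - B) - I   # total effects (Bollen, 1989)
    indirect = total - B               # indirect effects = total - direct

    # Mood enhancement's indirect effect on eWOM in-group runs through
    # escapism and social interaction: 0.54 * 0.70 * 0.80 = 0.302.
    print(round(indirect[4, 0], 3))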
Discussion
eWOM is an important tool for all organisations, as it influences consumer behaviour and attitudes towards products, brands and the organisation itself. WOM, and in particular eWOM, has an impact on customer loyalty intentions (Gruen et al., 2006), influences sales (Chevalier and Mayzlin, 2006) and ultimately the firm's revenue (Liu, 2006). Despite its importance and a considerable amount of research on eWOM, there have been recent calls for additional research on the topic (Gupta and Harris, 2010; Zhang et al., 2010). The current study is therefore an attempt to advance our understanding of eWOM, and in particular the drivers of different types of eWOM. In the next sections, the theoretical and practical implications of the research are discussed.
Theoretical implications
This paper provides a number of theoretically grounded contributions to eWOM literature. First, this study offers insights into eWOM dynamics. In particular, our results demonstrate that when internet users aim to enhance their mood, namely through entertainment, amusement, excitement and relaxation, they enter a state of psychological immersion and absorption, which takes them away from their everyday worries and responsibilities, setting the ground for social interaction. Simultaneously, when using the internet to enhance their mood, individuals tend to become more familiar with certain goods and services by gathering information from other peer consumers and thus experiencing learning.
Second, in examining the influence of experiential learning on eWOM in-group and out-of-group, the results demonstrate that experiential learning is not related to eWOM in-group, but it does have a positive relationship with eWOM out-of-group. This differential effect of experiential learning on eWOM out-of-group and eWOM in-group is anchored in the premise that the information circulated through weak ties is more novel than information that flows through strong ties (Granovetter, 2005; see Weenig and Midden, 1991), as strong-tie individuals tend to validate their common knowledge when sharing information (see Phillips et al., 2004). The underlying reason is that an individual's in-group members move in the same circles and therefore a substantial overlap of information already exists among them. On the other hand, an individual's out-of-group members have contact with people whom the individual does not know. As such, more novel information is generated (Granovetter, 2005, 1983) and more experiential learning may occur and subsequently be shared with out-of-group members. Additionally, weak-tie sources are more numerous and varied than strong-tie sources, strengthening the argument that information gathered in weak-tie groups is richer and more meaningful to information seekers (Duhan et al., 1997). Thus, while group members with strong ties tend to validate their common knowledge when sharing information, unique knowledge is received from individuals with whom one has weak ties (see Phillips et al., 2004). Another possible explanation for our results is that when individuals use the internet for experiential learning, they engage more in eWOM out-of-group, given that they spend less or no (face-to-face) time with their out-of-group members (Granovetter, 2005) in comparison to their in-group members. Hence, in line with socio-psychological studies (e.g. Weenig and Midden, 1991), this study's results support Granovetter's (1983, 1973) "strength-of-weak-ties" hypothesis.
Finally, we also respond to a call in the literature for additional research on the mood enhancement construct (Davis, 2009), by illustrating its central role in driving other internet usage motivations and ultimately eWOM. Mood enhancement has been found to be positively related to escapism, which in turn is positively related to social interaction. This study also found that social interaction among users will ultimately influence eWOM in-group and eWOM out-of-group.
Managerial implications
In line with Kozinets et al.'s (2010) and Ha and Perks' (2005) work, our results suggest that when individuals use the internet, they are likely to engage in eWOM. Thus, in their marketing efforts, companies should design their websites to generate entertainment and amusement, while providing information about their products which appeals to consumers. Companies can capitalise on the internet by coordinating web designers' and marketeers' tasks to provide an ingenious and appealing website design anchored in rich content, such as videos, aimed at lifting consumers' moods. For example, Blendtec (a seller of powerful blenders, mainly to private households) created a web page containing a video where an iPhone was thrown into a blender soon after the launch of the iPhone. The light-hearted video was a resounding success with 6.9 million views, and dedicated social media pages with discussions on the virtues of Blendtec products, which resulted in sales growth of 700 per cent. It is also apparent that, if websites facilitate social interaction, they will benefit from consumers engaging in eWOM with both in-group and out-of-group members. Our results confirm findings of online environment research asserting that consumers come together to interact socially (Jepsen, 2006). As a result, discussion participants share product information and gain general information about the company itself. For example, visitors to the website www.clubpenguin.com/puffle/ can play online games, interact with fellow visitors, engage in eWOM about the site and "puffles", and ultimately buy "puffles" in a retail store. Companies should therefore strive to provide opportunities for social interactions on their website and, at the very least, provide links to Facebook and other social websites, which ultimately promote eWOM. Domino's Pizza, for example, showed a 10 per cent increase in sales in 2010, following a Facebook recipe campaign which encouraged users to start an eWOM campaign (Ohngren, 2012). Another example is Babylicious, a company that relies solely on eWOM for promotion. Marketing managers should also consider whether their product lends itself to promotion via eWOM, and if the product is responsive to eWOM promotion, managers should facilitate it.
We found a differential influence of experiential learning on eWOM in-group and out-of-group, specifically that experiential learning is important in eWOM out-of-group. This signifies that individuals are prepared to devote their time and energy to start conversing with others provided they feel that they are learning and it is enjoyable. "My Starbucks idea" (http://mystarbucksidea.force.com/ideaHome) is perhaps an illustration of this. The site allows users to submit suggestions to be voted on by Starbucks consumers, and the most popular suggestions are highlighted and reviewed. In effect, Starbucks have managed to get individuals to create content, and harness the resulting eWOM by adding a feature called "Ideas in Action" blog that gives updates to users on the status of changes suggested.
Consumers do regard eWOM as reliable information sources, far more so than advertising and marketing messages (Walsh et al., 2009; Bickart and Schindler, 2002; Kempf and Smith, 1998). Although companies might be advised to make their websites entertaining and informative, and to provide opportunities for social interaction (for example by creating discussion boards about specific brands and products), such provision can also lead to adverse eWOM, particularly from dissatisfied consumers. This is the main reason why some companies, such as Ryanair, still do not provide this service. However, in an environment in which dissatisfied consumers are free and able to create their own forums, discussion boards, and so on, it might be more prudent to cater to their needs and offer these on the company's own website, rather than having them set up their own information channels. If a firm provides customers with an appropriate forum or discussion board on their website, the firm will benefit from gaining up-to-date information and feedback on consumer dissatisfaction and, as such, will be able to monitor and address the consumer's concerns promptly.
In summary, organisations need to develop websites that are simultaneously entertaining and informative, and that provide social interaction opportunities in order to generate eWOM. Given that online communities are open to everyone, the firm may decide to monitor the information exchanged in the most important communities (e.g. Facebook, MySpace, Twitter and LinkedIn). The firm can then act upon the eWOM information, whether it is positive or negative, and regard it as a great opportunity to receive product feedback, and also to reach their consumers in a more subtle way. For example, the company might post reply messages on online communities to help "spread" their message. In addition, businesses may also apply content-management practices to eWOM content and use it to their advantage. Owing to the interest in social networks and their potential marketing effect, many organisations around the world have an extremely strong financial incentive to understand and facilitate information exchange among individuals who engage in eWOM.
Research limitations and future research directions
Despite this study's theoretical and practical contributions, we acknowledge its limitations. The first limitation is that the questionnaire might have created common method variance, which might in turn have inflated the relationships among the constructs. This could be a threat if the respondents were aware of the conceptual framework of interest. However, respondents were not informed of the purpose of the study, and all of the constructs' items were separated and mixed, making it difficult for respondents to detect which items measured which factors.
A second limitation relates to the convenience sample characteristics, which limit the generalisability of the results. In particular, the sample comprises young adults, who are University students, in Portugal. Future studies with larger samples could allow for a comparison between young, middle-aged and older internet users. This research was conducted in a country in which the internet usage rate among young adults is extremely high (97.3 per cent). Future studies could replicate our study across a different sample and in diverse cultural contexts, characterised by various levels of internet access and usage. It may be that the internet usage motivations will differ, as well as their impact on both eWOM in-group and out-of-group.
Another potential limitation stems from the fact that we used two items to reflect the social interaction construct. It would be desirable if future studies used at least three items to measure this construct. Another key issue to be explored in future research is the consequences of these two types of eWOM (in-group and out-of-group) and their relative impact on the firm's performance. Additionally, there may be moderator relationships that have not been taken into account in this model. Nevertheless, given that the proposed hypotheses are new, from a theoretical viewpoint, it is important to first understand the direct relationships and then, in a later study, once these relationships are well-established, to explore the role of possible moderator variables. Suggestions for further research include considering age, gender and education level as moderators of the relationships between social interaction and eWOM in-group and out-of-group, and between experiential learning and eWOM in-group and out-of-group. Finally, future studies can investigate the antecedents of both eWOM in-group and out-of-group by focusing on the volume of eWOM generated for each type.
|
v3-fos-license
|
2018-11-18T16:16:28.760Z
|
2018-09-01T00:00:00.000
|
53439645
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://assets.cureus.com/uploads/review_article/pdf/14559/1579397416-20200119-6254-g02vk3.pdf",
"pdf_hash": "7702c119fdd6baef2084784e4c2107f0309d7d4d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43220",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "1910e26b3b404c4253cc4bc76d44701056a9fd89",
"year": 2018
}
|
pes2o/s2orc
|
Pain Management in Metastatic Bone Disease: A Literature Review
Cancer means an uncontrolled division of abnormal cells in the body. It is a leading cause of death today. Not only the disease itself but also its complications are adding to the increase in the mortality rate. One of the major complications is the pain due to metastasis of cancer. Pain is a complex symptom which has physical, psychological, and emotional impacts that influence daily activities as well as social life. Pain acts as an alarm sign, telling the body that something is wrong. Pain can manifest in a multitude of fashions. Management of bone pain due to metastasis involves different modes, with some specific treatments according to the type of primary cancer. Over the years various treatment modalities have been tried and tested to improve pain management, including the use of non-steroidal anti-inflammatory drugs (NSAIDs), opioids, bisphosphonates, tricyclic antidepressants, corticosteroids, growth factors and signaling molecules, ET-1 receptor antagonists, radiotherapy, as well as surgical management. The topic of discussion will cover each one of these in detail.
Introduction And Background
Cancer means an uncontrolled division of abnormal cells in the body. It is a leading cause of death today and, according to the "World Cancer Report 2014" released on September 3, 2014, the incidence of and mortality due to cancer are increasing, with the highest incidence in China among Asian countries. The rise in incidence was already remarkable in 2012. The report predicts that global cancer cases will increase rapidly from 14 million in 2012 to 19 million in 2025 and to 24 million in 2035 [1].
Not only the disease itself but also its complications are adding to the increase in the mortality rate. One of the major complications is the pain due to metastasis of cancer. Statistically, approximately 60%-90% of patients with advanced cancer experience variable degrees of pain during their lifetime, of which almost 30% suffer from persistent severe pain [2]. Bone cancer pain occurs in many cancer patients; it results from metastasis to bone, which later invades the surrounding tissues, leading to signal transmission through pain fibers and thus the perception of pain [3]. Two-thirds of patients with advanced cancer are prone to bone metastases. The most common primary sites that metastasize to bone are the lung, breast, prostate, and ovaries [4].
Pain is a complex symptom which has physical, psychological, and emotional impacts that influence daily activities as well as social life. Pain acts as an alarm sign, telling the body that something is wrong. Pain can manifest in a multitude of fashions. Pain behaviors such as spontaneous pain, hyperalgesia, and allodynia are related to the release of neurochemicals such as substance P, as well as c-Fos and dynorphin expression. Bone metastases are a frequent complication in patients with advanced cancer.
Over the years, various treatment modalities have been tried and tested to improve the protocols of pain management, including methods such as bisphosphonates, chemotherapy, surgery, nerve block, adoptive tumor immunotherapy, and gene knockout. But the clinical treatment of cancer pain still centers on the three-step program established by the World Health Organization. According to the degree of pain, patients are given a non-steroidal anti-inflammatory drug (NSAID) (mild pain) and/or opioid therapy (moderate and severe pain). However, many patients suffer from resistant cancer pain, along with other complications of the treatment, such as "mirror pain"; morphine tolerance, constipation, and respiratory depression with opioid drugs; and stomach ulcers and kidney toxicity with NSAIDs [5]. These side effects have limited the use of these drugs for longer periods of time [6]. Because the molecular mechanisms of bone cancer pain have not been elucidated, and the side effects and tolerability problems of clinically available drugs cannot be overcome, pain in some 45% of patients with cancer cannot be effectively controlled [7]. Whether through stretch on the periosteum or the direct effect of destructive lesions, bone metastases cause excruciating pain. Among patients with cancer, there is substantial heterogeneity in how and at what time of the day pain is perceived. Treatment of metastatic cancer patients is not complete if pain is not considered as the fifth vital sign and addressed the way it deserves. Advances in understanding the molecular mechanisms involved in bone pain would help improve the treatment modalities for pain control.
Review
Bone cancer pain is a chronic pain with complicated pathogenesis. Various studies over the years have shown that bone cancer pain may be due to substances produced by tumor cells and inflammatory cells, as well as sustained activation of osteoclasts and nerve compression and injury caused by tumor growth and invasion of adjacent tissues [8]. It can also be caused by local pressure exerted by increasing tumor size.
Clinical analysis of bone metabolism in patients with bone metastases showed that tumor-induced bone destruction (osteolysis) is closely related to the occurrence of cancer pain. Osteoclastic activity is under the influence of tumor necrosis factor alpha (TNF-alpha) and other cytokines that are secreted by the cancer cells. Bone-resorbing osteoclasts then secrete protons and acidic enzymes that dissolve the bone. This acidic environment activates the nociceptors, resulting in pain perception. The severity of pain depends on the number of neurochemical changes at the dorsal root ganglia in the spinal cord [9]. Primary sensory neurons located in the dorsal root ganglia can be divided into two general types, A-fibers and C-fibers: A-β fibers conduct non-noxious stimulation, whereas A-δ fibers and peptidergic C fibers are sensory neurons innervating the bone, with different receptors to sense different stimulations. These receptors are transient receptor potential vanilloid 1, the cold receptor (cold- and menthol-sensitive receptor), transient receptor potential melastatin 8, the mechanically gated ion channel P2X3 receptor, the endothelin (ET) receptor, and the PG receptor. These receptors convert noxious stimulation into electrochemical signals, which are transmitted to the central nervous system (CNS), where pain is actually perceived as a sensation. The neurotransmitter ET-1 also increases during bone metastasis.
The hyperalgesia induced by cancer involves central and peripheral sensitization [10]. During continuous peripheral stimulation, the sensitivity of the ganglia and their neurons changes, and the pain threshold lowers, resulting in hyperalgesia. For central sensitization, the mechanism is that neurochemical changes in the spinal cord cause hypertrophy of astrocytes and increased expression of dynorphin and c-Fos, which decreases the pain threshold [11]. Another cause of bone pain in metastasis is the pathological fractures that follow the weakening of bone.
Medical management of metastatic bone pain
It involves different modes, with some specific treatments according to the type of primary cancer.
Non-steroidal Anti-inflammatory Drugs
Non-steroidal anti-inflammatory drugs have long been in use for pain control across a wide range of diseases. They also have an anti-inflammatory effect that makes them an ideal drug for the inflammation caused by certain cancer types during extensive tissue invasion and destruction. A meta-analysis of 25 randomized controlled trials of NSAIDs in cancer pain stated that although NSAIDs significantly reduced cancer-related pain above placebo, their role in the treatment of bone pain due to metastasis is still under consideration [12]. More recently, a Cochrane review examined the use of NSAIDs in cancer pain across 42 clinical trials, testing them alone as well as in combination with opioids. The main finding was a lack of evidence for the superior efficacy or safety of one NSAID over another [13]. The basic mechanism of these drugs is to inhibit cyclooxygenase (COX) enzymes, which are involved in the production of prostaglandins that regulate various cell functions including pain perception. In tumor cells, COX-2 has increased activity; therefore, reducing the activity of COX would inhibit the perception of pain as well. In support of this notion, acute administration of selective COX-2 inhibitors to rodents with cancer-induced bone pain attenuated pain behaviors, whereas chronic treatment also reduced tumor burden and osteoclastic bone destruction, in addition to producing significant pain relief [14]. The main drawback is that their effects are limited by a short duration of action and a lack of long-lasting effects.
Opioids
The second most commonly used drugs are opioids, which are among the most effective and widely used drugs for cancer pain. Opioid drugs produce a long-lasting analgesic effect; therefore, more than 80% of patients with cancer need opioids to improve or control pain at some point in their lives. The analgesic effect of opioids is largely dependent on μ-receptor saturation and is thus influenced by the type and severity of the pain, prior exposure to opioids, and the individual distribution of receptors. Major side effects of opioid drugs are physiological dependence, tolerance, addiction, sedation, constipation, nausea, vomiting, and respiratory depression, which limit their further application. Clinicians can adjust the opioid analgesic effect in two ways. First, individualized treatment based on pharmacogenomic studies of the cancer type makes the best analgesic effect possible with minimal adverse reactions. Second, pharmacodynamic and pharmacokinetic studies of the drug allow the best performance of opioids to be achieved with minimal dosages. As far as side effects are concerned, various medications are used to mitigate them, such as metoclopramide for nausea, laxatives for constipation, and methylphenidate for sedation. Opioid desensitization (tolerance) is also a major problem that decreases the efficacy of the drug when it is used for prolonged periods of time. Certain receptors, the N-methyl-D-aspartate (NMDA) receptors, actively take part in this desensitization. Prolonged opioid therapy may thus contribute to an apparent decrease in analgesic efficacy regardless of the progression of the pain. Thus, in some instances, treating increasing pain with increasing doses of the same opioid may be futile [15].
Bisphosphonates
The third most commonly used drugs are bisphosphonates, which are generally used to treat hypercalcemic states. These drugs improve the acidic microenvironment of local tumor-bearing bone tissue, decreasing the dissolution of bone and thus reducing the activation of acid-sensing ion channels and cancer pain [16]. Bisphosphonates should be considered as treatment when analgesic drugs and radiation therapy are not effective against bone cancer pain. These drugs are safe to use but have not proven to be the most effective treatment for alleviating pain due to cancer metastasis.
Tri-cyclic Antidepressants
Another drug class used to treat bone pain in metastasis is the tri-cyclic antidepressants (TCAs), given to cancer patients for their positive effects on mood and sleep. The evidence for these drugs in treating malignant pain is limited, but their use in the treatment of nonmalignant pain is well studied and established [17]. Various clinical trials and physicians have reported their effectiveness in changing pain perception and reducing depressive symptoms in cancer patients, so their use can be justified by an antidepressant action that helps patients with advanced cancer. However, the use of TCAs, especially in medically ill or elderly patients, may be limited by frequent side effects similar to those seen with opiates, including drowsiness, constipation, urinary retention, and dry mouth, as well as serious adverse effects such as orthostatic hypotension, coma, liver function impairment, and cardiotoxicity [18]. A few selective serotonin reuptake inhibitors (SSRIs), such as paroxetine and citalopram, and serotonin-norepinephrine reuptake inhibitors (SNRIs), such as venlafaxine and duloxetine, have proved efficacious for treating neuropathic pain.
Corticosteroids
Corticosteroids, which belong to another major group of medications, are widely used as adjuvant therapy for cancer-related pain syndromes. These include bone pain, neuropathic pain from infiltration or metastatic compression of neural structures, headache due to increased intracranial pressure, arthralgias, pain due to ongoing inflammation and pressure on surrounding structures, and pain due to obstruction of a hollow viscus or distention of an organ capsule [19]. However, it should always be kept in mind that corticosteroids, when used for a longer period of time, can produce significant adverse effects, such as immunosuppression, hypertension, hyperglycemia, gastric ulcers, and psychosis; in cancer patients, though, risk-versus-benefit analysis reveals benefits that outweigh the risks involved in the use of steroids, particularly in cases of central nervous system involvement.
Growth Factors and Signaling Molecules
Another treatment modality involves the use of growth factors and signaling molecules that regulate bone turnover. One of them is osteoprotegerin, a negative regulator of bone-dissolving cells that belongs to the TNF receptor family. It inhibits bone destruction by binding receptor activator of nuclear factor-kappa B ligand (RANKL), thereby preventing osteoclast activation and augmenting the apoptosis of osteoclasts [9]. This apoptosis reduces the amount of bone damage and thus reduces pain. It also helps reduce the number of pathological fractures and the pain associated with them.
Endothelin-1 Receptor Antagonists
Endothelin-1 (ET-1) is a neurotransmitter secreted by neuronal cells, non-neuronal cells, and tumor cells [20]. Hyperalgesia in bony metastasis occurs due to sensitization of primary afferent nociceptors that carry ET-1 receptors. ET-1 receptor antagonists therefore alleviate bone pain by antagonizing the effect of nociceptive stimuli at these receptors [21]. ET-system drugs, such as atrasentan, have been investigated for the clinical treatment of pain; they cause the release of beta-endorphins and activation of the opioid pool. These antagonists also have an indirect effect, decreasing the disruption of cellular junctions and thereby preventing metastasis. ET receptor antagonists may provide a new advance in treatment modalities for bone pain in advanced carcinomas.
Radiotherapy
Radiotherapy (RT) is the most effective mode of treatment for alleviating pain in cancer patients. The Radiation Therapy Oncology Group reported that 80%-90% of patients receiving RT for osseous metastases experience partial to complete pain relief within 10-14 days of RT initiation [22]. Three types of radiotherapy are used for the treatment of bone metastases: external beam radiotherapy (EBRT), hemi-body irradiation (HBI), and radiopharmaceuticals [23]. A systematic review shows that EBRT, whether given as single or multiple fractions, produces 50% pain relief in 41% of patients and complete pain relief at one month in 24% of patients [24]. Also, a prospective study involving 91 patients with painful bone metastases who were treated with a median total dose of 46 Gray (Gy) found that complete and partial pain relief (≥50%) were obtained in 49% and 91% of patients, respectively [25].
There is no difference in the degree of pain relief depending on the fractionation of RT. This has been shown by a systematic review and meta-analysis of randomized controlled clinical trials, which found that single-fraction RT with 1 × 8 Gy is as effective for pain relief as multi-fraction regimens such as 5 × 4 Gy in one week or 10 × 3 Gy in two weeks [26]. Although the optimal dose fractionation for radiation of metastatic bone lesions has been debated, an internet survey of radiation oncologists, with members participating from the American Society for Radiation Oncology, the Canadian Association of Radiation Oncology, and the Royal Australian and New Zealand College of Radiologists, concluded that the most accepted fractionation schemes are 8 Gy in a single fraction and 30 Gy in 10 fractions [27].
Radioactive isotopes of phosphorus (P)-32 and strontium (Sr)-89 were the first bone-seeking radiopharmaceutical drugs approved by the United States (US) Food and Drug Administration (FDA) for the treatment of painful bone metastases, followed by samarium (Sm)-153, rhenium (Re)-186, and Re-188 [28]. Sr-89 chloride (Metastron™) and Sm-153-lexidronam (Quadramet®) are effective for treating prostate cancer-induced bone metastases, with 80% of patients with osteoblastic lesions achieving pain relief following strontium-89 administration [29][30]. In patients with metastatic bone pain, a Cochrane review found evidence to support their use as analgesics, with a number needed to treat (NNT) of five for complete relief and four for complete/partial relief [31][32]. Survival benefits have been shown with radium use in patients with castration-resistant prostate cancer. In Phase II clinical studies, the α-emitting radioisotope radium (Ra-223) demonstrated significant improvements in overall survival, and significant improvement was also seen in pain response as well as biochemical parameters [33]. However, a Phase III randomized clinical trial (ALSYMPCA) aimed at analyzing the analgesic efficacy, survival benefit, and safety profile of Ra-223 (50 kBq/kg i.v.) is currently ongoing (NCT00699751).
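To make the NNT figures above concrete: the number needed to treat is the reciprocal of the absolute risk reduction between the treatment and control arms. The short Python sketch below illustrates the calculation; the response rates used are hypothetical placeholders, not values taken from the cited Cochrane review.

```python
def number_needed_to_treat(p_treatment: float, p_control: float) -> float:
    """NNT = 1 / absolute risk reduction (ARR).

    p_treatment: proportion of treated patients achieving the outcome
    p_control:   proportion of control patients achieving the outcome
    """
    arr = p_treatment - p_control
    if arr <= 0:
        raise ValueError("Treatment shows no benefit over control.")
    return 1.0 / arr

# Hypothetical example: 45% of patients achieve complete pain relief with a
# radiopharmaceutical vs. 25% with placebo -> ARR = 0.20, so NNT = 5.
print(number_needed_to_treat(0.45, 0.25))  # 5.0
```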
Surgical management of metastatic bone pain
Surgery is very rarely considered an option for treating bone pain due to metastatic lesions, as various pharmacological drugs have gained success in achieving adequate pain control over the years. Among them, long-acting opioids are effective for managing pain, and patients have reported pain relief with oxycodone, morphine, and fentanyl patches. Beyond drugs, nerve blocks, neurolytic agents, and radiofrequency ablation have been gaining popularity for the relief of this pain. Two types of surgical procedures have been used so far: neurodestruction and neuromodulation.
Neurodestruction
Neurodestruction damages and disrupts the pain pathways carrying signals through the spinal cord to the brain. This disruption can be performed at any level (nerve, nerve root, nerve root ganglion, spinal cord, thalamus, or brain stem), or at several levels in combination if the disease process is complex. The level of the block depends on the severity of pain. This approach has been tried in many patients, including those with spinal metastatic disease, and has been found effective with long-lasting results. Procedures like anterior decompression and spinal stabilization have proved effective without progression of neurological impairment, making them widely accepted [34]. Many procedures are used for this purpose, but one of the most common is spinal cordotomy, which disrupts the spinothalamic tract at the level of the cervical or thoracic spinal cord. This causes loss of pain sensation from the opposite side of the body, thus relieving pain [35]. The procedure produces insensitivity to pain, mimicking a neuropathy of the treated site, which forms the basis of its complications: accidental burns, ulcerations due to lack of sensation, dry damaged skin, and various others. Therefore, midline myelotomy, a related procedure, is reserved for only those patients who have bilateral visceral pain resistant to other modes of treatment. In midline myelotomy, the central spinal cord is disrupted, interrupting a nonspecific pathway of pain signal transmission [36]. Thalamotomy is another procedure, performed at the level of the nuclei in the somatosensory and anterior areas of the thalamus. These areas relay pain and are therefore targeted for malignant intractable pain. Computed tomography-guided (CT-guided) anteromedial pulvinotomy and centromedian thalamotomy are performed in this procedure for pain relief [37]. Another neurodestructive procedure is cingulotomy, which disrupts the pathway at the level of the limbic system and also modulates the psychological effects of pain and the memory associated with it. Because of the neurocognitive impairment patients experience after the procedure, it is reserved for patients with resistant pain that has failed to respond to palliative pharmaceuticals. A case report describing the effectiveness of this procedure in three patients has also been published [38].
There are many benefits to neurodestructive procedures. They are easy to perform with modern medical technology, provide immediate pain relief, relieve pain in resistant cases, and have longer-lasting effects than pharmacological drugs. However, these procedures are irreversible and cause numbness, weakness, paresthesia, neurocognitive impairment, and an inability to use future testing to gauge the effectiveness of treatment. The numbness and weakness take a long time to recover, during which the patient is at higher risk of developing other complications, especially with bilateral procedures. There are also certain limitations: the procedures are contraindicated in coagulopathies (common in most visceral cancers due to the release of substances that cause hypercoagulable states). These procedures are particularly useful if the life expectancy is two to three months, because their effect lasts for three to four months. They render the patient free of any drug use for pain control later on, thus decreasing the burden of adverse effects of pharmacological therapy. Injections of neurolytic substances at the ganglion are also an effective mode of treatment for pain. Chronic abdominal pain associated with pancreatic cancer can be treated by celiac plexus block (injection of a neurolytic agent near the celiac plexus at the level of T-12). This block has proved effective and safe, providing pain relief in 70%-90% of patients with various types of abdominal cancers, with mean pain decreased by 40% in the majority of patients [39]. The celiac plexus block causes orthostatic hypotension, local pain, and diarrhea as the most common side effects, which can be managed with early detection and adequate conservative treatment. In other cases, a hypogastric plexus block can be used for visceral and pelvic pain associated with extensive gynecologic, colorectal, or genitourinary cancers [40]. However, the hypogastric plexus block is rarely used, as it is less effective than the celiac plexus block because of the widespread extent of disease at the time of diagnosis in this group. In cases of medically intractable pelvic pain, a hypogastric block can still be used, and no serious complications have been reported so far. Local nerve blocks or neurolysis with phenol or alcohol can also be used to treat localized pain, and kyphoplasty can be used for painful vertebral compression fractures in patients with metastatic cancer [41][42].
Neuromodulation
Electrical neuromodulation is the second surgical mode of treating pain. It involves electrical stimulation of a peripheral nerve, the dorsal column of the spinal cord, or the brain. Spinal cord stimulation primarily addresses neuropathic pain, such as in patients with arachnoiditis, but it has not played a significant role in nociceptive pain. Spinal cord stimulation produces a 60% reduction in pain severity and has improved quality of life for three years or more, but it is not yet considered a first-line option for intractable pain [41].
Another mode of treating cancer pain is the intrathecal use of drugs that reduce the perception of pain. Various drugs can be used, including opioids, ziconotide, local anesthetics, and baclofen. Intrathecal opioids, given alone or in combination with other drugs such as alpha agonists or local anesthetics, are used for intractable pain relief. These drugs are mostly given by a patient-controlled pump that delivers the medication into the intrathecal space at a rate adjusted to the requirement and severity of pain. Intrathecal administration reduces the systemic side effects of these drugs and is therefore widely accepted in patients with contraindications due to comorbid conditions. It also achieves higher cerebrospinal fluid (CSF) concentrations of the drug, which increases sensitivity to the drug and allows lower doses to alleviate pain. The intrathecal drug delivery system enables comprehensive management of cancer pain. One multicenter, randomized clinical trial demonstrated that patients with refractory cancer pain are more effectively treated by adding an implantable intrathecal drug delivery system to standard therapy [43]. Systemic side effects were decreased by 50% with infusion via the intrathecal pump, and patients with the implanted system had significantly less fatigue, sedation, constipation, and depressed consciousness, as well as an improved rate of survival at six months [44]. The pump is implanted in the subcutaneous fat of the abdomen and provides continuous infusion. Most pumps now contain a programmable electronic module that allows adjustment of the drug infusion rate via telemetry. All pumps have to be refilled at regular intervals, every one to three months, in office or clinic settings by simple insertion of a needle through the skin into the center of the reservoir. Clonidine and bupivacaine are the most commonly used non-opioid medications for intrathecal administration in cancer patients; both are used in combination with morphine to strengthen its analgesic effect. Clonidine produces analgesia through its action on alpha-2 receptors on presynaptic primary afferents and postsynaptic dorsal horn neurons of the spinal cord, decreasing the release of neurotransmitters from C fibers (e.g., substance P) and thus inhibiting preganglionic sympathetic transmission [45]. The local anesthetic bupivacaine produces its analgesic effect by blocking voltage-sensitive sodium channels, preventing the generation and conduction of nerve impulses. Its use is limited by side effects including neuropathy, cardiotoxicity, and bladder and bowel incontinence, which can be controlled by slow titration.
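As a rough illustration of how pump settings relate to the refill schedule described above, the sketch below estimates the refill interval and daily dose from reservoir volume, infusion rate, and drug concentration. All numbers are hypothetical placeholders for illustration only, not dosing guidance.

```python
def refill_interval_days(reservoir_ml: float, rate_ml_per_day: float) -> float:
    """Days until the pump reservoir is empty at a constant infusion rate."""
    return reservoir_ml / rate_ml_per_day

def daily_dose_mg(concentration_mg_per_ml: float, rate_ml_per_day: float) -> float:
    """Drug delivered per day for a given solution concentration."""
    return concentration_mg_per_ml * rate_ml_per_day

# Hypothetical 20 mL reservoir infused at 0.25 mL/day of a 10 mg/mL solution:
print(refill_interval_days(20.0, 0.25))  # 80.0 days, roughly 2.5 months
print(daily_dose_mg(10.0, 0.25))         # 2.5 mg/day
```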
The GABA-B agonist baclofen can be used in cancer patients who experience severe spasticity [46]. When administered intrathecally, baclofen inhibits both monosynaptic and polysynaptic reflexes at the spinal level, helping restore the electrical signaling that relaxes the muscles. Baclofen is used for treating neuropathic pain but has various side effects as well, including sedation, hypotonia with weakness, and urinary retention. Sudden discontinuation of the therapy can be life-threatening, as severe rebound spasticity occurs, leading to intractable pain accompanied by high fevers, confusion, muscle weakness, rhabdomyolysis, and multiple organ failure.
Several years ago, Elan Pharmaceuticals introduced a new analgesic drug, ziconotide, a synthetic, non-opioid analgesic agent for the amelioration of severe chronic pain. It was given FDA approval in 2004 after several human and animal safety and efficacy studies. Ziconotide binds to specific N-type voltage-sensitive calcium channels found in neural tissue and acts by inhibiting the release of nociceptive neurochemicals such as substance P, glutamate, and calcitonin gene-related peptide in the spinal cord, thus relieving pain [47]. Although it is considered a safe drug, because its long-term side effects are unknown and because severe adverse effects can occur during the initial phase of therapy, it is not considered the first choice for pain management in many patients [48].
The adjustability and reversibility of intrathecal pumps provide the added benefit of controlling the infusion rate and drug composition during the course of therapy. This mode of therapy is also highly testable, as patients provide data on the degree of pain relief and the effectiveness of the treatment. It carries various risks and side effects, such as site infection, increased costs, long-term treatment, and device malfunction. Beyond these adverse effects, few serious complications have been reported for intrathecal drug delivery system implantation itself. Common complications are infection of the meninges, granuloma formation at the tip of the subarachnoid catheter, bleeding or hematoma at the surgical site, and malfunctioning of the device. Device malfunctions are reversible, and the remaining complications are treatable. This makes intrathecal pumps a widely accepted mode of treatment for long-term cancer patients suffering intractable pain due to bony metastasis [49].
Conclusions
Keeping in mind all the medical and surgical options available, it can be safely concluded that pain management is crucial in cancer patients. Even though we have a multitude of options for this purpose, we are still far from pain-free outcomes. The future of metastatic pain management remains wide open to further enhance patient satisfaction, decrease psychological impact, and improve the overall quality of life of the patient.
Mild traumatic brain injury induces memory deficits with alteration of gene expression profile
Repeated mild traumatic brain injury (rmTBI), the most common type of traumatic brain injury, can result in neurological dysfunction and cognitive deficits. However, the molecular mechanisms and the long-term consequences of rmTBI remain elusive. In this study, we developed a modified rmTBI mouse model and found that rmTBI induced transient neurological deficits and persistent impairment of spatial memory function. Furthermore, rmTBI had a long-lasting detrimental effect on cognitive function, with mice exhibiting memory deficits even 12 weeks after rmTBI. Microarray analysis of whole-genome gene expression showed that rmTBI significantly altered the expression levels of 87 genes involved in apoptosis, stress response, metabolism, and synaptic plasticity. The results indicate potential mechanisms underlying rmTBI-induced acute neurological deficits and its chronic effect on memory impairment. This study suggests that long-term monitoring and interventions for rmTBI individuals are essential for memory function recovery and for reducing the risk of developing neurodegenerative diseases.
Although an increasing number of animal model-based studies of mTBI have been reported, most of them focused on short-term effects, in the range of hours to 10 days post-mTBI [16][17][18] . However, changes at early time points may not fully represent the long-term outcome. A few studies showed that long-term changes can last for more than 1 year after mTBI in animals and patients [19][20][21][22] . More importantly, TBI, including rmTBI, even occurring at a young age, significantly increases the risk of neurodegenerative diseases [1][2][3] .
Most studies of the cellular and molecular processes affected by TBI focused on examining the expression of individual genes involved in the impairment and recovery of brain functions 23,24 . Gene expression microarray is a powerful approach to determine alterations in whole-genome gene expression, which not only indicates the change in each individual gene but also directly indicates the correlations among genes and molecular pathways. Accordingly, the affected biological functions can be predicted and validated. Recently, several whole-genome gene expression studies have been performed to define the molecular changes of TBI in animal models and in vitro models 25 . However, although rmTBI is the most common form of TBI, the alteration of gene expression profiles and molecular pathways in rmTBI has not been investigated.
In the present study, we first examined both short-term and long-term effects of rmTBI on memory functions by employing a modified weight-drop model of rmTBI in mice. We found that significant memory deficits were detected at 2, 8, and 12 weeks after rmTBI. Microarray analysis suggested that rmTBI significantly altered the expression of 87 genes involved in apoptosis, metabolism, transcription, protein trafficking, stress response, and synaptic plasticity.
Results
Acute neurological impairment is not exaggerated by rmTBI. To examine the acute neurological responses, the duration of the loss of righting reflex (LORR) in rmTBI and control mice was recorded. rmTBI mice received the concussive-like head injury once a day for 5 uninterrupted weeks, while control mice were treated following the same procedure except for the concussive-like head injury (Fig. 1A). One mouse displaying forelimb paralysis was excluded. The duration of LORR was recorded after each injury. After a single mTBI, the average duration was 98.71 ± 4.87 seconds in mTBI mice, while the control mice had an average duration of 12.23 ± 1.62 seconds after the 1st TBI, p < 0.05 (Fig. 1B). The duration of LORR was not altered by the number of injuries, 100.94 ± 4.51 seconds after the 25th mTBI compared with 98.71 ± 4.87 seconds on day 1, p > 0.05 (Fig. 1B). However, both single mTBI and rmTBI significantly increased the duration of LORR compared with the sham treatment, p < 0.05 (Fig. 1B). This result demonstrates that mTBI significantly increased the duration of LORR compared with the sham treatment; however, rmTBI did not exaggerate the increase in the duration of LORR (Fig. 1B). More importantly, no death or skull fracture occurred in rmTBI mice.

rmTBI induces persistent memory deficits. Since memory impairment is among the top complaints of patients with rmTBI, we next examined whether rmTBI leads to memory deficits in this modified rmTBI mouse model. Two weeks after the last injury, the effect of rmTBI on spatial memory was determined by Morris water maze. To rule out the possibility that the performance of rmTBI mice in the water maze was affected by muscular strength and balance ability, the wire hanging test was used to examine muscular strength and balance ability before the water maze test. The mice in both the control group and the rmTBI group hung on the wire for at least 60 seconds, indicating that there was no motor dysfunction in rmTBI mice. In the visible platform test of the Morris water maze, rmTBI and control mice had similar escape latency (20.93 ± 3.20 and 25.92 ± 1.54 s, P > 0.05) (Fig. 2A) and path length (4.33 ± 0.61 and 4.44 ± 0.11 m, P > 0.05) (Fig. 2B), indicating that rmTBI did not affect mouse mobility or vision. In the hidden platform-swimming tests, the escape latency was significantly longer in rmTBI mice than in control mice from day 3 to day 5, 36.93 ± 4.66 vs. 16.04 ± 2.93 s on day 3, 39.39 ± 3.35 vs. 18.65 ± 3.43 s on day 4, and 27.82 ± 3.62 vs. 14.73 ± 4.27 s on day 5 (P < 0.05) (Fig. 2C). The rmTBI mice swam significantly longer distances to reach the platform (4.76 ± 0.80, 4.77 ± 0.68, and 3.86 ± 0.50 m) compared with the control mice (2.69 ± 0.43, 3.32 ± 0.51, and 2.59 ± 0.95 m) on the 3rd, 4th, and 5th day (P < 0.05) (Fig. 2D). In the probe trial on the last day of testing, the number of times rmTBI mice traveled into the platform zone, where the hidden platform had previously been placed, was significantly less compared with that in control mice, 0.40 ± 0.24 vs. 2.80 ± 0.97 (P < 0.05) (Fig. 2E). These data demonstrated that spatial memory was significantly impaired in rmTBI mice compared with control mice at two weeks after the last injury.
Eight weeks after injury, the effect of rmTBI on spatial memory was also examined by Morris water maze. In the visible platform test of the Morris water maze, rmTBI and control mice had similar escape latency (44.62 ± 4.32 and 44.1 ± 3.63 s, P > 0.05) (Fig. 3A) and path length (5.45 ± 1.53 and 7.25 ± 1.82 m, P > 0.05) (Fig. 3B), indicating that rmTBI did not affect mouse mobility or vision at this point. In the hidden platform-swimming tests, the escape latency was significantly longer in rmTBI mice than in control mice on day 5, 29.07 ± 6.23 vs. 13.2 ± 0.87 s (P < 0.05) (Fig. 3C). The rmTBI mice swam significantly longer distances to reach the platform compared with the control mice on the 5th day, 3.89 ± 0.55 vs. 2.29 ± 0.16 m (P < 0.05) (Fig. 3D). In the probe trial, the number of times rmTBI mice traveled into the platform zone was significantly less than that in control mice, 1.83 ± 0.65 vs. 4.33 ± 0.92 (P < 0.05) (Fig. 3E). These data demonstrated that spatial memory was significantly impaired in the rmTBI mice at 8 weeks after the last injury.

Figure 2. Memory deficits at 2 weeks after the last injury of rmTBI. A Morris water maze test consists of 1 day of visible platform tests, 4 days of hidden platform tests, and 1 probe test 24 hours after the last hidden platform test. In the visible platform tests, rmTBI and control mice had similar latency (A) and swimming distance (B) to escape onto the platform. The values were expressed as mean ± SEM, N = 5/group, p > 0.05 by Student's t-test. In the hidden platform test, mice were trained with 5 trials per day for 4 days. rmTBI mice had longer latency (C) and swimming distance (D) on the 3rd, 4th, and 5th day. The values were expressed as mean ± SEM, N = 5/group, *p < 0.05 by ANOVA. (E) In the probe trial, the number of times the rmTBI mice traveled into the platform zone was significantly less than that of the control mice. The values were expressed as mean ± SEM, N = 5/group, *p < 0.05 by Student's t-test.
To further investigate the chronic effect of rmTBI on memory functions, the Morris water maze was performed at twelve weeks after the last injury. In the visible platform test of the Morris water maze, rmTBI and control mice had similar escape latency (44.13 ± 3.61 and 45.67 ± 5.24 s, P > 0.05) (Fig. 4A) and path length (7.27 ± 0.40 and 6.42 ± 0.53 m, P > 0.05) (Fig. 4B). In the hidden platform-swimming tests, the escape latency was significantly longer in rmTBI mice than in control mice from day 3 to day 5, 39.27 ± 2.83 vs. 17.9 ± 2.83 s on the 3rd day, 34.7 ± 5.40 vs. 14.63 ± 2.29 s on the 4th day, and 26.1 ± 4.98 vs. 11.73 ± 0.61 s on the 5th day (P < 0.05) (Fig. 4C). The rmTBI mice swam significantly longer distances to reach the platform, 6.20 ± 0.71 vs. 2.52 ± 0.45, 5.24 ± 0.86 vs. 2.04 ± 0.34, and 3.62 ± 0.79 vs. 1.70 ± 0.07 m on the 3rd, 4th, and 5th day (P < 0.05) (Fig. 4D). In the probe trial, the number of times rmTBI mice traveled into the platform zone was significantly less compared with that in control mice, 2.17 ± 0.65 vs. 4.17 ± 0.65 (P < 0.05) (Fig. 4E). These data demonstrated that spatial memory was persistently impaired in rmTBI mice even 12 weeks after the last injury.
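The group comparisons above follow a standard pattern: Student's t-test for single-day measures such as probe-trial crossings, and ANOVA across training days. A minimal Python sketch of such a comparison is shown below; the per-mouse values are invented for illustration and are not the data underlying the figures.

```python
import numpy as np
from scipy import stats

# Hypothetical probe-trial platform-zone crossings for N = 5 mice per group.
control = np.array([3, 2, 4, 3, 2])
rmtbi = np.array([0, 1, 0, 1, 0])

# Two-tailed Student's t-test, as used for the probe trials above.
t_stat, p_value = stats.ttest_ind(control, rmtbi)
print(f"mean ± SEM (control): {control.mean():.2f} ± {stats.sem(control):.2f}")
print(f"mean ± SEM (rmTBI):   {rmtbi.mean():.2f} ± {stats.sem(rmtbi):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```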
Gene expression profiling in rmTBI mice and functional classification.
To elucidate the molecular mechanism of rmTBI-induced memory deficits, gene expression profiling was performed on the hippocampus of rmTBI mice and control mice after the behavioral tests. Two weeks after the last injury, the Morris water maze was performed in rmTBI mice and control mice, and microarray experiments were performed one day after the Morris water maze tests. 87 genes were identified as differentially expressed at a cut-off of 1.2-fold (p < 0.05) in TBI mice (Supplementary Table 1). Among them, 43 genes were up-regulated and 44 genes were down-regulated. The encoded proteins were classified into categories such as nucleic acid binding, transcription factor, transporter, receptor, and membrane traffic protein (Fig. 5A), suggesting that they play an important role in biological processes and molecular functions. Moreover, functional classification analysis showed that the differentially expressed genes belonged to biological processes such as apoptotic process, response to stimulus, metabolic process, developmental process, and biological regulation (Fig. 5B). Furthermore, they represented a diverse spectrum of molecular functions, including translation regulator activity, transcription factor activity, catalytic activity, and enzyme regulator activity (Fig. 5C). Although no signaling pathway was significantly affected, the differentially regulated genes were involved in synaptic vesicle trafficking, the ionotropic glutamate receptor pathway, the opioid prodynorphin pathway, and the Huntington disease and Parkinson disease pathways. These data indicated that rmTBI significantly altered the expression of many key molecules which may have a broad effect on brain functions and disease pathogenesis, including memory function and neurodegenerative disorders.
Discussion
Although information from cognitive assessments and brain imaging of mTBI patients is available, mTBI studies in humans are challenged by many difficulties. Animal models of mTBI are useful to investigate the cellular and molecular mechanisms of mTBI and to monitor long-term effects of mTBI on cognitive functions, providing a better understanding of the neurobiological and behavioral outcomes of mTBI and helping the development of effective therapeutic approaches. Among the species used for TBI models, rodent models are primarily used. The classical weight-drop model, the Marmarou model, causes a high rate of skull fracture and mouse death 26 . In this study, we developed a modified weight-drop model with a 20 g weight and a 25 cm drop height according to a previous study 18 . Notably, no skull fracture or mouse death occurred in any TBI mice. In addition, motor function was not affected in this mTBI model, indicating that this model is better than the classical mTBI model. Although a few modified weight-drop models of rmTBI with different injury numbers and intervals have been reported, the numbers of injuries are between 2 and 5, which is still not sufficient to represent the situation in human individuals, who often experience far more injuries, such as sports-related rmTBI in boxing, soccer, hockey, and football. Therefore, we developed a new rmTBI model in which the mice were injured once a day for 25 uninterrupted injury days. More importantly, this novel model, with milder injury and extended injury days, did not impair muscular strength, balance ability, or vision in mice, according to their performance in the wire hanging test and the first-day test of the Morris water maze. Using this new model, we found that a single mTBI significantly induced acute neurological dysfunction by increasing the duration of LORR. However, the uninterrupted injuries, up to 25 times, did not exaggerate the effects of mTBI on acute neurological dysfunction as assessed by the duration of LORR. This may be because the injuries are too mild to cause accumulated effects on the acute neurological test. In addition, adaptive responses may protect the mice from exaggerated injuries.
Memory decline is among the most common complaints in mTBI patients, even up to 1 year after TBI 19,21 . However, most studies on mTBI have focused on short-term pathological, pathophysiological, and behavioral changes, in the range of hours to 10 days post-mTBI [16][17][18] . In the current study, memory function was assessed at 2, 8, and 12 weeks after the last injury by Morris water maze to determine both short-term and long-term effects of rmTBI on memory function. We found that rmTBI mice showed significant memory deficits compared with control mice from 2 to 12 weeks after the last injury, indicating that rmTBI has a chronic effect on memory impairment in addition to short-term memory deficits. This is the first study to examine the chronic effect of rmTBI on memory functions in a mouse model, which not only provides valuable information on the chronic effects of rmTBI on memory functions but also suggests that long-term monitoring and interventions are imperative to recover memory function in rmTBI patients. Moreover, the probe test data, in which the passing times in rmTBI mice were 12.5% of controls at 2 weeks, 42% of controls at 8 weeks, and 52% of controls at 12 weeks, showed that the memory impairment gradually recovered, strongly indicating that early intervention and treatment may be beneficial for rmTBI individuals to accelerate the improvement of memory functions.

Figure 4. Memory deficits at 12 weeks after the last injury of rmTBI. In the visible platform tests, rmTBI and control mice had similar latency (A) and swimming distance (B) to escape onto the platform. The values were expressed as mean ± SEM, N = 5/group, p > 0.05 by Student's t-test. In the hidden platform test, rmTBI mice had longer latency (C) and swimming distance (D) on the 3rd, 4th, and 5th day. The values were expressed as mean ± SEM, N = 6/group, *p < 0.05 by ANOVA. (E) In the probe trial, rmTBI mice traveled into the platform zone fewer times than control mice. The values were expressed as mean ± SEM, N = 6/group, *p < 0.05 by Student's t-test.
To investigate the molecular mechanism of rmTBI-induced memory deficits, we performed gene expression profiling in rmTBI mice and control mice. The most dramatic memory deficit was observed at 2 weeks after the last injury, and the hippocampus, vulnerable to injury even with mild brain trauma, plays a critical role in learning and memory. We therefore specifically profiled whole-genome gene expression in the hippocampus one day after the behavioral tests at the 2-week time point. The 87 differentially expressed genes in rmTBI mice were classified into categories such as transcription factor, transporter, receptor, and membrane traffic protein, which belonged to functional groups such as apoptotic process, response to stimulus, metabolic process, transcription regulation, and enzyme regulation. In addition, the dysregulated genes were also involved in synaptic vesicle trafficking, the ionotropic glutamate receptor pathway, and the Huntington disease and Parkinson disease pathways. It is worth noting that the majority of these altered genes were involved in pathological processes that promote the development and progression of neurodegenerative diseases.
Neuroinflammation has been suggested to exaggerate the outcomes of neurodegenerative diseases. Several studies revealed that traumatic CNS injury could trigger severe systemic effects that lead to inflammation and pathological autoimmunity 27,28 . Consistent with previous reports, we found that rmTBI triggered upregulation of genes involved in the pro-inflammatory response. Lcp1 (encoding L-plastin) has been suggested to regulate integrin function in leukocytes 29 , which is important for leukocyte infiltration into the CNS 30 and, in turn, the induction of neuroinflammation. Another up-regulated gene is Neu1, which has been found to regulate Toll-like receptor (TLR) activation 31 . TLRs play an important role in the pathophysiology of infectious diseases, inflammatory diseases, and possibly autoimmune diseases. Errfi1 (also known as Mig6) was another dysregulated gene; it is an immediate early gene transcriptionally induced by a divergent array of extracellular stimuli and has a possible role in the response to persistent stress 32 . In general, the data set demonstrated a predominantly inflammatory response, as the majority of pro-inflammatory genes were up-regulated and anti-inflammatory genes were down-regulated.
Free radicals produced under oxidative stress attack macromolecules, especially DNA, and in turn induce apoptosis. Among the differentially expressed genes, genes promoting oxidative stress were upregulated in the hippocampus of the rmTBI mice. Cyb5r1 is believed to be associated with the oxidative state of cells, and Erp29 is involved in the processing of secretory proteins within the endoplasmic reticulum (ER) and has been shown to take part in ER stress signaling 33 . Therefore, it is possible that rmTBI could trigger oxidative stress responses in the hippocampus, leading to consequent neural dysfunction. The ubiquitin proteasome system (UPS) controls the turnover of innumerable cellular proteins. It targets misfolded or unwanted proteins for general proteolytic destruction and tightly controls the destruction of proteins involved in development and differentiation, cell cycle progression, apoptosis, and many other biological processes. Dysregulation of the UPS is believed to be both a cause and a result of neurodegenerative diseases [34][35][36][37][38] . In the rmTBI mice, two genes related to E3 protein-ubiquitin ligase activity were dysregulated: Fbxo10, a component of the SCF (SKP1-CUL1-F-box protein) complex, which acts as an E3 protein-ubiquitin ligase 39 , and Stub1, itself an E3 ubiquitin-protein ligase. Studies revealed that Stub1 was upregulated in the hippocampus of SAMP8 mice, a typical animal model used to investigate the fundamental mechanisms of age-related learning and memory deficits associated with neuronal degeneration 40,41 . It has been shown that down-regulation of Stub1 after treatment with a Chinese medicine could ameliorate age-related learning and memory deficits 40,41 . These results indicated that protein degradation and quality control is one of rmTBI's effects on the brain.
Taken together, our results indicated that rmTBI may facilitate neuronal apoptosis. The data also provide insights into the possible mechanisms by which rmTBI contributes to memory impairments and an increased risk of neurodegenerative diseases [1][2][3] . Future studies are warranted to further examine pathophysiological changes and molecular alterations with proteomics-based analysis after rmTBI at different time points and to find novel valid targets for the development of effective rmTBI therapies.
Materials and Methods
rmTBI mouse model. Animal experiment protocols were approved by the University of British Columbia Animal Care and Use Committee, and the experimental procedures were carried out in accordance with the guidelines and regulations of The University of British Columbia Animal Care and Use Committee and Biosafety Committee. Male C57BL/6 mice of 10 weeks old were housed in the animal facility under standard conditions, 22 ± 1 °C with a 12:12 hr light-dark cycle. After 2 weeks of housing to adapt to the laboratory environment, 17 mice received sham treatment and 17 mice received rmTBI treatment. The modified weight-drop device consists of a hollow Plexiglas tube, 2 cm in diameter and 40 cm in length, a cylindrical acrylic stick (20 g), and a foam platform. The tube is kept vertical to the surface of the mouse head and guides the freely falling acrylic stick onto the head. The diameter of the upper part of the stick is 1.8 cm, which allows little lateralization of the stick as it hits the head. The end of the stick is flat, round, and 1 cm in diameter, and hits the top of the mouse head over the area encompassing the frontal and parietal bones. The mice were anesthetized with 2.5% isoflurane, and the disappearance of the extremity reflex elicited by a rear-foot toe pinch was considered to indicate suitability for injury. The mouse was quickly placed in a prone position on the platform so that the head was directly underneath the lower opening of the Plexiglas tube. Next, the stick was released from a height of 25 cm. After consciousness was regained (indicated by return of the righting reflex and mobility), the mouse was returned to its home cage. Control mice were anesthetized and placed on the platform in the same fashion as the injured ones, but without any injury. rmTBI mice received the concussive-like head injury once a day from Monday to Friday for 5 uninterrupted weeks, while control mice were treated following the same procedure except for the concussive-like head injury. Behavioral tests were performed on mice in the control group and rmTBI group at 2, 8, or 12 weeks after sham treatment or rmTBI treatment, respectively. The time line of the whole experiment is shown in Fig. 1A. The mice were sacrificed after the water maze test at each time point.
Acute neurological evaluation. Acute neurological evaluation was performed every day right after the injury was delivered by recording the time of loss of consciousness (LOC). LOC was evaluated by the duration of the loss of righting reflex (LORR), measured as the time interval between loss of the righting reflex and regain of the righting reflex 42 . Convulsions were also recorded throughout the acute evaluation period. Briefly, the mouse was placed in a clear Perspex observation box right after the injury was delivered. While recording the time of LORR, the number and intensity of convulsions were also recorded. The duration of each convulsion was graded as short (1 to 10 s), medium (11 to 30 s), or long (≥31 s). According to the Charles River Laboratories grading system, the intensity of each convulsion was classified as mild, moderate, or severe 43 .
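The duration-based grading described above is a simple threshold rule; the small sketch below makes the mapping explicit. The grade labels follow the text, while the handling of durations below 1 s is our own assumption for completeness.

```python
def grade_convulsion_duration(seconds: float) -> str:
    """Grade a convulsion by its duration: short (1-10 s), medium (11-30 s),
    long (>= 31 s), following the scheme described in the text.
    Durations under 1 s are treated here as no convulsion (an assumption)."""
    if seconds < 1:
        return "none"
    if seconds <= 10:
        return "short"
    if seconds <= 30:
        return "medium"
    return "long"

for duration in (5, 15, 45):
    print(duration, "s ->", grade_convulsion_duration(duration))
```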
Wire hanging test. In order to measure motor neuromuscular activity and motor coordination, endurance in the wire hanging test was measured by placing the mouse on top of a wire mesh (1 × 1 cm grid) which was taped around the edge and suspended 50 cm above soft bedding material. The mesh was gently shaken so that the mouse gripped the wire, and it was then turned upside down; the amount of time the mouse spent holding on to the mesh with all four legs was recorded, up to a maximum of 60 seconds.
Morris water maze. The Morris water maze test was performed as we previously described 44,45 . Briefly, a 1.5-m-diam, water-filled, cylindrical tank was used to perform the Morris water maze. Extra-maze visual cues with different colors and shapes for orientation were permanently placed on the four walls around the tank. The temperature of the water in the tank was kept constant at 22 ± 1 °C. A 10-cm-diam platform was placed in a certain quadrant of the tank. The procedure consisted of 1 day of visible platform tests and 4 days of hidden platform tests, plus a probe trial 24 h after the last hidden platform test. In the visible platform test, the platform was lifted 0.5 cm above the water surface in the southeast, northeast, northwest, and southwest quadrants and the center of the pool, respectively, in each trial. There were 5 contiguous trials, with an inter-trial interval of 1 hour. Mice were placed next to and facing the wall of the tank successively in north (N), east (E), south (S), and west (W) positions. In each trial the mouse was allowed to swim until it found the platform, or until 60 seconds had elapsed. In the latter case, the mouse was guided to the platform, where it remained for 20 seconds before being returned to the cage. In the hidden platform tests, the platform was placed 0.5 cm below the water surface in the southeast quadrant, and mice were trained for 5 trials per day from the N, E, S, and W positions with an inter-trial interval of 1 hour. The probe trial was conducted by removing the platform and placing the mouse next to and facing the N side. The time spent in the quadrant that previously contained the platform (southeast quadrant) was measured in a single 60-second trial. Tracking of animal movement was achieved with the ANY-maze video-tracking system.

RNA preparation. Two weeks after the last injury, behavioral tests were performed in rmTBI mice and control mice. One day after all the behavioral tests, RNA was extracted from the hippocampi of 3 mice in the control group and 3 mice in the rmTBI group, respectively. Briefly, the mice were decapitated and the isolated hippocampi were stored immediately at −80 °C. Total RNA was extracted from the hippocampi using TRI Reagent (Sigma-Aldrich, Inc., St. Louis, MO).
Illumina whole genome gene expression assay. Complementary RNA (cRNA) was amplified from an input of 500 ng total RNA using the Illumina TotalPrep RNA Amplification Kit (Applied Biosystems Inc., Foster City, CA, AMIL1791) following the manufacturer's instructions. The cRNA samples were then assessed for quality by measuring A260/A280; all values should fall in the range of 1.7 to 2.1. Illumina's MouseWG-6 v2.0 Expression BeadChip, containing more than 45,200 well-annotated RefSeq transcripts, allowed 6 samples to be interrogated in parallel on a single BeadChip. Labeled and amplified cRNAs (1.5 μg/array) were hybridized to Illumina's Sentrix® Mouse-6 Expression BeadChips at 58 °C for 16 h according to the Illumina® Whole-Genome Gene Expression with IntelliHyb Seal System Manual. The array was washed, stained with 1 μg/ml cyanine3-streptavidin, and then scanned using an Illumina BeadStation 500G Bead Array Reader (Illumina, Inc., San Diego, CA). Reference, hybridization control, stringency, and negative control genes were checked for proper chip detection. The results were extracted with Illumina's BeadStudio software with quantile normalization and background subtraction. The Illumina custom error model with multiple testing corrections (Benjamini & Hochberg false discovery rate) was applied to the dataset to identify genes differentially expressed following injuries (filtered by Illumina's "detection p-value < 0.05" and Diff. p-value < 0.05).
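The differential-expression filter described above (a fold-change cut-off combined with multiple-testing correction) can be sketched generically as follows. This is a plain Benjamini-Hochberg implementation for illustration, not the Illumina BeadStudio custom error model actually used; the toy fold changes and p-values are invented.

```python
import numpy as np

def benjamini_hochberg(p_values) -> np.ndarray:
    """Benjamini-Hochberg adjusted p-values (false discovery rate)."""
    p = np.asarray(p_values, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p-value downwards.
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

def select_differential(fold_change, p_values, fc_cutoff=1.2, alpha=0.05):
    """Indices of genes with |fold change| >= cutoff and adjusted p < alpha."""
    adj = benjamini_hochberg(p_values)
    fc = np.asarray(fold_change, dtype=float)
    passes_fc = np.abs(np.log2(fc)) >= np.log2(fc_cutoff)
    return np.where(passes_fc & (adj < alpha))[0]

# Hypothetical toy data: 5 genes with fold changes (ratios) and raw p-values.
genes = select_differential([1.5, 1.1, 0.7, 1.25, 1.0],
                            [0.001, 0.20, 0.01, 0.03, 0.9])
print(genes)  # indices of genes called differentially expressed
```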
Statistics.
The LORR duration data were analyzed by one-way ANOVA. The Morris water maze data were analyzed by two-way ANOVA or two-tailed Student's t-test. All data were presented as means ± SEM. P < 0.05 was considered statistically significant.
Characterization of Enzyme-Linked Immunosorbent Assay (ELISA) for Quantification of Antibodies against Salmonella Typhimurium and Salmonella Enteritidis O-Antigens in Human Sera
Nontyphoidal Salmonella (NTS) is a leading cause of morbidity and mortality caused by enteric pathogens worldwide in both children and adults, and vaccines are not yet available. The measurement of antigen-specific antibodies in the sera of vaccinated or convalescent individuals is crucial to understand the incidence of disease and the immunogenicity of vaccine candidates. A solid and standardized assay to determine the level of antigen-specific IgG is therefore of paramount importance. In this work, we present the characterization of a customized enzyme-linked immunosorbent assay (ELISA) with continuous readouts and a standardized definition of EU/mL. We assessed various performance parameters: standard curve accuracy, dilutional linearity, intermediate precision, specificity, and limits of blank and quantification. The simplicity of the assay and its high sensitivity and specificity, coupled with its low cost and the use of basic consumables and instruments without the need for high automation, make it suitable for transfer and application to different laboratories, including resource-limited settings where the disease is endemic. This ELISA is therefore fit for purpose for the quantification of antibodies against Salmonella Typhimurium and Salmonella Enteritidis O-antigens in human samples, both for vaccine clinical trials and large sero-epidemiological studies.
Introduction
Worldwide, Salmonella Typhimurium and Salmonella Enteritidis are leading causative agents of foodborne illness in both children and adults [1]. Emerging lineages of Salmonella enterica serovars Typhimurium and Enteritidis have been associated with invasive nontyphoidal Salmonella (iNTS) disease, especially in sub-Saharan Africa. In this area, iNTS disease represents one of the major causes of morbidity and mortality, resulting in about 67,000 deaths each year [2]. Four risk factors associated with the prevalence of iNTS (malaria, HIV, malnutrition, and access to improved drinking water sources) have been taken into consideration in order to generate the iNTS risk factor (iNRF) index. An evaluation based on the iNRF index has shown that the risk level of iNTS varies among different geographies in sub-Saharan Africa, not only at the country level but also within the same country [3]. Nontyphoidal Salmonella (NTS) isolates in these resource-limited settings, where routine surveillance for antimicrobial resistance is rare, have been associated with multidrug resistance (MDR) [4].
The development of a vaccine against iNTS disease is becoming an urgent need in endemic areas, as no licensed vaccines to prevent this disease are currently available [5]. Several vaccine candidates are under development based on various technologies, both traditional and innovative [6]. All the approaches have a common denominator, which is the assumption that the key drivers of immunity against nontyphoidal Salmonella, as for other Gram-negative bacteria, are outer membrane surface antigens, with a particular focus on surface polysaccharides. One of the most advanced vaccine candidates, currently in clinical development, includes NTS components consisting of glycoconjugates of lipopolysaccharide-derived core and O-polysaccharide (COPS) linked to FliC flagellin [7]. A different strategy to deliver O-antigen polysaccharides uses the Generalized Modules for Membrane Antigens (GMMA) technology. A bivalent vaccine candidate of Salmonella Typhimurium and Salmonella Enteritidis GMMA is currently in clinical phase I [8][9][10][11]. GMMA are exosomes naturally released from Gram-negative bacteria that have been engineered to disrupt the linkage between the inner and the outer membrane, generating a hyper-blebbing phenotype [12], and to deacylate the lipid A of the lipopolysaccharide, reducing the risk of systemic reactogenicity when delivered parenterally [9]. The GMMA technology allows surface polysaccharides and outer membrane proteins to be presented in their native conformation [12,13]. Furthermore, GMMA have self-adjuvanting properties, likely because they naturally contain various pathogen-associated molecular pattern molecules, and because their size and their capacity to present multiple antigens in native conformation are optimal for inducing strong immunogenicity [13]. Since GMMA technology involves a relatively simple production process without the need for complex conjugation, it represents an attractive and affordable option, particularly relevant for low- and middle-income countries [10,12]. Preclinical results have supported the transition of an iNTS GMMA-based vaccine into clinical development. In a comparison between iNTS GMMA and glycoconjugates, GMMA showed superiority to classical conjugates in terms of antibody quality and functionality when tested in mice [10].
Salmonella infections show a complex pathogenesis that consists of an intracellular antibody-refractive growth phase and an extracellular antibody-susceptible phase of spread [14]. Therefore, the measurement of antigen-specific antibodies in the sera of vaccinated or convalescent individuals could be fundamental to understanding the incidence of disease and the potential efficacy of vaccination. Traditionally, the method of choice to evaluate the level of antigen-specific IgG is the enzyme-linked immunosorbent assay (ELISA), which has been extensively used to test the response to vaccine antigens, both polysaccharides and proteins, of several bacterial pathogens [14][15][16][17][18][19].
Various ELISA methods have been developed over the years to measure antibodies in samples including serum, plasma, urine, or feces [20]. Essentially, there are two major types of assays: titer-based assays with a discrete readout, and assays based on a continuous readout, which rely on calibrated standard curves. The advantages of the titer-based approach are the simplicity of the assay and the fact that the setup of a calibrated standard serum is not mandatory, simplifying the effort when large studies need to be performed in multiple laboratories. However, a weakness is the discrete readout and, thus, the difficulty of fully assessing and comparing performance among different laboratories and between runs. In contrast, assays based on a continuous readout offer the possibility to measure the concentration of antibodies against a calibrated standard curve. These assays are the most commonly used, as they allow results to be compared among laboratories and between runs by relying on calibrated standard curves in each plate. Results in continuous readout are expressed as antibody concentration (µg/mL), arbitrary ELISA Units (EU/mL), or international units (IU/mL) for assays calibrated against international standards. A disadvantage of these assays is the need for a representative standard serum, whose availability can represent a bottleneck, with the resulting need to put in place systems to bridge secondary standards. In terms of throughput, titer-based and continuous-readout assays are currently similar, with different levels of automation in sample handling and data analysis allowing data generation and elaboration to be sped up substantially.
Here, we present the intra-laboratory characterization of a customized ELISA assay to determine anti-S. Typhimurium O-antigen and anti-S. Enteritidis O-antigen total IgG in human sera. We have characterized the assay by determining standard curve accuracy, dilutional linearity, repeatability (intra-day precision), intermediate precision (inter-day precision), and specificity, and have determined the limit of blank and the limits of quantification, in addition to a series of solid quality control acceptance criteria.
Reagents
Phosphate-buffered saline at pH 7 (PBS) was used for the preparation of the different buffers: PBS + 5% milk (by adding 5% fat-free milk, Sigma, to PBS), washing buffer (PBS-T, by adding 0.05% Tween-20), and secondary antibody buffer (by adding 0.1% BSA, Sigma, to PBS-T). The coating buffer is a 0.05 M carbonate buffer, pH 9.6 (Sigma-Aldrich). Anti-human IgG-alkaline phosphatase (Sigma cod. A3187) was used as the secondary antibody. The O-antigens (OAg) used as coating antigens were extracted from GMMA purified from S. Typhimurium (STm) and S. Enteritidis (SEn) ∆tolR∆pagP∆msbB strains [9] by direct acid hydrolysis; the polysaccharides were fully characterized analytically in terms of sugar content, O-acetylation level, and protein and nucleic acid impurities, as previously reported [21]. OAg aliquots for S. Typhimurium and S. Enteritidis were stored at −80 °C until use. The main OAg population for both coatings was at a molecular size of about 30 kDa, with protein impurities < 1% and nucleic acid impurities < 10 ng/mL.
ELISA Procedure and Calculation
Anti-STm OAg and anti-SEn OAg specific total IgG are measured in serum samples using S. Typhimurium and S. Enteritidis OAg as coating antigens at a final concentration of 5 µg/mL or 15 µg/mL, respectively, following the protocol below: coating of Nunc MaxiSorp 96-well round-bottom (Nunc) plates overnight (16 h) at 4 °C, followed by aspiration (without washing) and blocking with 5% PBS milk for 1 h at 25 °C; washing of plates 3 times with PBS-T before addition of primary antibodies (serum samples) diluted in 5% PBS milk, incubated for 2 h at 25 °C. Each human serum sample was run in triplicate at different dilutions (1:100, 1:4000, and 1:160,000) in PBS milk 5%. Plates are then washed 3 times with PBS-T and incubated for 1 h at 25 °C with secondary antibodies diluted 1:5000 in PBS-T plus 0.1% BSA, before the final 3 washes with PBS-T and addition of p-Nitrophenyl phosphate substrate (Sigma-fast, Sigma-Aldrich, Massachusetts, United States) for 1 h at 25 °C. Absorbances were read with a spectrophotometer at 405 and 490 nm (Biotek automatic plate reader), maintaining strict timing between plates.
ELISA units are expressed in relation to a five-parameter human antigen-specific antibody standard serum curve composed of 10 standard points and 2 blank wells (run in duplicate on each plate). One ELISA unit is defined as the reciprocal of the dilution of the standard serum that gives an absorbance value equal to 1 (optical density measured at 405 nm minus optical density measured at 490 nm, the latter being the background wavelength of the plate plastic). High control and low control (HC and LC, respectively) were run on each plate at dilutions able to give results in a range equivalent to 1.3-2.8 EU/mL for LC and HC, respectively.
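As an illustration of this calculation, the minimal sketch below fits a five-parameter logistic (5PL) curve to a standard dilution series and converts a background-corrected OD into EU/mL. The OD values, starting parameters, and sample dilution are invented for the example and are not the authors' data or software:

import numpy as np
from scipy.optimize import curve_fit

def logistic_5pl(conc, a, d, c, b, g):
    """5PL curve: OD as a function of concentration; a = low asymptote,
    d = high asymptote, c = mid-point, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (conc / c) ** b) ** g

# Hypothetical standard curve: ten 2-fold dilutions starting at 10 EU/mL.
std_eu = 10.0 / 2.0 ** np.arange(10)                # EU/mL in the well
std_od = np.array([3.10, 2.85, 2.40, 1.80, 1.20,
                   0.72, 0.40, 0.22, 0.12, 0.07])   # OD405 - OD490

popt, _ = curve_fit(logistic_5pl, std_eu, std_od,
                    p0=[0.02, 3.3, 1.0, 1.0, 1.0], maxfev=20000)

def od_to_eu_per_ml(od, params, dilution):
    """Invert the fitted curve numerically on a fine grid and apply the
    sample dilution to obtain EU/mL in the original serum."""
    grid = np.logspace(-3, 2, 20001)
    eu_in_well = grid[np.argmin(np.abs(logistic_5pl(grid, *params) - od))]
    return eu_in_well * dilution

print(od_to_eu_per_ml(0.60, popt, dilution=4000))   # e.g. a 1:4000 sample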
The primary anti-Salmonella standard serum was calibrated against each coating antigen, and the antigen concentration was set at saturation of the signal for each antigen.
Several QC criteria were applied to each run, in particular: the R-square value of the 5PL curve fit of the standard dilution series (for both STm and SEn), the maximum background OD, the minimum value of the OD maximum, and the acceptance range in terms of OD for 1 EU/mL (in the case of SEn, the deviation from the expected EU/mL for both high and low controls). If at least one of the above-mentioned criteria was not met, the entire layout was repeated under the same experimental conditions. A sample is instead considered valid if the EU/mL values determined as the average from the three independent plates have a CV% < 30% at the dilution selected for obtaining results (the one at which the OD values fall within the linear part of the standard curve); if this criterion is not met, the sample is re-run under the same assay conditions.
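A minimal sketch of the sample-validity rule (the 30% threshold is from the text; the triplicate values are hypothetical):

import numpy as np

def sample_valid(plate_eu, max_cv_pct=30.0):
    """Valid if the CV% of the EU/mL values from the three independent
    plates, at the dilution selected for reporting, is below 30%."""
    eu = np.asarray(plate_eu, dtype=float)
    cv = 100.0 * eu.std(ddof=1) / eu.mean()
    return cv < max_cv_pct, cv

valid, cv = sample_valid([410.0, 455.0, 432.0])   # hypothetical triplicate
print(f"CV = {cv:.1f}% -> {'report mean' if valid else 're-run sample'}")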
Ethical Statement
The human serum pool used in this study was derived from Malawian healthy donors originally enrolled in the STRATAA (Strategic Typhoid Alliance across Africa and Asia) epidemiological study [22]. The relevant ethical and regulatory approval was obtained from the respective institutional and national ethics review committees (National Health Sciences Research Committee approval # 15/11/1511). Written informed consent was obtained before enrollment from all subjects and the trial was designed and conducted in accordance with the Good Clinical Practice Guidelines and the Declaration of Helsinki.
Serum Samples
iNTS Primary Human Standard Serum has been generated by Malawi-Liverpool-Wellcome Trust Clinical Research Programme (MLW) by pooling sera from highly positive Malawian adult subjects who were originally enrolled in a community-based randomly selected cohort within the STRATAA [22] epidemiological study. Forty positive (20 against STm OAg and 20 against SEn OAg) human sera from adults naturally exposed to iNTS were also used to characterize the assay. Working aliquots of the standard serum and the human single sera were stored at −80 • C until use.
Various aliquots of iNTS Primary Human Standard Serum and the 40 human single sera were used and treated as described below to determine the different assay parameters. Human IgG-depleted serum (Molecular Innovations cod. HPLA-SER-GF) was used as a negative matrix.
Samples used to assess standard curve accuracy: iNTS Primary Human Standard Serum was used to prepare 24 standard curves, each composed of ten 2-fold serial dilutions starting from 10 EU/mL and 2 blanks.
Samples used to assess precision, and the lower and upper limits of quantification: 40 human single sera from adult subjects (20 previously screened for positivity against STm OAg and 20 against SEn OAg) were assayed independently by two operators working on the same days, in two independent replicates on each plate, on three different days (12 measurements in total for each individual serum).
Samples used to assess specificity: two high-responder human sera (>500 EU/mL in the assay) for STm and two high-responder human sera for SEn, prediluted 1:50 in PBS + 5% milk, were preincubated overnight at 4 °C with an equal volume of homologous competitor at final concentrations of 250, 50, 20, 5, and 1 µg/mL in PBS + 5% milk, in comparison to sera diluted 1:100 overnight in PBS + 5% milk (negative control). The lowest concentration of OAg able to cause a reduction of the ELISA Units ≥ 80% was then used to determine the homologous specificity (in the presence of STm OAg for STm specificity and SEn OAg for SEn specificity) and the heterologous specificity, assessed with samples incubated with OAg from a different species (Shigella flexneri 3a OAg), in comparison to the undepleted control. All samples were incubated overnight (16-18 h) at 4 °C prior to being tested.
Statistical Analysis
Test results have been analyzed using Excel and GraphPad PRISM software version 7. Geometric and arithmetic mean, standard deviation, and coefficient of variation are the major statistics. To support the assessment of linearity, a log-log regression model was applied to measure the sensitivity of the response (i.e., the geometric mean of the test results) to the dilution levels. The limits of standard curve accuracy were calculated by linear interpolation. The precision results (CVs%) were resampled 1000 times by the bootstrap method, and the geometric mean was calculated at each iteration, obtaining the distribution of geometric means. The expected geometric mean was the geometric mean of the distribution. The lower limit of 95% confidence intervals (CI) was obtained as the quantile that divides the data distribution, leaving 2.5% of the distribution to its left, and the upper was obtained as the quantile that divides the data distribution leaving 2.5% of the distribution to its right.
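To make the bootstrap step concrete, the sketch below reproduces it in Python on simulated per-sample CV% values (placeholders, not the study data); the 95% CI is taken from the 2.5th and 97.5th percentiles of the resampled geometric means, as described above:

import numpy as np

rng = np.random.default_rng(0)

def geomean(x):
    return float(np.exp(np.mean(np.log(x))))

def bootstrap_geomean_ci(cv_values, n_iter=1000):
    """Resample the per-sample CV% values with replacement, take the
    geometric mean at each iteration, and summarise the distribution."""
    cvs = np.asarray(cv_values, dtype=float)
    gms = np.array([geomean(rng.choice(cvs, size=cvs.size, replace=True))
                    for _ in range(n_iter)])
    return geomean(gms), np.percentile(gms, 2.5), np.percentile(gms, 97.5)

cvs = rng.uniform(1.5, 6.0, size=20)        # hypothetical CV% for 20 sera
gm, lo, hi = bootstrap_geomean_ci(cvs)
print(f"expected geometric mean = {gm:.2f}% (95% CI {lo:.2f}-{hi:.2f}%)")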
Experimental Method and Controls Setup
Understanding the level of antibodies generated against specific antigens upon natural exposure to a pathogen is one of the drivers to evaluate the response to, and infer the potential protection of, a vaccine against the same targets. Despite the complexity of nontyphoidal Salmonella (NTS) infection, the importance of antibodies elicited against the O-antigen portion of the lipopolysaccharide in NTS has been reported [14], and the most advanced vaccines are targeting the O-antigen. To perform a fully quantitative assessment of O-antigen-specific antibodies, both from clinical trials and sero-epidemiological studies, it is essential to have a highly sensitive, efficient, and versatile assay. To this aim, we set up and characterized an ELISA to determine the level of IgG elicited against Salmonella Typhimurium and Salmonella Enteritidis O-antigens in human samples. In this assay, one ELISA unit is defined as the reciprocal of the dilution of the standard serum that gives an absorbance value equal to 1. High control and low control (HC and LC, respectively; Figure 1C) are run on each plate together with a calibrated standard curve (Figure 1B) composed of 10 serial dilution points in duplicate and 4 blank wells. Individual samples are tested at up to three dilutions, each run in triplicate on different plates. EU/mL are therefore obtained by interpolating the OD values against an antigen-specific standard curve run on each plate. Using this method, up to 70 different sera can be assayed on a set of 96-well plates (a "layout", Figure 1A). A layout is composed of up to nine 96-well plates: three in which each individual sample is tested at the dilution 1:100, three at the dilution 1:4000, and three at the dilution 1:160,000.
To be valid, an assay must pass several non-mutually exclusive quality control criteria both for the standard curve and the controls (Table 1). If even one of those criteria is not met, the entire layout must be repeated under the same experimental conditions. A sample is instead valid if the EU/mL values determined as the average from the three independent plates have a CV% < 30% at the selected dilution.
The assay was characterized in terms of standard curve accuracy, precision, dilutional linearity, and specificity. As samples, ad hoc dilutions of the standard sera or sera (20 against STm and 20 against SEn) from naturally exposed individuals were used.
Standard Curve Accuracy
Lower and upper limits of standard curve accuracy (LLSCA and ULSCA) represent, respectively, the lowest and the highest concentration of analyte that can be quantitatively measured with suitable accuracy under assay conditions, based on the standard curve. To evaluate standard curve accuracy, 24 independent replicates of the standard curve were run in a standard assay. For each of the antigens, the limits of standard curve accuracy were calculated as equivalent to the last and the first datapoints, respectively, at which the confidence interval of the residual error percentage (RE%) fell within the acceptance range of [−25%; +25%] with 90% probability (Figure 2).
Values of LLSCA and ULSCA were 0.043 EU/mL and 4.313 EU/mL for S. Typhimurium and 0.134 EU/mL and 9.795 EU/mL for S. Enteritidis, respectively. The EU/mL of a specific sample is therefore calculated only if the reading value falls within the above-reported accuracy range of the standard curve. Actual EU/mL for specific samples are subsequently calculated by multiplying by the corresponding dilution from which the EU/mL in the well were retrieved.
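The limit determination can be sketched as follows. We approximate the 90% probability criterion by the empirical fraction of replicate RE% values within ±25% (the authors also use linear interpolation between points, which is omitted here), and the back-calculated replicates are simulated:

import numpy as np

rng = np.random.default_rng(1)
nominal = 10.0 / 2.0 ** np.arange(10)        # EU/mL of the standard points

# 24 simulated back-calculated replicates per point, noisier at the ends.
noise = np.linspace(0.06, 0.20, 10)
replicates = [nom * (1 + rng.normal(0, s, 24)) for nom, s in zip(nominal, noise)]

def point_is_accurate(nom, reps, lo=-25.0, hi=25.0, prob=0.90):
    """True when at least 90% of the replicate RE% values fall in [-25%, +25%]."""
    re = 100.0 * (np.asarray(reps) - nom) / nom
    return np.mean((re >= lo) & (re <= hi)) >= prob

accurate = np.array([point_is_accurate(n, r) for n, r in zip(nominal, replicates)])
llsca, ulsca = nominal[accurate].min(), nominal[accurate].max()
print(f"LLSCA = {llsca:.3f} EU/mL, ULSCA = {ulsca:.3f} EU/mL")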
Linearity
The linearity of an analytical procedure is defined as the ability of the method to obtain, within a given range, test results that are directly proportional to the concentration of the analyte being measured. To evaluate the linearity, standard serum was tested at nine independent dilutions (neat, 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, and 1/256), and each dilution was prepared two times independently (for each coating antigen) by two operators, on three different days, resulting in twelve independent measurements in total for each dilution tested.
For both S. Typhimurium and S. Enteritidis, the results for each specific dilution tested were similar to each other, with no variation due to operator, days, or repeats (Figure 3). Linearity was confirmed within the tested range, both in terms of regression analysis and of deviation from linearity, calculated by multiplying the mean of the values obtained at each specific dilution by that dilution and dividing by the median obtained when testing the undiluted sample (thus used as the nominal value). The average deviation from linearity was within the predefined range of acceptability [0.7-1.3] for both antigens (Table 2).
Both assays were linear in the tested range. Linear regression applied to the base-2 log-transformed data (geometric means and dilutions) yielded a significant slope both for S. Enteritidis (slope = 0.986, t-stat = 73.4, p < 0.0001) and for S. Typhimurium (slope = 0.965, t-stat = 149.2, p < 0.0001). These two similar findings confirmed that the method produced results within the specification limits of linearity [0.7-1.3]. The minimum raw value measured across all runs and dilutions is expressed as the Lower Limit of Linearity (LLL), which was 3.5 EU/mL in the case of S. Typhimurium and 1.7 EU/mL in the case of S. Enteritidis.
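A compact sketch of both linearity checks (deviation from linearity and the log2-log2 regression slope), using simulated dilution-series results rather than the study data:

import numpy as np

rng = np.random.default_rng(2)
factors = 2.0 ** np.arange(9)                 # neat, 1/2, ..., 1/256
true_neat = 400.0                             # hypothetical EU/mL when neat
results = [true_neat / f * (1 + rng.normal(0, 0.05, 12)) for f in factors]

# Deviation from linearity: mean(result) * dilution factor / nominal value,
# where the nominal value is the median of the neat sample.
nominal = np.median(results[0])
dev = np.array([np.mean(r) * f / nominal for r, f in zip(results, factors)])
print("deviation from linearity:", np.round(dev, 2))   # accept if in 0.7-1.3

# Log2-log2 regression of geometric means on relative concentration.
x = np.log2(1.0 / factors)
y = np.array([np.mean(np.log2(r)) for r in results])    # log2 geometric mean
slope = np.polyfit(x, y, 1)[0]
print(f"slope = {slope:.3f} (close to 1 for a linear assay)")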
Precision
To evaluate the precision of the assay, which is the ability of a measurement to be consistently reproduced, the EU/mL of 40 human single sera from adult subjects (20 against STm OAg and 20 against SEn OAg) were determined independently by two operators working on the same days, in two replicates, on three different days (12 measurements in total for each individual serum); the results obtained are reported in Figure 4. Precision was evaluated as repeatability (intra-assay precision) and intermediate precision (inter-assay precision). Repeatability expresses the precision under the same operating conditions, while intermediate precision expresses within-laboratory variations, such as different days, different analysts, different equipment, calibrants, and batches of reagents.
The critical thresholds (CV%) for repeatability and intermediate precision were 20% and 25%, respectively. We found that repeatability and intermediate precision were below their corresponding thresholds across all samples. To generalize these findings, we calculated the 95% bootstrap confidence interval of the geometric mean of the samples' CV%. Thus, the precision results were resampled 1000 times and the geometric mean was calculated at each iteration. For S. Typhimurium OAg, the expected geometric mean of the intermediate precisions was 2.66% (CI95% from 2.09% to 3.25%) and the geometric mean of the repeatability was 7.10% (CI95% from 2.38% to 14.80%). The bootstrap analysis of the S. Enteritidis OAg data produced a geometric mean of intermediate precisions equal to 3.42% (CI95% from 2.74% to 4.12%) and a geometric mean of repeatability equal to 6.94% (CI95% from 1.20% to 14.49%).
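One simple reading of these two definitions can be sketched as follows; the layout of the 12 measurements (2 operators x 3 days x 2 replicates) and the data are assumptions for illustration, not the authors' computation:

import numpy as np

rng = np.random.default_rng(3)

def cv_pct(x):
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical 12 measurements of one serum: 2 operators x 3 days x 2 reps.
data = 50.0 * (1 + rng.normal(0, 0.04, size=(2, 3, 2)))

# Repeatability: average CV% of the duplicate wells measured under the
# same conditions (same operator, same day).
repeatability = np.mean([cv_pct(data[o, d]) for o in range(2) for d in range(3)])

# Intermediate precision: CV% over all 12 measurements, so that operator
# and day effects contribute to the variability.
intermediate = cv_pct(data.ravel())
print(f"repeatability ~ {repeatability:.1f}%, intermediate ~ {intermediate:.1f}%")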
The day, the operator, and the replicates did not influence assay variability for the 20 individual samples tested against S. Typhimurium and for 17 out of 20 individual samples tested against S. Enteritidis O-antigens.
The evaluation of the Lower Limit of Precision (LLP) involved two steps: first, the raw precision data above 600 EU/mL were divided by 4000 and all the other data were divided by 100; second, the minimum of these rescaled data was taken as the LLP, which was 15 EU/mL for both assays.
Lastly, the lower limit of quantification (LLoQ) was calculated as the most conservative value among the lower limit of standard curve accuracy, the lower limit of precision, and the lower limit of linearity, all obtained by multiplying the per-well limit by the lowest sample dilution tested (100), as reported in Table 3. The methodology used for the assay does not have an upper limit of quantification: for readings above the upper limit of standard curve accuracy, a higher dilution of the sample (40-fold higher than the previous one) can be tested to obtain readings falling within the limits of standard curve accuracy and with appropriate precision.
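The LLP and LLoQ arithmetic can be illustrated as below. Note that the assignment of results above 600 EU/mL to the 1:4000 dilution is our reading of the rescaling rule, and all input values are hypothetical:

import numpy as np

# Hypothetical dilution-corrected EU/mL results from the precision study.
results = np.array([15000.0, 2400.0, 820.0, 450.0, 95.0, 15.0])

# Rescale to per-well values: results above 600 EU/mL are assumed to come
# from the 1:4000 dilution, the rest from the 1:100 dilution.
per_well = np.where(results > 600, results / 4000.0, results / 100.0)
llp = per_well.min() * 100                    # x lowest dilution tested

# LLoQ: the most conservative (highest) of the three per-well limits,
# multiplied by the lowest sample dilution tested (100).
llsca_pw, lll_pw = 0.043, 0.035               # hypothetical per-well limits
lloq = max(llsca_pw, lll_pw, per_well.min()) * 100
print(f"LLP = {llp:.0f} EU/mL, LLoQ = {lloq:.0f} EU/mL")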
Specificity
The specificity of the assay was determined for both S. Typhimurium and S. Enteritidis OAg. This parameter represents the ability of the analytical procedure to determine solely the concentration of the analyte that it intends to measure. To evaluate the homologous specificity, an initial set-up experiment was performed by pre-incubating two samples with high anti-S. Typhimurium and anti-S. Enteritidis OAg IgG titers, respectively, with an equal volume of homologous S. Typhimurium or S. Enteritidis purified OAg at final concentrations of 250, 50, 20, 5, and 1 µg/mL prior to being tested by ELISA. The goal was to determine the lowest concentration of OAg able to inhibit ≥80% of the EU/mL in comparison to the non-inhibited sample. The lowest Salmonella OAg concentration able to cause an inhibition of ≥80% of the detected OAg IgG concentration was 20 µg/mL (data not shown). Therefore, in a subsequent experiment, 20 µg/mL of homologous OAg or heterologous competitor (OAg from Shigella flexneri 3a) were added to the samples prior to testing them in a standard assay. The percentage of inhibition was calculated in comparison with the undepleted control sera (Figure 5). For both the S. Typhimurium and S. Enteritidis assays, the percentage of inhibition detected was confirmed to be ≥80% with the homologous OAg and <20% with the heterologous competitor; therefore, both assays are considered specific.
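The specificity readout reduces to a simple percent-inhibition calculation; the EU/mL values here are placeholders:

def percent_inhibition(eu_control, eu_competed):
    """Percent reduction in measured IgG after pre-incubation with a
    competitor OAg, relative to the undepleted control serum."""
    return 100.0 * (1.0 - eu_competed / eu_control)

# Hypothetical results for one high-responder serum:
print(percent_inhibition(620.0, 45.0))    # homologous OAg: expect >= 80%
print(percent_inhibition(620.0, 580.0))   # heterologous OAg: expect < 20%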
Discussion and Conclusions
To perform a fully quantitative assessment of antigen-specific antibodies in samples from vaccine clinical studies or in sera of subjects after natural exposure to a pathogen, it is key to have a simple and highly sensitive assay. The ELISA methodology presented here has been extensively used at a preclinical level to develop a vaccine to prevent iNTS disease [10] and to determine the level of antigen-specific human antibodies in clinical trials of a vaccine against Shigella in endemic [23] and non-endemic settings [24][25][26]. In this work, we set up, optimized, standardized, and characterized the ELISA method to determine the level of specific IgG elicited by S. Typhimurium and S. Enteritidis OAgs in human sera. The method proved to be precise, accurate, repeatable, linear, and specific for both S. Typhimurium and S. Enteritidis OAgs. The assay has a broad standard accuracy range, is linear and specific for the OAg coated on the plate, and has a dynamic range going from a very low limit of quantification to essentially no upper limit of quantification. The assays for both S. Typhimurium and S. Enteritidis were demonstrated to be precise, with low variability on 20 individual samples, with neither the operator, nor the day of analysis, nor the replicates contributing significantly to the overall variability. Furthermore, for the precision assessment, mainly samples with medium to high levels of antigen-specific IgG induced upon natural exposure were tested; thus, we cannot exclude that a different and broader selection of samples, including ones with low titers, would yield an even lower limit of precision. This latter aspect will be reassessed during the formal validation of the assay, when samples from vaccine trials will also be available.
The ELISA assay described here has a continuous readout, which has several advantages compared with the titer-based approach. A critical advantage is the ability to evaluate and compare the immune response induced by vaccination among different individuals. Indeed, one of the most widely used methods to evaluate vaccine response is the ELISA, with seroconversion, expressed as a 4-fold increase compared to baseline, used as a proxy for it [27]. In titer-based assays, a small change of even 0.1 OD can often halve or double the titer, and this could lead to misinterpretation of the immunological results. This is not the case for fully quantitative assays with continuous readout, such as the ELISA methodology presented in this work.
One of the limitations of a traditional ELISA, like the assay presented here, could be the fact that it is not multiplexed. However, this type of ELISA assay is usually low cost and can be easily transferred to and used in other laboratories, including resource-limited settings. Furthermore, traditional ELISA can be easily automated to increase throughput, reducing the time to produce results compared with multiplex assays. It is also favorable in terms of costs, considering the low cost of consumables in classical ELISA versus the costs of multiplex-based assays, which require more expensive reagents, equipment, and infrastructure, unless a high number of analytes (i.e., >5) is simultaneously assessed.
The assay presented here is versatile and, with minimal adjustments, can be easily adapted to determine antibody levels from different sample sources (feces, plasma, and saliva) or against different types of antigens (protein or polysaccharide), and to detect other antigen-specific immunoglobulin classes (IgM or IgA) or IgG subclasses. These other possibilities would still maintain the same definition of EU/mL, which represents the most important criterion of standardization in our assay. The presence of a standard serum as an inter-laboratory calibrator guarantees the robustness of the assay, and the simplicity and affordability of the assay make it suitable for other laboratories interested in analyzing large sero-epidemiological studies or vaccine clinical trials. Indeed, the assay has already been successfully transferred to different laboratories, including sites in African endemic regions, demonstrating the interlaboratory transferability of the results presented in this work. Unlike some other platforms, an ELISA format is easily transferred and accessible to resource-limited settings, where iNTS disease is endemic. We have also recently presented a high-throughput method to evaluate the ability of serum samples to kill S. Enteritidis and S. Typhimurium by using an iNTS standard serum [28]. Correlating the level of antibodies and antibody functionality in serum samples with clearance of the disease might allow us to define a protective threshold or correlate of protection, which could speed up vaccine development.
To conclude, the assay presented here can accurately quantify the antibodies against S. Typhimurium and S. Enteritidis OAgs in human serum samples and can be applied to both large sero-epidemiological studies and vaccine clinical trials. The use of the same assay in both studies will allow us to compare the data and evaluate potential differences in immune response between vaccinated subjects or subjects recovering from the disease. Furthermore, this analysis could be applied to select appropriate sites and vaccine trial strategies, to advance the development of vaccines against iNTS disease.
|
v3-fos-license
|
2022-09-30T15:27:21.040Z
|
2022-09-27T00:00:00.000
|
252602467
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jecs.pl/index.php/jecs/article/download/1450/1264",
"pdf_hash": "87bf42db26d74192b250f19df5ee0cd6cfc44c47",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43226",
"s2fieldsofstudy": [
"Education"
],
"sha1": "c03fd4d37116f140e4c479f878850a295b7cf0aa",
"year": 2022
}
|
pes2o/s2orc
|
Developing General and Subject Competences of Primary School Pupils in the Context of Integrated Education: the Case of one Lithuanian School
Aim. The aim of the research is to provide a scientific justification for the integrated development of general and subject-specific competences of primary school pupils. Methods. The exploratory qualitative case study was conducted in a private school in Lithuania. The integrated activities covered the content of Lithuanian language and science education, as well as general competences such as communication and digital competences. The activity was implemented in grade 4 with 12 pupils (8 boys and 4 girls). A semi-structured interview with the class teacher was also conducted. Results and conclusion. The data from the empirical study were analysed according to the following thematic clusters: interest in the activity, group work, searching for information in the encyclopaedia and in electronic sources, working with a robot when integrating digital skills and the content of science education, and descriptions of animals as a summarising and consolidating activity. The study found that if the educational process is well thought out and the pupils are interested, they can work independently and support each other in explaining the content, while the teacher becomes an observer and a facilitator and can concentrate on the pupils' individual activities during this process. Well-designed tasks with the robot develop not only digital literacy skills but also reinforce subject content, and the detailed description of the activities shows which tasks support students' independent learning.
Introduction
In today's world, the education of pupils requires a great deal of effort if it is to be innovative, high-quality, and responsible. The aim is for pupils to be lifelong learners, to be able to manage and understand large amounts of data and information, and to be able to solve a wide range of problems. Students are expected to be creative thinkers. Pupils need skills and abilities that allow them to navigate and adapt in an ever-changing environment (Drake & Reid, 2018). Skills such as collaboration, creativity, problem-solving, and critical thinking make up the generic competences that are both important in education and are acquiring a new meaning and relevance with the rise of technological advances (Buchs, Voogt et al., 2013). The Council of the European Union's Recommendation on key competences for lifelong learning (2018) notes that quality education and a broad approach to competence development lead to better progress in key competences. The role of general competences is more profound than that of a specific subject for each individual's development, employability, social inclusion, active citizenship, living in harmony with the environment. One of the strategic objectives for changing the content of education in Lithuania, as formulated in the strategic document Bendrųjų programų atnaujinimo gairės (2019) (Guidelines for the Renewal of the General Curricula, [translated from Lithuanian by the authors]), is to develop General Curricula oriented towards the development of general competences. The development of general competences is associated with active learning, where students analyse situations, pose and critically evaluate problems, apply knowledge and skills in practice, learn to make decisions, and act responsibly and creatively (Bendrųjų programų atnaujinimo gairės, 2019; Lietuvos pažangos strategija "Lietuva 2030", 2012) (Lithuanian Progress Strategy "Lithuania 2030", [translated from Lithuanian by the authors], 2012), as only in this way can students develop sustainable knowledge. However, it is important to find the right ways to make targeted and purposeful changes to educational practices in school. Teachers know more about how to teach mathematics, or any other subject, than how to develop students' competences (Drake & Reid, 2018). A collaborative teacher attitude helps to create an integrated learning environment, but this is hampered by the lack of flexibility in the school's organisational structure and culture (Rousseau et al., 2015;Rey, 2016).
General Principles that Enable the Development of Integrated Education that Builds Students' Subject and General Competences
Creating authentic content in each classroom, taking into account the needs and aptitudes of the pupils, is based on the identity of the primary teacher, where the teacher is understood as an organiser of the educational process, a planner of activities, problem solver, monitor of pupils' achievements, and an assessor of pupils' performance. In this perspective, the classroom education process includes the following interrelated and complementary elements: integral educational content (including subject content and general competences) and strategies to support its understanding/mastery, as well as means of assessing pupils' achievements; means of providing individual support to pupils; the establishment of a system of learning situations; the use of a range of learning supports/interviews and the mobilisation of the necessary resources. Karin Bacon (2018) stresses that in order to engage children in the educational process, the educational content must be composed of the "real" world; a real world that is not separated into disciplines or subjects. Researchers (Bacon, 2018;Chevalier & Deschamps, 2019;Lasnier, 2001;Lenoir, 2009;Roiné, 2014;Reverdy, 2019;Sakho, 2017), advocates of integrated education, stress that many primary teachers are able to create problem situations and select appropriate content, but that the difficulties lie in developing a system of assistance/individual support, certain learning methods, special environments or other means of providing students with specialised teaching, as it is necessary to keep in mind not only students' overall development, but also their academic achievement (Chauvière, 2018). In other words, teachers still tend to think about what children should be taught, but there is a lack of attention to educational strategies, advice on how to assimilate the available material, and the ways in which it can be learned (Drake & Reid, 2018).
General education is important insofar as it fosters each person's common human, holistic values, and is particularly important for each person's socio-emotional development and digital literacy. Educators should design educational situations in such a way as to foster, first and foremost, the common human values of holistic education: helping and empathy, compassion, understanding, communication and cooperation, support, and the development of socio-emotional competences. However, the development of social and emotional competencies is still receiving insufficient attention among primary teachers during their university training (Beaumont & Garcia, 2020;Brackett et al., 2019). In addition, the content is also very different, so the knowledge and expression of this competence often depend solely on the erudition of the teacher who teaches the subject (Beaumont & Garcia, 2020). In Lithuania, the development of children's social and emotional competences is identified as one of the priorities of the updated Pradinio ir pagrindinio ugdymo bendrųjų programų projektai (Nacionalinė švietimo agentūra, 2021) (Draft framework programmes for primary and basic education, [translated from Lithuanian by the authors]), which is integrated in all subjects. However, there is a lack of research in this area. Moreover, the aim is to develop digital literacy skills, so that students are not only active users of information communication technologies (ICT) but also creators, which makes skills such as encoding, searching, adapting information, and learning algorithmic and repetitive situations of particular importance (Lawrence & Tar, 2018;Mohr & Welker, 2017).
The school should take into account the psychology of the learner's age, the principles of social life, aspects of citizenship, and cultural contexts when designing educational situations. The cultural foundation is a prerequisite for an integrated educational context (Drake & Reid, 2018). Human rights and the duties and freedoms associated with them are the basis for the development of all educational content. The Council of Europe (2017) and Nadia Rousseau et al. (2015) emphasise the accessibility, affordability, acceptability, and adaptability of education (according to the individual student's needs and capabilities). However, schools still do not take enough account of a learner's cognitive or behavioural problems in any subject, nor do they pay enough attention to the child's family, social, and cultural situation (Ebersold & Detraux, 2013;Roiné, 2014).
By analysing educational practices and following general principles, teachers take into account the very specific context and situation of their classroom and choose educational methods that will improve the achievement and motivation of their students. By combining the pupils' cognitive, social, and cultural contexts, the teacher can see a picture of the content of integrated education. Only such an analysis can reveal a real educational situation, a real learning outcome involving a "deep" integrative relationship between learning processes, and an integrated set of learning skills. The teacher is the creator of the educational content, which is why the competences they possess, which are linked to their professional development and the direction they choose to take in the educational environment, are particularly important (Gouvernement du Québec Ministère de l'Éducation, 2020). As Valérie Benoit (2016), Céline Buchs (2017), Andy Hargreaves & Michael Fulan (2019), and Sylvie Marcotte (2020) argue, today our school communities are in essential need of continuous learning and collaborative processes to ensure that each member is able to perform their job well, as "groups, teams, and communities are much more capable than individuals" (Hargreaves & Fulan, 2019, p. 21). Research shows the value of collegial learning. Simon Burgess et al. (2019) found that when one teacher observes the work of another teacher, the students of both teachers benefit through improved performance. Collegial learning is also associated with higher teacher job satisfaction and self-efficacy (OECD, 2020). Thus, peer learning can be identified as one of the most effective factors for teachers' professional development (Campbell, 2019). In-service learning is also needed due to changes in educational practices, such as the integrated development of digital competences from pre-school age (Chauret et al., 2021). The situation and context of primary education in Lithuania in relation to integrated education. For many years, Lithuania has been trying to base practice on the idea of integrated education (Jakavonytė-Staškuvienė, 2017). This is also reflected in the foundational strategic document of Lithuanian education policy, Geros mokyklos koncepcija (2015) (The Good School concept, [translated from Lithuanian by the authors]), which encourages the development of creative school communities. In addition, the concept emphasises the personal qualities to be developed, such as openness, communicativeness, flexibility, adaptability, the creation of an identity, a value "backbone", and a personal meaning in life. This makes the development of the personality's value orientation, and the social, civic, and moral maturation of the person, particularly relevant. All this can be developed through an integrated approach. 2021-2022 ir 2022-2023 mokslo metų pradinio, pagrindinio ir vidurinio ugdymo programų bendrieji ugdymo planai (Nacionalinė Švietimo Agentūra, 2021) (General curricula for primary, basic, and secondary education for the academic years 2021-2022 and 2022-2023, [translated from Lithuanian by the authors]) are a positive development in the Lithuanian education policy in terms of integrated education, as the school decides on the forms of organisation of the educational process, the measures to help pupils to achieve a higher level of learning achievement and progress, the provision of educational assistance, the preparation and implementation of project work, etc.
The school decides on changes of the form of the educational process or the distribution of the learning period, for example, to intensify the educational process, to implement the content of the educational process by means of project activities, to teach one or two subjects a day, and in less conventional ways, such as through integrated content. This means that the initial conditions for integrated education are in place at the policy level, while further decisions are left to the school community. The document Bendrųjų programų atnaujinimo gairės (2019) also refers to integrality, although the concept is explained too narrowly, covering only academic knowledge, i.e., "strengthening the interconnectedness of content across different subjects in order to support the development of a holistic worldview in the learner" (p. 14, [translated from Lithuanian by the authors]). In order to develop students' general competences, it is important to ensure multidimensional links between the different curricular areas, subjects and the real world. Interdisciplinary integration helps the pupil to develop a comprehensive view of the phenomena under study. Interdisciplinary integration reveals the broader context of the subject and helps to address issues that often cross subject boundaries. As the organisation of educational practice in Lithuania according to the updated curriculum content is planned to start in 2023, now is the right time to look at what kind of environments would allow this to happen in practice and what kind of skills teachers have to create such educational situations and contexts, which is why an exploratory qualitative empirical study was chosen. The aim of the exploratory study was to investigate integrated activities in primary education in practice when integrating the content of Lithuanian language and science and the communicative and general digital competences.
Empirical Study Strategy, School and Classroom Context, Case Study Model
Stage 1: planning the study of developing subject and general competences. The first step was to decide on the appropriate strategies and content of the activities (which subject and generic competences would be developed in the activities). As this is a revision lesson (the tenses and purposeful use of verbs in a sentence in Lithuanian, and the identification of vertebrates and invertebrates in science, where and how they live and what they eat), it was decided to combine these subjects in the design of the activities. Among the generic competences, two were chosen: the targeted development of digital competence (Chauret, 2018;Chauret et al., 2021) through the use of a robot, and the development of communication competence, where students are encouraged to collaborate in a targeted way. Pupils were encouraged to effectively search for information and build on it by solving a crossword puzzle and programming the robot. The theme of the activity, the questions and the problems were linked to problem-based learning and integrated education in Lithuanian language and science (Mohr & Welker, 2017). The activities were carried out in targeted working groups formed by the teacher, with students of different abilities who could negotiate and collaborate in the tasks. In addition, tablets were chosen and individually owned by all students for targeted information and concept searches during the tasks. In this way, the focus was on the development of communication and general digital competences.
In preparation for this lesson, the teacher reflected on the integrated tasks of Lithuanian language and science education. At the beginning of the lesson the teacher reminded the children some concepts of both Lithuanian language and science. The way the tasks were presented to the pupils can be seen in Figure 1:
Figure 1. Explanations of key science and Lithuanian language concepts used in the activity. Source: own research.
Figure 1 shows that the teacher, in preparation for the activity, reflects in advance on how to recall the concepts from both subjects that will be used in the lesson. It is important that children are provided with visual material (Doubet & Hockett, 2017;Evagorou et al., 2015;Hockett, 2018), and that content is presented in diagrams, which aids retention. The distribution of animals into vertebrates and invertebrates will be done in the lesson by every student. Children are also given a diagram of verb parsing as a reminder. In the lesson, the children will work in groups of three to complete the tasks of writing down the correct verbs and analysing the given verbs according to the scheme given in Figure 1. In addition, they will have to write down the correct names of the animals that are described according to certain clues. The children will be able to work in turns, searching for the information they need in an encyclopaedia and on a tablet (by typing in key words on certain websites). Pupils are encouraged to help each other (Benoit, 2016;Buchs, 2017;Hargreaves & Fulan, 2019). They are also informed that they will work in groups for 30 minutes.
In addition, the teacher has thought about how the robot programming activity can be targeted and linked to the development of language and science concepts. The carpet that has been prepared, with the science concepts that the students will use to program the robot, can be seen in Figure 2. It is important to stress that while the children are working in groups to discuss and find answers about vertebrates and invertebrates, another activity is taking place in parallel: one child at a time, they come to the carpet and program the robot to travel according to the classification, i.e. so the designated animal is classified as vertebrate or invertebrate. This provides a targeted and integral learning of the skills necessary for digital competence. The robot whose movements are programmed by the children can be seen in Figure 3. To consolidate what the children have done in the activity, they had to write a description of one animal at home, answering the questions: "What kind of animal is it?", "What does it eat?", "Where does it live?". Importantly, the children were assigned different animals, and to summarise the material covered in the lesson, the children had to use the verbs correctly in their creative work, writing them down correctly, and to refer to specific information about the animal, both in terms of describing it and in terms of classifying it as a vertebrate or an invertebrate. Stage 2: classroom research (case study and semi-structured interview with the teacher). School context. The study was conducted in a private school in a major Lithuanian city. This school not only implements the national primary curriculum, but also integrates engineering education, STEAM, and the targeted use of information technology, while also developing students' digital competences. IT technologies such as tablets, Photon robots, Blue-bot, iMo cubes are used in lessons. The school does not only work on a classroom basis, but also uses integrated education - an integrated day, as well as cross-curricular integration (when dealing with any topic or problem, it tries to develop a wide range of students' subject-specific and general competences). The distinctive feature of the integrated day is that the teacher chooses topics and issues from the pupils' world and real-life realities, thus creating a close link with classroom life and tailoring the content to the children's interests. The integrated day is a natural day for pupils. Learning time is allocated according to the needs of the pupils, and motivation is very important, as the subject areas are directly related to the pupils' daily lives. This way of teaching is more challenging for the teacher as it requires very careful preparation and planning of activities. Classes have up to 18 pupils, which is optimal for high quality learning, and more attention is paid to the individual learning needs of pupils and to differentiation and individualisation according to the pupil's abilities.
Teacher competence and context of work. The Grade 4 teacher has a degree in primary education from a Lithuanian university. She has been working at the school for 2 years and has a total of 3 years of teaching experience. She has a distinctive characteristic of actively and purposefully using information technology in various subjects, linking technology to the subject matter of the activity, and creating interactive tasks for the pupils herself. This teacher was chosen for the study because she is able to find and integrate the content of different subjects, to select activities for pupils according to their abilities during integrated lessons, and knows how to help children to remember the necessary information faster. Moreover, during integrated lessons the teacher makes pupils' learning more meaningful, enhances, extends, and links pupils' knowledge and skills, arouses pupils' motivation for learning, develops pupils' ability to communicate and collaborate, distributes the learning time in a quality manner, and encourages pupils to work in teams. There are 16 pupils in this Year 4 class. The class includes pupils with a wide range of abilities and knowledge. One pupil has special educational needs. There are 6 girls and 10 boys.
The case study process. Two classrooms (3 and 4) were selected in the context of the case study (Baškarada, 2014;Thomas, 2021;Yin, 2018); a convenience sample was used. Due to the limited scope of the paper, we will only describe in detail the activities of students in grade 4. In the lesson, students worked in mixed ability and gender groups. All activities were videotaped and the researchers analysed the footage, recording the students' work in an observation protocol. The groups were made up of pupils of different abilities and academic achievements. Each group was deliberately composed by the teacher of one student with higher academic achievement, one student with intermediate academic achievement, and one student with learning difficulties. The study was conducted on 11 November 2021. The participants were 12 pupils (aged 10-11). The total duration of the integrated activity was 1 hour 30 minutes. The study was conducted in accordance with the principles of research ethics. Parents of the children gave individual consent for the activities to be filmed and analysed under confidentiality conditions.
To further deepen the data obtained and its analysis, a semi-structured interview (Baškarada, 2014; Dane, 2011; Yin, 2018) was conducted on 27 December 2021 with the classroom teacher, who provided further contextual details and shared her insights on the integrated activities carried out. The teacher was asked to articulate the strengths of the activity, the most successful episodes, and to comment on what, how, and why it would be useful to improve in the future. This research process allows for a detailed qualitative analysis of the activities. Evidence is coded according to its meaning, attributing images or interview statements to one aspect or another. The logical progression of the data analysis of the study on the integrated development of general and subject competences of primary school students is presented in Figure 4. The description of the data was done through the presentation of the interest activity; the students' group work, the search for information in different sources (encyclopaedia and websites using personal tablets), the work with the robot and the features of the animal description. The analysis of the themes was carried out through several layers of analysis (subthemes), illustrated by data examples and complemented by data from a semi-structured interview with the teacher (Baškarada, 2014;Bioy, et al., 2021;Creswell, 2007;Dane, 2011). Finally, the findings of the exploratory study are summarised and conclusions are drawn by demonstrating elements of integrated education in practice.
Analysis of Empirical Data
Before the classroom activity, the teacher had prepared activity sheets for each student, a book and tablets were placed in a place accessible to all children, and a carpet suitable for robot activities was laid in another part of the classroom. A 66 minutes and 48 second long video of the lesson was recorded. 12 pupils took part in the activity, 4 girls and 8 boys.
Interest in the activity.
The integrated activity started with an interest task, linked to a targeted grouping of pupils.
At the beginning of the lesson, each pupil found a different verb on their desks. The interest activity was aimed at developing students' higher-order thinking skills. The way it was carried out can be seen from the data in Figure 5. Importantly, in this lesson, the interest activity was directly related to the content of the Lithuanian language lesson which was being reinforced - verbs. Through reasoning, the children looked for a deeper meaning of the verb they were assigned, thus thinking about how words are related.
Group work
Once the pupils are in their groups, they are informed that they may discuss the tasks among themselves and search for information on the Internet (by typing certain keywords) as well as in an encyclopaedia about animals. Working in groups, the children have to solve a crossword puzzle and then write the tense, person, and number next to certain verbs. The pupils are also told the time limit for the group work: 30 minutes.
To analyse the group work situation, we asked the teacher in a semi-structured interview how she thought the children worked in groups. The teacher emphasised the children's friendliness: The children are quite friendly and able to work in a team: they can help each other, find things for each other, explain to each other, and usually there are no difficulties or questions about group work because we often work in groups. <...> There were some groups that were more productive; others were more difficult because of their composition.
Importantly, the teacher herself noticed that there were children who worked better in groups and those who tended to do the work individually. The teacher took into account the following aspects of grouping: I look at it from all sides: that there is at least one stronger or faster learner in the group who can maintain the pace of the group. There is also a child with a weaker ability, i.e. a child who is struggling in a subject. Others are relational, if I know that they get along, I don't put them in the same group because it will be harder for the group to work, the children will get distracted.
It is important that there are children of different abilities in the group. It is also possible to give clear responsibilities to each member of the group (Burke, 2011). In addition, it is important to take into account personal qualities, as there are cases where children cannot work constructively in one group because they are passive or prone to conflict (Burke, 2011). The principles of group work applied in this lesson are in line with the European Commission's recommendations on education included in European ideas for better learning: the governance of school education systems (2018). This way of learning makes children more interested in the educational process and more likely to listen to each other's opinions, to help each other, and to reach agreement, all of which are skills that are very much needed in life.
Search for information in encyclopaedias and electronic sources. This element of the lesson is significant because the pupils searched for relevant information on invertebrates and vertebrates. They did this using two different sources: a book (an encyclopaedia) and tablets. Figure 6 shows how the pupils performed in this activity. Regarding the use of tablets and a book in the classroom, the teacher made the following observation: Most of the time we use tablets to find information. Mostly it is the pupils themselves who search for information. Sources are found on the Internet. This lesson was a less frequent kind of activity for us. We are less likely to use a real book. <...> On the other hand, they had to try to plan their time in their team because they could do several different tasks at the same time. Meaning, they could observe and go to the book when it was free. <...> Maybe we use books less because it is important that they contain the right information. In this case, the topic was about animals, so books are more likely to be available at school.
We would emphasise the use of both alternative tools (tablet and book), especially if the children's science book contains the necessary information (Abtokhi et al., 2018;Le Grange, 2010).
Working with a robot
This activity purposefully integrates understanding of natural science content based on conceptual knowledge, i.e. classifying animals as vertebrates or invertebrates, with digital literacy skills, developed through the way the activity is carried out (by guiding the programmed robot to the correct concept) (Bobko et al., 2018; Pedro et al., 2019; Piedade et al., 2020; Slangen, 2016). The aspects of the activity which we have managed to capture in the video can be seen in Figure 7 (source: own research).
During the interview, the teacher revealed that she felt the activity with the robot was a complete success. <...> For the sake of time, two robots could be used instead of one (so that two children could do the tasks at once) to make the activity go faster. I imagined that it would go faster, but it was slower.
The fact that this tool is found attractive for teaching programming not only by teachers but also by children has been investigated by researchers in other countries (Bobko et al., 2018; Smyrnova-Trybulska et al., 2017; Smyrnova-Trybulska et al., 2020). Researchers argue that robots are suitable for the implementation of educational content. This is confirmed by our study, as the fourth-grade students not only programmed a robot but also reinforced the use of the concepts of vertebrates and invertebrates by assigning specific animals to these categories.
Figure 7. Example of students working individually with a robot
Analysis of the animal description data
At the beginning of the lesson, when introducing all the activities, the teacher mentioned that at the end of the activity the students would have to write a short description of the animal answering three questions: what does the animal look like? What does it eat? Where does it live? However, there was not enough time to complete this task, so homework was assigned as a summary of what had happened in the class.
The lesson helped the fourth graders to reinforce their knowledge about vertebrates and invertebrates. After completing the task with the Photon robot (programming the robot to arrive at the correct answer), the students were given the name of an animal to describe. Each student had the opportunity to describe a different animal (stork, turtle, green toad, fire salamander, bat, hummingbird, bee). The teacher indicated the criteria for describing an animal, which were content-oriented: the animal's appearance, what it eats, and where it lives. She did not focus on the structure of the text because the pupils had been writing descriptions since the second grade.
After analysing the students' descriptions, it can be said that they understood the description task as simply answering questions, and that their descriptions were based only on visual details. The texts analysed lacked an overall impression, which is an important structural part of a description. Only one description contained a general impression: "Hummingbirds are very petite and small birds". This general impression was followed by a precise description of the appearance: "They are only 5.51 cm long...". The activities in the lesson helped the children to understand and identify different animals: to describe and imagine what they look like, where they live, and what they eat. This means that the children really remembered the lesson well. This idea is close to the goal of description identified by Vaiva Schoroškienė (2010): "The listener or reader should imagine as accurately as possible what the author has seen or imagined" (p. 25).
Reading the descriptions allows one to understand and clearly imagine the animals being described. The students' descriptions are similar to business texts, with a strong emphasis on accuracy, as they give details of the objects in a coherent manner: "The fire salamander is 30 cm long and black and yellow", "The toad's back is marbled with green spots and covered with warts", "The stork is white, black, and has a red beak and legs".
In our opinion, these works could be initial descriptions that students could refine to higher quality in other lessons.
Difficulties encountered by pupils during the integrated activities. From the questions that the students raised, we can say that all the problem areas were related to the lack of knowledge of the concepts. Here are some specific examples from the students' activities (Figure 8; source: own research).
Researcher Derek P. Hurrell (2021) argues that educators often simply use subject concepts without looking deeper into their meaning, which causes problems in both procedural and conceptual knowledge formation. Indeed, this idea is supported by the findings of our exploratory study, because if a pupil does not understand a concept describing an animal, he or she is unable to imagine it or to classify it in the right category. Practical tasks are useful when they allow pupils to develop their knowledge by using certain concepts about the object of their learning (Svensson & Holmqvist, 2021). We think that the teacher's handouts explaining which animals are vertebrates and which are invertebrates were also useful for understanding concepts. In addition, it became clear that reading skills are very important for concept analysis, as they are directly related to the understanding of the information: if a child misreads a word, he or she does not immediately understand its meaning in a particular context.
Conclusions
When education is organised on the basis of an integrated curriculum, covering both subject and general competences, activities can be more flexible and more acceptable to students. It is important that the teacher is not only able to develop the content, but also has the enabling conditions, such as ICT tools (e.g. robots or tablets), encyclopaedias, and reference books. Learning with modern ICT tools enables students to perform a variety of tasks flexibly and to learn from experience by looking, feeling, measuring, and comparing different objects in the environment, as ICT creates a richer learning environment. Learning and teaching become dynamic and flexible, providing a broader range of experiences and widening students' perspectives on different phenomena. ICT contributes to enhancing students' motivation to learn. The use of ICT in the educational process, and learning to use it in a targeted way, develops pupils' information and communication skills as well as general competences, such as group work, independent learning, and higher-level thinking skills (problem solving, information seeking, and creative work). Integrated learning and teaching change students' attitudes towards science and enhance their motivation to learn.
After analysing the exploratory study, we can say that if the educational process is well thought out and the pupils are interested, they can work independently and support each other in explaining the content, while the teacher becomes an observer and facilitator who can concentrate on the pupils' individual activities during this process. Well-designed tasks with the robot develop not only digital literacy skills but also reinforce subject content related to the use of concepts. In addition, the information students gather can be the basis for high-quality creative work (e.g. describing an animal). For the development of integrated content, it is important that the teacher is both knowledgeable in subject content and skilful in general competences, knows the principles of group work, is good at managing and continuously developing ICT skills, and is able to reflect on his or her work.
Divergent maturational patterns of the infant bacterial and fungal gut microbiome in the first year of life are associated with inter-kingdom community dynamics and infant nutrition
Background: The gut microbiome undergoes primary ecological succession over the course of early life before achieving ecosystem stability around 3 years of age. These maturational patterns have been well-characterized for bacteria, but limited descriptions exist for other microbiota members, such as fungi. Further, our current understanding of the prevalence of different patterns of bacterial and fungal microbiome maturation and how inter-kingdom dynamics influence early-life microbiome establishment is limited. Results: We examined individual shifts in bacterial and fungal alpha diversity from 3 to 12 months of age in 100 infants from the CHILD Cohort Study. We identified divergent patterns of gut bacterial or fungal microbiome maturation in over 40% of infants, which were characterized by differences in community composition, inter-kingdom dynamics, and microbe-derived metabolites in urine, suggestive of alterations in the timing of ecosystem transitions. Known microbiome-modifying factors, such as formula feeding and delivery by C-section, were associated with atypical bacterial, but not fungal, microbiome maturation patterns. Instead, fungal microbiome maturation was influenced by prenatal exposure to artificially sweetened beverages and the bacterial microbiome, emphasizing the importance of inter-kingdom dynamics in early-life colonization patterns. Conclusions: These findings highlight the ecological and environmental factors underlying atypical patterns of microbiome maturation in infants, and the need to incorporate multi-kingdom and individual-level perspectives in microbiome research to improve our understanding of gut microbiome maturation patterns in early life and how they relate to host health. (The online version contains supplementary material available at 10.1186/s40168-023-01735-3.)
Background
In early life, the gut microbiome undergoes successional shifts in composition leading to the establishment of stable microbial communities around 3 years of age [1]. This begins as a primary succession event where pioneer microbes colonize the sterile gut upon birth and subsequently increase in taxonomic and/or functional diversity over time, depending on the microbial kingdom in question [2][3][4]. For bacteria, these successional events are shaped by host-driven (intestinal pH, oxygen pressure, glycan expression in breastmilk and on the gut mucosa (FUT2 secretor status), etc.) and environmental factors, such as mode of birth, breastfeeding, introduction of solid foods, and antibiotic use [5][6][7][8]. Meanwhile, few characterizations of the role of these factors in gut fungal succession have been performed, but current research suggests the mycobiome may be more strongly influenced by dietary and geographical factors [3,9]. Like other microbial ecosystems, the mammalian intestine is a stage for inter-kingdom interactions, in which gut fungi play important ecological roles in shaping bacterial microbiomes and vice versa [10][11][12][13][14][15]. However, our knowledge of how bacterial and fungal interactions influence successional shifts in diversity during early-life microbiome establishment remains very limited.
Deviations from typical patterns of bacterial microbiome maturation during the first year of life have been reported in association with disease states, such as type 1 diabetes, asthma, and celiac disease [17,29,30], highlighting the need to better understand dysbiotic maturational trajectories. Early life is regarded as a critical window when microbial colonization exerts potent influences on human development [31,32]. In the absence of eubiotic patterns of microbiome establishment, the developmental programming of host physiology may be altered, potentially having detrimental and lasting implications on host health [31,32]. When examined through this lens, it is critical to both characterize and be able to distinguish the continuum of typical vs. atypical gut microbiome maturation patterns in early life. Beyond disease paradigms, our understanding of the variability in patterns of gut microbial colonization across infants is further limited by a focus on group-based analyses in microbiome research. This includes exploring microbiome changes across infants based on specific factors (e.g., delivery mode, nutrition, antibiotic exposure) or between different cohorts (e.g., geographically), but typically does not include examining how these factors influence the microbiome within each individual. While this streamlines the handling of large microbiome datasets, it inherently limits our understanding of individual differences in patterns of microbiome maturation. In parallel, research efforts have primarily focused on the bacterial microbiome, despite the co-existence of several other microbial kingdoms contributing to the makeup of the gut microbiome [33]. Together, the shortage of individual-level and multi-kingdom perspectives in microbiome research to date has limited our understanding of eubiotic vs. dysbiotic patterns of gut microbiome maturation in early life.
In this work, we begin to address this knowledge gap by evaluating both bacterial and fungal gut microbiome maturation patterns in 100 infants from the CHILD Cohort Study [34] over the first year of life using individual-level and multi-kingdom perspectives. Using a similar approach for mycobiome maturation, we have previously shown increasing vs. decreasing fungal richness over the first year of life is differentially associated with early childhood body mass index (BMI) z-scores in this sub-cohort, with this relationship being mediated by the influence of antibiotics exposure, maternal BMI and diet, and bacterial beta diversity [35]. Here, we determined that divergent patterns of bacterial and fungal gut microbiome maturation are more common in term-born infants than previously considered, occurring in over 40% of infants in this sub-cohort, which are characterized by differences in ecological and metabolic properties that may reflect altered rates of microbiome maturation. These maturational trajectories were differentially associated with prenatal, environmental, genetic, and ecological factors, highlighting the need to consider the variable influences these factors have on bacterial vs. fungal gut microbiome maturation in early life.
Study design & population
We investigated the early-life microbiome in 100 infants from the CHILD Cohort Study, a prospective population-based birth cohort recruiting women with healthy singleton pregnancies who delivered after 35 weeks' gestation (n = 3,264) [34]. Mother-infant dyads were recruited from 2008 to 2012 across four Canadian provinces and study sites: Vancouver (British Columbia), Edmonton (Alberta), Toronto (Ontario), and Winnipeg (Manitoba) and two adjacent rural towns, Morden and Winkler [34]. In this study, we evaluated a sub-cohort of 100 mother-infant dyads previously selected for a nested case-control study on the influence of maternal artificially sweetened beverage consumption during gestation on the infant bacterial gut microbiome [36]. Mother-infant dyads were divided equally between mothers who reported little to no artificially sweetened beverage consumption (less than one per month) or high artificially sweetened beverage consumption (one or more per day) during pregnancy. These groups were balanced for six potentially confounding factors known to influence the gut microbiome in early life: infant sex assigned at birth, delivery mode, breastfeeding status at 3 and 12 months, infant antibiotic exposure before 12 months (exposure prior to 3 months was an exclusion criterion), and maternal BMI [36]. The study was approved by the University of Calgary Conjoint Health Research Ethics Board and ethics committees at the Hospital for Sick Children and the Universities of Manitoba, Alberta, and British Columbia. Written informed consent was obtained from mothers during study enrollment and prior to data collection at each subsequent visit.
Infant, early-life & maternal factors
We considered the influence of infant, early-life, and maternal factors with known influences on the gut microbiome, while controlling for maternal diet and artificially sweetened beverage consumption during gestation given the original selection criteria of this sub-cohort [36]. Infant sex assigned at birth, gestational age, delivery mode, and prenatal, intrapartum, and early-life (0-12 months) antibiotics exposure were recorded from infant and maternal medical records. Infant feeding was reported using a standardized questionnaire at 3, 6, and 12 months [34]. This included breastfeeding status at 3 months, breastfeeding duration or age at breastfeeding cessation (months), and age at introduction of solid foods (months). Breastfeeding status at 3 months was classified as "exclusive" (human milk only), "partial" (human milk supplemented with formula milk or solid foods), or "none" (no human milk). Infant and maternal secretor status was determined from the single nucleotide polymorphism (SNP) rs601338 in the FUT2 gene and classified based on genotype: "AA" (homozygous non-secretor), "AG" (heterozygous secretor), and "GG" (homozygous secretor). Due to a limited racial distribution in this sub-cohort (n = 81 Caucasian vs. n = 10 Asian mothers), we were not powered to examine the rs1047781 SNP associated with secretor status in Asian populations, but only one mother had the missense genotype for this locus [37]. Maternal diet was evaluated using a validated food frequency questionnaire in the second or third trimester of pregnancy, with modifications to capture typical dietary patterns throughout the current pregnancy [34,38]. The Healthy Eating Index (HEI) was derived from the food frequency questionnaire based on the 2010 guidelines [39] and used as a measure of maternal diet quality. Artificially sweetened beverage consumption during pregnancy was determined based on consumption of diet sodas (1 serving = 355 mL or one can) or artificial sweetener added to tea or coffee (1 serving = 1 packet) [36]. Maternal BMI was determined using measured height and self-reported pre-pregnancy weight. Infant BMI z-scores were derived from weight and length measurements at 3 and 12 months recorded by CHILD Cohort Study staff based on the 2011 World Health Organization (WHO) standards [40]. Participant characteristics have been summarized in Table 1.
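As a minimal illustration of the genotype-based classification above, secretor status can be derived from the rs601338 genotype with a simple lookup; the helper name and error handling below are our own assumptions, not the study's code:

```python
# Illustrative sketch: map rs601338 (FUT2) genotypes to secretor status,
# following the classification described in the text.
def secretor_status(genotype: str) -> str:
    mapping = {
        "AA": "homozygous non-secretor",
        "AG": "heterozygous secretor",
        "GG": "homozygous secretor",
    }
    try:
        return mapping[genotype.upper()]
    except KeyError:
        # e.g. "GA" or missing calls would need harmonizing upstream
        raise ValueError(f"Unexpected rs601338 genotype: {genotype!r}")
```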
Sample collection & processing
Infant fecal and urine samples (n = 200 each) were collected from soiled diapers by CHILD Cohort Study staff at the 3-month home visit and 12-month clinic visit for each participant, using standardized methods across both timepoints [41]. For home visits, samples were transported on ice back to the laboratory and frozen within 8 h of collection. All samples were stored at -80 °C until further processing [41].
Fecal DNA extractions
Genomic DNA was extracted from fecal samples using the DNeasy PowerSoil Pro Kit (Qiagen, Germany) according to the manufacturer's instructions. Extraction kit negatives were processed alongside fecal samples, and extractions for all fecal samples (n = 200) were performed during the same period. DNA concentrations and quality were quantified using a NanoDrop Lite spectrophotometer (Thermo Scientific, USA). DNA was stored at -20 °C until further processing.
16S & ITS2 rRNA gene sequencing
16S and ITS2 rRNA gene sequencing of fecal DNA was performed by Microbiome Insights (Vancouver, Canada). PCR amplification of the V4 region of the bacterial 16S rRNA gene with 515F/806R primers [42] and the fungal internal transcribed spacer 2 (ITS2) region with ITS1F/ITS4 primers [43] was performed using Phusion Hot Start II DNA Polymerase (Thermo Scientific, USA) to generate ready-to-pool, dual-indexed amplicon libraries, as previously described [44]. Microbial contamination was controlled for throughout the PCR and downstream sequencing steps using mock communities with defined amounts of select bacteria or fungi and controls lacking microbial DNA. The pooled and indexed amplicon libraries were denatured, diluted, and sequenced in a single run on an Illumina MiSeq (Illumina Inc., USA) in paired-end mode.
Sequence processing was performed in R v.4.2.1 [45] using the DADA2 v.1.26.0 pipelines for 16S and ITS2 data [46]. The median read count of samples after DADA2 processing was 33,103 (27,369) for 16S and 15,740 (9,954) for ITS2 (Table S1 and Figure S1A-B). Taxonomic assignment based on amplicon sequence variants (ASVs) was performed using the following databases at 99% sequence similarity: SILVA v.132 [47] for bacteria (16S) and UNITE v.8.0 [48] for fungi (ITS2). Preprocessing of bacterial and fungal data was performed using phyloseq v.1.42.0 [49] and has been previously reported [35,36]. In brief, 954 unique bacterial ASVs were identified. Samples were filtered to remove those with less than 1,000 reads, singletons, and ASVs appearing less than 2 times in a minimum of 10% of the samples, leaving 540 bacterial ASVs for downstream analysis [36]. For fungi, 3,328 unique ASVs were identified. The same filtering criteria were applied with the following modifications: ASVs belonging to the kingdom Plantae were removed and a higher threshold of less than 2,000 reads was applied based on the lower sequencing depth of these samples (Figure S1C-D). 604 unique fungal ASVs remained in the dataset for downstream analyses [35].
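To make the filtering criteria concrete, the sketch below re-implements them outside of phyloseq. It is an illustrative approximation only, assuming a pandas DataFrame of counts with samples as rows and ASVs as columns; the function and parameter names are ours:

```python
# A hedged re-implementation of the ASV-table filtering described above.
import pandas as pd

def filter_asv_table(counts: pd.DataFrame,
                     min_reads_per_sample: int = 1000,
                     min_count: int = 2,
                     min_prevalence: float = 0.10) -> pd.DataFrame:
    """Drop shallow samples, singleton ASVs, and rare ASVs.

    An ASV is retained if it has at least `min_count` reads in at least
    `min_prevalence` of the remaining samples.
    """
    # Remove samples below the read-depth threshold (1,000 for 16S;
    # the study used 2,000 for ITS2 given its lower sequencing depth).
    counts = counts.loc[counts.sum(axis=1) >= min_reads_per_sample]

    # Remove singletons (ASVs observed exactly once across all samples).
    counts = counts.loc[:, counts.sum(axis=0) > 1]

    # Prevalence filter: >= min_count reads in >= 10% of samples.
    prevalent = (counts >= min_count).mean(axis=0) >= min_prevalence
    return counts.loc[:, prevalent]
```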
Untargeted urine metabolomics
Untargeted quantitative metabolomics of urine samples was performed at the Calgary Metabolomics Research Facility (Calgary, Canada), using a combination of direct injection mass spectrometry (MS) with a reverse-phase liquid chromatography (LC)-MS/MS assay, as described previously [36]. This assay enables the identification and quantification of up to 150 metabolites, including sugars, amino acids, acylcarnitines, biogenic amines and derivatives, uremic toxins, glycerophospholipids, and sphingolipids [50,51]. Isotope-labeled internal standards and quality control standards were used for metabolite quantification. Mass spectrometry was performed on a 4000 QTRAP® LC-MS/MS mass spectrometer (SCIEX, USA) equipped with a 1260 Infinity II LC System (Agilent Technologies, USA) using a sequential combination of LC and direct injection approaches.
Exclusion of data
Infants lacking samples at both timepoints were excluded from all downstream analyses involving bacterial (n = 2) and/or fungal (n = 9) alpha diversity trends, given two samples were required to determine trend directionality. Infants with an atypical alpha diversity trend for both bacteria and fungi (n = 2) were excluded from inter-kingdom co-occurrence network analyses due to sample size limitations. Further, samples with missing data for infant, maternal, early-life, or ecological covariates were excluded from random forest and logistic regression analyses. This included prenatal antibiotics (n = 2), intrapartum antibiotics (n = 3), and introduction of solid foods (n = 2) for both bacteria and fungi analyses; fungal alpha (Shannon) and beta (PCoA1) diversity at 3 (n = 5) and 12 (n = 5) months for the bacterial alpha diversity trend random forest; and bacterial alpha (Shannon) and beta (PCoA1) diversity at 3 (n = 1) and 12 (n = 1) months for the fungal alpha diversity random forest.
Statistical analysis
Bacterial and fungal alpha diversity were quantified at 3 and 12 months using the Shannon (diversity) and Chao1 (richness) indices with phyloseq v.1.42.0 [49] and reported as mean and standard deviation (SD). Changes in alpha diversity with age were assessed by Mann-Whitney U test, after determining the data were non-normally distributed using the Shapiro-Wilk test. To assess individual-level shifts in alpha diversity, Shannon and Chao1 metrics were classified into "increase", "decrease", or "unchanged" categories based on the change in these metrics per infant from 3 to 12 months and assessed by paired t-test. Bacterial and fungal beta diversity were evaluated using the Bray-Curtis dissimilarity index with variance-stabilizing transformation, and differences based on age and alpha diversity trend were assessed by permutational analysis of variance (PERMANOVA) using vegan v.2.6.4 [52]. Multivariate homogeneity of group dispersions (beta dispersion) was also assessed by age and alpha diversity trend using a permutation test with vegan v.2.6.4 [52].
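For reference, the two alpha diversity indices and the per-infant trend classification can be expressed compactly. The study used phyloseq in R, so the following Python sketch is only illustrative and the function names are ours:

```python
# Minimal sketch of the Shannon and (bias-corrected) Chao1 indices and
# of the per-infant 3-to-12-month trend classification described above.
import numpy as np

def shannon(counts: np.ndarray) -> float:
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts: np.ndarray) -> float:
    s_obs = int((counts > 0).sum())   # observed richness
    f1 = int((counts == 1).sum())     # singletons
    f2 = int((counts == 2).sum())     # doubletons
    # Bias-corrected Chao1 estimator
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def trend(value_3mo: float, value_12mo: float) -> str:
    """Classify one infant's shift in a diversity metric."""
    if value_12mo > value_3mo:
        return "increase"
    if value_12mo < value_3mo:
        return "decrease"
    return "unchanged"
```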
The relative abundance of the 15 most abundant bacterial and fungal genera were compared by age and alpha diversity trend. Relative abundances were center log-ratio (CLR) transformed, and zeros were handled by adding a small pseudo-count using microbiome v.1.22 [53] to control for compositionality prior to assessing statistical differences in abundance. Normality was determined using the Shapiro-Wilk test. Non-normally distributed CLR-transformed abundances were assessed by Mann-Whitney U test, and normally distributed abundances were assessed for equality of variance using the F-test, then evaluated by Student's t-test or Welch's two-sample t-test if the variance was equal or unequal, respectively. Differential abundance analysis by age and alpha diversity trend was performed at the ASV level for bacteria and fungi using DESeq2 v.1.38.3 [54]. Bacterial and fungal count datasets underwent variance-stabilizing transformation and were filtered for taxa with at least 5,000 reads summed across all samples to limit the overrepresentation of rare ASVs. The typical bacterial or fungal alpha diversity trend was set as the reference level to identify ASVs that were differentially abundant in infants with an atypical alpha diversity trend at 3 and 12 months.
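A minimal sketch of the CLR step may help; it assumes a NumPy vector of relative abundances for one sample and is an illustrative re-implementation rather than the `microbiome` package's exact code:

```python
# Centered log-ratio (CLR) transform with a small pseudo-count, as used
# above to control for compositionality before testing abundances.
import numpy as np

def clr(abundances: np.ndarray, pseudo: float = 1e-6) -> np.ndarray:
    """CLR-transform one sample's relative-abundance vector."""
    x = abundances + pseudo        # pseudo-count handles zeros
    log_x = np.log(x)
    return log_x - log_x.mean()    # subtract the log geometric mean
```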
Random forest was performed to identify factors predictive of bacterial and fungal alpha diversity trend direction using 10-fold cross-validation, 500 trees, and 1,000 permutations with the randomForest v.4.6.14 and caret v.6.0.90 packages [55,56]. Factors known to be associated with microbiome maturation in early life were included [19,20,57], alongside bacterial or fungal alpha and beta diversity measures to evaluate inter-kingdom influences. The mean decrease in Gini index (GI) was computed to identify factors most strongly associated with bacterial and fungal alpha diversity trends. Next, multivariable logistic regression examining factors associated with an atypical bacterial (decreasing) or fungal (increasing) alpha diversity trend was performed using stats v.4.1.1 [58] to determine the directionality of associations between alpha diversity trends and early-life factors. Logistic regression models were assessed for multi-collinearity and optimized using performance v.0.8.0 [59]. Bacterial and fungal alpha and beta diversity measures were excluded from this analysis to prevent overfitting and to enable focused investigation of how clinical and early-life factors influence changes in alpha diversity over the first year of life. The results are presented for each factor as the log-transformed odds ratio (OR) and 95% confidence interval (CI). The typical bacterial (increasing) and fungal (decreasing) alpha diversity trends were set as the reference levels for both random forest and logistic regression analyses. Any potential confounding effects based on the sub-cohort selection criteria were controlled for by including maternal dietary factors (HEI and gestational artificially sweetened beverage consumption) in these analyses.
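To illustrate the shape of these two modelling steps, here is a hedged sketch using scikit-learn and statsmodels in place of the R packages named above; `X` (numeric covariate table) and `y` (binary trend label, 1 = atypical) are placeholders, and this is not the authors' code:

```python
# Hedged sketch: Gini-based random forest importances and a multivariable
# logistic regression reporting odds ratios with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def rf_gini_importance(X: pd.DataFrame, y: pd.Series, seed: int = 42) -> pd.Series:
    """500-tree random forest; returns mean-decrease-in-Gini importances."""
    rf = RandomForestClassifier(n_estimators=500, random_state=seed)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    print("10-fold CV accuracy:", cross_val_score(rf, X, y, cv=cv).mean())
    rf.fit(X, y)
    return pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)

def logit_odds_ratios(X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
    """Logistic regression with the typical trend (y = 0) as reference."""
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    ci = np.exp(fit.conf_int())  # columns 0 and 1 hold the CI bounds
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": ci[0], "CI_high": ci[1],
                         "p": fit.pvalues})
```

Categorical covariates (e.g., delivery mode, breastfeeding status) would need dummy-encoding before either function is called.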
Urine metabolites were evaluated by age and alpha diversity trend using the web-based server MetaboAnalyst [60]. Metabolite concentrations were normalized using the median, log-transformed, and Pareto scaled (mean-centered and divided by the square root of the standard deviation of each metabolite). Differences in normalized urine metabolite concentrations between increasing and decreasing bacterial and fungal alpha diversity trends at 3 and 12 months were assessed by t-test using a false discovery rate (FDR) threshold of < 0.1.
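The normalization chain (sample-median normalization, log transform, Pareto scaling) and the FDR-corrected t-tests can be sketched as follows. MetaboAnalyst performs these steps server-side, so this re-implementation is only illustrative and the names are ours; it assumes strictly positive concentrations:

```python
# Hedged sketch of metabolite pre-processing and FDR-corrected t-tests.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def preprocess(conc: pd.DataFrame) -> pd.DataFrame:
    """Rows = samples, columns = metabolites (positive concentrations)."""
    x = conc.div(conc.median(axis=1), axis=0)  # sample-median normalization
    x = np.log(x)
    # Pareto scaling: mean-center, divide by sqrt of each metabolite's SD.
    return (x - x.mean()) / np.sqrt(x.std())

def fdr_ttests(x: pd.DataFrame, groups: pd.Series, alpha: float = 0.1) -> pd.DataFrame:
    """Two-group t-tests per metabolite with Benjamini-Hochberg FDR control."""
    g1, g2 = groups.unique()[:2]
    t, p = stats.ttest_ind(x[groups == g1], x[groups == g2])
    reject, q, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    return pd.DataFrame({"t": t, "p": p, "q": q, "significant": reject},
                        index=x.columns)
```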
Bacterial and fungal inter-kingdom co-occurrence network analysis was performed at the species level using NetCoMi v.1.1.0 [61]. Networks were generated based on the combination of bacterial and fungal alpha diversity trends exhibited by each infant and were allocated into the following groups: typical (increasing bacterial and decreasing fungal alpha diversity; n = 50), bacteria atypical (decreasing bacterial and fungal alpha diversity; n = 21), fungi atypical (increasing bacterial and fungal alpha diversity; n = 16), or both atypical (decreasing bacterial and increasing fungal alpha diversity; n = 2). Infants who displayed atypical alpha diversity trends for both bacteria and fungi were excluded from the main analysis due to sample size limitations, but were included when comparing typical vs. all atypical patterns of alpha diversity changes in combination. Networks were constructed using variance-stabilizing transformed count data, pseudo zero handling, and a Pearson correlation threshold of ± 0.4, then were assessed using a fast greedy clustering algorithm. Node size was determined based on degree centrality. Hub taxa were identified as those having the highest betweenness centrality. Pair-wise network comparisons were made between typical (inverse), bacteria atypical, fungi atypical, and all atypical overall alpha diversity patterns combined at 3 and 12 months by calculating the following measures using 5,000 permutations: centrality (degree, betweenness, closeness, eigenvector), clustering coefficient, modularity, edge density, positive edge percentage, connectivity (natural, vertex, edge), average dissimilarity, average path length, and hub taxa. Co-occurrence networks at 3 and 12 months were also generated for bacteria and fungi in isolation using the same approach to examine co-occurrence dynamics amongst microbiome members of the same kingdom between increasing and decreasing alpha diversity trends.
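The core of this construction (thresholded Pearson correlations, betweenness-based hub identification) can be sketched with networkx in place of NetCoMi. `abund` is assumed to be a samples-by-species table of variance-stabilized abundances, and the function names are ours:

```python
# Hedged sketch of a co-occurrence network with |r| >= 0.4 edges and
# hub taxa defined by highest betweenness centrality.
import networkx as nx
import pandas as pd

def cooccurrence_network(abund: pd.DataFrame, threshold: float = 0.4) -> nx.Graph:
    corr = abund.corr(method="pearson")   # species-by-species Pearson r
    g = nx.Graph()
    g.add_nodes_from(corr.columns)
    cols = list(corr.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            r = corr.loc[a, b]
            if abs(r) >= threshold:       # keep edges passing +/- 0.4
                g.add_edge(a, b, weight=float(r))
    return g

def hub_taxa(g: nx.Graph, top_n: int = 4) -> list:
    """Hub taxa = nodes with the highest betweenness centrality."""
    bc = nx.betweenness_centrality(g)
    return sorted(bc, key=bc.get, reverse=True)[:top_n]
```

Clusters analogous to the fast greedy step could be obtained with `nx.algorithms.community.greedy_modularity_communities(g)`, though NetCoMi's permutation-based network comparisons are not reproduced here.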
Participant characteristics
This work was performed as a secondary analysis of a previously published nested case-control study examining the influence of prenatal consumption of artificial sweeteners on infant bacterial microbiome maturation patterns and body mass index (BMI) in early life. Of the 100 infants evaluated in this study, 46% were female and 64% were delivered vaginally. Antibiotic exposure in the prenatal and postnatal periods occurred in 12% and 20% of mother-infant dyads, respectively, while 52% were exposed to intrapartum antibiotics. Breastfeeding status at 3 months was approximately equally distributed between none (36%), partial (30%), and exclusive (34%), with a mean breastfeeding duration of 7.8 ± 7.3 months. Introduction of solid foods into the infant diet occurred at 4.8 ± 1.2 months. The proportion of mothers and infants who secrete ABO histo-blood group antigens in other bodily fluids, such as breastmilk and the intestinal mucosa, was determined by FUT2 genotype [62]. Approximately 80% of mothers and infants were secretors, with secretor status balanced by genotype (AA or non-secretor: 24% of infants and 22% of mothers; AG or heterozygous secretor: 39% of infants and 41% of mothers; GG or homozygous secretor: 37% of infants and mothers; Table 1), consistent with the genotypic distribution of FUT2 secretor status in the general population [63].
The bacterial and fungal gut microbiome exhibit divergent alpha diversity maturational patterns in the first year of life
Bacterial gut microbiome maturation patterns have been well-described, with increases in alpha diversity and decreases in beta diversity known to occur over the first years of life [16][17][18][19][20]. On average, we observed comparable overall changes in diversity metrics from 3 to 12 months of age in this cohort. Bacterial alpha diversity (Shannon) and richness (Chao1) increased from 3 to 12 months (Shannon: mean 1.55 ± 0.66 vs. 2.20 ± 0.61, p < 0.001; Chao1: mean 26.95 ± 13.12 vs. 52.13 ± 18.20, p < 0.001; Fig. 1A), and beta diversity decreased over the same period, indicating reductions in compositional dissimilarity of the microbiome between individuals at 12 months of age (R² = 0.077, p < 0.001; Fig. 1B). While group-wise analyses have facilitated the identification of typical patterns of microbiome maturation in early life [16][17][18][19][20], our understanding of maturational patterns deviating from these descriptions is limited. We sought to investigate this by examining changes in bacterial alpha diversity and richness at the individual level and observed divergent patterns of microbiome maturation in the first year of life (Fig. 1C and Figure S2A, respectively). Although most infants (n = 74, or 75%) displayed the expected increase in bacterial alpha diversity from 3 to 12 months, 25% (n = 24) exhibited a decrease in alpha diversity over this period (p < 0.001; Fig. 1C). This divergence in microbiome maturation was also reflected in compositional differences (beta diversity) at 3 and 12 months, with alpha diversity trend explaining 1.5% and 1.9% of the variance, respectively (3 months: p = 0.054; 12 months: p = 0.005; Fig. 1D). Bacterial richness (Chao1) exhibited similar maturational patterns and compositional differences over the first year of life (Figure S2A-B).
In contrast to bacteria, descriptions of fungal gut microbiome maturation patterns are lacking, with inconsistent reports across studies regarding changes in fungal alpha and beta diversity in early life [3,9,[21][22][23][24][25][26][27][28]. In this cohort, group-wise evaluations revealed that, on average, fungal alpha diversity (Shannon) and richness (Chao1) decreased from 3 to 12 months of age (Shannon: mean 2.23 ± 0.77 vs. 1.29 ± 0.83, p < 0.001; Chao1: mean 29.43 ± 7.65 vs. 24.18 ± 8.92, p < 0.001; Fig. 2A). Significant differences in beta diversity were also observed by age, with the composition of the mycobiome between individuals displaying the opposite maturational pattern relative to bacteria, increasing in dissimilarity from 3 to 12 months (R² = 0.048, p < 0.001; Fig. 2B). However, like bacteria, divergent alpha diversity and richness maturational trends were observed when assessed at the individual level (Fig. 2C and Figure S3A), with 80% (n = 73) of infants displaying a decrease and 20% (n = 18) an increase in fungal alpha diversity from 3 to 12 months (p < 0.001; Fig. 2C). Compositional differences at 3 and 12 months were also observed by fungal alpha diversity trend, explaining 1.6% and 2.3% of the variance, respectively (3 months: p = 0.015; 12 months: p = 0.005; Fig. 2D). The compositional differences observed by alpha diversity trend were further exemplified by significant differences in the distance to centroid, also known as beta dispersion, at 3 months (p = 0.010), but not at 12 months (Fig. 2E). No differences in beta dispersion were observed for the bacterial alpha diversity trend. Divergent maturational patterns were also observed for fungal richness (Figure S3A-B), and previous work by our group has associated these changes with early-life BMI z-scores in this cohort [35].
Considering a substantial proportion of infants (20-25%) deviated from the predominant or "typical" pattern of change in bacterial and fungal alpha diversity observed in this sub-cohort, we sought to determine if "atypical" patterns of bacterial and fungal alpha diversity occurred in the same individual. However, we found that the infants that had an "atypical" (decreasing) bacterial alpha diversity trend generally had a "typical" (decreasing) fungal alpha diversity trend and vice versa, with only 2 infants displaying "atypical" trends for both bacteria and fungi. Overall, only 50 infants (56%) with data at both timepoints displayed "typical" trends for both bacterial and fungal alpha diversity. This highlights the importance of individual-level trajectory analyses in generating more nuanced understandings of bacterial and fungal microbiome maturation patterns in early life, particularly given the divergent alpha diversity trends observed were masked when performing group-wise analyses by age. For the remainder of this paper, alpha diversity trends will also be referred to as typical or atypical based on the predominant direction of change infants displayed in this sub-cohort: an increasing trend is typical for bacteria, whereas a decreasing trend is typical for fungi. While these definitions of "typical" and "atypical" are well-supported for bacterial alpha diversity in early life [16][17][18][19][20], our use of this terminology for fungi is specific to this sub-cohort, and we cannot definitively say what is considered "typical" for fungal alpha diversity maturational patterns based on the current literature [3,9,[21][22][23][24][25][26][27][28].
Differences in taxonomic community structure are exhibited in infants with atypical bacterial or fungal alpha diversity trends
Given over 40% of infants displayed atypical alpha diversity trends (for bacteria and/or fungi) and that community composition (beta diversity) differed by these trends, we next explored whether specific taxa were differentially associated with bacterial and fungal alpha diversity trends in the first year of life. First, we compared the relative abundances of the 15 most abundant bacterial and fungal genera by age, which represented 88.3 ± 14.6% and 85.3 ± 16.5% of the total community for bacteria at 3 and 12 months, respectively. The bacterial microbiome exhibited a more heterogeneous taxonomic structure at 3 months, with Bacteroides, Escherichia, and Bifidobacterium being the most abundant genera, then shifted towards Bacteroides dominance at 12 months (Fig. 3A). In line with previous reports of broad compositional shifts occurring over the first year of life [16][17][18][19][20], we found significant differences in the relative abundances of all of the top 15 genera, except Akkermansia, Haemophilus, Parabacteroides, Ruminococcus, and unclassified Rikenellaceae, between 3 and 12 months (Table S2).
Next, we assessed the taxonomic structure by age and bacterial alpha diversity trend and observed smaller structural differences at both 3 and 12 months between infants with increasing and decreasing trends (Fig. 3B, Figure S4 and Table S2). At 3 months, the atypical or decreasing bacterial trend was associated with a significantly lower relative abundance of Escherichia (p = 0.013) relative to the typical or increasing trend, whereas, at 12 months, the atypical trend was associated with enrichment of Bacteroides (p = 0.016; Fig. 3B, Figure S4 and Table S2). To further probe which taxa were able to distinguish the typical alpha diversity trend from the atypical trend, we performed differential abundance analysis at the ASV level using DESeq2 [54]. At 3 months of age, infants that displayed an atypical bacterial alpha diversity trend had a lower abundance of Bacteroides caccae (p < 0.001) and a higher abundance of Bacteroides ovatus (p < 0.001; Fig. 3C). At 12 months, the atypical trend was associated with an elevated abundance of Akkermansia muciniphila (p < 0.001), Bacteroides uniformis (p < 0.001), Prevotella copri (p < 0.001), and unclassified Rikenellaceae (p < 0.001), alongside a lower abundance of unclassified Bacteroides (p < 0.001; Fig. 3C) relative to infants with a typical or increasing alpha diversity trend. Evidence of both the genus- and ASV-based taxonomic differences identified was also apparent when the relative abundance of the top 15 bacterial genera was assessed at the individual level (Figure S4).
For fungi, the 15 most abundant genera represented 86.9 ± 10.5% and 93.3 ± 11.3% of the total community at 3 and 12 months, respectively. The mycobiome exhibited shared dominance of Candida, Malassezia, and Mycosphaerella at 3 months, then shifted towards Saccharomyces-dominated communities at 12 months, with significant changes in the relative abundance of most of the top 15 genera exhibited over this time, except Alternaria, Candida, Ganoderma, Resinicium, and Rigidoporus (Fig. 3D and Table S3). Fungi exhibited more pronounced shifts in taxonomic structure when assessed by alpha diversity trend compared to bacteria, with the atypical or increasing trend being significantly enriched with Candida at 3 months relative to the typical trend (p < 0.001), alongside having lower abundances of Malassezia (p = 0.039), Cladosporium (p = 0.038), unclassified Sclerotiniaceae (p = 0.014), Naganishia (p = 0.006), and Meyerozyma (p < 0.001; Fig. 3E, Figure S5 and Table S3). At 12 months, the degree of Candida dominance was reduced, although it remained more abundant than in infants with a typical fungal alpha diversity trend (p = 0.001; Fig. 3E, Figure S5 and Table S3). Rigidoporus was also significantly enriched in infants with an increasing fungal alpha diversity trend at 12 months (p = 0.033), while Saccharomyces (p = 0.005) and Alternaria (p = 0.001) were depleted (Fig. 3E, Figure S5 and Table S3). At the ASV level, infants with an increasing or atypical fungal alpha diversity trend had an enrichment of Candida parapsilosis (p < 0.001) at 3 months and Cyberlindnera jadinii (p < 0.001), Meyerozyma guilliermondii (p < 0.001), and specific Saccharomyces cerevisiae ASVs (p < 0.001) at 12 months relative to those with a decreasing trend (Fig. 3F). Again, these findings were supported by individual-level assessment of the relative abundance of the top 15 genera by alpha diversity trend and age (Figure S5). Altogether, this analysis revealed that dynamic shifts in bacterial and fungal community structure underlie patterns of alpha diversity change in early life.
Atypical alpha diversity trends are associated with altered inter-kingdom co-occurrence dynamics
Next, we employed co-occurrence network analyses at the species level to examine inter-kingdom dynamics between bacteria and fungi. These methods facilitate the prediction of potential hub species and rank their positional importance within the ecosystem [69]. Metrics applied in ecological network analysis are also used to characterize and compare the organization and functioning of ecosystems, facilitating the inference of multitrophic interactions amongst species [70]. Infants were evaluated based on the combination of alpha diversity trends they exhibited for both bacteria and fungi. Overall, 89 infants had adequate data for both timepoints and kingdoms. Of these, 50 (56%) infants displayed a typical or inverse overall trend between bacterial and fungal alpha diversity, with bacterial alpha diversity increasing and fungal alpha diversity decreasing from 3 to 12 months; 21 (24%) had an atypical decrease in bacterial alpha diversity; 16 (18%) had an atypical increase in fungal alpha diversity; and 2 (2%) displayed an atypical trend for both bacteria and fungi. Infants with an atypical trend for both bacterial and fungal alpha diversity (n = 2) were omitted from subsequent analyses due to sample size limitations.
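For clarity, the allocation of infants to these four combined-trend groups can be written as a small helper; this is a hypothetical pandas sketch, and the column and label names are ours rather than the study's:

```python
# Hypothetical sketch: assign each infant to a combined-trend group
# from per-kingdom trend labels ("increase"/"decrease").
import pandas as pd

def combined_trend(row: pd.Series) -> str:
    bact_typical = row["bacteria_trend"] == "increase"   # typical for bacteria
    fungi_typical = row["fungi_trend"] == "decrease"     # typical for fungi
    if bact_typical and fungi_typical:
        return "typical (inverse)"
    if not bact_typical and fungi_typical:
        return "bacteria atypical"
    if bact_typical and not fungi_typical:
        return "fungi atypical"
    return "both atypical"

# Usage: infants["group"] = infants.apply(combined_trend, axis=1)
```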
Inter-kingdom network analyses revealed distinct differences in the structure of co-occurrence networks between typical (inverse) and atypical (bacteria or fungi) overall alpha diversity trends at both 3 and 12 months (Fig. 4 and Table S4). This was reflected in significant differences between various network measures of centrality and the emergence of distinct hub taxa in each network (Fig. 4 and Table S4), highlighting the unique inter-kingdom co-occurrence dynamics exhibited across all three patterns of microbiome maturation at both timepoints. In the co-occurrence network for the typical (inverse) bacterial and fungal alpha diversity trends at 3 months, members of the core infant gut microbiome, Roseburia spp., unclassified Lachnospiraceae, Bacteroides dorei, and Akkermansia muciniphila, were identified as hub taxa (Fig. 4A), forming a functional cluster consistent with the metabolic cross-feeding dynamics involved in butyrate metabolism in these ecosystems [71][72][73][74]. In contrast, atypical networks displayed a higher density of co-occurrence relationships, with a mix of core and opportunistic microbes emerging as hub taxa (Fig. 4). These hub taxa were dispersed across more functional clusters compared to the networks obtained from the typical alpha diversity trends, but some core members were maintained as hubs in the fungi atypical network at 3 months (e.g., A. muciniphila), which likely reflects the expected increase in bacterial alpha diversity observed in these communities (Fig. 4). This is further supported by the higher natural connectivity (robustness) of the fungi atypical network compared to the bacteria atypical network (p = 0.017), being more comparable to the typical network (Table S4), suggesting that an atypical decrease in bacterial alpha diversity may have different and stronger effects on community dynamics than an increase in fungal alpha diversity.
When comparing network measures of centrality at 3 months, the atypical bacterial alpha diversity trend network exhibited differences in betweenness centrality (network members that 'bridge' between nodes; p = 0.013) and eigenvector centrality (level of influence of a node within a network; p = 0.003) relative to the typical inverse network (Table S4). Similarly, networks from infants exhibiting an atypical fungal trend differed in terms of degree (total number of edges or links between nodes; p = 0.003), closeness centrality (shortest path between nodes; p = 0.013), betweenness centrality (p = 0.041), and eigenvector centrality (p = 0.013) compared to the typical trend, with comparable differences also emerging between the bacteria and fungi atypical networks (Table S4). Centrality metrics denote how important a node is for the connectivity and interactions within the network. That is, a node of high centrality is required for paths leading to other nodes, and consequently has a greater likelihood of being involved in the network's predicted food chains [75]. Through this lens, this analysis suggests that typical microbiome maturation during infancy favours hubs of higher positional importance within the networks.
At 12 months, the network from infants with typical alpha diversity trends maintained functional clusters of core bacterial microbiome members (i.e., A. muciniphila and Eubacterium dolichum) identified as hub taxa. In contrast, both atypical trends continued to display more densely connected networks and contained a combination of hub taxa that were present at 3 months (i.e., Clostridium, Ganoderma) or are core microbiome members (i.e., Blautia, Ruminococcus; Fig. 4). Candida parapsilosis also emerged as a hub in the fungi atypical network at 12 months (Fig. 4B), consistent with the significantly higher relative abundance of Candida in infants with an atypical fungal alpha diversity trend at 3 and 12 months relative to those with a typical trend (Fig. 3E, F and Table S3). Network metrics at 12 months showed very similar results to those at 3 months (Table S4). The typical network exhibited differences in degree (p = 0.002) and betweenness centrality (p = 0.027) relative to the network from infants with an atypical bacterial alpha diversity trend, as well as differences in degree (p = 0.027) and eigenvector centrality (p = 0.008) when compared to the network for those with an atypical fungal alpha diversity trend (Table S4). Meanwhile, the two atypical networks exhibited differences across degree (p < 0.001), betweenness (p = 0.003), closeness (p = 0.003), and eigenvector centrality (p = 0.013; Table S4). This further supports the idea that the typical alpha diversity trend network favours highly centralized nodes within trophic webs. In parallel, these consistent differences in network structure and hub taxa between typical and atypical alpha diversity trends at 3 and 12 months may indicate either earlier or delayed community transitions in infants with atypical bacterial or fungal alpha diversity trends.
To determine whether the observed co-occurrence dynamics were exclusively a function of inter-kingdom influences, we generated networks for each kingdom in isolation (Figures S6-7). In both cases, the structure of the typical vs. atypical bacterial or fungal networks mirrored those of the inter-kingdom networks, with fewer taxa passing the correlation threshold in the typical networks (Figures S6-7 and Tables S5-6). To evaluate if the differences in the network structures were a function of unequal sample sizes between the three groups, we compared the inter-kingdom networks between a typical inverse relationship for bacterial and fungal alpha diversity (n = 50) and all the atypical patterns combined (decreasing bacterial alpha diversity, increasing fungal alpha diversity, or both; n = 39). While this provided a level of control for the number of species passing the correlation threshold (i.e., the number of species that pass the Pearson threshold is proportional to the number of samples evaluated), making it more comparable between the typical and atypical networks, many of the observed differences in network properties and hub taxa persisted (Figure S8 and Table S7). Together, this suggests atypical shifts in alpha diversity in the first year of life are associated with altered microbial co-occurrence dynamics when compared to infants with a typical inverse overall trend between bacterial and fungal alpha diversity and that these changes may reflect differences in the rate of gut microbiome maturation.
[Fig. 4 legend: Bacterial species are represented by circles and fungal species by triangles. Node size was determined based on degree centrality. Hub taxa are those with the highest betweenness centrality and are labelled with their shape perimeter bolded. Shape colour represents clusters of species more likely to co-occur with one another than with species from outside of these modules. Pair-wise comparisons of network measures were calculated using 5,000 permutations (see Table S4).]
Bacterial and fungal alpha diversity trends are associated with multi-kingdom dynamics, FUT2 secretor status, and other known microbiome-modifying factors in early life
We next sought to understand whether early-life, infant, maternal, nutritional, and ecological factors were linked to the divergent microbiome maturation patterns we observed in the first year of life. First, we employed random forests to determine the factors that were predictive of whether an infant displayed an increasing or decreasing alpha diversity trend. For the bacterial alpha diversity trend, breastfeeding duration (GI = 5.06), fungal alpha and beta diversity at 3 (alpha: GI = 3.55; beta: GI = 3.92) and 12 months (alpha: GI = 2.95; beta: GI = 2.77), and maternal Healthy Eating Index (GI = 3.73) had the greatest discriminatory power, followed by breastfeeding status at 3 months (GI = 1.78), age at introduction of solid foods (GI = 1.76), and maternal (GI = 1.46) and infant (GI = 1.04) FUT2 secretor genotypes (Fig. 5A). The directionality of these relationships was then explored using logistic regression, while considering the confounding effects of infant, early-life, and maternal factors. Infants who were breastfed at 3 months, either partially (OR = 0.16, CI: 0.03-0.75, p = 0.029) or exclusively (OR = 0.05, CI: 0.00-0.29, p = 0.003), were less likely to have an atypical or decreasing bacterial alpha diversity trend (Fig. 5B and Table S8). Maternal FUT2 secretor genotype displayed a similar inverse relationship for the homozygous (GG) allele (OR = 0.04, CI: 0.00-0.43, p = 0.010), whereas the homozygous (GG) allele in infants was positively associated with an atypical bacterial alpha diversity trend (OR = 23.02, CI: 2.96-280.06, p = 0.006; Fig. 5B and Table S8). Delivery via C-section also emerged as positively associated with a decreasing bacterial alpha diversity trend (OR = 11.57, CI: 1.76-112.13, p = 0.019), alongside prenatal antibiotics exposure (OR = 15.80, CI: 1.96-194.97, p = 0.017; Fig. 5B and Table S8).
The fungal alpha diversity trend was similarly predicted by multi-kingdom dynamics, including bacterial alpha and beta diversity at 3 (alpha: GI = 3.66; beta: GI = 3.10) and 12 months (alpha: GI = 3.18; beta: GI = 2.99) of age, maternal healthy eating index (GI = 3.23), and breastfeeding duration (GI = 2.45; Fig. 5C). Maternal consumption of artificially sweetened beverages during gestation (GI = 1.47), age at introduction of solid foods (GI = 1.32), infant FUT2 secretor genotype (GI = 1.06), and breastfeeding status at 3 months (GI = 0.97) also emerged as strong predictors (Fig. 5C). Logistic regression revealed that maternal consumption of artificially sweetened beverages during gestation was positively associated with an atypical or increasing fungal alpha diversity trend (OR = 8.32, CI: 1.98-48.59, p = 0.008; Fig. 5D and Table S9), but no other significant associations were observed. Together, our analyses suggest that known microbiome-modifying factors, such as breastfeeding duration and birth mode, may be less influential on fungal microbiome maturation than on developmental patterns of the bacterial microbiome. This work further revealed that multi-kingdom diversity metrics are associated with bacterial and fungal alpha diversity trends, and that maternal and infant FUT2 secretor genotypes are associated with the bacterial alpha diversity trend, prompting further exploration of the effects of ecological interactions between bacteria and fungi, as well as secretor status, on microbiome establishment.
Atypical bacterial and fungal alpha diversity trends are associated with metabolomic shifts in urine at three months of age
To investigate whether the differences observed in taxonomic community structure between alpha diversity patterns were associated with functional changes, we performed untargeted urine metabolomics at 3 and 12 months of age. Metabolite evaluation in urine has the advantage of revealing markers of physiological or pathological host-microbe interactions, as microbiome-derived products can be excreted renally [76]. We identified differences in the concentration of specific urine metabolites between the typical and atypical alpha diversity trends for both bacteria and fungi at 3, but not 12, months of age (Fig. 6). Infants with an atypical or decreasing bacterial alpha diversity trend exhibited significant enrichment of trimethylamine N-oxide (TMAO; p = 0.005; Fig. 6A), indole acetic acid (IAA; p = 0.003; Fig. 6B), creatine (p = 0.002; Fig. 6C), and 2-furoylglycine (p < 0.001; Fig. 6D) relative to those with a typical or increasing bacterial alpha diversity trend. For fungi, an atypical or increasing alpha diversity trend was associated with higher concentrations of lactic acid (p < 0.001; Fig. 6E). Given that most metabolites (n = 102) remained unchanged when assessed by either the bacterial or fungal alpha diversity trend, compositional changes in the gut microbiome likely did not translate to systemic host functional shifts, although investigations of the serum and stool metabolomes could help confirm this. However, the increases observed in the concentrations of metabolites with known microbial origins (e.g., TMAO, IAA, lactic acid) [77][78][79] imply that these changes in gut microbial composition may still have functional consequences on the host.
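To illustrate the processing described in the Fig. 6 legend, the sketch below (simulated data and assumed names, not the authors' pipeline) applies median normalization, log transformation, and pareto scaling, then runs per-metabolite t-tests with Benjamini-Hochberg FDR control at 0.1.

```python
# Illustrative sketch of the metabolite workflow from the Fig. 6 legend:
# median normalization, log transform, pareto scaling, per-metabolite t-tests,
# then Benjamini-Hochberg FDR at 0.1. Data and group labels are simulated.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
conc = pd.DataFrame(rng.lognormal(0.0, 1.0, size=(98, 107)))  # samples x metabolites
group = rng.integers(0, 2, size=98)                           # 1 = atypical trend

# Median normalization (per sample), log transform, then pareto scaling
# (mean-centred, divided by the square root of each metabolite's SD)
norm = conc.div(conc.median(axis=1), axis=0)
logged = np.log(norm)
pareto = (logged - logged.mean()) / np.sqrt(logged.std())

# Per-metabolite two-sample t-tests with FDR control
pvals = np.array([ttest_ind(pareto.loc[group == 1, m],
                            pareto.loc[group == 0, m]).pvalue
                  for m in pareto.columns])
reject, qvals, _, _ = multipletests(pvals, alpha=0.1, method="fdr_bh")
print(int(reject.sum()), "metabolites pass FDR < 0.1")
```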
Discussion
A signature feature of primary ecological succession in bacterial communities is an increase in alpha diversity, propelled mainly by non-stochastic, niche-driven effects [80]. These predictable patterns of primary succession have also been observed in the bacterial gut microbiome of infants over the first year of life, based on groupwise comparisons of diversity metrics across early-life timepoints [16][17][18][19][20]. By evaluating ecological shifts at the individual level, we identified divergent trajectories of gut microbiome maturation across 100 Canadian infants based on changes in bacterial and fungal alpha diversity per individual from 3 to 12 months of age, which were masked when performing group-based analyses. These trajectories occurred in over 40% of infants and were characterized by distinct differences in community composition, inter-kingdom co-occurrence dynamics, and the abundance of select microbially-derived urine metabolites, suggestive of variable rates of microbiome maturation. Factors known to be involved in directing early life bacterial microbiome maturation, such as breastfeeding and delivery mode [19,20,57], were associated with these patterns for bacteria, but not fungi. Together, this work highlights the important knowledge gaps created when microbiome research focuses exclusively on group-based, single-kingdom, and/or cross-sectional analyses.
Successional patterns of infant gut microbiome maturation have been well-described for the bacterial microbiome [16][17][18][19][20], while only a handful of reports on fungal microbiome maturation patterns exist and most are limited by small sample sizes [3,9,[21][22][23][24][25][26][27][28]. In this sub-cohort, the overall changes in the taxonomic structure of the bacterial and fungal gut microbiome from 3 to 12 months largely followed what has been previously reported [3,9,[16][17][18][19][20][21][22][23][24][25][26][27][28], but differences emerged in infants with atypical bacterial or fungal alpha diversity trends, suggesting alterations in the arrival times of specific microbes, availability of appropriate niches, or initiation of ecosystem transitions. For example, Ruminococcus gnavus has been identified as a marker of microbiome immaturity [20,81], but emerged as a hub in both atypical networks at 12 months. Similarly, Candida is typically dominant very early in fungal microbiome maturation [3,21,22,27], but infants with an atypical fungal alpha diversity trend maintained a high abundance of Candida and failed to transition to Saccharomyces-predominant communities over the first year of life, with Candida parapsilosis being identified as a hub taxon in the atypical co-occurrence network at 12 months. Transition towards communities enriched with Saccharomyces has been previously linked to the introduction of solid foods [3,9]; however, this factor was not significantly associated with an atypical fungal alpha diversity trend in our study, suggesting more complex ecosystem dynamics may underlie this incomplete compositional transition.
Our investigations of inter-kingdom dynamics revealed stark differences in the structure of co-occurrence networks between infants with typical and atypical maturational patterns at both 3 and 12 months of age, regardless of whether the atypical trends were driven by changes in bacteria or fungi. Networks for the typical (inverse) bacterial and fungal alpha diversity trend displayed defined functional clusters with few taxa passing the correlation threshold, which increased in complexity from 3 to 12 months. In contrast, both atypical trends exhibited densely connected networks that were structurally comparable between timepoints. These differences may indicate reduced microbiome maturity and lack of successional progression in infants with an atypical bacterial or fungal alpha diversity trend, given densely connected ecosystems are more vulnerable to disturbances [82]. In contrast, more competitive microbial community dynamics with fewer taxa passing the correlation threshold, such as the ones observed for the typical alpha diversity trends, are associated with increased community stability and maturity [82]. This is further emphasized by the increase in modularity in the typical networks from 3 to 12 months, evidenced by the distinct functional clusters separated by negative co-occurrence relationships, suggestive of the formation of sub-communities driven by ecological processes such as habitat filtering or niche occupation [83]. Meanwhile, the differences in centrality measures across each network highlight the distinct community dynamics and hubs, or most central taxa, underlying the varied patterns of microbiome maturation. Ultimately, the observed inter-kingdom co-occurrence dynamics suggest atypical shifts in bacterial or fungal alpha diversity in the first year of life may limit the ability of the microbiome to form resilient, stable communities, potentially due to overly cooperative dynamics that prevent or delay subsequent successional steps from occurring. This is supported by experimental evidence from eco-evolutionary models, showing that evolution limits cooperation among microbial community members, as this increases dependency on species that may not be present and renders communities less productive [84].
Functionally, select metabolites with known microbial origins were found to be elevated at 3 months in infants with an atypical bacterial or fungal alpha diversity trend, suggesting these divergent maturational patterns may have important functional implications. First, the higher creatine levels observed in those with an atypical bacterial trend support our hypothesis that these infants may be experiencing delayed microbiome maturation, as reductions in creatine have been associated with microbial colonization and microbiome maturity in both animal models and humans, explained by microbial involvement in creatine elimination [85][86][87][88]. Meanwhile, the enrichment of metabolic by-products of various microbial metabolic pathways, including TMAO (protein catabolism) and IAA (tryptophan catabolism) in the atypical bacterial trend and lactic acid (sugar anabolism) in the atypical fungal trend, suggests these alterations may have broad functional effects on the host. For example, enrichment of IAA in infants with an atypical bacterial alpha diversity trend may reflect greater tryptophan metabolism by the IAA-producer Bacteroides ovatus [89], whose relative abundance is significantly higher in the atypical bacterial trend at 3 months. This could translate to broad physiological influences on the host, as IAA is involved in immune homeostasis, gut-brain communication, regulating epithelial integrity, and host gene expression [77,[89][90][91]. In contrast, the accumulation of lactic acid in the atypical fungal alpha diversity trend may indicate the absence of lactate-consuming, butyrate-producing strains in the microbiome of these infants, such as Roseburia, which could have downstream impacts on gut epithelium integrity due to the role of butyrate in colonocyte health [92][93][94][95]. Although these metabolic changes are not maintained longitudinally, being observed at 3 months only, it is possible that they may still be influential given the rapid developmental processes and pronounced influence of host-microbiome crosstalk during this early-life critical window [31,32].
Current understanding of the factors influencing gut microbiome maturation patterns in early life is based on the bacterial microbiome. Our study found that the factors related to the bacterial alpha diversity trend are largely consistent with the literature, including the effects of breastfeeding, antibiotics, and mode of birth [19,20,57]. Yet, we also identified a role of both maternal and infant FUT2 secretor genotype in microbiome maturational trajectories, with the directionality of the association changing depending on whether the mother or infant was a homozygous (GG) secretor. This intriguing finding may reflect differential influences of maternal vs. infant secretor status on microbial metabolism and gut physiology, as ABO histo-blood group antigens secreted in breast milk and on the gut mucosa act as carbohydrate substrates for microbes, and thus may favour certain microbes occupying specific geospatial niches [8,96]. For example, secretor mothers produce fucosylated human milk oligosaccharides (HMOs) in breastmilk that select for the expansion of HMO-utilizing bifidobacteria, and subsequently, encourage cooperative microbial cross-feeding dynamics in these communities [8,96,97]. This has been associated with increased alpha diversity in breastfed infants [8,96,97], consistent with our results. In contrast, infant secretors express these carbohydrate groups on the mucus lining of the gut, which may influence microbial community composition by favoring mucosa-associated microbes or mucin degraders, such as Akkermansia muciniphila [98]. This could explain the expansion of Akkermansia muciniphila observed at 12 months in infants with an atypical bacterial alpha diversity trend, as this trend was positively associated with infant FUT2 secretor status. Given the variability in associations previously reported between secretor status and microbiome composition [97,99,100], this relationship is likely complex, but our finding highlights the need to consider the influence of both maternal and infant genetic factors on microbiome maturation patterns in early life.
Unlike the divergent bacterial microbiome maturational trajectories, the fungal alpha diversity trend was largely not associated with known bacterial microbiome-modifying factors, apart from exposure to artificial sweeteners [36], suggesting fungal colonization may be directed by factors beyond commonly studied pre- and post-natal exposures. Instead, we found alpha and beta diversity metrics of the opposing kingdom to be robust predictors of alpha diversity trend directionality. This highlights the importance of multi-kingdom microbial interactions during infant microbiome assembly, as within-ecosystem dynamics beyond bacteria may have differential and stronger influences on microbial colonization patterns than external factors. For example, a recent ecological analysis by Rao et al. revealed that Candida albicans dictated early microbial assembly by inhibiting Escherichia and Klebsiella colonization, while its own expansion was prevented by Staphylococcus [10]. Considering the substantial relative abundance of Candida in infants with an atypical fungal alpha diversity trend at both 3 and 12 months, it is possible that similar inter-kingdom ecosystem dynamics may underlie these different maturational patterns. Together, these findings call for the inclusion of additional microbiome members in studies on early-life gut microbiome maturation and highlight the limitations of generalizing our understanding of factors influencing bacterial colonization patterns to other kingdoms.
The main strength of our study is the incorporation of multi-kingdom data and individual-level longitudinal analyses to add improved resolution to our understanding of bacterial and fungal gut microbiome maturation patterns in early life. By evaluating bacterial and fungal members of the gut microbiome together, we were able to identify the important influence of inter-kingdom factors on microbiome maturation and generate a clearer understanding of the differences in microbial co-occurrence dynamics between typical and atypical maturation patterns. However, while the network analyses used to generate these findings are informative and hypothesis-generating, it is important to note that they inherently come with limitations, particularly when based on compositional vs. absolute data, and the biological interactions inferred should be reproduced in other cohorts and corroborated experimentally to confirm their relevance. Further, although our study is constrained by sample size, this limitation simultaneously highlights the prevalence of diverging microbiome maturation patterns in early life, calling for greater research attention.
Future work should focus on the incorporation of repeated microbiome measures and additional functional analyses (e.g., fecal metabolomics, immune markers) to determine if the atypical alpha diversity trends observed vary within or extend beyond the first year of life and to clarify the functional effects of atypical trajectories of microbiome maturation on the host. This could be further strengthened through the incorporation of metagenomic analyses to help overcome the limitations of amplicon-based sequencing, particularly by providing broader functional measures, improved bacterial taxonomic assignment, and enabling the interrogation of other microbiome members (e.g., bacteriophage, viruses, Archaea) and how they contribute to gut microbial ecosystem dynamics. In parallel, longitudinal data on health outcomes in childhood and adolescence would provide important insights into the developmental implications of these divergent maturational patterns, which are unclear in this work due to our early-life focus. Despite these limitations, our research clearly highlights the pitfalls of reductionist (e.g., bacteria only) and exclusively group-based analytical approaches in gut microbiome research, which have a greater propensity to mask more complex ecosystem dynamics and yield incomplete narratives.
Overall, our findings suggest atypical patterns of bacterial and fungal gut microbiome succession are more common than previously considered, and that these patterns may be indicative of delayed or variable rates of microbiome maturation. Analyses in large, longitudinal cohorts containing data on health outcomes and repeated microbiome measures will be imperative to determine whether the atypical microbiome maturation patterns observed have long-term consequences. Our work also determined that while the mycobiome plays an important role in bacterial microbiome establishment during early life, the factors influencing fungal microbiome maturation differ from those commonly reported for the bacterial microbiome and remain underexplored. It may be the case that the mycobiome is more strongly influenced by stochastic factors, within-ecosystem dynamics, or other social and environmental factors (e.g., cultural differences in diet, geography, seasonality). Future work should seek to better delineate the differential influences of early-life exposures on the bacterial vs. fungal microbiome, as well as how inter-kingdom dynamics contribute to gut colonization patterns. Ultimately, understanding the ecological and host-derived processes behind microbial primary succession may be useful within restoration and conservation frameworks aimed at improving the health trajectories of children at risk of or already displaying early-life microbiome alterations.
Fig. 1
Fig. 1 Divergent bacterial alpha diversity maturation patterns are observed in the first year of life. A Bacterial Shannon and Chao1 alpha diversity indices at 3 and 12 months of age, assessed by Mann-Whitney U test (3 months: n = 99, 12 months: n = 99). B Comparison of bacterial beta diversity using the Bray-Curtis dissimilarity index at 3 and 12 months, assessed by PERMANOVA (3 months: n = 99, 12 months: n = 99). Ellipses represent 95% CI. C Changes in bacterial alpha diversity (Shannon index) per individual from 3 to 12 months, assessed by paired t-test (increase: n = 74, decrease: n = 24; see Figure S2A for bacterial richness). D Comparison of bacterial beta diversity using the Bray-Curtis dissimilarity index by alpha diversity trend at 3 and 12 months, assessed by PERMANOVA (increase: n = 74, decrease: n = 24; see Figure S2B for bacterial richness trend). Ellipses represent 95% CI.
Fig. 2
Fig. 2 Divergent fungal alpha diversity maturation patterns are observed in the first year of life. A Fungal Shannon and Chao1 alpha diversity indices at 3 and 12 months of age, assessed by Mann-Whitney U test (3 months: n = 95, 12 months: n = 95). B Comparison of fungal beta diversity using the Bray-Curtis dissimilarity index at 3 and 12 months, assessed by PERMANOVA (3 months: n = 95, 12 months: n = 95). Ellipses represent 95% CI. C Changes in fungal alpha diversity (Shannon index) per individual from 3 to 12 months, assessed by paired t-test (decrease: n = 73, increase: n = 18; see Figure S3A for fungal richness). D Comparison of fungal beta diversity using the Bray-Curtis dissimilarity index by alpha diversity trend at 3 and 12 months, assessed by PERMANOVA (decrease: n = 73, increase: n = 18; see Figure S3B for fungal richness). Ellipses represent 95% CI. E Fungal beta dispersion by alpha diversity trend at 3 and 12 months, assessed by permutation test (decrease: n = 73, increase: n = 18).
Fig. 3
Fig. 3 Differences in taxonomic structure are observed in infants with atypical bacterial and fungal alpha diversity trends in the first year of life. A Relative abundance of the 15 most abundant bacterial genera at 3 and 12 months (3 months: n = 99, 12 months: n = 99). B Relative abundance of the 15 most abundant bacterial genera by bacterial alpha diversity trend at 3 and 12 months (increase: n = 74, decrease: n = 24; see Figure S2C for bacterial richness trend). C Differentially abundant bacterial ASVs in the decreasing (atypical) bacterial alpha diversity trend relative to the increasing (typical) trend at 3 and 12 months (increase: n = 74, decrease: n = 24). D Relative abundance of the 15 most abundant fungal genera at 3 and 12 months (3 months: n = 95, 12 months: n = 95). E Relative abundance of the 15 most abundant fungal genera by fungal alpha diversity trend at 3 and 12 months (decrease: n = 73, increase: n = 18; see Figure S3C for fungal richness trend). F Differentially abundant fungal ASVs in the increasing (atypical) fungal alpha diversity trend relative to the decreasing (typical) trend at 3 and 12 months (decrease: n = 73, increase: n = 18).
Fig. 4
Fig. 4 Structural differences in inter-kingdom co-occurrence networks exist between infants with typical (inverse) and bacteria- or fungi-atypical overall alpha diversity trends in the first year of life. Inter-kingdom correlation networks of bacterial and fungal species based on overall alpha diversity trends at A 3 and B 12 months. Infants were classified into overall alpha diversity relationships based on the combination of alpha diversity trends they exhibited for bacteria and fungi. A typical inverse trend was characterized by increasing bacterial and decreasing fungal alpha diversity (n = 50); a bacteria-atypical trend was characterized by decreasing bacterial and fungal alpha diversity (n = 21); and a fungi-atypical trend was characterized by increasing bacterial and fungal alpha diversity (n = 16). Networks were generated using the fast greedy clustering algorithm with a minimum Pearson correlation coefficient threshold of ± 0.4. Positive correlations are displayed in green and negative correlations in red. Bacterial species are represented by circles and fungal species by triangles. Node size was determined based on degree centrality. Hub taxa are those with the highest betweenness centrality and are labelled with their shape perimeter bolded. Shape colour represents clusters of species more likely to co-occur with one another than with species from outside of these modules. Pair-wise comparisons of network measures were calculated using 5,000 permutations (see Table S4).
Fig. 5
Fig. 5 Multi-kingdom dynamics, maternal and infant nutrition, delivery mode, and antibiotic exposure are associated with atypical bacterial and fungal diversity trends. A Predictors of bacterial diversity trend identified by random forest using 10-fold cross-validation, 500 trees, and 1,000 permutations (increase: n = 63, decrease: n = 22). The increasing alpha diversity trend was set as the reference level. B Multivariable logistic regression identifying associations between early life, infant, and maternal factors and a decreasing (atypical) bacterial alpha diversity trend (increase: n = 70, decrease: n = 23). The increasing alpha diversity trend was set as the reference level. C Predictors of fungal alpha diversity trend identified by random forest using 10-fold cross-validation, 500 trees, and 1,000 permutations (decrease: n = 68, increase: n = 17). The decreasing alpha diversity trend was set as the reference level. D Multivariable logistic regression identifying associations between early life, infant, and maternal factors and an increasing (atypical) fungal alpha diversity trend (decrease: n = 71, increase: n = 17). The decreasing alpha diversity trend was set as the reference level. AS, artificially sweetened; GI, Gini index; AG and GG FUT2 secretor genotypes are secretors, reference level AA genotype are non-secretors; ~ p < 0.1; *p < 0.05; **p < 0.01.
Fig. 6
Fig. 6 Bacterial and fungal alpha diversity trends are associated with functional differences reflected in urine metabolites at 3 months. Normalized concentrations of A trimethylamine N-oxide (TMAO), B indole acetic acid (IAA), C creatine, and D 2-furoylglycine by bacterial alpha diversity trend (increase: n = 74, decrease: n = 24) and E lactic acid by fungal alpha diversity trend (decrease: n = 73, increase: n = 18) at 3 months observed in urine, assessed by t-test. Metabolite concentrations were normalized using the median, log-transformed, and pareto-scaled (mean-centered and divided by the square root of the standard deviation of each metabolite) and a false discovery rate (FDR) cutoff of 0.1 was applied. No significant differences in urine metabolite concentrations were observed at 12 months.
Table 1
Characteristics of mother-infant dyads from the CHILD cohort included in this analysis (n = 100). a Data missing for two mother-infant dyads. b Data missing for three mother-infant dyads. c FUT2 genotype indicates whether an individual is a non-secretor (AA) or secretor (AG or GG) of ABO histo-blood group antigens in other bodily fluids, such as on the gut mucosa or in breastmilk. d Includes Winnipeg and two rural sites, Morden and Winkler.
Population Adjustment Methods for Indirect Comparisons: A Review of National Institute for Health and Care Excellence Technology Appraisals
Abstract Objectives Indirect comparisons via a common comparator (anchored comparisons) are commonly used in health technology assessment. However, common comparators may not be available, or the comparison may be biased due to differences in effect modifiers between the included studies. Recently proposed population adjustment methods aim to adjust for differences between study populations in the situation where individual patient data are available from at least one study, but not all studies. They can also be used when there is no common comparator or for single-arm studies (unanchored comparisons). We aim to characterise the use of population adjustment methods in technology appraisals (TAs) submitted to the United Kingdom National Institute for Health and Care Excellence (NICE). Methods We reviewed NICE TAs published between 01/01/2010 and 20/04/2018. Results Population adjustment methods were used in 7 percent (18/268) of TAs. Most applications used unanchored comparisons (89 percent, 16/18), and were in oncology (83 percent, 15/18). Methods used included matching-adjusted indirect comparisons (89 percent, 16/18) and simulated treatment comparisons (17 percent, 3/18). Covariates were included based on: availability, expert opinion, effective sample size, statistical significance, or cross-validation. Larger treatment networks were commonplace (56 percent, 10/18), but current methods cannot account for this. Appraisal committees received results of population-adjusted analyses with caution and typically looked for greater cost effectiveness to minimise decision risk. Conclusions Population adjustment methods are becoming increasingly common in NICE TAs, although their impact on decisions has been limited to date. Further research is needed to improve upon current methods, and to investigate their properties in simulation studies.
In MAIC, individuals in the IPD study are assigned weights estimated such that the weighted covariate summaries match those reported by the AgD study (means for continuous covariates, and proportions for categorical covariates). The weights are then used to obtain predicted outcomes in the AgD study population, for example by taking weighted mean outcomes on each treatment. STC is a regression adjustment method, where a regression model is fitted in the IPD study. This model is then used to predict average outcomes in the AgD study population. Whichever method is used, once the predicted outcomes are obtained for the AgD study population these are compared with the outcomes reported by the AgD study. The development and use of these methods are motivated by one of two reasons: either (i) there is evidence for effect modification, and these variables are distributed differently in each study population; or (ii) there is no common comparator or the relevant studies are single arm, and so adjustment is required for all prognostic and effect modifying variables. Phillippo et al. (6;7) reviewed the properties and assumptions of population adjustment methods and provided recommendations for their use in submissions to NICE. As summarised in Table 1, population adjustment in anchored scenarios (where a common comparator is available) relaxes the constancy of relative effects assumption (to conditional constancy of relative effects) by adjusting for effect modifiers. In unanchored scenarios, a much stronger assumption is required (conditional constancy of absolute effects), because it is necessary to reliably predict absolute outcomes. This requires all effect modifiers and prognostic variables to be adjusted for, and is very difficult to achieve or justify, either empirically or otherwise. As such, unanchored comparisons are subject to unknown amounts of residual bias due to unobserved prognostic variables and effect modifiers (6;7).
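To make the weighting step concrete, here is a minimal Python sketch of MAIC estimation by the method of moments, in the spirit of Signorovitch et al.; the simulated data and function names are our assumptions, not code from any appraisal.

```python
# Minimal method-of-moments sketch of MAIC weighting (our illustration, in the
# spirit of Signorovitch et al.); data and function names are assumptions.
import numpy as np
from scipy.optimize import minimize

def maic_weights(X_ipd, agd_means):
    """Weights w_i = exp(x_i' alpha) such that the weighted IPD covariate
    means equal the means reported by the AgD study."""
    Xc = X_ipd - agd_means                 # centre IPD covariates on AgD means
    # Q(alpha) = sum_i exp(Xc_i' alpha) is convex; its gradient vanishing is
    # exactly the moment-matching condition sum_i w_i * Xc_i = 0.
    res = minimize(lambda a: np.exp(Xc @ a).sum(),
                   x0=np.zeros(Xc.shape[1]),
                   jac=lambda a: Xc.T @ np.exp(Xc @ a),
                   method="BFGS")
    return np.exp(Xc @ res.x)

def effective_sample_size(w):
    # Kish approximation: (sum w)^2 / sum w^2; see the formula below
    return w.sum() ** 2 / (w ** 2).sum()

# Illustrative use: 200 IPD individuals, two standardized covariates,
# reweighted to hypothetical AgD means (0.3, -0.2)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w = maic_weights(X, np.array([0.3, -0.2]))
print(effective_sample_size(w))   # compare with the original n = 200
```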
Although statistical theory is clear on which variables must be adjusted for to obtain an unbiased indirect comparison, in practice variable selection requires judgement and justification (6). For anchored comparisons, evidence of effect modifier status on the chosen scale should be provided before analysis. Unanchored comparisons are very difficult to justify, but a principled approach to selecting variables before analysis should be taken to avoid "gaming." The level of overlap in the covariate distributions between the IPD and AgD study populations is another key property of population-adjusted indirect comparisons. For regression-based methods such as STC, the less the overlap, the greater the amount of extrapolation required, which requires additional assumptions to be valid. For re-weighting methods such as MAIC, extrapolation is simply not possible; sufficient overlap is, therefore, crucial for re-weighting methods. As well as checking the distributions of the covariates in each study, a simple indicator of the amount of overlap is the effective sample size (ESS), which may be calculated from the weights (6). Large reductions in ESS may indicate poor overlap between the IPD and AgD studies; a small absolute ESS shows that the comparison is dependent on a small number of individuals in the IPD study and may be unstable.
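For reference (an addition for clarity, not text from the review), the ESS here is conventionally the Kish effective sample size computed from the estimated weights $w_i$ for the $n$ IPD individuals:

\[
\mathrm{ESS} = \frac{\left(\sum_{i=1}^{n} w_i\right)^{2}}{\sum_{i=1}^{n} w_i^{2}},
\]

which equals $n$ when all weights are equal and shrinks as the weights become more variable.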
MAIC and STC were both designed with simple indirect comparisons in mind and do not generalize naturally to larger networks of studies or treatments, where there may be multiple comparators of interest and/or multiple aggregate study populations. We are, therefore, interested in the prevalence of such scenarios in NICE TAs, to determine how larger network structures have been handled using current methods and to motivate the development of more appropriate methods. In this study, we undertake a comprehensive review of technology appraisals (TAs) published by NICE (8), aiming to characterize the use of population adjustment methods. As well as investigating the uptake of population adjustment in different clinical areas, we are interested in the ways in which these methods are used and whether the key assumptions are likely to hold, to assess the adequacy of current practice for decision making. We discuss how these methods have been received by appraisal committees and how they have impacted decision making. We conclude with a discussion, and suggest several key improvements to current practice, toward providing better evidence for decision makers and greater impact for submitting companies.
Methods
We reviewed all NICE TAs published between January 1, 2010, and April 20, 2018, for the use of population adjustment methods. We excluded appraisals that had access to IPD from all included studies, and focussed on those with only partial availability of IPD. From those appraisals using one or more forms of population adjustment, we extracted the following information from company submissions:

- Population adjustment method used;
- Whether the comparison was anchored or unanchored;
- Outcome type;
- Clinical area;
- Number of covariates adjusted for;
- How the covariates were chosen;
- For appraisals using MAIC, effective sample sizes after weighting;
- Whether a larger network structure was present (e.g., multiple comparators and/or aggregate studies), and how this was dealt with.
Screening and data extraction were carried out by a single primary reviewer (D.M.P.).
Results
A total of 268 technology appraisals have been published by NICE since 2010, when MAIC and STC were first suggested in the literature (4;5), up until April 20, 2018. Of these, twenty-one appraisals used a form of population adjustment; three of these had IPD available from all included studies, so we focus on the remaining eighteen appraisals with only partial IPD. Figure 1 shows the selection process. The included appraisals are tabulated in Supplementary Table 1.
The first use of population adjustment in a TA was TA311 in 2014. Since then, the use of population adjustment in TAs has increased rapidly, in terms of both the absolute number and the relative proportion of appraisals using population adjustment methods (Figure 2). In 2017, a total of nine appraisals used population adjustment, accounting for 14.5 percent of all appraisals that year.
Usage by Clinical Area
Since 2010, almost half of all published TAs have been in oncology (127 of 268; 47.4 percent). Of these, fifteen (11.8 percent) used population adjustment, accounting for over 80 percent of all applications of population adjustment in appraisals to date. Only two other clinical areas saw any applications of population adjustment: two out of twelve (16.7 percent) appraisals in hepatology (both for hepatitis C), and one out of twenty-eight (3.6 percent) appraisals in rheumatology. The usage of population adjustment methods in oncology TAs has increased since 2010, both in terms of the number and proportion of TAs using these methods. In 2017, a total of nine appraisals in oncology (25.7 percent) used population adjustment methods, up from one appraisal (9.1 percent) in 2014 (Figure 3). The increasing use of population
adjustment in oncology appraisals, which themselves make up the largest proportion of all appraisals, is the main driver behind the overall results in Figure 2.
Outcome Types
Unsurprisingly, due to the majority of applications of population adjustment being in oncology appraisals, survival outcomes (e.g., progression-free survival, overall survival) were the most common outcome type used in population-adjusted analyses: thirteen of eighteen appraisals (72.2 percent) included a population-adjusted survival outcome. Rate outcomes such as response rates were used in five appraisals, and duration and change from baseline outcomes in one appraisal each. Two appraisals (TA462, TA451) used population adjustment for more than one type of outcome (survival and response rate, and response rate and duration, respectively).
Population Adjustment Method
The large majority of appraisals using some form of population adjustment used MAIC (16 of 18; 88.9 percent). STC was less popular, used in only three appraisals (16.7 percent). Two appraisals used both MAIC and STC and compared the results, which were reported to be similar in each case (TA383, TA492). One appraisal (TA410) used neither MAIC nor STC. In this appraisal, a published prediction model (developed for a previous appraisal) (9) was used to adjust the survival curves from the AgD trials to the population of the IPD trial.
Of the sixteen appraisals performing MAIC, only nine (56.3 percent) reported an effective sample size. Of these, the median effective sample size was 80.0 (range: 4.0 to 335.5, interquartile range [IQR]: 15.4 to 52.0), with a median reduction in effective sample size from the original sample size of 74.2 percent (range: 7.9 percent to 94.1 percent, IQR: 48.0 percent to 84.6 percent). Such large reductions in ESS indicate that in many cases there may be poor overlap between the IPD and AgD studies. A substantial proportion of TAs reported small absolute ESS, and the resulting comparisons are, therefore, dependent on a small number of individuals in the IPD study and may be unstable.
Anchored and Unanchored Comparisons
Only two of eighteen appraisals (11.1 percent) formed anchored comparisons (TA383, TA449). The remaining sixteen appraisals (88.9 percent) instead formed unanchored comparisons without a common comparator, relying on strong assumptions that are very difficult to justify and are thus subject to unknown amounts of residual bias. No appraisals attempted to quantify residual bias, although this is challenging to achieve (6). Appraisal committees and review groups treated estimates from unanchored comparisons with strong caution.
Covariates Adjusted for
For appraisals reporting unanchored comparisons, the median number of covariates adjusted for was six, and ranged from one to thirteen covariates. Only one of the two appraisals reporting anchored comparisons presented any information on the choice of covariates; in this appraisal (TA383) ten covariates were adjusted for.
Common covariates adjusted for in oncology appraisals were age, Eastern Cooperative Oncology Group (ECOG) performance status, gender, and the number and/or type of previous therapies. Many appraisals also adjusted for other clinical factors such as biomarker levels or disease subtypes.
Both hepatitis C appraisals (TA364, TA331) adjusted for age, body mass index, gender, fibrosis staging, and viral load. One appraisal (TA364) further adjusted for race, genotype, and several biomarker levels in two MAIC analyses for different genotypes and comparator treatments, but in a third MAIC analysis only had sufficient sample size to adjust for viral load.
The single rheumatology appraisal (TA383) adjusted for ten covariates including age, gender, race, concomitant treatments, two biomarkers, and three functional/activity scores.
The most common justification for covariate selection amongst appraisals reporting unanchored comparisons was simply to adjust for all baseline characteristics reported in both studies. This was also true for appraisal TA383, which used an anchored comparison, despite the fact that adjustment is only required for effect-modifying covariates in anchored comparisons. (The other appraisal with an anchored comparison, TA449, did not report any information on variable selection.) Unnecessary adjustment will not introduce bias but may increase uncertainty, particularly with MAIC (6) (although we note that TA383 took place before the advice in Phillippo et al. (6) was published).

Two appraisals (TA429, TA457) justified the selection of covariates using expert clinical opinion. One appraisal using MAIC (TA510) asked experts to rank covariates by importance, then added covariates into the model one-by-one in decreasing order of importance; the final model choice was determined by consideration of effective sample size. Unanchored MAICs in particular have to make trade-offs between effective sample size and the number of adjustment variables, because the number of potential prognostic factors is likely to be large. However, unless all prognostic factors and effect modifiers are included in the adjustment, the estimates will remain biased (6). Moreover, the covariates for which the effective sample size reduction is greatest are those which are most imbalanced between populations, and are, therefore, more important to adjust for amongst the covariates with similar prognostic or effect modifying strength.

Two appraisals using STC used statistical techniques to choose covariates. One (TA333) selected covariates that were "significant" in the regression model, which is again likely to incur residual bias, particularly in small samples (10). Another (TA492) selected covariates to maximise cross-validated predictive performance, which is more appropriate given that STC relies on accurate predictions into the aggregate population, but is still subject to the limitations of in-sample validation (6).
Larger Networks
As originally proposed, MAIC and STC cannot be extended to larger network structures with multiple comparators of interest and/or multiple aggregate studies. However, these scenarios frequently arise in practice: a total of ten of eighteen TAs (55.6 percent) involved larger networks of treatments and studies.
In five of these (71.4 percent; TA331, TA383, TA429, TA500, TA510), multiple population adjusted indirect comparisons were performed and then simply left as stand-alone estimates. Each of these estimates will be valid for different target populations, and so cannot be interpreted together coherently unless additional assumptions are met, namely that all the target populations are in fact identical (in terms of effect modifiers for anchored comparisons, and also in terms of prognostic variables for unanchored comparisons).
One appraisal (TA492) used STC (and MAIC as a sensitivity analysis) to predict active treatment arms for each single-arm study in an unconnected network, and then analysed this newly connected network using network meta-analysis (NMA). This results in a coherent set of relative effect estimates (11). However, aside from the very strong assumptions required for the unanchored comparisons, this analysis must also assume that there are no imbalances in effect modifiers between the single-arm studies included in the NMA. Another serious concern is the repeated use of the predicted active treatment arms, which are all based on the same data set and so are not independent.
Two appraisals (TA311, TA380) had wider networks of treatments and studies that included the two treatments of primary interest but were not fully connected. These networks were analysed using NMA (without any population adjustment), connecting the networks via an equivalency assumption for two treatments (TA311) or a matched pairs analysis (TA380). Separate unanchored MAICs were then used to create population-adjusted comparisons as sensitivity analyses.
One appraisal (TA427) had additional single-arm IPD sources which were used to provide additional stand-alone comparisons (in this case using Cox regression for survival outcomes).
Lastly, the method of analysis was unclear for one appraisal (TA364) which had multiple comparators of interest, some with several aggregate studies available. However, given that unanchored MAIC was used, this analysis is susceptible to the same sets of pitfalls described above depending on whether the estimates were left as stand-alone estimates or synthesised as a network.
Discussion
In this review, we have focussed on the use of population adjustment methods in NICE Technology Appraisals. Different practices may be found in submissions to other reimbursement agencies, who may also receive and interpret such analyses differently, and outside of the technology appraisal context. A general review of applications in the literature has previously been published by Phillippo et al. (6) and found similar issues to those
discussed here, although with even greater variation in analysis practices. This review of TAs also spans a limited time period (8 years) since these methods were first published. Practice is likely to continue to evolve, for example as methodological guidance is published (6). A further limitation of this review is that the data extraction was carried out by a single reviewer only.

Population adjustment methods (and in particular unanchored MAIC) have been used in NICE TAs either as the main source of comparative clinical effectiveness, or as supportive evidence alongside the company's base-case analysis. Although these methods may account for some differences between study populations that conventional indirect comparison methods cannot, appraisal committees were often concerned by the quality of the estimates they produced. This was not necessarily due to inherent methodological limitations; rather, the methods were used in situations where the data underpinning the analyses were often weak (for example, immature follow-up data or small single-arm studies).
Furthermore, population-adjusted comparisons were often associated with uncertainty regarding the covariates that were adjusted for, specifically, which ones were selected for adjustment, how they were selected, and whether and to what extent any unobserved characteristics biased the analysis. A key challenge to appraisal committees, especially with unanchored comparisons, was where to draw the line between the number of variables to adjust for and the precision of the resulting estimates. This was particularly apparent for MAIC, where the effective sample size decreases with each additional covariate adjusted for.
In NICE TAs, decisions are not based solely on clinical effectiveness; cost considerations are also taken into account in a cost-effectiveness analysis, summarised by an incremental cost-effectiveness ratio (ICER). The impact of evidence from population-adjusted indirect comparisons is, therefore, understood within this context. In some instances, appraisal committees could not make a positive recommendation for the technology because the uncertainty in the population-adjusted estimates was not offset by a sufficiently low ICER to manage the decision risk. Where the appraisal committee made a positive recommendation, the committee typically compared the most plausible ICER against the lower end of the acceptable range (requiring the technology to be more cost effective) to minimise the risks associated with uncertainty. Where appraisal committees judged a technology to have plausible potential to be cost-effective, they often recommended the use of the technology with interim funding as part of a data collection arrangement.
In general, appraisal committees tended to use population adjustment methods for decision-making when they were presented alongside an alternative, confirmatory analysis, and when the uncertainty in the method was acknowledged, described, and explored as far as possible (for example using sensitivity analyses). Appraisal committees have previously suggested that companies should also consider validating the results of their analyses (e.g., TA510), for example by estimating the effect of the technology using population adjustment methods in an external cohort (such as registry data) and comparing that estimate with the observed effect of the technology in that cohort.
Population adjustment methods are becoming ever more prevalent in NICE TAs. The majority were unanchored with no common comparator, and hence rely on very strong assumptions as outlined in Table 1. The proliferation of unanchored analyses is likely to escalate, in large part due to the rise of single-arm studies for accelerated or conditional approval with regulators such as the US Food and Drug Administration or the European Medicines Agency (12). However, the evidential requirements for demonstrating clinical efficacy (to obtain licensing) can be less stringent than those for demonstrating cost effectiveness (to obtain reimbursement). NICE appraisal committees and evidence review groups have been justifiably wary of the use of unanchored population adjustment methods to bridge this evidence gap, with many commenting that the results should be interpreted with caution and may contain an unknown amount of bias. As such, committees typically looked for greater cost effectiveness (lower ICER) to minimise the decision risk resulting from clinical effectiveness evidence perceived to be uncertain and poor quality. Increased dialogue between regulators and reimbursement agencies may help bridge this gap in evidence requirements.
All current population adjustment methods assume that there are no unmeasured effect modifiers when making anchored comparisons. For unanchored comparisons, it is further assumed that there are no unmeasured prognostic factors. This latter assumption is particularly strong and difficult to justify. Some suggestions for quantifying residual bias due to unmeasured confounding are made by Phillippo et al. (6), and this is an area for further work.
Several technology appraisals had multiple comparators and/or AgD study populations for which comparisons were required. Current MAIC and STC methodology cannot handle larger network structures: multiple analyses were performed in each case, and then either left as stand-alone comparisons or themselves synthesised using network meta-analysis, requiring further assumptions in the process. Furthermore, current MAIC and STC methods produce estimates which are valid only for the aggregate study population (typically that of a competitor) without additional assumptions, which may not match the target population for the decision (6). This fact has been largely overlooked in appraisals to date, although one appraisal (TA451) did note that the MAIC analysis that was performed took the results of an IPD trial deemed to be relevant to the decision population and adjusted them into a nonrepresentative aggregate trial population.
Clearly, if effect modification is present then it is not enough to simply produce "unbiased" estimates: the estimates produced must be specific to the decision population, otherwise they are of little use to decision-makers. This motivates the need to develop new methods which can extend naturally to larger networks of treatments, and can produce estimates for a given target decision population. Furthermore, if all trials are a subset of the decision target population with respect to one or more effect modifiers, then any adjustment must rely on extrapolation; if these effect modifiers are discrete, adjustment may be impossible.
The large majority of technology appraisals used MAIC to obtain population-adjusted indirect comparisons. Effective sample sizes were typically small and often substantially reduced compared with the original sample sizes, indicating potential lack of overlap between the IPD and AgD populations. Lack of overlap is of particular concern with re-weighting methods such as MAIC, because they cannot extrapolate to account for covariate values beyond those observed in the IPD and, thus, may produce estimates that remain biased even when all necessary covariates are included in the model (6). This motivates the need for simulation studies to explore the robustness of MAIC (and other population adjustment methods) in scenarios where there is a lack of overlap between populations.
Three appraisals were excluded from our review, as IPD were available from all included studies (13-15). These appraisals were all unanchored comparisons of survival outcomes in oncology, and used a selection of propensity score, covariate matching, and regression methods. Having IPD available from all studies is the gold-standard and is preferable if at all possible. This is because IPD allows for analyses that have more statistical power and may rely on less stringent assumptions, and allows assumptions to be tested. Separate methodological guidance is available for analyses with full IPD (16).
For population-adjusted analyses to have the desired impact on decision making in technology appraisals, several key improvements are needed to current practice in line with recent guidance (6). First, a target population relevant to decision makers should be defined, and estimates must be produced for this population to
be relevant. Current population adjustment methods can only produce estimates valid for the population represented by the aggregate study unless further assumptions are made, which may not represent the decision at hand; this has been largely overlooked in appraisals to date (although note that several of the TAs we identified pre-date published guidance). For anchored comparisons there should be clear prior justification for effect modification, based on empirical evidence from previous studies and/or clinical expertise. Appraisals reporting anchored comparisons to date did not provide any such justification. Unanchored comparisons require reliable predictions of absolute effects by means of adjustment for both prognostic and effect-modifying covariates, and are highly susceptible to unobserved confounding due to a lack of randomisation. Simply adjusting for all available covariates, as is currently common practice, is not sufficient.
For unanchored comparisons to be impactful, covariates should be selected with predictive performance in mind and estimates of the potential range of residual bias are required; otherwise, the amount of bias in the estimates is unknown and may even be larger than for an unadjusted comparison. This is not easy to achieve (some suggestions are made in (6)), but without such reassurance appraisal committees are likely to remain justifiably wary of unanchored analyses. Many of the above issues can be mitigated, at least in part, by the availability of IPD from all studies in an analysis, and thus the increased sharing of IPD is greatly encouraged.
Inhibition of multipolar plasmon excitation in periodic chains of gold nanoblocks
Periodically corrugated chains of gold nanoblocks, fabricated with high precision by electron-beam lithography and lift-off techniques, were found to exhibit optical signatures of particle plasmon states in which relative contribution of longitudinal multipolar plasmons is significantly lower than that in equivalent rectangular gold nanorods. Plasmonic response of periodic chains is dominated by dipolar plasmon modes, which in the absence of multipolar excitations are seen as background-free and spectrally well-resolved extinction peaks at infrared (IR) wavelengths. This observation may help improve spectral parameters of IR plasmonic sub-wavelength antennae. Comparative studies of plasmon damping and dephasing in corrugated chains of nanoblocks and smooth rectangular nanorods are also presented. © 2007 Optical Society of America

OCIS codes: (240.6680) Surface plasmons; (260.3910) Metals, optics of; (160.4760) Optical properties

References and links
1. E. Hutter and J. Fendler, "Exploitation of localized surface plasmon resonance," Adv. Mater. 16, 1685–1706 (2004).
2. M. L. Brongersma, J. W. Hartman, and H. A. Atwater, "Electromagnetic energy transfer and switching in nanoparticle chain arrays below the diffraction limit," Phys. Rev. B 62, R16356–R16359 (2000).
3. P. Mühlschlegel, H.-J. Eisler, O. J. F. Martin, B. Hecht, and D. W. Pohl, "Resonant optical antennas," Science 308, 1607–9 (2005). URL http://dx.doi.org/10.1126/science.1111886.
4. M. I. Stockman, "Nanofocusing of optical energy in tapered plasmonic waveguides," Phys. Rev. Lett. 93, 137404 (2004).
5. K. Kneipp, Y. Wang, H. Kneipp, L. T. Perelman, I. Itzkan, R. R. Dasari, and M. S. Feld, "Single Molecule Detection Using Surface-Enhanced Raman Scattering (SERS)," Phys. Rev. Lett. 78, 1667–1670 (1997).
6. C. Anceau, S. Brasselet, J. Zyss, and P. Gadenne, "Local second-harmonic generation enhancement on gold nanostructures probed by two-photon microscopy," Opt. Lett. 28, 713–5 (2003).
7. T. Kalkbrenner, M. Ramstein, J. Mlynek, and V. Sandoghdar, "A single gold particle as a probe for apertureless scanning near-field optical microscopy," J. Microsc. 202, 72–6 (2001).
8. A. Bouhelier, J. Renger, M. R. Beversluis, and L. Novotny, "Plasmon-coupled tip-enhanced near-field optical microscopy," J. Microsc. 210, 220–4 (2003).
9. K. Ueno, S. Juodkazis, V. Mizeikis, K. Sasaki, and H. Misawa, "Clusters of closely-spaced gold nanoparticles as a source of two-photon photoluminescence at visible wavelengths," Accepted to Adv. Mater. (2007).
10. N. Nath and A. Chilkoti, "Label-free biosensing by surface plasmon resonance of nanoparticles on glass: optimization of nanoparticle size," Anal. Chem. 76, 5370–8 (2004). URL http://dx.doi.org/10.1021/ac049741z.
11. C. Sönnichsen and A. P. Alivisatos, "Gold nanorods as novel nonbleaching plasmon-based orientation sensors for polarized single-particle microscopy," Nano Lett. 5, 301–4 (2005). URL http://dx.doi.org/10.1021/nl048089k.
12. K. Ueno, S. Juodkazis, M. Mino, V. Mizeikis, and H. Misawa, "Spectral sensitivity of uniform arrays of gold nanorods to dielectric environment," J. Phys. Chem. C 111, 4180–4184 (2007).
13. T. S. Hartwick, D. T. Hodges, D. H. Barker, and F. B. Foote, "Far infrared imagery," Appl. Opt. 15, 1919–1922 (1976).
14. B. B. Hu and M. C. Nuss, "Imaging with terahertz waves," Opt. Lett. 20, 1716–1718 (1995).
15. J. R. Krenn, G. Schider, W. Rechberger, B. Lamprecht, A. Leitner, F. R. Aussenegg, and J. C. Weeber, "Design of multipolar plasmon excitations in silver nanoparticles," Appl. Phys. Lett. 77, 3379–3381 (2000).
16. G. Schider, J. R. Krenn, A. Hohenau, H. Ditlbacher, A. Leitner, F. R. Aussenegg, W. L. Schaich, I. Puscasu, B. Monacelli, and G. Boreman, "Plasmon dispersion relation of Au and Ag nanowires," Phys. Rev. B 68, 155427 (2003).
17. K. Ueno, V. Mizeikis, S. Juodkazis, K. Sasaki, and H. Misawa, "Optical properties of nano-engineered gold blocks," Opt. Lett. 30, 2158–2160 (2005).
18. K. Ueno, S. Juodkazis, V. Mizeikis, K. Sasaki, and H. Misawa, "Spectrally-resolved atomic-scale length variations of gold nanorods," J. Am. Chem. Soc. 128, 14226–14227 (2006). URL http://dx.doi.org/10.1021/ja0645786.
19. C. Sönnichsen, T. Franzl, T. Wilk, G. von Plessen, J. Feldmann, O. Wilson, and P. Mulvaney, "Drastic reduction of plasmon damping in gold nanorods," Phys. Rev. Lett. 88, 77402 (2002).
20. A. Wokaun, J. P. Gordon, and P. F. Liao, "Radiation Damping in Surface-Enhanced Raman Scattering," Phys. Rev. Lett. 48, 957–960 (1982).
21. K. Imura, T. Nagahara, and H. Okamoto, "Near-field optical imaging of plasmon modes in gold nanorods," J. Chem. Phys. 122, 154701 (2005).
22. H. J. Huang, C.-P. Yu, H. C. Chang, K. P. Chiu, H. M. Chen, R. S. Liu, and D. P. Tsai, "Plasmonic optical properties of a single gold nano-rod," Opt. Express 15, 7132–7139 (2007).
23. P. Johnson and R. Christy, "Optical constants of noble metals," Phys. Rev. B 6, 4730–4739 (1972).
24. S. Enoch, R. Quidant, and G. Badenes, "Optical sensing based on plasmon coupling in nanoparticle arrays," Opt. Express 12, 3422–3427 (2004).
Introduction
Optical properties of noble metal nanoparticles are dominated by localized surface plasmons (LSP) [1], recognizable from resonant extinction peaks in the elastic light scattering spectra and having spatial field modes strongly localized at the metal's surface. The resonant scattering and field enhancement are attractive features for nanophotonics [2,3,4], single-molecule detection [5,6], high-resolution microscopy [7,8], development of metallic emitters of visible radiation [9], bio-sensing [10,11,12], and terahertz (THz) imaging [13,14]. The smallest nanoparticles, whose dimensions do not exceed a few tens of nanometers, typically exhibit LSP resonances at visible wavelengths and have predominantly dipolar spatial modes. In larger nanoparticles the fundamental (lowest-frequency) LSP resonance also has a dipolar spatial mode, whose central wavelength roughly scales with the nanoparticle size; in addition, LSP resonances with multipolar spatial modes may evolve over a broad spectral range, resulting in new resonant scattering peaks riding on a spectrally broad background [15]. Practical spectral tuning of LSP resonances is achieved by tailoring the size and shape of nanoparticles, and is especially versatile with nanorods of rectangular or circular cross-section. Elongation of nanorods leads to so-called LSP shape resonances and allows selective excitation of longitudinal plasmon (LP) modes by an optical field polarized along the axis of elongation. Plasmonic applications aimed at infrared (IR) wavelengths generally require an increase in the nanorod length. However, this leads to an increased contribution of multipolar LP modes [15,16], which may not be desired in plasmonic applications that require spectral selectivity.
In this work we propose and implement a simple design for elongated nanoparticles that substantially inhibits multipolar plasmonic modes while leaving the dipolar modes essentially unaffected. The basic idea of this proposal can be understood by recalling that multipolar LP modes of smooth nanorods have field patterns reminiscent of a standing wave oriented along the direction of nanorod elongation. In smooth nanorods the standing-wave modal patterns (each mode contributing to extinction at a certain spectral position) must be commensurate with the nanorod length. Among the allowed LP modes, the lowest-order, lowest-energy longitudinal mode will be predominantly dipolar, while higher-order modes will be multipolar. On the other hand, if a periodic corrugation (such as a variation of the transverse size) is intentionally imposed on the nanorod, excitation of longitudinal modes that are incommensurate with both the nanoparticle length and the period of corrugation will be inhibited. This prediction was verified in the present work by fabricating chains of diagonally oriented, overlapping gold nanoblocks and comparing their plasmonic scattering spectra with those of smooth rectangular nanorods. As expected, the chains of nanoblocks exhibit much weaker spectral signatures of multipolar LP modes due to their periodically corrugated shape. In addition, a comparison between the parameters of the fundamental dipolar LP modes in both kinds of nanoparticles is reported. We demonstrate that chains of nanoblocks, despite their somewhat larger volume, exhibit radiative plasmon damping similar to that of smooth nanorods. At the same time, periodic nanoparticle chains, like nanorods, allow tuning of the plasmonic response by scaling the chain length, which in practice gives access to the IR spectral regions.
Samples and their fabrication
The layout of the investigated gold nanoparticles is illustrated schematically in Fig. 1(a). The structure labeled 1 is composed of rectangular gold nanoblocks with dimensions of (100 × 100 × 40) nm³, aligned diagonally into chains of N nanoblocks with a small overlap w between their nearest corners. The chain has an elongated form factor and is periodic along the y-axis with period l = (141 − w/2) nm. Figure 1(b) shows a scanning electron microscopy (SEM) image of a fabricated nanoparticle comprising three nanoblocks. The fabrication process and structural characteristics of the nanoparticles are discussed below. For comparison with rectangular nanorods, whose optical properties are relatively well studied, the nanorods labeled 2 in Fig. 1(a) were also fabricated. The nanorods have a square cross-section with a side length of 40 nm in the x-z plane and a total length along the y-axis of l = 141N nm; hence, nanoblock chains and nanorods with the same N have nearly identical lengths. In the following we will refer to the two categories of nanoparticles simply as "nanoblocks" (or "chains of nanoblocks") and "nanorods", respectively. The fabrication was aimed at obtaining large ensembles of periodically arranged nanoparticles with identical design parameters. A similar fabrication procedure was used in our earlier works [9,12,17,18]. First, planar patterns of the nanoparticle arrays were defined using an EBL system (ELS-7700H, Elionix Co., Ltd., Japan) on a thin film of co-polymer resist (ZEP-520a, Zeon Co., Ltd., Tokyo, Japan), spin-coated on (10 × 10) mm² sapphire substrates (Shinkoshya Co., Japan). After the exposure the substrates were developed in a standard developer (Zeon Co., Ltd., Japan). Subsequently, 2 nm thick Cr and 40 nm thick Au films were sputtered on the substrates, and lift-off was performed by immersion in sonicated acetone and resist remover (Zeon Co., Ltd.) solutions for 5 min. As a result, substrates with identically oriented nanoparticles having lengths N = 1, 2, ..., 25 were prepared. In order to reveal the possible influence of narrow necks on the LSP properties of the nanoblock structures, three series of nanoblock samples with different neck widths of w = 4.4, 8.8, and 13.2 nm were fabricated.
Structural quality of the samples
Prior to discussing optical properties of the fabricated nanoparticles, it is relevant to briefly examine their structural parameters and correspondence with the idealized models shown in Fig. 1.
Structural inspection of the samples was performed by SEM using a JSM-6700FT microscope (JEOL). As emphasized in Fig. 1(b), the shapes of the actual nanoparticles deviate from the initial design. The deviations seen in the SEM images involve rough sides and rounded corners of the nanoblocks. Previously, we conducted a careful analysis of plasmonic extinction spectra in gold nanorods fabricated using the same method as in the present study. It was determined that, despite some irregularities seen in SEM images, the difference between the designed and actual lengths of the nanorods was in the range from 0.625 to 1.93 nm, which corresponds to a thickness of about 4 to 12 atomic layers of gold [18]. This result allows us to expect fabrication with similar accuracy in the present study as well. The height of the nanoparticles and the quality of their top surface were inspected using atomic force microscopy (AFM); an average height of 40 nm and a roughness of about 2 nm were found. Thus, despite some imperfections seen in SEM images, the overall quality and uniformity of the samples can be regarded as comparable to or higher than those reported before.
Optical properties
Optical extinction spectra of the samples were measured in transmission geometry using a Fourier-transform infrared (FTIR) spectrometer equipped with a microscope attachment (FT-IR, IRT-3000, Jasco) in the wavelength range of 660-4000 nm. In the measurements, areas with a typical size of (20 × 20) μm², comprising about 1000 nanoparticles depending on their size, were probed with the help of the infrared microscope. The microscope uses a pair of confocal Cassegrainian reflection objectives with an angular acceptance range of 16-32° with respect to the optical axis. During the measurements the substrates were oriented perpendicular to the optical axis of the objectives.
Elongated nanoparticles exhibit so-called shape resonances in the LSP scattering spectra. These resonances can be recognized from distinct extinction peaks for different orientations of linearly polarized incident radiation. For polarization parallel and perpendicular to the axis of elongation (coincident with the y-axis in Fig. 1(a)), longitudinal plasmon (LP) and transverse plasmon (TP) modes are excited, respectively. The spectral positions of the TP and LP resonant modes generally depend on the size and shape of the nanoparticles. In this study we mainly focus on the LP modes and their transformation with nanoparticle length.
The FTIR spectrometer and microscope setup used for the measurements is equipped with an unpolarized light source, which prevents selective excitation of purely LP or TP modes, except at visible and NIR wavelengths, where it was possible to directly verify the longitudinal or transverse origin of the spectral features by inserting a polarizer into the probing beam of the microscope during the measurements. Nevertheless, in our strongly elongated nanoparticles it was possible to identify the LP modes from their spectral position (at long wavelengths) even using unpolarized excitation. A further indirect proof of the longitudinal nature of the modes was obtained from theoretical calculations. Figure 2(a) shows the measured LP extinction spectra of nanoblock structures comprised of different numbers of segments, N. Each spectrum is dominated by two major extinction resonances. One of them occurs at a constant photon energy E = 1.75 eV (wavelength of 0.71 μm) in all samples regardless of their length. We have verified the polarization invariance of this peak's position (i.e., nearly identical spectra were obtained for linear polarizations corresponding to LP and TP modes). These findings indicate the fundamental (lowest-frequency) mode of a single nanoblock as the origin of this peak.
Another major resonant extinction peak occurs at a lower photon energy (for example, E = 0.76 eV, wavelength of 1.6 μm, for nanoparticles with N = 2) and exhibits a red-shift with N, completely tuning out of the observation range for N > 6. The approximate spectral position and the red-shift of this peak indicate the fundamental LP mode of the entire multi-block nanoparticle as its origin. The two dominant peaks seen in the LP extinction spectra can thus be tentatively ascribed to the two characteristic shape components of chains of nanoblocks: that of a single nanoblock and that of an elongated composite nanoparticle.
We emphasize that the spectral interval between the two above-mentioned peaks contains no other distinct features, such as minor resonances or a broadband background, that might signify excitation of multipolar LSP resonances.
Smooth rectangular nanorods
Figure 2(b) shows the measured LP extinction spectra of rectangular nanorods. Resonant extinction peaks centered at a constant photon energy of E = 2.0 eV (wavelength of 0.62 μm) can be seen for all nanorods regardless of their length. At lower photon energies, extinction peaks whose spectral positions are red-shifted with N are clearly visible. Although the presence of two major extinction peaks and their spectral behavior may look similar to those seen in the nanoblock structures, there are some important differences. Two dominant peaks are seen even for the shortest nanorods (N = 1), with the low-energy peak centered at E = 1.3 eV (wavelength of 0.95 μm). This occurs because nanorods with N = 1 are elongated nanoparticles with an aspect ratio of 3.5, whereas the equivalent nanoblocks are symmetrical, with an aspect ratio of 1. One can also notice that the low-energy extinction peaks of the nanorods in Fig. 2(b) appear to be asymmetrically broadened and ride on a wide background. One more difference from the spectra of chains of nanoblocks is the presence of weaker minor extinction peaks in the spectral interval between the two major peaks for longer nanorods (N ≥ 4). Similar peaks were observed previously in lithographically designed silver nanorods and assigned to multipolar LSPs [15,16]; the latter assignment is most likely valid in our case as well. In comparison with the nanoblock chains (Fig. 2(a)), smooth nanorods exhibit significant broadband background scattering, which is likely a consequence of a wide distribution and merging of multipolar LSP resonances. According to an earlier report [15], silver nanorods also exhibit a significant broadband background at equivalent spectral positions.
The above data make it clear that chains of nanoblocks have better-resolved, background-free resonant LSP extinction peaks compared with smooth nanorods.
Spectral characteristics of LSP resonances
The observations described in Sect. 3.2 illustrate that both kinds of investigated nanoparticles have spectrally constant as well as length-tunable LP modes. Properties of the tunable LP modes, which are seen as the major resonant peaks in the extinction spectra, are summarized in Fig. 3. In estimating the parameters of the LP peaks in Fig. 2, the peaks were fitted by a Lorentz function, which is commonly used in spectroscopy for representing homogeneously broadened resonances. Figure 3(a) shows the central energy, E_c, of the LP peaks versus the number of nanoparticle segments (or the total length). As can be seen, the lowest-energy LSP modes behave almost identically in chains of nanoblocks and smooth nanorods, decreasing monotonically with the total length. Chains of nanoblocks have only one length-tunable LSP mode, whereas smooth nanorods exhibit two such modes, denoted by the index j = 1, 2 in Fig. 3(a). In analogy with the existing interpretation [15,16], in our case j = 1 represents the dipolar mode, whereas j = 2 represents the dipole-allowed multipolar LSP mode of the lowest order (most likely, the second-lowest multipolar mode). This assignment is also supported by the results of theoretical modeling in Sect. 3.4. It is relevant to point out that the j = 2 mode might not be the lowest-energy multipolar mode. Re-examining the extinction spectra shown in Fig. 4(b), one may conclude that the noticeable asymmetry of the fundamental extinction peaks of the nanorods may be due to the presence of weak high-energy spectral shoulders, possibly arising from multipolar LSP modes. However, the weakness of these features makes their quantitative identification difficult (likewise, even for the modal peaks labeled j = 2, only the spectral position, but not the width or resonance quality, could be determined reliably).
Below we examine the tunability range and the damping mechanisms of the dipolar LSP modes. The full tunability range achieved with the fabricated gold nanoparticles extends beyond our experimentally available observation range in the longer structures (N > 7). Nevertheless, it can be extrapolated that E_c will reach the short-wavelength edge of the THz range (≈ 0.012 eV) in the longest of the fabricated nanoparticles. This parameter may be important for applications like THz imaging. Figure 3(b) shows the spectral width, ΔE (FWHM), of the peaks extracted from their Lorentzian best fits versus the number of nanoparticle segments. As can be seen, equivalent nanoblock and nanorod structures exhibit lowest-energy LSP modes with nearly identical spectral widths. The spectral width characterizes the damping of the resonance and its quality factor, Q = E_c/ΔE. These parameters are important in applications relying on coherent and incoherent interactions occurring in the regions occupied by the LSP near-field. Quality factors for the investigated samples are plotted in Fig. 3(c) and are very similar for nanoblock chains and nanorods, with Q ≈ 4.5, except for N = 1, where the nanoblocks have a somewhat higher quality factor than the nanorods. The energy loss reflected by the quality factor occurs mostly due to the finite plasmon lifetime T1, limited by radiative and non-radiative decay of the plasmon population. For homogeneously broadened resonances, the plasmon dephasing time can be determined as T2 = 2ħ/ΔE. This dephasing time comprises the (radiative and non-radiative) plasmon lifetime T1 as well as "pure" elastic scattering described by a separate time constant. Slower dephasing provides better spectral selectivity and promotes coherent interactions between the plasmonic near-field and surrounding species (e.g., molecules). Figure 3(d) shows the dephasing time deduced by assuming predominantly homogeneous ensemble broadening. This assumption is supported by the high accuracy of the fabrication process and the large homogeneous linewidth of the resonances concerned. As can be seen, dephasing is fastest in the shorter nanoparticles, where the value of T2 ≈ 5-6 fs is close to that in bulk gold [19]. T2 increases with length, reaching about 20 fs for the longest nanoparticles studied. This value is close to the record-long dephasing times found earlier for LP modes at visible frequencies in small gold nanorods [19]. However, our findings must be treated with caution, since in longer nanoparticles the optical cycle corresponding to the LP resonance is also longer. Taking this fact into account, the LP coherence in our samples is relatively short-lived compared with that in [19]. This is also indicated by the quality factors Q ≈ 4-8, whereas the above study reported Q ≈ 20. From the dependence of T2 on the nanoparticle length in Fig. 3(d), one can infer that the dephasing time comprises an approximately constant number of optical cycles, suggesting radiative plasmon damping as the dominant dephasing mechanism. Radiative losses are known to increase with the volume of a nanoparticle [20], which in our studies is relatively large, even for nanoparticles with N = 1. It is interesting to note, however, that although the nanoblock chains have 3.5 times the volume of the nanorods, they exhibit dephasing times similar to those of the nanorods. The most likely reason is that, despite the larger geometric volume, the LSP modes of chains of nanoblocks are well localized, and their modal volume is comparable to that of the equivalent modes in nanorods. This phenomenon will be illustrated by the theoretical calculations presented in the next Section.
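As a concrete illustration of this peak analysis, the short Python sketch below fits a Lorentzian to a synthetic extinction peak and derives Q = E_c/ΔE and T2 = 2ħ/ΔE from the fit. The data arrays and starting values are placeholders, not measured data from this work.

```python
# Minimal sketch: Lorentzian fit of an extinction peak, then Q and T2.
# `energy_eV` and `extinction` stand in for measured data (assumption).
import numpy as np
from scipy.optimize import curve_fit

HBAR_EV_FS = 0.6582  # hbar in eV*fs

def lorentzian(E, A, Ec, dE, y0):
    """Lorentzian with amplitude A, center Ec, FWHM dE, offset y0."""
    return y0 + A * (dE / 2) ** 2 / ((E - Ec) ** 2 + (dE / 2) ** 2)

def peak_parameters(energy_eV, extinction, p0):
    popt, _ = curve_fit(lorentzian, energy_eV, extinction, p0=p0)
    A, Ec, dE, y0 = popt
    Q = Ec / dE                   # quality factor
    T2_fs = 2 * HBAR_EV_FS / dE   # dephasing time for homogeneous broadening
    return Ec, dE, Q, T2_fs

# Synthetic peak mimicking a Q ~ 4.5 resonance near 0.76 eV:
E = np.linspace(0.4, 1.2, 400)
y = lorentzian(E, 1.0, 0.76, 0.17, 0.02) + 0.005 * np.random.randn(E.size)
print(peak_parameters(E, y, p0=(1.0, 0.8, 0.2, 0.0)))
```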
A similar analysis was also conducted for nanoblock structures with the larger neck widths of w = 8.8 and 13.2 nm. We did not find significant differences in the parameters ΔE, Q, and T2, and found only a slight reduction in the resonance energy E_c, arising from the reduction of the total length of the chain. The insensitivity of ΔE, Q, and T2 to neck width variations illustrates that these regions, even when their average width is only a few nanometers, do not contribute significantly to the lifetime and scattering of longitudinal plasmons. Hence, radiative loss is the predominant mechanism of plasmon damping.
Theoretical modeling by Finite-Difference Time-Domain technique
Resonant LP scattering peaks reflect localization of the optical near-field at the nanoparticles' surface. The field distribution and maximum enhancement factor are important for plasmonic applications. Spatial patterns of the electric field intensity may also help to identify the dipolar or multipolar character of the corresponding plasmon modes. However, practical monitoring of the near-field distribution is a difficult task [21,22]. In these circumstances the most accessible method for gaining insight into the near-field distribution is theoretical modeling based on numerical solution of Maxwell's equations. This approach has proved to be accurate for metallic nanoparticles with dimensions larger than about 10-15 nm [1]. In this work we use finite-difference time-domain (FDTD) calculations for the modeling, performed using the FDTD Solutions (Lumerical, Inc.) software. The idealized structures used for the modeling are similar to those shown in the schematic picture in Fig. 1. The calculations were performed on a discrete cubic mesh with a spacing of 4 nm. Since the width of the necks between the nanoblocks, w = 4.4 nm, is close to the mesh spacing, in the regions surrounding the necks the regular mesh was overridden by a finer mesh with a spacing of 2 nm. Perfectly matched layer (PML) boundary conditions were imposed at the boundaries of the calculation domain, which was chosen large enough to avoid truncation of the field. Optical properties of gold were described using Lorentz and plasma approximations of the existing experimental data [23]. The substrate was assumed to have a refractive index of n = 1.77, close to that of sapphire. Optical extinction was determined using the total-field scattered-field (TFSF) formulation. To represent the unpolarized excitation used in the experiments, two perfectly overlapping, simultaneous TFSF sources with mutually perpendicular polarizations (along the x- and y-axes) were used. FDTD calculations allow determination of the extinction cross-section and the electric field pattern from the same calculation. Since the simulation time increases rapidly with the size of the calculation domain (or the total number of mesh points), the calculations were carried out for nanoparticles with N ≤ 4 in order to maintain reasonable calculation times on a personal computer with two shared-memory processors.
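For readers without access to the commercial solver, the sketch below sets up a loosely analogous calculation in the open-source Meep FDTD package: the extinction of an N = 3 diagonal nanoblock chain on a sapphire-like substrate is estimated from transmission relative to a bare-substrate reference run. This is a simplified stand-in, not the TFSF-based Lumerical setup used here; the cell size, mesh, run time, and the transmission-based extinction estimate are all assumptions.

```python
# Rough illustrative sketch in Meep (not the authors' Lumerical/TFSF setup):
# extinction estimated as 1 - T/T0 against a bare-substrate reference run.
import math
import meep as mp
from meep.materials import Au  # tabulated dispersive gold shipped with Meep

N = 3                     # nanoblocks in the chain
w = 0.0044                # neck overlap, um (4.4 nm)
period = 0.141 - w / 2    # chain period along y, um (paper: l = (141 - w/2) nm)
d = 1 / math.sqrt(2)
e1, e2 = mp.Vector3(d, d, 0), mp.Vector3(-d, d, 0)  # 45-degree in-plane rotation

cell = mp.Vector3(1.5, 1.5, 2.0)
fcen, df, nfreq = 0.875, 1.25, 200  # in 1/um: covers roughly 0.66-4 um

substrate = mp.Block(size=mp.Vector3(mp.inf, mp.inf, 1.0),
                     center=mp.Vector3(0, 0, -0.5),   # top face at z = 0
                     material=mp.Medium(index=1.77))  # sapphire-like

chain = [mp.Block(size=mp.Vector3(0.100, 0.100, 0.040), e1=e1, e2=e2,
                  center=mp.Vector3(0, (i - (N - 1) / 2) * period, 0.020),
                  material=Au)
         for i in range(N)]

def transmitted_flux(geometry):
    """One FDTD pass; returns (frequencies, flux through a plane below)."""
    sim = mp.Simulation(cell_size=cell,
                        resolution=100,  # ~10 nm mesh; the paper used 4/2 nm
                        boundary_layers=[mp.PML(0.3)],
                        geometry=geometry,
                        sources=[mp.Source(mp.GaussianSource(fcen, fwidth=df),
                                           component=mp.Ey,  # LP polarization
                                           center=mp.Vector3(0, 0, 0.5),
                                           size=mp.Vector3(1.5, 1.5, 0))])
    mon = sim.add_flux(fcen, df, nfreq,
                       mp.FluxRegion(center=mp.Vector3(0, 0, -0.55),
                                     size=mp.Vector3(1.5, 1.5, 0)))
    sim.run(until_after_sources=100)
    return mp.get_flux_freqs(mon), mp.get_fluxes(mon)

freqs, t0 = transmitted_flux([substrate])        # reference: bare substrate
_, t1 = transmitted_flux([substrate] + chain)    # with the nanoblock chain
for f, a, b in zip(freqs, t1, t0):
    print(f"lambda = {1 / f:.3f} um   extinction = {1 - a / b:.3f}")
```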
Figure 4 shows the calculated extinction spectra for the nanoblock and nanorod structures (N = 4) together with the corresponding experimental data sets taken from Fig. 2. The calculated data represent spectra of the extinction cross-section, σ_ext, estimated from the balance of the total electromagnetic power flowing in and out of the TFSF region, which surrounds the nanoparticles. In Fig. 4 the measured and calculated data use different ordinate axes, whose scaling was varied to obtain close qualitative matching between the datasets (i.e., their relative scaling factor was the only adjustable parameter). As can be seen from both panels in the Figure, the matching between the calculated and measured spectra is very satisfactory, especially for the nanoblock structures in Fig. 4(a). Almost all major and minor extinction features, their spectral positions (with the exception of the minor extinction peaks' positions), and their relative amplitudes are reproduced by the calculations.
The minor extinction peaks assigned to the multipolar plasmon modes in the calculated extinction spectra in Figs. 4(a) and (b) qualitatively reproduce the experimental data. Thus, in the calculated spectrum, chains of nanoblocks exhibit only a weak extinction peak at the intermediate spectral position of E = 1.033 eV and a weak, broad background scattering. Correspondingly, in the experimental spectrum a very weak extinction peak is most likely seen centered at the photon energy of E = 0.85 eV, riding on a low-intensity broadband background. In contrast, the calculated extinction for nanorods exhibits a much stronger intermediate peak at E = 1.305 eV and a significant, spectrally broad background. In the experimental spectrum the intermediate peak is centered at the photon energy of E = 1.17 eV, and a considerable broadband background extinction is present.
The existing disagreements between experiments and calculations can be explained by the differences between the idealized model used for the calculations and the conditions of the measurements. First, the average size, shape, and length of the nanoparticles in the fabricated ensembles may differ from those of the single nanoparticle defined in the idealized model. Second, the calculations assumed a single incidence direction parallel to the z-axis and collection of the scattered field in the full 3D angular range of 4π. In reality, however, both the incidence and collection directions were distributed in the conical angular range of 16-32° with respect to the z-axis due to the use of infrared Cassegrainian microscope objectives. Third, our samples may have been unintentionally contaminated by dust, moisture, or other agents present in the ambient atmosphere. It is known that deposition of dielectric layers of nanometric thickness on nanoparticles may significantly modify their plasmonic scattering spectra [24].
In order to further elaborate the distinction between dipolar and multipolar plasmon excitation, in the following we present a brief analysis of the calculated near-field intensity patterns at the important spectral positions, shown in Fig. 5. The field was monitored on an x-y plane located at half-height of the nanoblocks, i.e., 20 nm above the substrate. The intensity is normalized to that of the incident field; consequently, the spatial maps shown in the Figure represent the local field enhancement. For the chain of nanoblocks (Fig. 5(a)), the fundamental LP mode exhibits high field intensity along the boundaries of the nanoparticle, and also a significant field distribution along the extreme transverse (left and right) boundaries of the top and bottom nanoblocks. In the monitored plane this mode has a maximum field intensity enhancement factor of about 200 (even stronger enhancement can be expected in the planes coincident with the top and bottom surfaces of the nanoblocks). Although, strictly speaking, the fundamental LP resonance of the entire nanoparticle has a multipolar spatial mode, the overall longitudinal distribution of the near-field intensity pattern is predominantly dipolar. This trend can be expected to become even stronger in longer chains of nanoblocks due to the stronger overall elongation of the chain. Therefore, we can informally categorize this mode as "predominantly dipolar". At the spectral position E = 1.033 eV of the minor extinction peak, the near-field redistributes closer to the middle section of the chain and becomes quite complex, acquiring clear signatures of a multipolar LSP mode. The maximum field enhancement factor of this mode is about 70. Finally, at the peak previously classified as corresponding to the LP mode of a single nanoblock (E = 1.82 eV), the field patterns around each nanoblock are nearly identical. The overall LSP mode is multipolar, with numerous high-intensity spots where the enhancement factor reaches about 90.

As mentioned above, in order to roughly represent the depolarized excitation conditions, the FDTD simulations employed two excitation sources with linear polarizations parallel and perpendicular to the nanoparticle elongation direction. This circumstance resulted in a slight asymmetry of the calculated field patterns with respect to the long axis of the nanorod or chain of nanoblocks. We have verified in separate calculations that selective excitation of longitudinal modes by a single source polarized parallel to the elongation axis of the nanoparticles removes the asymmetry. For example, in chains of nanoblocks the tilted lines of high-intensity field (Fig. 5(a)) straighten out and break into several high-intensity spots localized at the narrow necks between the nanoblocks.
For the nanorod sample (Fig. 5(b)), the fundamental LP mode at E = 0.496 eV is predominantly dipolar (due to the high aspect ratio of the nanorod) and has an enhancement factor of about 240. This pattern is retained (albeit with lower enhancement factors) with increasing photon energy up to the minor extinction peak at E = 1.305 eV, where it is replaced by a four-peak pattern reminiscent of a standing wave, observed previously [21]. This peak therefore represents the lowest-energy dipole-allowed multipolar mode of the nanorod. The highest-energy extinction peak at E = 2.07 eV also has a standing-wave pattern, but with even more maxima (some asymmetry of the pattern along the x-axis is caused by the excitation source polarized along the same direction). Away from the fundamental LP peak the field enhancement factor decreases steadily.
The above analysis may help one understand the relative weakness of LP scattering in the spectral interval between the extinction peaks of nanoblock chains (Fig. 4). Periodicity of the chain along the y-axis creates favorable conditions for certain LP modes only. In our case, the longitudinal modes of the entire chain (low energy) and of single nanoblocks (high energy) are dominant. At intermediate photon energies, only those modes whose longitudinal field-intensity distribution is commensurate with the periodicity of the chain can provide a limited contribution to the optical scattering due to LSPs. In contrast, smooth nanorods do not exhibit such selectivity and sustain LSP modes whose longitudinal field distribution is commensurate with the total length of the rod. Consequently, stronger resonant and broadband scattering is seen at intermediate energies.
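The commensurability argument can be stated as a toy selection rule: if a rod of length L = Np supports standing-wave LP modes with j half-wavelengths, a corrugation of period p in this naive picture only admits modes whose node spacing L/j is an integer multiple of p, i.e., those with j dividing N. The sketch below, a deliberately crude illustration rather than a result of this work, enumerates the surviving mode orders.

```python
# Toy model (hypothetical selection rule, not a quantitative result): for a
# rod of length L = N*p, a standing-wave LP mode of order j has node spacing
# L/j; assume a corrugation of period p only admits modes with L/j = m*p,
# i.e. j must divide N.

def allowed_modes(N, j_max=8):
    """LP mode orders j <= j_max for a smooth rod vs. a corrugated chain."""
    smooth = list(range(1, j_max + 1))          # all standing waves fit
    chain = [j for j in smooth if N % j == 0]   # commensurate with period
    return smooth, chain

for N in (2, 4, 6):
    smooth, chain = allowed_modes(N)
    print(f"N={N}: smooth rod j={smooth}  corrugated chain j={chain}")
```

In this crude picture the dipolar j = 1 mode always survives, while most higher-order modes are filtered out, qualitatively mirroring the suppression of multipolar extinction features reported above.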
Conclusions
We have proposed and implemented elongated periodic chains of gold nanoblocks, which sustain dipolar LP modes similar to those in smooth nanorods while simultaneously inhibiting the excitation of multipolar LP modes. The dipolar LP modes of chains of nanoblocks are spectrally tunable in the IR spectral range by tailoring the chain length; their damping occurs mainly through radiative losses and is almost identical in magnitude to that found in smooth nanorods of equivalent length. Hence, elongated nanoparticles with periodically corrugated shapes can be regarded as interesting systems for plasmonic applications that require a spectrally selective response at IR or longer wavelengths. One attractive area of such applications might be signal receivers for THz imaging in homeland security.
Fig. 1. (a) Geometric parameters of gold nanoparticles on a dielectric substrate: (1) a chain of connected nanoblocks, (2) a straight nanorod. The side and diagonal lengths are given in nanometers; N is the number of chain/rod segments along the y-axis direction. The decrease in the total length of the nanoblock chain due to the slight overlap, w, between the nanoblocks is ignored. The nanoparticles are attached to a thick dielectric substrate, whose thickness is not drawn to scale in the Figure. (b) Top-view SEM image of a chain of nanoblocks with N = 3. The yellow dashed line shows the outline of the designed nanoblocks; the scale bar corresponds to 100 nm. In optical studies the incident radiation was polarized linearly along the y-axis for predominant excitation of LP modes.
Fig. 2. Measured LP extinction spectra of (a) nanoblock chains and (b) smooth rectangular nanorods comprised of different numbers of segments, N.
Fig. 3. Parameters of the LP peaks versus nanoparticle length, deduced from the spectra in the previous Figure: (a) central wavelength, (b) spectral width, (c) LP quality factor, and (d) dephasing time.
Fig. 4. Calculated spectra of the extinction cross-section for nanoblocks (a) and nanorods (b) with N = 4. For comparison, the corresponding experimental spectra from Fig. 2 are also shown.
Fig. 5. Calculated near-field patterns on the x-y plane at half-height of the nanoparticles, (a) for nanoblock and (b) for nanorod structures with N = 4.
|
v3-fos-license
|
2021-06-03T06:17:22.742Z
|
2021-05-27T00:00:00.000
|
235297856
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/18/11/5775/pdf",
"pdf_hash": "28b37a391177c4cdf3c0f93daca02a95e7f95ebe",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43233",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"sha1": "2195ad21675fb31710c37d4c6c73f6a739b2473b",
"year": 2021
}
|
pes2o/s2orc
|
Responses of Spring Discharge to Different Rainfall Events for Single-Conduit Karst Aquifers in Western Hunan Province, China
It is a challenge to describe the hydrogeological characteristics of karst aquifers due to their complex structure with extremely high heterogeneity. As the response of karst aquifers to rainfall events, spring discharge variations after precipitation can be used to identify the internal structure of karst systems. In this study, the responses of spring discharge to different kinds of precipitation are investigated by continuously monitoring precipitation and karst spring flow at a single-conduit karst aquifer in western Hunan province, China. Recession curves were used to analyze hydrodynamic behaviors and to separate recession stages. The results show that the shape of the recession curve changes under different rainfall conditions. Recession processes can be divided into three recession stages under heavy rain conditions, as water drains mainly from the conduits, fractures, and matrix at each respective stage, but only one recession stage, representing drainage mainly from the matrix, appears in the case of light rain. With changes in the amount and intensity of precipitation, the calculated recession coefficient at each stage changes by an order of magnitude. The influence of precipitation on the recharge coefficient and on the discharge composition at each recession stage is discussed, and a conceptual model of water filling and release in single-conduit karst aquifers is then proposed. The findings provide deeper insight into the hydraulic behavior of karst springs under different types of rainfall events and support water resource management in karst regions.
Introduction
Karst water is a precious freshwater resource that feeds about one quarter of the world's population and will also play a strategic role in economic and social development in the future [1,2]. Fast flow to the groundwater through focused recharge is known to transmit short-lived pollutants into carbonate aquifers, endangering the quality of the groundwater on which one quarter of the world's population depends [3]. It is still a challenge to predict the distribution and quantity of water resources in karst aquifers due to their complex structures, with extremely high heterogeneity and dramatic variability in groundwater dynamics [4,5]. Carbonate formations that have undergone karstification can form a network of gaps of various scales, such as caves, conduits, fractures, and pores, which can be described as dual-porosity or triple-porosity aquifers [6,7]. Conduit systems, with high permeability but limited volume, mainly act as preferential pathways for transferring groundwater, whereas matrix systems (fractures and pores), with relatively lower permeability but more interspace, act as reservoirs for storing groundwater [8,9]. The physical processes of flow in karst aquifers are primarily determined by the characteristics of this complex structure and by the conditions at the beginning of the recession processes. Analyzing spring recession curves together with thermal responses under various rainfall events may provide more information for recognizing hydraulic property variations in the vertical direction and for estimating the effective porosity of karst aquifers [6,24,26].
In this paper, the responses of spring discharge to rainfall events were investigated by continuous monitoring of spring discharge and temperature in a single-conduit karst system. The recharge area and conduit characteristics were verified through tracer tests during several precipitation events. The spring recession processes were analyzed after a series of precipitation events, and the reasons for the changes in water recession patterns under different rainfall conditions are discussed. Finally, conceptual models are proposed to explain the hydraulic properties and responses of the karst flow system to rainfall events. As Daiye cave is the drinking water source of the local downstream residents, the research results can provide guidance for water resource management and water quality protection at Daiye cave and other similar karst spring systems.
Study Area
The study area lies at about 28° N in western Hunan province. The average annual rainfall is 1400 mm, with about 60% of precipitation concentrated in the rainy season from April to August. The geological structure of the study area is uniclinal, mainly consisting of Middle and Upper Cambrian strata that tilt to the northwest with a stratigraphic dip of about 30 degrees. The lithology of the Upper Cambrian is mainly limestone and dolomite with strong karstification, developing various karst depressions and sinkholes at the surface and, correspondingly, karst conduits and subsurface rivers underground. The lithology of the Middle Cambrian is mainly non-carbonate or argillaceous carbonate with weak permeability, which acts as a waterproof floor for the Upper Cambrian karst aquifer systems. Daiye cave and Lanhua cave are two concentrated discharge points of the Upper Cambrian karst aquifers (Figure 1a); a previous study [36] identified the groundwater cycle characteristics of these two caves. Lanhua cave is a complex underground river system with multiple conduits, which can be divided into two parts separated by the skylight LHD (Q1). Downstream of Q1, the conduit scale is quite large, as verified by cave measurements (Figure 1b); however, upstream of Q1, the conduit scale is relatively small and the conduit is filled with water. According to the hydrogeological survey and previous tracer tests, sinking streams 1, 2, and 3 flow into the Lanhua cave system, and sinking stream 4 flows into the Daiye cave system [36]. The tracer tests confirmed that the Daiye cave groundwater system is recharged by a karst platform at an elevation of 600-1000 m above sea level, with a recharge area of about 3.74 km² [36]. A creek, originating from a series of epikarst springs, dives into the karst conduit through a sinkhole at an altitude of about 700 m (Figure 1) and then discharges to the surface at Daiye cave, at an elevation of 660 m, giving an average hydraulic gradient of about 3.39% for the underground river. Therefore, the Daiye cave system includes both a karst conduit with extremely high permeability, which transports quick flow, and matrix (fissures and pores) with relatively lower permeability, which transports slow flow. Especially during intense rainfall periods, the creek not only gathers the upstream springs but also converges a large amount of slope flow from around the depression, which results in a rapid increase in spring discharge at Daiye cave.
Monitoring of Precipitation and Spring Discharge
To explore the aquifer structure characteristics of the Daiye cave karst groundwater system, precipitation and spring discharge were continuously monitored from July 2016 to July 2018. Rainfall was observed with an RG3-M rain gauge (HOBO Onset, Bourne, MA, USA) with an accuracy of 0.2 mm. Spring discharge at Daiye cave was obtained indirectly by monitoring the water level at an artificial weir every 20 min, using a pressure sensor (Model 3001 LTC Levelogger, Solinst Canada Ltd., Georgetown, ON, Canada). During the monitoring period, the maximum daily rainfall was 137 mm/d in the study area, and the spring discharge at Daiye cave ranged from 10 to 2100 L/s.
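The paper does not specify the weir geometry or its rating curve, so purely as an illustration of the stage-to-discharge conversion, the sketch below assumes a 90° thin-plate V-notch weir with the standard rating Q = Cd (8/15) sqrt(2g) tan(θ/2) h^(5/2).

```python
# Illustrative stage-to-discharge conversion for an artificial weir. The weir
# type is an assumption (90-degree thin-plate V-notch), not from the paper.
import math

def v_notch_discharge(head_m, cd=0.58, theta_deg=90.0):
    """Discharge (m^3/s) over a thin-plate V-notch weir for head h (m)."""
    g = 9.81
    return (cd * (8 / 15) * math.sqrt(2 * g)
            * math.tan(math.radians(theta_deg) / 2) * head_m ** 2.5)

# Convert a few hypothetical 20-minute stage readings (m) to L/s:
for h in (0.10, 0.20, 0.30):
    print(f"h = {h:.2f} m  ->  Q = {1000 * v_notch_discharge(h):.0f} L/s")
```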
Spring Recession Analysis
The method of characterizing karst springs is based on the exponential equation (1), and discharge recession curves were divided into several stages based on flow regimes [18]:

Qt = Q0 e^(-α(t - t0)), (1)

where t is any time since the beginning of the recession for which discharge is calculated, α is the recession coefficient, t0 is the time at the beginning of the recession (usually set equal to zero), Qt is the spring discharge at time t, and Q0 is the spring discharge at the start of the recession (t0). Generally speaking, the recession curve can be divided into three sections, each representing a different medium that mainly controls the water release process: quick flow (conduit-dominated flow), slow flow (diffuse-dominated flow), or mixed flow [18,22,26]. For a three-component recession, Equation (1) can be rewritten as the superposition in Equation (2):

Qt = Q01 e^(-α1(t - t0)) + Q02 e^(-α2(t - t0)) + Q03 e^(-α3(t - t0)), (2)

where the subscripts 1, 2, and 3 refer to the conduit, fracture, and matrix components, respectively.
In the first recession stage (conduit-dominated flow), the discharge includes water released from the conduit, fracture, and matrix media. In the second recession stage (mixed flow), the discharge includes water released from the fracture and matrix media. In the third stage (diffuse-dominated flow), the discharge is released entirely from the matrix medium. The change in water release of each medium with time follows the corresponding exponential component, as shown in Equation (3):

Qi(t) = Q0i e^(-αi(t - t0)), i = 1, 2, 3, (3)

and the proportion of the total discharge contributed by each medium at time t is calculated by Equation (4):

ri(t) = Qi(t)/Qt. (4)
The water volume drained during each recession stage can be calculated from this model using Equation (5) [29]:

Vi = (Q0i/αi)(1 - e^(-αi ti)), (5)

where Vi is the volume drained during period ti.
The recession coefficient is a comprehensive reflection of hydraulic conductivity and water storage capacity, and can be calculated by Equation (6) [22]:

α = π²T/(4SL²), (6)

where T is transmissivity, S is storativity, and L is the distance from the discharge point to the drainage divide.
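A minimal numerical sketch of this recession analysis is given below: a three-component exponential recession (Eqs. (1)-(2)) is fitted to synthetic hourly discharge data with SciPy, and each fitted component is integrated to estimate the volume drained from the conduit, fracture, and matrix (cf. Eq. (5)). The synthetic data and starting values are placeholders, not the monitored Daiye cave record.

```python
# Sketch of the recession analysis: fit Q(t) = sum_i Q0_i * exp(-a_i * t)
# to hourly discharge, then integrate each component for drained volumes.
import numpy as np
from scipy.optimize import curve_fit

def recession(t, q1, a1, q2, a2, q3, a3):
    """Superposed exponential recessions (conduit, fracture, matrix)."""
    return q1 * np.exp(-a1 * t) + q2 * np.exp(-a2 * t) + q3 * np.exp(-a3 * t)

t_h = np.arange(0, 150.0, 1.0)                # hours since recession start
true = (900, 0.25, 300, 0.06, 80, 0.015)      # synthetic "heavy rain" case
q_ls = recession(t_h, *true) * (1 + 0.02 * np.random.randn(t_h.size))

popt, _ = curve_fit(recession, t_h, q_ls, p0=(800, 0.2, 200, 0.05, 50, 0.01))
for name, q0, a in zip(("conduit", "fracture", "matrix"),
                       popt[0::2], popt[1::2]):
    # integral of Q0*exp(-a*t) over all t is Q0/a; (L/s)*h -> m^3 via 3.6
    vol_m3 = q0 / a * 3.6
    print(f"{name}: Q0 = {q0:.0f} L/s, alpha = {a:.3f} 1/h, V = {vol_m3:.0f} m^3")
```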
Tracer Tests
To investigate the groundwater flow velocity and the geometric parameters of the conduit, two groups of tracer tests were carried out in the Daiye cave system. Uranine was chosen as the tracer and injected into the Xiazhai sinkhole after two rainfall events with different flow rates in the conduit during the period of 18-24 May 2018 (Table 1 and Figure 1a). The concentrations of uranine were measured with a fluorometer (GGUN-FL Fluorometer, Neuchâtel, Switzerland) at the Daiye cave spring every 10 min, with an accuracy of about 0.01 ppb.
In the two tests, 160 g and 370 g of sodium fluorescein were injected into the same sinkhole at different flow rates. The first group of tracer tests was carried out under the condition of an average discharge of 60 L/s without rainfall before tracer injection, while the second group was carried out with an average discharge of 650 L/s after a 36.8 mm rainfall. The observed tracer concentration breakthrough curve can be used to calculate the tracer recovery ratio R as follows [19] (Equation (7)):

R = (1/M0) ∫ C(t) Q(t) dt, (7)

where M0 is the mass of the injected tracer, C(t) is the tracer concentration at time t during the test, and Q(t) is the spring discharge of Daiye cave at time t. According to the breakthrough curve, one can also obtain the average flow velocity (v) in the conduit from Equation (8):

v = xs/t0, (8)

in which xs is the distance between the injection and recovery points and t0 is the mean transit time. Discharge was measured during each tracer test, allowing a rough estimate of the conduit volume swept by the tracer cloud using Equation (9), while the cross-sectional area can be estimated from Equation (10):

V = Q t, (9)

A = V/xs, (10)
where V is the conduit volume, A is the mean cross-sectional area, and t is the duration of the tracer test. Assuming the karst conduit to be a cylindrical channel, the flow-channel diameter Dc can be estimated from Equation (11):

Dc = sqrt(4A/π), (11)

where Dc represents the conduit diameter in the Daiye cave system. Figure 2 shows the variations in tracer concentration with time during the two groups of tracer tests. The two breakthrough curves are both single-peak curves, in accordance with the single-conduit structure of the Daiye cave system. The average travel velocities in the conduit are 251.65 and 32.72 m/h, indicating that the conduit runoff is rapid and unstable.
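The breakthrough-curve processing of Eqs. (7)-(11) can be condensed into a few lines of Python, sketched below with synthetic data loosely mimicking tracer test 2. The data arrays and the mass-weighted definition of the mean transit time are assumptions, not the paper's exact processing.

```python
# Sketch of the tracer-test estimates (Eqs. (7)-(11)) in their standard forms.
# Arrays `t`, `c`, `q` stand in for observed breakthrough data (assumption).
import numpy as np

def tracer_parameters(t_s, c_kgm3, q_m3s, m0_kg, x_s):
    flux = c_kgm3 * q_m3s                       # tracer mass flux (kg/s)
    recovered = np.trapz(flux, t_s)             # numerator of Eq. (7)
    R = recovered / m0_kg                       # recovery ratio
    t0 = np.trapz(t_s * flux, t_s) / recovered  # mass-weighted mean transit time
    v = x_s / t0                                # Eq. (8), m/s
    V = np.mean(q_m3s) * t0                     # Eq. (9), rough volume estimate
    A = V / x_s                                 # Eq. (10)
    Dc = np.sqrt(4 * A / np.pi)                 # Eq. (11), cylindrical diameter
    return R, v, V, A, Dc

# Synthetic example loosely mimicking test 2 (x_s = 1180 m, Q ~ 0.65 m^3/s):
t = np.linspace(0, 20 * 3600, 500)                        # 20 h, in seconds
c = 1e-4 * np.exp(-0.5 * ((t - 3.6 * 3600) / 1800) ** 2)  # kg/m^3 pulse
q = np.full_like(t, 0.65)
print(tracer_parameters(t, c, q, m0_kg=0.370, x_s=1180.0))
```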
Breakthrough Curves for Tracer Tests under Different Precipitation Conditions
Groundwater flow rate significantly affects the shape of the tracer breakthrough curve. For tracer test 1, the duration is longer and the concentration peak is smaller than in tracer test 2 (Figure 2), due to the longer retention time and more thorough diffusion in the conduit at the lower flow rate, which results in a longer tail during the recession of the tracer concentration [37] and a lower recovery rate. The straight-line distance from the underground river entrance to the exit (the tracer migration distance) is about 1180 m. Conduit geometric parameters were calculated from the tracer breakthrough curves using Equations (7) to (11), with similar results for both tests (Table 2). The calculated average diameter of the karst conduit in the Daiye cave system is about 2.687 m, and the total volume of the conduit is 6692 m³.
Spring Discharge Variations at Daiye Cave
Figure 3 plots the time series of precipitation, spring discharge, and groundwater temperature for the period from May 2016 to May 2018. The discharge ranges from 10 to 2100 L/s, and the groundwater temperature ranges from 9.7 to 18.9 °C. Spring discharge and groundwater temperature respond to rainfall within only a few hours. Ten rainfall-discharge response curves were selected, including six heavy rains (P ≥ 25 mm/d) and four light rains (0 ≤ P < 25 mm/d); the selection principle was that the rainfall is relatively concentrated and that no further rainfall event occurs within 5 days after the selected event, as shown in Figure 4. Spring discharge rises rapidly within a few hours after rain, and the groundwater temperature also changes abruptly due to rainfall recharge. Under heavy rain conditions (H1-H6), the discharge of the Daiye cave spring shows a steep rise and fall; however, for the light rains (L7-L10), the discharge curves are much wider and gentler, with multiple lower peaks. The rainfall-discharge curves also indicate that the Daiye cave conduit is unobstructed, so rainfall recharge quickly produces a discharge response.

Only one peak appears in the rainfall-discharge curves of H1-H5; among these, the rainfall events in H1 and H5 show a pause process in which the discharge first rises to a certain amount and then stabilizes for a short time. The reason for this pause is that the rainfall was concentrated in two periods, with a break between them too short for the discharge to enter the recession process after the former rainfall; meanwhile, a stronger discharge response to the subsequent rainfall had already begun, resulting in a new rising limb of the discharge at Daiye cave. Nevertheless, for the rainfalls in H2 and H3, representative single-peak curves were obtained, with a sharp rising stage followed by a relatively gentle falling stage, corresponding to more concentrated rainfall events. In addition, the discharge curve for rainfall H6 shows three incremental peaks during the rising stage (Figure 4), due to three concentrated, strong rainfall periods during the event. Table 3 shows the characteristic parameters of the rainfall-discharge response curves for the 10 rainfall events, where the lag time represents the time from the moment of maximum rainfall to peak discharge, and the delay time represents the time from the discharge response to the end of the flow decline. In the case of heavy rain (H1-H6), the discharge responds within 2 h after the rainfall event and reaches peak flow within 5 h, with a lag time of 1-5 h and a delay time of 71-161 h. Under light rain conditions (L7-L10), the response times are about 4-9 h, and peak flows appear 17-28 h later, with a lag time of 18-29 h and a delay time of 110-144 h.
Recession Processes under Different Rainfall Conditions
Table 4 shows the fitting results of the 10 groups of discharge recessions. The discharge recession curves can be decomposed into three exponential stages in the case of heavy rain, such as H2 (P = 41.8 mm), shown in Figure 5. With decreasing rainfall, two exponential stages suffice to fit the recession curve, and in the case of light rain only one exponential stage fits the recession curve, such as L8 (P = 18.2 mm), also shown in Figure 5. According to the recession curves, the volume of water released from each medium was calculated, as shown in Table 5. For the heavy rainfalls (H1-H6), the volume of water from the conduit accounts for 5.2-15.1% of the total, with a duration of 4-10 h, and water from the fracture accounts for 8.0-20.8%, with a duration of 14.7-27.7 h. Except for H4, the volume of water discharged from the matrix accounts for more than 70% of the total discharge. However, for the light rainfalls (L7-L10), the spring discharge is released entirely from the matrix.
Influence of Precipitation on Discharge Composition of Recession Process
As mentioned above, the discharge recession curves for heavy rain (H1-H6) can be divided into two or three stages (Table 4). In the first stage of the recession process, the spring discharge is mainly composed of water released from the conduit, which moves as quick flow and generally reaches the outlet several hours after rainfall, with the recession coefficient ranging from 0.121 to 0.331 (Table 5). The spring discharge is then mainly composed of water released from the fracture after the conduit water has drained; this is the middle stage, with the coefficient ranging from 0.03 to 0.102 (Table 5). In the third stage, water release from the matrix medium is dominant; this water feeds the fractures and then enters the conduit, with the conduit acting as a water conduction channel. As water flow in the matrix is relatively slow and its total volume is quite large compared with the conduit and fracture water, the recession stage of matrix water is much longer, always lasting over 100 h, and smoother, with a relatively stable recession coefficient ranging from 0.009 to 0.022 (Table 5).
To investigate the variation in the water composition during the recession processes, the proportions of water released from the conduit, fracture, and matrix under heavy rains (H1-H6) were calculated by Equation (4) and plotted in Figure 6. The figure shows that the ratio of water released from the conduit decreases rapidly, while the ratio of water released from the matrix gradually increases and becomes predominant. The variation in the ratio of water released from the fracture is more complicated; it generally rises first and then declines after a peak, except for rainfall event H4, where the ratio of fracture water decreases continually. According to the discharge recession analysis, the water released from the conduit is 2436-11,655 m³, which for rainfall events H5 and H6 exceeds the total volume of the conduit (6691.6 m³) calculated from the tracer tests. This indicates that the conduit may be completely filled with water and that pressure flow may persist for a period, due to the continual surface runoff from upstream of the Xiazhai sinkhole during long (over 30 h) heavy rains (over 60 mm).

The variation in recession behaviors may be induced by the dual recharge manner of both planar infiltration through fractures and point injection through sinkholes, and by the regulation and storage of the multiple karst aquifer media (conduits, fractures, and matrix) under different patterns of precipitation. Under light rain conditions, planar infiltration is the main recharge mode when the rainfall intensity does not exceed the infiltration capacity. Infiltration flow moves slowly in the small pores and fractures, which mainly play the role of storage space, and then converges into large fractures and conduits, which primarily act as transmissive channels. Therefore, the discharge curves (L7-L10) are fairly smooth, with a low peak and a long tail during the recession process (Figure 4), which is controlled by water release from the matrix medium (Table 5). However, point injection recharge plays the primary role when the rainfall intensity exceeds the infiltration capacity under heavy rain conditions. The surface runoff formed upstream of the Xiazhai sinkhole rapidly injects into the conduit and causes a steep rise in discharge at the outlet of Daiye cave (Figure 4).
Water temperature variations with discharge provide corresponding evidence. Figure 7 shows typical water temperature variations under heavy rain conditions in the wet season (summer), when the temperature of precipitation is higher than that of groundwater. Groundwater temperature quickly forms a rising pulse signal after rainfall, owing to the fast recharge of higher-temperature precipitation through concentrated injection into the conduit at the Xiazhai sinkhole, and shows a fluctuating decline during the discharge recession process until it returns to the pre-rainfall groundwater temperature (Figure 7). Similarly, Figure 8 shows the temperature changes in groundwater that is mainly influenced by planar infiltration recharge under light rain in the dry season (winter), when the temperature of groundwater is higher than that of precipitation. Groundwater temperature exhibits a slight downtrend as the discharge increases, owing to the slow, low-temperature water flow recharged by precipitation infiltration (Figure 8).
The variation in recession behaviors may be induced by the dual recharge manner, both planar infiltration through fractures and point injection through sinkholes, together with the regulation and storage of the multiple karst aquifer media (conduits, fractures, and matrix) under different precipitation patterns. Under light rain conditions, planar infiltration was the main recharge mode, as the rainfall intensity did not exceed the infiltration capacity. Infiltration flow moves slowly in the small pores and fractures, which mainly serve as storage space, and then converges into large fractures and conduits, which primarily serve as transmission channels. Therefore, the discharge curves (L7-L10) are fairly smooth, with a low peak and a long tail during the recession process (Figure 4), which is controlled by water release from the matrix medium (Table 5). However, point injection recharge plays the primary role when the rainfall intensity exceeds the infiltration capacity under heavy rain conditions. The surface runoff formed upstream of the Xiazhai sinkhole rapidly injects into the conduit and causes a steep rise in discharge at the outlet of Daiye cave (Figure 4).
Influence of Precipitation on Infiltration Coefficient and Recession Coefficient
Rainfall pattern has a significant impact on the recharge manner and infiltration coefficients (Figure 9). The calculated infiltration coefficient gradually decreases as the rainfall amount and intensity increase and then remains relatively stable for rainfall amounts larger than 40 mm (Figure 9a) or rainfall intensities larger than 4 mm/h (Figure 9b). However, heavy rain H5 is an exception: its rainfall amount exceeds 60 mm, but its rainfall intensity is about 2 mm/h, which may be suitable for infiltration in the Daiye cave system. When the rainfall intensity is larger than 2 mm/h, the proportion of surface runoff that flows out of the system increases, resulting in a lower infiltration coefficient.

The recession coefficient is a comprehensive reflection of hydraulic conductivity and water storage capacity and is also related to the water-filling state of the karst aquifer system at the initial point of the recession curve [22]. As the peak water levels in conduits and matrix vary under different rainfall events, they may influence the calculated recession coefficients even for similar kinds of rainfall. Figure 10 plots the recession coefficient against rainfall amount and rainfall intensity, showing different relationships for each recession phase. Only one recession stage appears for light rain, L7-L10 (P ≤ 25 mm), with a rainfall intensity of less than 2 mm/h; the values of recession coefficient α3 vary between 0.01 and 0.1, showing an increasing trend with rainfall amount. Two or three recession stages were found for heavy rain, H1-H4 (40 ≤ P ≤ 45 mm), and torrential rain, H5 and H6 (P ≥ 50 mm). Generally, the values of the recession coefficients α1 and α2 decrease as the rainfall amount increases; however, the recession coefficients show a positive relationship with rainfall intensity under similar rainfall amounts, such as heavy rain H1-H4. It is worth noting that the variations of α3 are more complicated: the calculated values of α3 for light rain are obviously higher than those for heavy rain and torrential rain, while the values of α3 for torrential rain (H5 and H6) are greater than those for heavy rain (H1-H4).
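For orientation, the infiltration coefficient discussed here is, broadly, the fraction of rainfall that reaches the aquifer. The exact formula used by the authors appears earlier in the paper; the sketch below only illustrates the commonly used event-scale form, assumed here to be the event discharge volume divided by the rainfall volume over the recharge area, with hypothetical numbers.

    def infiltration_coefficient(discharged_volume_m3, rainfall_mm, area_km2):
        """Event-scale infiltration coefficient (dimensionless): spring
        discharge volume attributable to the event divided by the rainfall
        volume falling on the recharge area."""
        rainfall_volume_m3 = (rainfall_mm / 1000.0) * (area_km2 * 1.0e6)
        return discharged_volume_m3 / rainfall_volume_m3

    # Hypothetical event: 45 mm of rain on a 0.8 km^2 recharge area that
    # yields 9000 m^3 of event discharge at the spring outlet.
    print(infiltration_coefficient(9000.0, 45.0, 0.8))  # -> 0.25

Under this form, the decrease in the coefficient with rainfall intensity (Figure 9b) simply reflects a larger share of the rainfall volume leaving as surface runoff rather than recharging the aquifer.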
To further investigate the influence of the water-filling state of the karst aquifer system on recession coefficients, the relationship between the initial discharge rate (Qt) and the recession coefficient (α) at different stages of the Daiye cave spring is plotted in Figure 11, including observations under 10 independent rainfalls (flow without superposition) and other non-independent rainfalls (flow with superposition). The data points for short-term heavy rainfall lie mainly near curve I, where the recession coefficient increases rapidly with discharge, whereas the data points for long-term heavy rainfall lie near curve II, where the recession coefficient increases more gently with discharge. This indicates that the recession coefficients for short-term heavy rain are greater than those for long-term heavy rain even at the same initial discharge, which may be explained by differences in the water-filling state under various precipitation events. For short-term heavy rainfall, concentrated injection is the dominant recharge and the replenishment of the matrix is insufficient; thus, water release is controlled by the conduit and large fractures. For long-term heavy rainfall, both concentrated injection and planar infiltration are important and the replenishment of the karst aquifer system is more sufficient for both conduit and matrix; hence, the discharge recession process is controlled by conduit, fracture, and matrix in sequence.
Therefore, curve I and curve II represent, respectively, an insufficient replenishment system, where water filling in conduits is predominant, and a sufficient replenishment system, with water filling in conduit, fracture, and matrix. Other data points are in the middle of curve I and curve II, where water filling states are moderate.
In summary, the recharge and discharge characteristics of the Daiye aquifer system under different rainfall conditions are illustrated by Figures 12-14. For ease of discussion, rainfalls were divided into three types: ① short-time heavy rainfall (large rainfall intensity, small total rainfall, such as H1-H4); ② long-term heavy rainfall (medium rainfall intensity, large total rainfall, such as H5 and H6); and ③ light rain (P < 25 mm).
In the case of short-time heavy rainfall, characterized by strong rainfall intensity but a limited rainfall amount (such as H1-H3), the water level in the conduit increases rapidly after precipitation and is higher than that in the fractures and matrix (Figure 12b), as concentrated injection at the sinkhole predominates [32,38]. Therefore, the discharged water mainly comes from the conduit medium at the early stage, with relatively large values of the recession coefficients α1 and α2 (Figure 12b). However, the values of the recession coefficient α3 are relatively small in the third stage (Figure 12c) because water filling in the fracture and matrix media lags and is insufficient.
For long-term heavy rainfall with a medium rainfall intensity and a large rainfall amount (such as H6), both the conduit and the matrix obtain sufficient recharge through centralized injection and planar infiltration (Figure 13b). The water levels of conduit, fracture, and matrix all remain high at the beginning of the recession process, after an adequate water exchange between the different aquifer media. Compared with heavy rainfall (H1-H3), the water levels of the conduit and fractures may be lower at the peak-flow point, resulting in smaller values of the recession coefficients α1 and α2. Nevertheless, the water level of the matrix is higher during the third recession stage on account of the quick drainage of conduit and fracture water (Figure 11c). Correspondingly, the values of α3 are greater for long-term heavy rainfall (H5 and H6) than for heavy rainfall.
Under light rain with small rainfall intensity and amount (such as L7-L10), the aquifer system is mainly recharged by planar infiltration. Therefore, only the fractures and matrix obtain an effective supply, while the conduit mainly acts as a drainage gallery with a low water level (Figure 14b). The spring discharge recession is mainly controlled by the fracture and matrix media, showing only one recession stage, with recession coefficient values between the α2 and α3 obtained for heavy rainfall.
Limitation of This Study
A limitation of this study is that the rain gauge station is not in the recharge area of the Daiye cave system but is located 2.8 km southeast of the sinkhole. Microclimatic variation in mountainous areas may therefore have some influence on the results. In addition, the conceptual model of the water filling and release processes in the Daiye cave system was proposed based on an analysis of spring discharge monitoring data; it remains subject to uncertainty and needs to be further verified with borehole water levels.
Conclusions
In this study, the response of spring discharge to rainfall events was investigated by continuously monitoring precipitation and karst spring flow at Daiye cave, a representative single-conduit karst system in western Hunan province, China. The distribution of the karst conduit was verified by tracer tests, with an average diameter of 2.687 m estimated by analyzing the tracer concentration breakthrough curve. Recession curves were used to analyze hydrodynamic behaviors and to separate recession stages. The results show that the shape of the recession curve changed under different rainfall conditions. Recession processes can be divided into three stages under heavy rain, as water drains mainly from conduits, fractures, and matrix in turn, with conduit water accounting for 5.2-15.1% and fracture water for 8-20.8%, whereas only one recession stage, representing drainage mainly from the matrix, appears in the case of light rain. An interesting finding is that the calculated recession coefficient at each stage is not constant but varies by an order of magnitude, depending on the amount and intensity of precipitation. Recession coefficients decrease with increasing duration of precipitation events for the same rainfall amount, which could be attributed to the different response times of the various aquifer media to precipitation. The water filling and release of the conduit medium are obviously faster than those of the fracture and matrix media; therefore, the rainfall intensity controls the speed of water filling of the media, and the rainfall duration controls the degree of saturation of the aquifers. When precipitation intensity exceeds the infiltration capacity, surface runoff concentrates into the karst aquifer through the sinkholes, resulting in a quick discharge response at the spring outlet and a steep slope in the recession curves due to fast flow in conduits. On the contrary, when precipitation intensity is lower than the infiltration capacity, the aquifer is mainly recharged through planar infiltration in fractures, causing a longer response time to precipitation at the spring due to the slow flow in the fractures and matrix. Finally, a typical recharge and discharge model of karst water systems in Southwest China is put forward for different rainfall conditions: under light rain, dispersed surface infiltration recharge dominates, while under heavy rain, concentrated recharge dominates. These findings provide more insight into the hydraulic behavior of karst springs under different types of rainfall events and scientific support for water resource management and utilization.
|
v3-fos-license
|
2022-03-16T15:24:22.345Z
|
2022-03-01T00:00:00.000
|
247466272
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/19/6/3393/pdf",
"pdf_hash": "32624842df82a6c9d7c21d74109fadbaac606106",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43243",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "4f0affdea237f8760afd439aa6e7a355a0dc2d85",
"year": 2022
}
|
pes2o/s2orc
|
Enotourism in Southern Spain: The Montilla-Moriles PDO
The profile of tourists during the COVID-19 pandemic is changing toward those seeking health, safety and quality products. One of the modalities that best adapts to these needs is gastronomic tourism and, within this segment, wine tourism (enotourism), which can be enjoyed in many areas across the world. The great diversity of grapes, climates, terrains and winemaking processes gives rise to an enormous variety of wines that ensures that no two wines are alike. The current situation of the tourism market necessitates enhancing the uniqueness of areas that offer differentiated products, helping to position such locations as benchmarks for gastronomic tourism. Gastronomic routes provide a way to unify and benefit rural areas through the recently increased demand of tourists seeking to experience regional foods. In this study, the Montilla-Moriles Wine Route is analyzed with the objectives of forecasting demand (using autoregressive integrated moving average, ARIMA, models), establishing a tourist profile and calculating the probability that a wine tourist is satisfied with the visit based on their personal characteristics (logit model). The results obtained indicate a slight increase (3.6%) in wine tourists with a high degree of satisfaction, derived primarily from the gastronomic or catering services of the area, from the number of wineries visited, from the treatment received and from the age of the tourist. Consequently, a high percentage of these tourists recommend the route. By increasing the demand for enotourism in this area and applying the results obtained, marketing initiatives could be established, particularly wine festivals, to improve this tourist segment and generate wealth in the area.
Introduction
The changes that have taken place in recent years in tourism activity and during the COVID-19 pandemic [1][2][3][4] are creating new destinations that, far from the traditional sun and beach destinations, generate complementary routes for wealth and job creation [5,6]. Thus, along with the already classic inland destinations, other tourism products are emerging that attract certain segments of the population. As a result of this demand, new tourist routes are being created, among which gastronomic routes stand out [7].
Until 2019, Spain was considered the second largest tourist power in the world, attracting more than 82 million foreigners, mainly for sun and beach tourism, and generating an economic benefit of more than 92 billion euros, which served as the driving force of the economy and allowed the country to overcome the economic crisis of 2009 by offsetting the balance of payments. Nevertheless, the 2020 health crisis paralyzed this sector when the borders closed. Spain is currently recovering slowly by receiving international tourists again, but the number of tourists has not exceeded 40 million [8]. The tourism sector is changing its profile, seeking to revitalize national tourism, but it has to be modernized and adapted to the changing demands of new tourists, who are more concerned with safety and sustainability and less so with cost. Post-COVID-19 tourists [5,6,9] prefer quality products.
PDOs and Gastronomic Routes
Tourism, in all its forms, is a means to solve economic and social issues or challenges within interior regions. Changes in the economic and social roles of the traditional production of food products in rural areas, through a restructuring of the productive structure, offer new job opportunities for the population [40]. Tourism may not be the main source of income in rural areas but can provide supplemental income for local inhabitants [41].
New trends in consumer habits have led to a growing interest in higher-quality products, differentiated and adapted to the new needs of different groups and market segments. Given this increase in the consumption of differentiated products based on quality, one of the most recognized strategies in the agri-food sector to achieve this differentiation is geographical indicators of origin and, in particular, PDOs, which integrate in their definition not only the geographical origin but also the tradition and specialization of producing high-quality products with unique features and the regulation and control of mechanisms for their production [42].
PDOs [43] and PGIs, located mainly in rural areas, constitute the system used in Spain for the recognition of high-quality food products, resulting from the unique and differentiating characteristics related to the geographical environment where the raw materials are produced and the products are manufactured and to the influence of the human factors involved [44]; however, they are not sufficient to create a tourist product. As such, it is necessary to create tourist routes, so-called gastronomic or food routes [45]. There can be countless activities that tourists can engage in related to the products identified by these routes; for example, visiting producers at their establishments, where tourists are shown the preparation processes and allowed to taste the products. In addition, restaurants will offer traditional dishes prepared with products from the area. However, to constitute a gastronomic route (Figure 1), a distinctive product is first needed; that is, a raw material such as wine or oil endorsed by a PDO. Second, an itinerary is needed based on a road network that includes several businesses affiliated with the route, such as wineries, restaurants, hotels or shops where the product is marketed or showcased in culinary dishes or accompanying meals. Furthermore, all committed establishments on gastronomic routes must meet certain standards of quality that identify the route. Signage for the route on the road network and to identify the participating establishments along the route must also be provided, and an organization or association must coordinate the different elements of and information about the route.

In Spain, according to the latest data from November 2021, 202 PDOs and 160 PGIs were registered (Table 1). Of these, 96 (46.5%) are designations of origin of wines, and 43 are PGIs (26.9%). Cheeses, oils, fruit and vegetable products compose the remaining 53.7%. The economic impact of the agri-food sector, according to the Ministry of Agriculture, Fisheries and Food (MAPA) [46], is approximately 5000 million euros. Of this economic value, wines, accounting for approximately 3000 million euros, rank first, followed by spirits, accounting for 450 million euros, followed distantly by cheeses, meat products, turrones (nougats), fruits and virgin olive oils.
In Andalusia, a region located in southern Spain (and the object of this research), 30 designations of origin and 31 geographical indications are registered. Andalusia is the autonomous community with the most designations of origin and PGIs, followed by Castilla y León with 27 and Castilla-La Mancha with 24. By type of product, virgin olive oil stands out, with 13 designations of origin, accounting for 41.9% of the Spanish designations of origin for this product [47]. The wine sector is also very prominent, with eight PDOs. Regarding other wine products, i.e., vinegars, there are only two PDOs recognized throughout all of Spain; for fruits and vegetables, there are four PDOs, and Serrano ham has one PDO, indicating that the autonomous community of Andalusia accounts for a considerable share of the total registered in Spain, i.e., 67% in both cases.
According to Blanco and Riveros [48], a gastronomic route in rural environments associated with a PDO is a form of rural tourism that stimulates new economic activities to maintain and improve the living conditions of the rural population. Such tourism aims to achieve a product that integrates the greatest number of actors, generates more jobs in these areas, and that diversifies the existing offerings [49]. Tourism in these rural areas is not the main source of income and does not saturate the environment but rather contributes supplemental income to the local inhabitants.
Gastronomic routes must be thought of as a tour organized in such a way as to allow tourists to recognize and enjoy the agricultural and industrial production processes and to taste regional cuisine, expressions of the cultural identity of the region. The routes are organized around a key characteristic product, which often provides the name for the route. In addition, routes offer a series of pleasurable experiences and activities related to distinctive elements; they are organized to consolidate productive regional culture and to enhance regional products to stimulate regional economies through the promotion of products and gastronomic culture [50]. Barrera [51] classifies gastronomic routes as follows: (a) gastronomic routes by product, which are organized on the basis of a specific product, e.g., cheese, oil, and wine; (b) gastronomic routes by dish, which are organized around the most important prepared dishes; and (c) ethnic-gastronomic routes, which are ventures based on the culinary traditions of immigrant peoples.
In addition, tourists are offered a series of pleasures and activities related to the distinctive elements of the products, including visiting cultivation fields and processing plants, learning the history of the evolution of the product, tastings, etc. The routes are organized to consolidate the productive regional culture and to enhance regional products, stimulating regional economies through the promotion of products and gastronomic culture.
The promotion of food brands through routes, in addition to the place of origin of the product, is a means of promoting typical products of the region, providing added value to the service/product offered to tourist consumers. The promotion of gastronomic and culinary heritage includes not only consumption on the premises but also the acquisition of regional food products as souvenirs, thus increasing the income obtained from native products of the area and making it possible to position the food product in the market.
Enotourism: The Montilla-Moriles Route
The definition and conceptualization of enotourism are not uniform because they can be analyzed from different perspectives, such as marketing or the motivation of travelers. Following Getz and Brown [52], enotourism is simultaneously a consumer behavior, a strategy to develop a geographical area and the wine market in that area and an opportunity to promote the sale of products by wineries directly to consumers. Recent studies on the subject of wine tourism suggest and promote the idea that food and wine can be, and often are, the primary reason to travel to a certain region and not necessarily a secondary activity of the trip [53].
New tourist patterns, the search for new experiences, and the availability of free time are generating the evolution of certain types of tourism, such as enotourism [21,22,54]. The importance that enotourism has acquired in recent years in different parts of the world has been sufficiently documented [55], for example, Chile [56][57][58]; Hungary [59]; New Zealand [60][61][62][63][64]; South Africa [65][66][67]; Italy [23,68,69]; France [70,71]; the US [55,[72][73][74][75]; Portugal [76]; and Spain [18,41,[77][78][79]. Such an expansion of tourist destinations brings with it new regions that will boost the economy of these areas. Specifically, in Spain, the Gilbert study [80], which analyzed the importance of wine tourism on the area of La Rioja at the beginning of the nineties of the last century, inspired similar studies in other wine regions, for example, Valencia [81][82][83], Priorato [84], Montilla-Moriles [85][86][87] and Malaga [88,89]. These studies discussed how the development of this tourism product, managed by small and medium enterprises (many of them cooperatives), can serve as a complement to other activities in rural areas, generating wealth and creating jobs. Elias [87] describes enotourism as trips and stays geared toward knowledge of the landscapes, tasks and spaces of winemaking and the activities that increase knowledge and acquisition and generate development in various wine regions.
However, for enotourism to develop effectively, it must be supported by quality products, i.e., PDOs and PGIs, that are regulated by councils to ensure the quality of the products under their umbrella.
In Europe, there are 1322 designations of origin of wine and 388 PGIs (Table 2), with France having the most quality labels (554), followed by Italy (547) and Spain (139). Spain has about one million cultivated hectares. It has the largest area of vineyards in the world, but in terms of production, it ranks third, behind Italy and France. It is also the world leader in the export of wine, exporting more than 22 million hectoliters in 2019, with countries of the European Union being the main consumers. Spanish wines are, therefore, well known in the European Common Market, and consumers of such wines could be potential clients for enotourism, with destinations in any of the 17 autonomous communities of the country, because PDOs are distributed throughout the entire Spanish territory (Figure 2).

However, to create a quality product related to wine, it is necessary to have not only PDOs and PGIs but also routes associated with those products [75]. In Spain, the region with the most wine routes is Castilla León with seven, while Andalusia only has three (Figure 3).

Once a tourist route has been created, the next step is to promote the tourism product [90] for the purpose of reaching the potential demand of wine tourists. Thus, a key question arises: Who are the wine tourists? There are different studies that analyze the characteristics of wine tourists. Among such studies, Charters and Ali-Knight [91] group tourists into four different types:

Wine lover. These individuals have a vast education in oenological aspects, and the main reason for their trip is to taste different types of wine, to buy bottles of wine and to learn in situ. They are also very interested in local gastronomy.

The connoisseur. These individuals, although they do not have a vast education in oenological issues, know the world of wine relatively well. They usually have a university education, and the main reason for their trip is to put into practice what they have read in different specialized magazines.

Wine interested. These individuals do not have technical training in oenological issues but are interested in the world of wine. Visiting wineries is not the main reason for their trip but rather a complement to other activities.

Wine novice. For different reasons (such as advertising along a route or wanting new experiences), these individuals visit wineries without having any knowledge in this field. The main reason for the trip is not associated with wine, but these individuals spend a few hours visiting wineries. The purchases they usually make are intended for private consumption or, in most cases, as gifts.

Each type of enotourist demands a different product [92]. Wine lovers are more demanding in terms of the quality of the wines and the explanations about the production process than are wine novices; they are also more willing to pay more for a quality product.
In Spain, according to the Association of Spanish Wine Cities (ACEVIN) in its 2019 report [93], the number of enotourists who visit PDOs is not homogeneous; 3,076,634 people visited wineries and museums along the wine routes of Spain, more than double the figure of a decade ago. However, despite being a very significant figure, it is still small compared to the 43 million tourists who visited American wineries in 2017, especially considering that Spain is the leading country in the world in terms of vineyard surface area and the third largest wine producer [94]. One of the most attractive options for enotourists is enjoying the view of a vineyard within a natural setting [95]. Different approaches being implemented in other countries may explain these numbers. By region, Andalusia receives the most enotourists. The PDO of Jerez receives more than half a million tourists per year and is the best-known PDO internationally from the tourist point of view (Figure 4), receiving more than 80% novice tourists; in the PDO of Ribera del Duero, that same percentage of tourists are wine interested.
Analysis of the profile of enotourism consumers in the Montilla-Moriles (Córdoba) designation of origin: Montilla-Moriles wine spans different municipalities of the province of Córdoba. The main economic activity of the inhabitants of this area is agriculture, followed by the service industry, except in the capital of the province, where the tertiary sector is practically the only sector. Likewise, the secondary sector is practically nonexistent in the area except for the production of wine and oil, because there is no other type of manufacturing activity, with approximately 4.5% of jobs being lost in the secondary sector each year.
This area is relatively well connected by road and rail, with the different provincial capitals of its surroundings (mainly Seville, Granada and Malaga). Likewise, it is also close to two international airports, which is a deciding element for citizens of other countries to consume this tourism product.
Wines in the area have recognized prestige thanks to the control exercised by the designation of origin. The quality of the wines in the area is largely due to the clay soil, the climate, the location of the vineyards, the historical legacy of production and the use of new technologies. The wines of this area include fine wines and bitter wines, which are pale gold in color and very aromatic. Regarding vineyards, the "Pedro Ximénez" variety prevails, along with the "Moscatel," "Lairén," "Airén," "Baladí Verdejo" and "Montepila" Analysis of the profile of enotourism consumers in the Montilla-Moriles (Córdoba) designation of origin: Montilla-Moriles wine spans different municipalities of the province of Córdoba. The main economic activity of the inhabitants of this area is agriculture, followed by the service industry, except in the capital of the province, where the tertiary sector is practically the only sector. Likewise, the secondary sector is practically nonexistent in the area except in the production of wine and oil because there is no other type of manufacturing activity, with approximately 4.5% of jobs being lost in the secondary sector each year.
This area is relatively well connected by road and rail, with the different provincial capitals of its surroundings (mainly Seville, Granada and Malaga). Likewise, it is also close to two international airports, which is a deciding element for citizens of other countries to consume this tourism product.
Wines in the area have recognized prestige thanks to the control exercised by the designation of origin. The quality of the wines in the area is largely due to the clay soil, the climate, the location of the vineyards, the historical legacy of production and the use of new technologies. The wines of this area include fine wines and bitter wines, which are pale gold in color and very aromatic. Regarding vineyards, the "Pedro Ximénez" variety prevails, along with the "Moscatel," "Lairén," "Airén," "Baladí Verdejo" and "Montepila" varieties, grown using tilling, pruning and trellising, culminating with a harvest at the end of August, the earliest in Spain. Grapes are crushed and pressed to extract the must from which the wines will age. After fermentation, the wine is transferred to stacked wooden casks or criaderas to age. Aging takes place in the wineries that dot the periphery of this geographical area, where the ideal temperature, humidity and light conditions are maintained at the levels required for the sophisticated production procedures.
Currently, 60 companies are part of the Regulatory Council of the "Montilla-Moriles" designation of origin, of which 18 are cooperatives (30%), 30 are limited liability companies (50%) and 12 are private companies (20%).
Thus, it is essential to analyze the socioeconomic profile of wine tourists to adapt the supply to the existing demand in each area. The last section of the study presents the conclusions obtained when analyzing the PDO Montilla-Moriles (Córdoba). The results obtained will enable the adaptation of the products offered by small and medium-sized businesses in rural areas to the consumer demand of this tourism segment. Through the diversification of economic activities in rural areas, it will be possible to obtain supplemental income and generate new jobs, alleviating current problems in rural areas.
Materials and Methods
This investigation focuses on conducting an econometric study to estimate the quantitative demand of wine tourism on the Montilla-Moriles route and to determine the characteristics of tourists who visit this route, with the objective of identifying a tourist profile and, subsequently, the necessary measures to improve this tourist route, which would logically generate an increase in wealth in this area.
The sources of information used to carry out this study are as follows:
• Information on the number of monthly tourists who visit the Montilla-Moriles route (from January 2015 to February 2020).
• Data obtained through fieldwork and two different surveys (Table 3). The first survey was conducted from February to May 2019 and covered the companies that are part of AVINTUR and the wineries that belong to the Montilla-Moriles Regulatory Council (in total, 85 businesses); the response rate was 46% (39 surveys received), with a margin of error of 4.7%. The objective of this survey was to determine the enotourism offerings in this area. The second survey, conducted from February to December 2019, was applied to 500 people who visited this route. To determine the profile of wine tourists, a questionnaire consisting of 35 questions divided into four blocks was administered to tourist consumers who visited the Montilla-Moriles wine route in 2019. The first block collected personal information (e.g., age, gender, educational level, marital status). The second block gathered information about the route taken (e.g., How did you learn about the gastronomic route? Did the route meet your expectations? What would you change? Did you travel expressly because of the gastronomic route?). The third block addressed the motivation for gastronomic tourism (e.g., Why did you choose a gastronomic route?). The fourth block collected information regarding value (e.g., services received on the route, price of the trip, hospitality and treatment received). The objective of this survey was to establish a profile of tourists who choose this type of tourism and to determine their motivation, with the purpose of reinforcing and designing strategies that promote the development of wine tourism in the area.

With the data obtained, the following models are proposed (an illustrative sketch of the logit fit is given after this list):
(1) A binary logit model, in which the variable under study is dichotomous and takes two values: 1, which represents the category of the variable to be analyzed, and 0 otherwise. The objective is to determine the probability of tourist satisfaction relative to the expectations they had of the tourist route, based on their socioeconomic profile [96].
(2) A SARIMA model to predict the demand for enotourism in the Montilla-Moriles PDO, based on a sample from January 2015 to March 2020. The Box-Jenkins (BJ) methodology was applied using ARIMA models. According to Box et al. (2015) [97], the facilitating factor of this prediction method is an analysis of the probabilistic, or stochastic, properties of economic time series themselves (in this case, the number of wine tourists in the Montilla-Moriles PDO).
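As an illustration only — the paper reports its estimates in Table 4 and does not provide code — a binary logit of this kind could be fitted with statsmodels. The file and column names below are hypothetical placeholders for the survey variables described above:

```python
# Hedged sketch (not the authors' code): fitting the binary logit of
# point (1) with statsmodels. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("montilla_moriles_survey.csv")  # hypothetical survey file

y = df["satisfied"]  # 1 = satisfied with the route, 0 = dissatisfied
X = sm.add_constant(df[["age", "income_level", "wineries_visited",
                        "would_recommend", "trip_price"]])

res = sm.Logit(y, X).fit()
print(res.summary())      # coefficient table analogous to Table 4
p_hat = res.predict(X)    # predicted probability of satisfaction
```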
Likewise, and independently of the logit and SARIMA models, contingency tables were used to establish the relationship between age, income level and the degree of tourist satisfaction. The chi-square test was used to determine associations from contingency tables for the three variables; the value obtained was 75.1, with a probability of approximately 0, indicating that there is a strong association between the three variables studied [98].
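For readers who wish to reproduce this kind of test, the sketch below applies a chi-square test of independence to a three-way contingency table. The counts are hypothetical; the paper reports χ² = 75.1 on its own survey data:

```python
# Hedged sketch: chi-square test of independence on a three-way
# contingency table (satisfaction x age group x income level).
# All counts below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# axes: satisfaction (no/yes) x age (<50 / >=50) x income (medium / high)
observed = np.array([
    [[20, 15], [10, 5]],     # dissatisfied
    [[60, 90], [120, 180]],  # satisfied
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}, dof = {dof}")
# A small p-value indicates an association between the variables.
```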
Estimation of the Level of Tourist Satisfaction Relative to Their Expectations, Based on Their Socioeconomic Profile: A Logit Model
In general, the average profile of a tourist who travels the Montilla-Moriles Wine Route is a skilled worker between 50 and 59 years of age who has a medium-high income level, travels as a family, and considers that the treatment received (hospitality) is good but also feels that the cost is high, and that the area lacks complementary activities to the wineries. Some of these characteristics coincide with those noted by Sundbo and Dixit [99], who reported that tourists are predominantly between 40 and 55 years of age and are independent professionals or skilled workers with a medium-high level of training (approximately 50% have studied at university).
In addition, a logit model was estimated based on a sample of 500 people from the European Union; the objective of the model was to calculate the probability of satisfaction relative to tourist expectations regarding the Montilla-Moriles route based on their socioeconomic characteristics. Satisfaction was the variable under study (Satisf), with satisfied taking a value of one and dissatisfied taking a value of zero.
The main predetermined variables in this survey, together with the estimation results, are provided in Table 4. From the estimations, we obtained the following results: the variable number of wineries visited positively influenced the degree of satisfaction with the trip; as the number of wineries visited increased, the degree of satisfaction was higher (B22 = 17.568).
The age variable was also significant: as the age of the tourist increased, their level of satisfaction was higher (B2 = 12.253). In our opinion, this result could be related to the profile of wine tourists who visit this area, consistent with the previous classification made by Charters and Ali-Knight [89]. Depending on the profile of tourists, wineries sell different tourism products.
Eighty-four percent of the people surveyed would recommend this tourist route, a result that, in our opinion, reflects the high degree of satisfaction that this destination provides to tourists (B20 = 14.572).
Regarding the negative variables indicated by the travelers surveyed, the high price of the trip (B25 = −1.253) and the few complementary activities in the area (B24 = −4.983) were notable.
The growing importance of wine tourism cannot be denied. The need to propose sustainable models of tourism in areas traditionally dedicated to other economic activities, to prevent errors in the commercialization of tourist spaces, leads to the need to determine exactly what and how tourists want to consume at each specific destination. As such, the main results of our research are as follows: The number of wineries open to the public on this route is still limited, especially during weekends and long weekends, which implies that the wine supply in the area does not adequately satisfy the actual (and potential) demand, which could be diverted to other oenological destinations. For this reason, it is essential to clearly position this tourist destination to create a brand image for the route, preventing tourists from detouring to other, more or less similar destinations [100].
The demand for enotourism in the area is growing, as seen by the positive coefficient for the trend variable (2325 people). Additionally, there is a high probability that tourists will repeat the experience, thus achieving a high degree of loyalty. This leads us to propose that there is a minimum demand for different companies (especially existing cooperatives) to make investments in this area to satisfy this tourist segment.
Estimation of the Demand for Enotourism in the Montilla-Moriles PDO: SARIMA Model
To predict the demand for enotourism in the Montilla-Moriles PDO, a SARIMA model was used. In Figure 6, a slight increasing trend is observed in the variable demand for enotourism over the years analyzed (January 2015 to February 2020); additionally, this variable has a varying trend, which was corrected with the Box-Cox transformation (λ = 0.4), and average and cycle trends, which were corrected through differencing in averages and in cycles. The estimated SARIMA model for forecasting the monthly demand for enotourism is (1,1,1)(0,1,0)₁₂:

$(1 + 1.078215\,B)(1 - B)(1 - B^{12})\,\mathrm{Enotourist}_t^{0.4} = (1 + 0.997914\,B)\,a_t$

with $t_{\phi_1} = 26.47281^{*}$ and $t_{\theta_1} = -101.7076^{*}$ (* significant parameters for α = 0.05).

As seen in Figure 7, the behavior of the model (thousand tourists) with the estimated data (fitted) is very similar to that of the real data (actual), indicating that the SARIMA model is correct. Figure 7. Comparison of real enotourism (actual), estimated enotourism (fitted) and errors (residual). Source: own elaboration.

Table 5 provides the estimation results for the GARCH model, for which the parameters are significant, and indicates the absence of conditional autoregressive heteroscedasticity because the probability of the statistics 0.0109 and 0.02950 is lower than the significance level of 5% (prob. column).

Table 6 provides the predictions obtained with the SARIMA model for the year 2022 and a comparison with the year 2019. The years 2020 and 2021 were omitted because they were atypical years due to the pandemic, during which wineries were closed and individuals were unable to travel within Spain (March to June 2020). These predictions are made under the assumption that the pandemic is controlled and the borders are completely open, without total or partial restrictions on tourist entries. Growth of 3.61% is expected during 2022; that is, 1124 more tourists will visit the Montilla-Moriles wine route in 2022 than in 2019. However, this figure may increase if tourists prefer not to travel abroad during the pandemic but rather engage in more inland tourism, particularly in rural environments.
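Under the stated specification — a Box-Cox transformation with λ = 0.4 and a SARIMA(1,1,1)(0,1,0)₁₂ — the estimation could be reproduced along the following lines. This is a hedged sketch, not the authors' code; the file and column names are hypothetical:

```python
# Hedged sketch: SARIMA(1,1,1)(0,1,0)12 on the Box-Cox-transformed
# monthly enotourist series, mirroring the specification reported above.
import pandas as pd
from scipy.special import boxcox, inv_boxcox
from statsmodels.tsa.statespace.sarimax import SARIMAX

y = pd.read_csv("enotourists_monthly.csv", index_col=0,
                parse_dates=True)["enotourists"]  # hypothetical file/column

# Box-Cox transformation with lambda = 0.4, as stated in the text
y_bc = pd.Series(boxcox(y.to_numpy(), 0.4), index=y.index)

res = SARIMAX(y_bc, order=(1, 1, 1),
              seasonal_order=(0, 1, 0, 12)).fit(disp=False)
print(res.summary())

# Forecast ahead on the transformed scale, then invert the transform
fc = res.get_forecast(steps=24).predicted_mean
forecast = inv_boxcox(fc.to_numpy(), 0.4)
```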
If the supply continues to increase, it may be possible to mitigate the seasonality of the demand by advertising the route as an option for residents of other autonomous communities and other countries during their local and autonomous festivals [101,102], with the objective of obtaining a better use of existing resources.
The creation of more complementary activities, such as cultural or gastronomic festivals, should be encouraged because such an approach can attract more mature tourists, thus generating more income for the area but also requiring a greater supply of hotel beds and rural houses.
It is necessary to increase investments in promoting this tourist destination, and illegal offerings must be controlled.
Discussion
The profile of the gastronomic tourist has changed in recent years, especially as a result of the COVID-19 pandemic. Tourists seek destinations that are safe from a health point of view and sustainable for the environment, preserving cultural heritage and giving back to the local community in terms of the benefits generated by tourism [88,89].
As indicated by Winfree et al. [103], enotourism can help increase wine sales in wineries, especially when tourists classified as wine experts participate; it is also a way of promoting products through tourism. In the case of the Montilla-Moriles route, many wineries, especially those operating as cooperatives, are reluctant to welcome tourism because they think that this activity will not generate an increase in sales, while the individual wineries have a more forward-looking vision and believe that the enotourist is a potential buyer of their wines in their home cities who will contribute to increasing sales and act as an ambassador for their products.
The profile of the tourist along the Montilla-Moriles route is a person of mature age (50 to 59 years), very similar to the profile reported by Sundbo and Dixit [99] (approximately 55 years old), but quite far from the enotourist profile of the Jerez-Xérès-Sherry PDO [104], which consists mostly of young people under 30 years of age. The difference in profiles arises because most tourists who visit the Jerez wineries seek sun and beach tourism; their visits to the wineries are a side activity. Wine tourism is not the main purpose of their trip, and this group consists mainly of foreign tourists classified as wine novices. In contrast, enotourists who visit the Montilla-Moriles route can be classified as connoisseurs who know the world of wine and know what wines they can expect to taste on the route. It follows that they are highly satisfied, since they feel that the wines of the Montilla-Moriles PDO are of high quality. Many of these tourists have visited other wineries in other areas of Spain, such as La Rioja, which rank higher in terms of wine heritage (wineries, wine museums, signage, etc.) [105]. However, we are still a long way from the profile of the tourist who visits the Quinta da Gaivosa wineries in Portugal [106]; these enotourists can be classified in the highest category, wine lovers, indicating that they are great connoisseurs of wine culture, travel expressly for wine-related reasons and earn more than 3000 euros per month, purchasing wine from the winery after the visit. This is the type of profile that the Montilla-Moriles route should be seeking.

The ARIMA model for forecasting wine tourism demand for 2022 on the Montilla-Moriles route predicts that demand will grow, but only slightly (3.6%), despite inland tourism in Spain, especially in rural areas, having increased by more than 50% in 2021 as tourists sought nature and to get away from crowded tourist areas. This shows that the Montilla-Moriles area is losing an opportunity to attract new tourists, especially those from the Spanish market, because the route is not well known. Its communication channels should be improved, especially the marketing efforts made thus far. Marketing should employ more digital channels to ensure that the enotourism experience is innovative, multisensory and unforgettable, so that the experience is satisfactory and achieves positive E-WOM [107]. In addition, the wine route must be advertised in specialized wine magazines that will attract wine lovers with high purchasing power. However, if the number of wineries available to be visited on weekends cannot be expanded, then new hotels should be built, the restaurant service improved, and a menu of dishes offered that are either made with local wines or accompanied by local wines. If not, the Montilla-Moriles route will lose an opportunity that will be taken advantage of by other wine tourism routes.

This study does not advocate excessively increasing the number of enotourists, which would exceed the carrying capacity of the wineries or vineyards, but rather proposes focusing on improving the quality of tourists: attracting people who know how to appreciate the wines of the PDO and are potential buyers of these wines in their home cities. The preferred approach is less mass tourism and more tourists with higher purchasing power, who will spend the night and whose average daily expenditure is high, compared to the current profile of day trippers who spend less than 6 h in the area and who only visit the winery but do not buy the products.
The great problem of the province of Córdoba, where the Montilla-Moriles route is located, is that it caters to day excursions: tourists passing through who spend the night in other cities such as Malaga or Seville, leaving behind few economic resources in the region. Enotourism could be a solution to increase wealth in a province that has one of the lowest average incomes in Spain.
Conclusions
Tourists seeking nature during the pandemic can help mitigate the socioeconomic gap in rural areas and provide endogenous development. For this, a strategic vision of a sector that integrates agriculture, development and tourism is necessary to prevent saturation of the rural environment and to promote environmental sustainability for generating wealth and employment. This statement coincides with studies from [28,34,35].
An increase in enotourism in the Montilla-Moriles PDO requires the coordination and involvement of all agents, i.e., public and private entities, neighborhood associations, and entrepreneurs, who at all times take into account the quality of the environment; this is the only way to guarantee the continuous offering of tourism products that are the fruits of the effort and tenacity of the people and resources in rural areas.
The Montilla-Moriles area should diversify the tourism products offered on the market and specialize in adapting to changes in consumer habits to satisfy consumers' needs, which is ultimately the most important factor for building loyalty and attracting new customers, in addition to expanding hotel infrastructure and complementary services related to the wine experience.
Obtaining tourist products demanded by consumers is an arduous task; once such products are obtained, they will generate a new source of supplemental income for the inhabitants of the region. Thus, the help of public entities and private entities to enhance the unique elements of the area is mandatory.
In this study, an analysis of the tourist demand of the Montilla-Moriles Wine Route was conducted using different statistical tools. Through the fieldwork carried out, the demand for enotourism and the probability of tourists being satisfied based on their personal characteristics were estimated, identifying the main parameters of actual tourist demand (in short, the client) visiting this area. For both public managers and private enterprises, we propose guidelines regarding the behavior of tourists with respect to medium-term trends.
Based on these conclusions, there may be an opportunity for the development of supplemental economic activities in the Montilla-Moriles area due to the increase in tourism demand (results obtained with the SARIMA model); however, to achieve this development, the support of different public administrators (especially infrastructure improvements) and private companies is necessary. The area should adapt to this new tourist segment, adjusting to the changes in the habits of consumers to satisfy their needs, which, in short, is the most important element for building loyalty and attracting new customers.
An element of debate, or perhaps of reflection, that is open to future research is the different perspectives that tourists and entrepreneurs of the area have regarding what the greatest improvements on this route might be. Most tourists think that there should be more complementary offerings, especially nightlife, to increase the number of overnight stays, whereas entrepreneurs in the area believe the focus should be on promotion and publicity.
This divergent response between the two sides of the market should prompt deep analysis and reflection on the part of the different public administrators involved in the route and the private companies, so that the products offered effectively cater to tourist demand, complementary activities are increased, and an indispensable marketing plan is created. In turn, these actions should be guided by an in-depth analysis that clearly defines demand based on the profile of the wine tourists who visit this route.
Finally, as a future line of research, we highlight the need to conduct an in-depth study of the relationship that might exist between cultural tourism (focused on the city of Córdoba) and rural tourism (focused on Subbética Natural Park) and this tourist route. Such an investigation could lead to flows of tourists that combine two (or even all three) tourist destinations, therefore resulting in a greater redistribution of tourism-derived income generated in a part of the province of Córdoba, similar to what has occurred in other places [98].

Informed Consent Statement:

Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2019-01-22T22:31:49.686Z
|
2019-01-01T00:00:00.000
|
58621526
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1999-4923/11/1/25/pdf",
"pdf_hash": "2b2351e5d1e8deeab26466c5a21bf0c2834a58cc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43248",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"sha1": "2b2351e5d1e8deeab26466c5a21bf0c2834a58cc",
"year": 2019
}
|
pes2o/s2orc
|
Complexes of Pro-Apoptotic siRNAs and Carbosilane Dendrimers: Formation and Effect on Cancer Cells
This paper examines the complexation of anti-cancer small interfering RNAs (siRNAs) by cationic carbosilane dendrimers, and the interaction of the formed complexes with HeLa and HL-60 cancer cells. Stepwise formation of the complexes, accompanied by the evolution of their properties, has been observed with the increase of the charge ratio (dendrimer/siRNA). The complexes decrease the viability of both "easy-to-transfect" cells (HeLa) and "hard-to-transfect" ones (HL-60), indicating a high potential of the cationic carbosilane dendrimers for siRNA delivery into tumor cells.
Introduction
Therapeutic nucleic acids hold great potential for anti-tumor therapy. However, the natural barriers of a cell form considerable obstacles against the efficient transport of nucleic acids (NA) into the cytosol or target compartments [1-4]. Nowadays, a wide number of methods for the transfer of nucleic acid material (such as plasmid DNA or messenger RNA (mRNA)) and regulatory oligonucleotides (such as small interfering RNAs (siRNAs), microRNAs, antisense oligonucleotides, guide RNAs for the CRISPR/Cas9 system, etc.) into a cell have been developed [5-7]. As carriers for NA, various chemically designed systems, from small hydrophobic and positively charged moieties to constructions based on soft and hard nanoparticles, have been tested [8,9]. Among the NA-delivery systems explored so far, supramolecular associates incorporating a nucleic acid cargo are relatively simple and synthetically accessible systems, which are characterized by controllable composition, low cytotoxicity, and high transfection efficiency. In addition, these associates may protect NA from detrimental intracellular nucleases and provide the release of the agents in a controlled manner. Dendritic compounds bearing cationic groups on their periphery often serve as building blocks for such supramolecular delivery systems.

Materials and Methods

Synthesis of siRNAs

The synthesis of oligoribonucleotides was carried out on an automatic ASM-800 DNA/RNA synthesizer (Biosset, Novosibirsk, Russia) at the 0.4 µmol scale, with the use of 2′-O-tert-butyldimethylsilyl-protected RNA phosphoramidites (5-(ethylthio)-1H-tetrazole as an activator; the coupling time was 5 min) and with automated procedures optimized for the synthesizer. Cleavage from the solid support and the removal of the protecting groups of the oligoribonucleotides were performed under the conditions described in [21]. Unprotected oligonucleotides were purified by denaturing polyacrylamide gel electrophoresis (PAGE, 15%) and desalted, and then the pure oligonucleotides were precipitated as Na⁺ salts. The identities of the oligoribonucleotides were verified by MALDI-TOF mass spectrometry (MS) analysis (Table S1). The MS spectra of the oligoribonucleotides were recorded on a REFLEX III spectrometer (Bruker Daltonics, Billerica, MA, USA) with the use of 3-hydroxypicolinic acid as a matrix. The lyophilized sense and antisense strands (see Table S1) of the siRNAs were dissolved in a buffer containing 137 mM NaCl, 2.7 mM KCl and 10 mM phosphate buffer, pH 7.4, to a final siRNA concentration of 50 µM. The solution was heated at 90 °C for 2 min, then slowly cooled to room temperature over 1 h.
Dendriplexes were formed by combining negatively charged siRNA (see concentrations below) and positively charged carbosilane dendrimers in an RNase-free 1× phosphate-buffered saline (PBS) buffer (10 mM phosphate buffer, pH 7.4, 137 mM NaCl, 2.7 mM KCl), with subsequent incubation for 10 min at 25 °C. The dendrimer:siRNA ratio was calculated as

$CR = MR \cdot \dfrac{N^{+}}{N^{-}}, \qquad MR = \dfrac{C_{D}}{C_{siRNA}},$

where CR is the charge ratio, MR is the molar ratio, N⁺ is the number of cations per dendrimer molecule (12 for BDEF32 or 24 for BDEF33), and N⁻ is the number of anions per siRNA molecule (40 for Bcl-2, Bcl-xL and Mcl-1, or 42 for Scr).
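As a worked illustration of this relation (our own, using only the N⁺ and N⁻ values given above), the molar excess of dendrimer required for a target charge ratio follows directly:

```python
# Illustrative helper (not from the paper): molar excess of dendrimer
# needed to reach a target charge ratio CR, from CR = MR * N+ / N-.
def molar_ratio(cr: float, n_plus: int, n_minus: int) -> float:
    return cr * n_minus / n_plus

# The 1:10 charge excess used in the cell experiments, with Bcl-2 siRNA
# (40 anions) as cargo:
print(molar_ratio(10, n_plus=12, n_minus=40))  # BDEF32: ~33.3-fold molar excess
print(molar_ratio(10, n_plus=24, n_minus=40))  # BDEF33: ~16.7-fold molar excess
```

These values are consistent with the gel retardation data below, where a 2-fold charge excess corresponds to a 6.7-fold (BDEF32) or 3.3-fold (BDEF33) molar excess.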
Gel Retardation Assay
The ability of the cationic carbosilane dendrimers to form complexes with siRNAs was studied by gel electrophoresis in 1% agarose gel. Dendriplexes were prepared by mixing siRNA (40 pmol per band), ethidium bromide (EB) (0.4 µM, ~1 EB molecule per 2 bp of siRNA) and dendrimers (at increasing concentrations, depending on the charge ratios), dissolved in 1×PBS. After 30 min incubation at 25 °C, electrophoresis was run in 1% agarose gel at 80 V (Mini-Sub® Cell GT, Bio-Rad, Hercules, CA, USA) in 1×TBE buffer, pH 8.4, and the bands were visualized under UV using a gel documentation system (Helicon, Moscow, Russia).
Ethidium Bromide Intercalation Assay
Samples containing ethidium bromide (EB) and siRNAs at final concentrations of 3 and 0.3 µM, respectively (~1 EB molecule per 2 bp of siRNA), were prepared in 1×PBS (10 mM phosphate buffer, pH 7.4, 137 mM NaCl, 2.7 mM KCl) and incubated at 25 °C for 10 min; then, increasing amounts of dendrimers (final concentrations in the range from 0 to 9 µM) were added to the samples. The fluorescence spectra of EB were recorded in the region of 500-800 nm (excitation wavelength 480 nm) using a PerkinElmer LS 50-B spectrofluorometer (PerkinElmer, Waltham, MA, USA). The excitation and emission slit widths were set at 10 and 15 nm, respectively. The data are expressed as mean values ± SD (at 595 nm) of five independent experiments for both generations of dendrimers.
Zeta Potential Measurements
The zeta potential values of the dendriplexes were measured by laser Doppler electrophoresis (electrophoretic light scattering). All measurements were performed using a Zetasizer Nano-ZS (Malvern Instruments Ltd., Malvern, UK). Samples of 0.5 µM siRNAs were prepared in 10 mM phosphate buffer, pH 7.4; then, increasing amounts of dendrimers (final concentrations in a range from 0 to 9 µM) were added to the samples, with subsequent incubation for 10 min at 25 °C. The saturated dendriplexes were treated with heparin. The zeta potentials of the complexes were determined from the electrophoretic mobility using the Smoluchowski approximation. The data are expressed as mean values ± SD of five independent experiments for both generations of dendrimers.
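For reference (this relation is standard and is added here for clarity; it is not spelled out in the original), the Smoluchowski approximation converts the measured electrophoretic mobility $\mu_e$ into the zeta potential:

$$\zeta = \frac{\eta\,\mu_e}{\varepsilon},$$

where $\eta$ is the viscosity and $\varepsilon$ the dielectric permittivity of the dispersion medium.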
Circular Dichroism-Spectroscopy
The circular dichroism (CD) spectra of the dendriplexes were recorded using a JASCO J-815 circular dichroism spectropolarimeter (JASCO, Tokyo, Japan). Samples containing 2 µM siRNA were prepared in 1×PBS (10 mM phosphate buffer, pH 7.4, 137 mM NaCl, 2.7 mM KCl), and then increasing amounts of dendrimers (final concentrations ranged from 0 to 15 µM) were added to the samples. Then, the saturated dendriplexes were treated with heparin.
AFM Experiments
Dendriplexes were obtained by mixing solutions of siRNA (1 µM) and dendrimer at a charge ratio of 1:5. An aliquot of the dendriplex solution was deposited onto a mica slide for 1-2 min. The slide was then washed 3 times with deionized H2O and air-dried. Scanning was performed in tapping mode using a Multimode 8 atomic force microscope (Bruker AXS, Karlsruhe, Germany) with NSG10_DLC cantilevers with tip curvature radii of 1-3 nm (NT-MDT, Moscow, Russia), at a scanning rate of 3 Hz. Images were processed using Gwyddion 2.36 software (Czech Metrology Institute, Brno, Czech Republic).
Cell Culture and Transfection
The adherent human cervical cancer cell line (HeLa) was purchased from Banca Biologica e Cell Factory (Genova, Italy), and the suspension human acute promyelocytic leukemia cell line (HL-60) was purchased from ATCC (Manassas, VA, USA). HeLa cells were cultured in Dulbecco's Modified Eagle Medium (DMEM, Gibco, Poland), and HL-60 cells were cultured in Roswell Park Memorial Institute medium (RPMI-1640, Sigma Aldrich, Poznan, Poland) in a humidified incubator containing a mixture of air and 5% CO2 at 37 °C (Brunswick, Lake Forest, IL, USA). DMEM and RPMI-1640 were supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin and 100 µg/mL streptomycin (Sigma Aldrich, Poznan, Poland). The viability of cells was evaluated by counting the cells after staining with 0.2% Trypan blue.
For the cell viability experiments, cells (2 × 10⁴ cells in 100 µL per well) were cultured in DMEM or RPMI-1640 containing 10% FBS for 24 h on 96-well microplates. Complexes of siRNAs (final concentrations in the wells were 0, 50, 100, and 250 nM) and dendrimers at a charge ratio of 1:10 were prepared in 1×PBS and incubated for 15 min at 25 °C. After treatment of the cells with siRNAs, dendrimers or dendriplexes, the cells were incubated for 72 h in a humidified incubator containing a mixture of air and 5% CO2 at 37 °C.
In Vitro Cytotoxicity Assays
After 72 h of treatment of the cells with dendrimers or dendriplexes in the corresponding serum-containing media (see above), the media were refreshed. The influence of siRNAs, dendrimers, and dendriplexes on cell viability was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) cell viability assay for the HeLa cell line [22], and the spectrofluorimetric resazurin assay (SRA), or Alamar Blue assay, for the HL-60 cell line [23,24].
In the MTT assay, yellow MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) is reduced by the cellular reductases of viable cells to a dark blue formazan [22,25]. MTT was added to each well at a final concentration of 0.5 mg/mL, and the cells were incubated for 3 h in a 5% CO2 humidified incubator at 37 °C. After 3 h of incubation, the MTT solution was removed, and DMSO was added to dissolve the formazan crystals [22,25]. The absorption of the samples was measured at a reference wavelength of 630 nm and a test wavelength of 570 nm using a microplate spectrophotometer (BioTek, Burlington, VT, USA).
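As an illustration of the readout described above (our own sketch, not the authors' code), percent viability can be computed from the background-corrected absorbances and normalized to the untreated control; all array names are hypothetical:

```python
# Hedged sketch: percent viability from MTT absorbances, using the
# test (570 nm) and reference (630 nm) wavelengths described above.
import numpy as np

def viability_percent(a570, a630, ctrl570, ctrl630):
    """Background-corrected viability relative to untreated control."""
    signal = np.asarray(a570) - np.asarray(a630)
    control = np.mean(np.asarray(ctrl570) - np.asarray(ctrl630))
    return 100.0 * signal / control
```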
In the Alamar Blue assay, blue non-fluorescent resazurin is reduced to pink fluorescent resorufin, which is a metabolic response of living cells. This resazurin conversion determines the cell viability [26,27]. After incubation, 20 µL of resazurin solution (1 mg/mL in PBS) was added to each well, and the cells were incubated for 2 h at 37 °C in the dark. Then, resorufin fluorescence was read at λex = 530 nm and λem = 590 nm using a fluorescence microplate reader (Fluoroskan, Thermo Fisher Scientific Inc., Waltham, MA, USA). The cell viability is presented as a percentage of the fluorescence obtained for untreated control cells (treated with 1×PBS only).
Statistics
The Shapiro-Wilk test was used to check the normality of the distribution. The results are presented as mean ± SD (standard deviation), n = 6. The data were analyzed by a paired Student's t-test.
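For illustration only, the normality check and paired comparison described above can be run as follows; the viability measurements below are hypothetical (n = 6):

```python
# Hedged sketch: Shapiro-Wilk normality test and paired Student t-test,
# with hypothetical viability measurements (% of control, n = 6).
from scipy.stats import shapiro, ttest_rel

control = [100.0, 98.5, 101.2, 99.4, 100.8, 97.9]
treated = [62.3, 58.9, 65.1, 60.2, 63.7, 59.8]

print(shapiro(treated))            # normality of the distribution
print(ttest_rel(control, treated)) # paired Student's t-test
```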
siRNAs
A prospective method of nucleic acid-based anti-cancer treatment is the use of siRNAs to induce the apoptosis in target cells. The therapeutic effect is achieved by down-regulating the expression of the genes that are responsible for the maintenance of cell proliferation. From this point of view, a promising target is the Bcl-2 family of apoptosis-regulating genes [28]. This family, the key regulator of the mitochondrial apoptosis pathway, consists of both pro-apoptotic and anti-apoptotic genes. Whereas the expression of these genes is normally balanced to keep a cell proliferating, putting this equilibrium out of balance by silencing the anti-apoptotic genes leads to the activation of the Bax-mediated programmed cell death [29].
As therapeutic oligonucleotides, siRNAs targeted to the mRNAs of three anti-apoptotic genes of the Bcl-2 family (Bcl-2, Bcl-xL, Mcl-1) were chosen from previously published works [30,31]. As a scrambled siRNA (Scr), we chose a complementary pair of oligoribonucleotides that had no exact matching with the human genome [31]. The sequences of the oligoribonucleotides used are given in Table S1. The activity of these siRNAs and their cocktails (equimolar siRNA mixtures) against cancer cells upon dendrimer-mediated delivery has been demonstrated in other works [15,32].
Cationic Carbosilane Dendrimers
The dendrimers used in the study (BDEF32 and BDEF33, Figure 1) have a rigid 1,3,5-trihydroxybenzene core, which is close to the surface of the dendrimers and accessible to the solvent; this forces the branches of the dendrimers to spread away from the core. For this reason, the dendrimers are less susceptible to steric hindrance during the synthesis of the branches (i.e., when the dendritic generation increases), which makes them easily accessible synthetically; in addition, they possess higher chemical stability and solubility compared to carbosilane dendrimers derived from a Si atom core [20].
The ammonium-terminated dendrimers GnO3[SNMe3I]m of the first (n = 1, m = 6) and second (n = 2, m = 12 (BDEF32)) generations (where Gn indicates the generation, O3 is a core derived from 1,3,5-trihydroxybenzene, and [SNMe3I]m describes the peripheral function and its number (m)) have already been studied with respect to their antibacterial properties against Gram-positive (Staphylococcus aureus CECT 240) and Gram-negative (Escherichia coli CECT 515) bacterial strains [20]. These dendrimers were shown to be effective antibiotics against both Gram-positive and Gram-negative model strains, while the reference compound, the well-known antibiotic penicillin V potassium salt, is active only against the Gram-positive S. aureus CECT 240 strain.
The study [17] demonstrated the successful use of the second-generation dendrimer G2O3[SNMe3I]12 (BDEF32) and its fluorescein isothiocyanate (FITC)-labeled analogue G2O3[SNMe3I]11-FITC for the delivery of the specific anti-HIV-1 siRNA Nef to interfere with HIV infectivity in human primary astrocytes. The work indicates that this type of transporter is a promising alternative for achieving very high transfection levels in "hard-to-transfect" astrocytes without causing cytotoxicity. The biodistribution of the dendriplex formed by the siRNA and the FITC-loaded dendrimer was studied in vivo in BALB/c mice. The dendrimer efficiently transferred the siRNA to mouse brains and crossed the blood-brain barrier, showing its great potential as a drug transporter for the therapy of neurological disorders. Motivated by these inspiring results, we chose this family of cationic dendrimers for the binding and transport of pro-apoptotic siRNAs to "easy-to-transfect" (HeLa) and "hard-to-transfect" (HL-60) tumor cell lines. The cationic carbosilane dendrimers of the second and third generations GnO3[SNMe3I]m (n = 2, m = 12 (BDEF32); n = 3, m = 24 (BDEF33)) were prepared according to the convenient procedure described in [20]. In brief, the functional groups on the surface of the dendrimers were introduced by a "thiol-ene click chemistry" reaction between spherical vinyl-functionalized dendrimers and 2-(dimethylamino)ethanethiol hydrochloride, which is fast, easily initiated, chemoselective, and gives high yields. Then, neutralization and subsequent quaternization with an excess of iodomethane led to the ammonium-terminated dendrimers BDEF32 and BDEF33 (Figure 1). Structural characterization of the dendrimers was carried out using 1H- and 13C-NMR spectroscopy and mass spectrometry.
Formation of Dendriplexes
Cationic carbosilane dendrimers can bind siRNAs, mainly by means of electrostatic interactions. The properties of the dendriplexes formed depend on their composition, i.e., on the charge ratio of the components taken. To study the ability of the dendrimers BDEF32 and BDEF33 to bind siRNAs and the properties of the formed dendriplexes, and to find a ratio that could be considered optimal for the further cell experiments, profiles of siRNA binding by the cationic dendrimers were obtained by means of several physico-chemical methods. The use of independent indirect methods to study siRNA binding is required, since each of these methods provides information only on a particular property of the dendriplexes (charge, size, structure, etc.) [33]. Thus, it is possible to follow all the stages of dendriplex formation: from initial binding, through structural changes in the RNA duplex, to the rearrangement of the complexes into maximally charged particles.
The early stage of siRNA binding to the dendrimers was studied by means of the ethidium bromide intercalation assay (EBIA). The method is based on the intercalation of the fluorescent dye EB into double-stranded siRNA (a binding site of 2-4 base pairs). These interactions result in a significant increase of the EB fluorescence intensity and cause a blue shift in the EB fluorescence emission maximum. Strong electrostatic interactions between the cationic dendrimers and siRNAs lead to a higher affinity of the dendrimers for siRNAs. The ensuing structural distortion of the siRNA causes the displacement of intercalated EB from the duplex and subsequent quenching of the EB fluorescence. Thus, EBIA is a sensitive method for monitoring even subtle changes in the double-stranded siRNA structure [33]. We observed a fast decrease of the EB fluorescence intensity upon titration of the EB-siRNA complexes by the dendrimers BDEF32 and BDEF33 up to approximately equivalent charge ratios (~1:1, for both dendrimers and regardless of the siRNA structure) (see Figure 2 and Figure S1), reaching a plateau at ~3-fold molar excess of the dendrimers. The reversible nature of the binding was confirmed by the recovery of the EB fluorescence intensity upon treatment of the saturated dendriplexes with heparin, a natural polyanion [34]. When the dendriplexes were treated with heparin, the siRNA was released.
Further evolution of the double-stranded RNA structure upon binding to the dendrimers was studied by CD spectroscopy. The CD spectra of the naked siRNAs show the typical curve of an A-form helix geometry, with a strong positive band at ~260-265 nm and a negative band at ~210 nm (Figure 3a,b). Since the binding of siRNAs to dendrimers disturbs their structure, titration of the siRNAs by the dendrimers was accompanied by a significant decrease in the intensities of the characteristic bands until the spectra were completely smoothed out (Figure S2). The double-stranded structure of the siRNA appeared to be completely distorted at a siRNA:dendrimer molar ratio of 1:2.5 (G3) or 1:5 (G2) (Figure 3c). Both values of molar excess corresponded to a 1.5-fold excess of cations. Moreover, after the addition of the dendrimers to siRNAs, we observed red shifts of the CD spectra maximum from ~260 nm to ~280 nm, and of the corresponding minimum from ~210 nm to ~215 nm. The addition of heparin to the saturated dendriplexes restored the initial shape of the siRNA CD spectra, indicating the release of native siRNAs from the complexes as a duplex (Figure 3c). This finding is important, since the duplex is the biologically active siRNA form. It also suggests that the base pairing of the duplex strands is not fully disrupted when bound to the dendrimer. In addition, the complexation was studied by means of an agarose gel retardation assay. This method is based on the mobility of charged macromolecules (or complexes of molecules) in an electric field through a porous lattice formed by the agarose gel, with the retardation of the molecules depending on their charge and mass. Herein, it allowed us to observe the non-complexed siRNA in samples. The electrophoresis data show that the siRNA was fully complexed by the cationic dendrimers, being in 2-fold charge excess, which corresponded to a 6.7-fold molar excess (BDEF32) or a 3.3-fold molar excess (BDEF33), respectively (Figure 4).
The complexes formed were cationic dendriplexes that were retained at the start line or migrated towards the cathode (Figure 4, right).

In the experiments, we observed the initial moment when stable supramolecular associates were formed. The increase of the dendrimer/siRNA ratio led to further evolution of the dendriplex structure, with reorganization of the particles and an increase of their size and surface charge. These changes could be monitored by means of electrophoretic light scattering. The technique is based on the scattering of laser light by nanoscale objects moving in an electric field. The value of the potential at the border of the solvate shell (the zeta potential), calculated from the scattering data, gives an idea of the surface charge of the nanoparticles. The profiles had a sigmoidal shape; the negative charges of the siRNAs were compensated at a 12-fold excess of the G2 dendrimer and at a 4-fold excess of the G3 one. A further increase of the dendrimer concentration led to charge saturation at 20-fold (G2) or 10-fold (G3) molar excesses of the dendrimers (Figure 5 and Figure S3).

The results obtained from the biophysical assays described above are summarized in Table 1. The molar ratio (MR) values represent the molar excess of dendrimers to siRNAs. Herein, since dendrimers bearing pH-independent charged groups have been used, it is also possible to operate with charge ratio (CR) values, i.e., the excess of cationic groups in the complexes. The comparison of CR values is more representative when dendrimers of different generations are considered, for it reveals the role of the number of charges per dendrimer molecule in the complexation. The dendritic effect is also better seen when analyzed in terms of CR. (Notes to Table 1: half-effect points of the titration curves (Figure 2) were taken as MR50; points of zero crossing of the zeta potential (Figure 5) were taken as MR50.)
In the complexation profiles, two points are of special interest: (1) the half-effect point (MR50, CR50), corresponding to the charge or molar ratio of half-binding; and (2) the saturation point (MRsat, CRsat), corresponding to the charge or molar ratio at which the measured parameter no longer changes. Whereas the former point is convenient for observing dendritic effects, the latter represents the properties of dendriplexes at the end of the complexation.
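As a hedged illustration (ours, not the paper's procedure), a half-effect point can be extracted from a titration profile by fitting a sigmoid; the data points below are hypothetical:

```python
# Hedged sketch: estimating CR50 by fitting a sigmoid to a titration
# profile (e.g., zeta potential vs. charge ratio). Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(cr, bottom, top, cr50, slope):
    return bottom + (top - bottom) / (1.0 + np.exp(-(cr - cr50) / slope))

cr = np.array([0.5, 1, 2, 3, 4, 5, 6, 8, 10])           # charge ratios
zeta = np.array([-28, -25, -15, -5, 2, 8, 12, 14, 15])  # mV, hypothetical

popt, _ = curve_fit(sigmoid, cr, zeta, p0=[-30, 15, 4, 1])
print(f"CR50 ~ {popt[2]:.1f}")
```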
Analyzing the data from the assays studying dendriplex formation, one finds an interesting regularity in the results obtained: the binding and saturation values obtained by CD spectroscopy, EBIA, and agarose gel electrophoresis were quite close to one another (CR50 was estimated as ~1, and a small positive dendritic effect is observed; CRsat is ~1.5-2). In contrast, the zeta potential values continue to evolve even after the saturation indicated by the other assays is achieved (at a CR50 value of 4 (for G2) or 2.4 (for G3), a strong positive dendritic effect is observed; CRsat is 6 for both generations).
The observed differences suggest that two processes occur upon the exposure of siRNAs to cationic dendrimers. The first is the binding of siRNA to the dendrimers, accompanied by distortion of the NA duplex structure (refer to the CR50 values in Table 1). After that, as the dendrimer concentration (i.e., the molar and charge excess) increases, the dendriplex particles appear to be rearranged owing to the reversibility of the dendriplex structure [35]; siRNA molecules become surrounded by more dendrimer molecules than before. This leads to an increase of the surface charge of the dendriplex particles.
Previously, the appearance of multiple CR50/MR50 values upon siRNA binding by cationic dendrimers has been observed [36,37], with minor differences found upon changing the dendrimer architecture (refer, for example, to the comparison of siRNA binding by polyamidoamine, phosphorus and carbosilane dendrimers [14,15]). However, to the best of our knowledge, stepwise dendriplex formation has never been deduced from these findings. The work [35] analyzes rearrangements of dendriplexes in the context of their lability towards spontaneous decomposition into smaller complexes.
It should be noted that the stepwise formation of dendriplexes has never been observed upon the binding of NAs to dendrimer-based nanoparticles. The surface charge of dendriplex nanoparticles is stabilized at relatively low CR/MR values, corresponding to full NA binding. However, similar behavior has been observed for amphiphilic PAMAM dendrons [38,39] and dendrimers [40], amphiphilic carbosilane dendrons [19], and dendron-decorated carbon nanotubes [18].
One can ask whether the dendriplex composition (i.e., charge/molar ratio) is optimal for biological experiments. Indeed, the initial intention is to keep the dendrimer content in samples as low as possible to avoid unwanted side effects, for instance, increased cytotoxicity. However, the use of a higher dendrimer excess likely leads to a decrease of the particle size [34,41,42], with the size evolution profiles correlating with those of the zeta potential. This factor is important for the efficient cell internalization of dendriplexes, since it occurs by means of clathrin- or caveolin-mediated endocytosis [43,44], where the size of the nanoparticles to be internalized is limited. Summarizing, the use of CRsat/MRsat values obtained from the zeta potential profiles for the preparation of dendriplexes is likely preferable for biological experiments (at least in vitro). The AFM images of the dendriplexes prepared at a CR value of 5, which is close to the saturation region, are given in Figure 6. The size of the dendriplexes varies from 35 to 75 nm.
Effect of Dendrimers and Dendriplexes on Cancer Cells
As target cell lines, HeLa (cervical carcinoma cells) and HL60 (human myeloid leukemia cells) were chosen to study the influence of the cells' characteristics on the transfection efficiency. The lines under study represented two major types of the human cancer cell lines: adherent (HeLa) and suspension (HL60) lines. Due to the peculiarities of the cell membrane composition, HL60 cells are known as hard-to-transfect [45], whereas HeLa cells are normally considered as easy-to-transfect cell lines.
Both types of dendrimers exhibited dose-dependent cytotoxicity towards HeLa and HL60 cells. The cytotoxicity profiles looked similar for both cell lines, suggesting that the leukemia cells (HL60) are, in principle, permeable to the dendrimers (Figure 7a,c). As expected, the siRNAs alone had no effect on cell viability (Figure 7b,d).
Figure 7. Cytotoxicities of dendrimers and siRNAs: viability of HeLa cells treated with (a) dendrimers BDEF32 and BDEF33 or (b) pro-apoptotic siRNAs, as measured by the MTT assay; viability of HL-60 cells treated with (c) dendrimers BDEF32 and BDEF33 or (d) pro-apoptotic siRNAs, as measured by the spectrofluorimetric resazurin assay (SRA). The viability of the cells was evaluated (% relative to the negative control) after 72 h of incubation with dendrimers and siRNAs.

Complexes of carbosilane dendrimers and siRNAs have been shown to penetrate efficiently across the cell membrane [15]. Herein, we assessed the effects of three anti-cancer siRNAs, Bcl-2, Bcl-xL, and Mcl-1, on the viability of HeLa and HL60 cancer cells. All of these siRNAs inhibit the synthesis of anti-apoptotic proteins of the Bcl-2 family, thus directing target cells to Bax-mediated apoptosis. As a control, the scrambled siRNA, which was claimed not to have any target sequence in the human genome [31], was taken.
Upon treating target cells with anti-cancer dendriplexes, several consecutive processes take place: endocytosis of the dendriplexes, endosomal escape, siRNA release and functioning, and apoptosis induction [32]. A deficiency at any step is expected to result in a sharp decrease in the overall effect on cell viability. In view of this, measuring cell viability is a convenient technique for fast screening of the dendriplexes' anti-cancer activity.
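As a concrete illustration of such a viability readout, the following minimal sketch normalizes raw MTT or resazurin signals to percent viability relative to the untreated negative control, as quoted in Figure 7. All numbers here are invented for illustration, not data from this study.

```python
import numpy as np

# Sketch of viability normalization for an MTT or resazurin readout,
# expressed as % of the untreated negative control (as in Figure 7).
# All numbers are invented for illustration.

def percent_viability(signal, blank, neg_ctrl):
    """Background-corrected viability (%) relative to untreated cells."""
    signal = np.asarray(signal, dtype=float)
    return 100.0 * (signal - blank) / (neg_ctrl - blank)

blank = 0.05                           # medium-only wells
neg_ctrl = 1.20                        # untreated cells, mean absorbance
treated = [1.15, 0.96, 0.71, 0.40]     # increasing dendriplex dose
print(np.round(percent_viability(treated, blank, neg_ctrl), 1))
# -> [95.7 79.1 57.4 30.4], a dose-response curve
```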
Dendriplexes containing pro-apoptotic siRNAs (at a 10-fold cation excess) have been shown to decrease target cell viability in a dose-dependent manner (Figure 8). The moderate cytotoxic effect of dendriplexes obtained at a siRNA concentration of 100 nM suggests a potential use of these constructs in combination therapy with low-molecular-weight chemotherapeutics. For instance, pro-apoptotic siRNAs carried by cationic dendrimers combine efficiently with 5-fluorouracil, as we have recently demonstrated [32]. These two agents act in synergy to induce cell death by affecting different mechanisms of cell cycle regulation.
It is worth noting that, contrary to expectation, the scramble siRNA exhibited some cytotoxic effect. This artifact is explained by partial pairing of the scramble siRNA strands with mRNAs of genes essential for normal cell functioning (5-methyltetrahydrofolate-homocysteine methyltransferase, kinesin family member 3B, and DNA topoisomerase I), resulting in inhibition of their translation [32]. These effects are not pronounced at relatively low siRNA concentrations; at 250 nM, however, the cytotoxic effect of the scramble siRNA was as high as that of the target siRNAs, suggesting a strong off-target effect that limits the use of the dendriplexes at high concentrations.
Conclusions
To use dendrimers as carriers for therapeutic nucleic acids, their compositions should be optimized to achieve better efficiency. Along with other factors, the dendrimer:NA ratio has a considerable effect on the biological activities of the dendriplexes. Using cationic carbosilane dendrimers and anti-cancer siRNAs as a convenient model, we have summarized the data on siRNA complexation obtained by several indirect methods to find the optimal dendriplex composition for siRNA delivery. Based on our findings, we suggest a scheme of stepwise dendriplex formation consisting of an NA-binding step and further rearrangements of the dendriplexes as the dendrimer excess increases. The dendriplexes containing the pro-apoptotic siRNAs Bcl-2, Bcl-xL, and Mcl-1 induced cell death in both HeLa and HL-60 cell lines. This demonstrates that cationic carbosilane dendrimers are versatile agents for transfecting both adherent and suspension cell lines, the latter being known to be resistant to common synthetic vectors. The results obtained provide useful practical data for the design of dendrimer-based gene therapy tools.

Supplementary Materials: Table S1. Sequences of the oligonucleotides used. Figure S1. EBIA profiles for the dendriplexes. Figure S2. CD spectroscopy curves and ellipticity changing profiles for the dendriplexes. Figure S3. Zeta-potential profiles of the dendriplexes.
The new Swiss postgraduate training (residency program) in neurology
Following the creation of the first university chair for neurology (Zurich, 1894), the Swiss Neurological Society (SNG) was founded in 1908. In 1932, neurology was recognized in Switzerland as an independent specialty and included in the medical (undergraduate) curriculum. The postgraduate training (residency program) in neurology initially lasted 4 years (including 1 year of internal medicine, 0.5 years of psychiatry, and 2.5 years of clinical neurology as mandatory rotations). In 1985, it grew to 5 years, and in 1996 to 6 years (including 1 year of internal medicine, 3 years of clinical neurology, and 1 year of clinical neurophysiology). Considering the results of a survey among young neurologists and "landscape changes" such as increasing subspecialization, economic pressure, requirements for research, the number of foreign doctors, and restrictions on working hours, the SNG undertook a revision, which was approved in 2016. Today, the Swiss neurology postgraduate training includes 1 year of internal medicine, a "common trunk" of 3 years of general neurology (with 1 year of clinical neurophysiology including sleep), and 2 years of "fellowships" with rotations in different subspecialties and up to 12 months of research.
Neurology arose as a new medical specialty first in England and France in the second half of the 19th century. 1 The first chairs of neurology were created in 1869 in Moscow (Sechenow University, A Kozhevnikov) and in 1882 in Paris (JM Charcot). 2 The first national neurological societies were founded in the United States (1875) and Belgium (1896); the prestigious "Neurological Society of London" (founded in 1885) and the "Société de Neurologie de Paris" (1899) became the national societies of the United Kingdom (1907) and France (1949), respectively, only later.
Despite the interest of several Swiss clinicians in brain disorders since the 17th-18th centuries (e.g., JJ Wepfer in Schaffhausen, AD Tissot in Lausanne, G Vieussuex in Geneva), neurological patients in the country were cared for by internists and psychiatrists until the end of the 19th century.
The first private practices devoted to neurological patients were those of P Dubois in Bern (from 1876) and of von Monakow (from 1887) and Veraguth (from 1897) in Zurich. 3,4 The first to obtain the "venia docendi" in neurology were G Burkhardt (1863) in Basel and P Dubois (1876) in Bern. The first professorships were created at the universities of Zurich in 1894 (C von Monakow, Extraordinarius and first Swiss chair in neurology), Bern in 1902 (P Dubois, Extraordinarius ad personam), Basel in 1932 (R Bing, Ordinarius ad personam), and Geneva in 1941 (De Morsier, Extraordinarius). The Swiss Neurological Society (SNG) was founded in 1908, and a few years later the outpatient clinics of von Monakow in Zurich (1913) and Bing in Basel (1916) were recognized as university institutions. It was only much later that the neurological inpatient units became independent and university neurology departments were created (1951 in Zurich, 1958 in Bern, 1961 in Geneva, and 1962 in Basel and Lausanne).
The growth of Swiss neurology
At its first meeting in 1908, the SNG listed 108 members. The number increased to 144 in 1930, 166 in 1956, 295 in 1987, 420 in 2003, and 691 in 2018, 5 a marked increase linked also to the creation of the Swiss Association of Young Neurologists (SAYN), with 135 SAYN members out of the 691 in 2018. 6 The increase over the last 30 years parallels the progress in neurological care and the rising need for neurologists. Between 1990 and 2017, the number of board-certified (Swiss Medical Association: FMH) neurologists in Switzerland increased by 111% (54 new titles in the years 1990-1994 (11 per year) versus 114 new titles in the years 2013-2017 (23 per year)). In 2017, there were 702 practicing neurologists in Switzerland, representing 2% of the 36,000 working physicians in the country (Figure 1). 7 Correspondingly, the ratio of neurologists to inhabitants went from 1 neurologist per 20,162 inhabitants (4.9/100,000 inhabitants) in 2008 to 1 neurologist per 12,083 inhabitants (8.3/100,000) in 2017. For comparison, in 2006 there was in Europe an average of 6.6 neurologists/100,000 inhabitants, with a range going from 0.9/100,000 (United Kingdom) to 17.4 (Georgia). 8 Several extra-European countries still have a much lower number of neurologists (e.g., 6 for a population of 30 million in Kenya in 2003). 9 Noteworthy, the number of neurologists coming from other countries who had their diploma recognized/accredited by the Swiss medical board also increased significantly (doubling from 2012 to 2017, to a total of 609 titles). 10
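The per-capita figures above can be verified with a one-line conversion (a trivial sketch; the population counts are those quoted in the text):

```python
# Quick check of the per-capita figures quoted above (counts from the text).
for year, inhabitants_per_neurologist in ((2008, 20162), (2017, 12083)):
    print(f"{year}: {100_000 / inhabitants_per_neurologist:.2f} "
          f"neurologists per 100,000 inhabitants")
# 2008: 4.96; 2017: 8.28 (reported in the text as 4.9 and 8.3)
```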
Undergraduate and postgraduate (residency) trainings in neurology in Europe
At the time of the first World Neurology Meeting in Bern in 1931, the only European countries having neurology as a mandatory field in the curriculum of medical students were Bulgaria, Estonia, Romania, Russia, and Norway. 6 In fact, in most European countries, neurology became an independent specialty only recently (e.g., in 1977 in Italy). As a consequence, most neurology residency programs were created only in the last 20-30 years (e.g., in 1991 in Italy). 11 A survey of 31 (out of 41) European countries in 2006 evaluated the pre- and postgraduate training in neurology. 8 The neurological training in the curriculum of medical students was found to range from 20 to 240 h (mean: 114 h), whereas the duration of the neurological residency ranged from 3 to 6 years (mean: 4.8 years), with working hours going from 30 to 80 (mean: 43) per week. The differences between countries were considered important, and more harmonization was felt to be needed. 12 [13][14][15][16] Strategies to motivate students and young doctors to choose neurology as a future specialty are also discussed. 17,18 A few countries have also developed tools to define and assess the core competences in neurology resident education.
Undergraduate and postgraduate (residency) trainings in neurology in Switzerland (1935-2016)
Following a first unsuccessful attempt by Otto Veraguth in 1911, 19 Mieczyslaw Minkowski and the SNG board succeeded in establishing neurology in 1935 as a mandatory specialty in the curriculum of medical students in Switzerland. 6 It was, however, only in 1967 that neurology became an examination subject at the final (federal/national) medical examination. Furthermore, neurology was in a way still assigned the status of a de facto "subdiscipline" of internal medicine. As such, neurology at the final oral examination was tested (until the last revision, which eliminated the oral examinations) as an elective subject in alternation with other internistic subdisciplines such as rheumatology.
It was in 1932 that the Swiss Medical Association (FMH) recognized neurology as an independent specialty. The contents of the residency training required to become an (FMH) certified neurologist were as follows: 2.5 years in neurology, 6 months in psychiatry, and 1 year of "preparatory training" ("Vorstudium", of which at least 6 months in internal medicine). In 1939, the same board approved the requirements for a joint neurology-psychiatry curriculum over 5.5 years ("Nervenarzt").
In 1978, 6 months of neurosurgery were included (under the SNG Presidency of Prof. Eric Zander, chair of neurosurgery at the University Hospital in Lausanne).
In 1985, the duration of the postgraduate training was increased to 5 years, including 6 mandatory months each of neurosurgery and clinical neurophysiology.
In 1996, the duration increased to 6 years, becoming, together with those of Austria, Finland, Slovenia, and the Netherlands, the longest in Europe. 8 It included 3 mandatory years of clinical neurology, 1 mandatory year of clinical neurophysiology, 1 optional year in a specialty/discipline related to neurology, and 1 mandatory year of internal medicine. For the first time, a rotation in neurorehabilitation was recognized.
The 2016 neurology postgraduate training (residency) program in Switzerland
In Switzerland, as in other countries, the pressure to increase clinical productivity, the restrictions in terms of working hours (more flexible working hours have recently been introduced in the United States, considering the negative impact of short continuous work hours on educational and clinical outcomes (without improvement in patient care) 20), the growing number of subspecialties and the importance of acute/intensive care in neurology, the increasing complexity (and requirements) of clinical research, the need for more flexible working schedules, and the increased number of doctors coming from other countries (and the need for more international mobility) were among the reasons that triggered discussions around the residency program. In addition, a need to enhance the attractiveness of the training arose from the increase in the number of residency positions offered relative to the number of candidates available/interested, which over time led to the need to "attract" residents from other countries. In 2010, a survey performed among newly board-certified neurologists revealed that, despite its length, the Swiss residency program was no longer considered competitive in the job market.
In 2011, the Swiss Neurological Society (SNG) (at that time under the presidency of one of us, Claudio Bassetti) decided to examine the situation in detail and to plan a revision of the residency program.Several critical areas were identified, including the following five points.
Neurophysiology
To obtain reimbursement for clinical neurophysiology examinations, a neurologist needs not only to be board-certified by the SNG but also to obtain a certificate from the Swiss Society of Clinical Neurophysiology (SGKN). The requirements to obtain this certificate were the following: (i) at least 9 months spent full-time in one single specialty (EEG, ENMG, or neurosonography); (ii) a minimal number of exams performed, that is, 800 EEG, 500 ENMG, or 500 neurosonographic exams; and (iii) passing a clinical neurophysiology examination, whose rules are determined by the SGKN. Of note, this SGKN examination is independent of the SNG examination needed to become a board-certified neurologist. The organization of the neurology curriculum did not allow residents to obtain more than one certificate (EEG, ENMG, or neurosonography). By contrast, foreign neurologists coming to Switzerland automatically obtain recognition of all three certificates in clinical neurophysiology, even if they have spent much less than 27 months (3 × 9 months) obtaining them. There was thus, paradoxically, a disadvantage to being a Swiss-trained neurologist in Switzerland.
Sleep Medicine
Despite the great advances in sleep science and medicine, education in sleep medicine was only marginally included in the residency program for neurologists, and the possibility to obtain the Swiss sleep certificate (which is issued by the Swiss Sleep Society) was precluded.
Subspecialties
The importance of neurological subspecialties (in addition to those already linked with the neurophysiological certificates, that is, stroke, epilepsy, and neuromuscular disorders) has greatly increased. Some specialized rotations are possible in some centers but are not officially recognized/supported (an analogy can be made with the fellowship programs in the United States, which follow a residency program of 3 years).
Research
A maximum of 6 months of patient-oriented research could be recognized as part of the curriculum. To promote the careers of talented clinical neuroscientists, a longer period was thought to be necessary.
Training sites
Traditionally, neurology training could be obtained only in the five university hospitals (Basel, Bern, Geneva, Lausanne, and Zurich) and three large cantonal hospitals (Aarau, Lugano, and St. Gallen) (so-called category A training centers). Promoting short rotations in smaller neurological divisions and units (and their recognition as training centers) was considered advantageous.
After 5 years of discussions, debates, and back-and-forth between the different societies and the Swiss Institute for Medical Education (SIWF), the SNG (at that time under the presidency of one of us, Renaud Du Pasquier) was able to finalize the curriculum, which was finally accepted by the FMH in 2016 (Figure 2).
In the following we list the main changes (summarized in Table 1).
Neurophysiology
One year of clinical neurophysiology, while still mandatory, can now be divided into two 6-month modules, which have to be performed in two different neurophysiology subspecialties (EEG, sleep, ENMG, or cerebrovascular neurosonography). These two modules offer residents the possibility to fulfill the requirements for two certificates in clinical neurophysiology. In order to guarantee sufficient experience and quality, the number of tests in the different neurophysiological domains was maintained, that is, 800 EEG, 500 ENMG, or 500 neurosonographic exams. During this year of clinical neurophysiology, it is recommended to free the resident from on-call duties. It also remains possible to spend the entire neurophysiological year in one subspecialty and to add a second year in the same or another subspecialty as a "fellowship" (see below).
Sleep medicine
Sleep medicine, while still not mandatory, can now be included in the basic neurophysiology training and recognized in the curriculum. To obtain the Swiss sleep certificate, the resident must combine the sleep rotation with a 6-month rotation in EEG/epileptology.
Subspecialties
The curriculum starts with 2 years of general neurology (including neurological emergencies) and 1 year of clinical neurophysiology, which are mandatory for everybody. The last 2 years of training are more flexible, introducing (and formalizing) the possibility of rotations in specific subspecialties ("fellowships") or more in-depth training in clinical neurophysiology (and sleep). These 6- to 12-month rotations can be combined with "electives" that can be done in neurosurgery, neuroradiology, and other neurology-related disciplines.
Research
Patient-oriented research is now recognized in the curriculum for up to 12 months.
Discussion
Neurology is one of the very few medical specialties that is still growing, with great potential but also challenges ahead. 20 This evolution, which is accompanied by (and partially due to) great advances in diagnostic as well as treatment opportunities, has triggered a revision of residency programs worldwide. Change was made necessary by multiple factors, including increasing subspecialization (in 2012, about 25 subspecialties of neurology were officially recognized in the United States 12), restrictions on working hours, the economic pressure on hospitals, the increasing complexity of (and formal requirements for) clinical research, and the needs (and expectations) of the newer generations.
Switzerland was one of the first countries to recognize neurology as an independent specialty and, in 1935, to recognize a postgraduate training (residency) program, which lasts 6 years (among the longest in Europe). Despite the integration of mandatory rotations in internal medicine and clinical neurophysiology, an emphasis on clinical and general neurology remained over the years a hallmark of neurology training in the country. This was made possible by favorable (although deteriorating) working conditions (including the number of specialists per number of patients/beds, salary, etc.).
The last revision was particularly time- and nerve-consuming (5 years, as compared to the 3 years that had been necessary for the 1996 revision (Ch.W. Hess, personal communication)) because of the different aims, which were in part conflicting with each other or with the interests of different subspecialty groups. One goal was to change contents while maintaining the traditional emphasis on a broad and solid education in general neurology. 21 This led to the definition of a "common trunk" of 3 years that is equal and mandatory for all residents in neurology. Another goal was to allow (but not necessarily request) a stronger diversification (and early specialization) in order to better prepare for a variety of different (academic, hospital-based, practice-based, consulting-based, etc.) neurological careers. There are now several possible rotations, including those in neurological research, neuroradiology, and sleep. A third goal was to offer two neurophysiological certificates during the residency, while reducing the time by one-third and leaving the number of examinations needed unchanged, in order to maintain the high educational standards in clinical neurophysiology in the country. 22 In conclusion, the authors of this article, who were involved in the revision of the training program and are daily concerned with its implementation, are convinced that the new residency program represents an attractive and valid instrument to prepare a new generation of neurologists for the challenges of their careers in a rapidly changing medical landscape.
As usual, only the future can tell which choices have been correct and which ones may need to be reconsidered and adapted. The discussions about the length and contents of the training and about the best strategies (and tools) for the evaluation of postgraduate training are an ongoing process, which will take into consideration the annual surveys performed by the FMH as well as direct feedback from the residents. 15,23,24
Figure 2. The different stages of the new Swiss postgraduate training in neurology (1 year internal medicine, 3 years "common trunk", 2 years of fellowships).

Table 1. The major changes of the last revision of the Swiss neurology postgraduate training (residency program). Point 5, training sites: general neurology training in smaller neurological units, and their recognition as training centers (categories B and C), is now accepted for up to 12 months.
Morphology and surface analyses for CH3NH3PbI3 perovskite thin films treated with versatile solvent–antisolvent vapors
Organometal halide perovskite (CH3NH3PbI3) semiconductors have been promising candidates as a photoactive layer for photovoltaics. Especially for high-performance devices, the crystal structure and morphology of this perovskite layer should be optimized. In this experiment, by employing solvent–antisolvent vapor techniques during a modified sequential deposition of PbI2–CH3NH3I layers, morphology engineering was carried out as a function of antisolvent species: chloroform, chlorobenzene, dichlorobenzene, toluene, and diethyl ether. The optical, morphological, structural, and surface properties were then characterized. When dimethyl sulfoxide (DMSO, solvent) and diethyl ether (antisolvent) vapors were employed, the CH3NH3PbI3 layer exhibited relatively desirable crystal structures and morphologies, resulting in an optical bandgap (Eg) of 1.61 eV, a crystallite size (t) of 89.5 nm, and high photoluminescence (PL) intensity. Finally, the stability of the perovskite films toward water was found to depend on morphological defects such as grain boundaries, which was evaluated through contact angle measurements.
Introduction
Organometal halide perovskite solar cells (PSCs) have received tremendous interest as a next-generation photovoltaic (PV) technology. [1][2][3][4][5] Perovskite can be designated by a common formula known as ABX3, where 'A' is a large organic cation [CH3NH3 or HC(NH2)2], 'B' is a metal cation (Pb, Sn), and 'X' is a halide (Cl, Br, I). The perovskite material is the light-harvesting component of the PSCs and offers many desirable characteristics such as low-temperature solution processability, 6,7 a high absorption coefficient, 8 a long carrier diffusion length, 9 high charge carrier mobility, 10 and an adjustable direct bandgap with suitable alternative metals, halogens, and organic cations. [11][12][13][14] These characteristics can be further modified by using additives, [15][16][17] compositional adjustments, 18,19 and solvent-antisolvent extraction approaches. 20 Hence, PSCs have been a promising candidate for commercialization in the current PV industry.
In general, the PV performance of PSCs relies on the morphology of the perovskite thin film, because the structural characteristics of the photoactive layer determine the PV performance of devices. [21][22][23][24][25][26][27][28][29][30][31][32] For example, a trap site (e.g., a surface defect or grain boundary) in a perovskite layer acts as a carrier recombination site, 33 resulting in reduced device performance. Thus, the morphology and crystallinity of the perovskite thin film are very important for fabricating high-efficiency PV devices. 34 To date, numerous approaches have been developed to obtain high-quality, defect-minimized perovskite thin films. 35,36 For example, thermal annealing of a perovskite film at 85-120 °C has been employed. 37 Furthermore, low-temperature antisolvent-assisted fabrication is one of the useful techniques for obtaining a film with desired morphologies. 18,19 Importantly, the additive and antisolvent strategies are both significantly promising for improving the performance of PSCs. [38][39][40][41][42][43][44][45][46][47][48][49] Moreover, the dipping time, 50 precursor type and concentration, 51 spin speed, 52 solvent types, 53,54 and temperature are important processing factors for optimizing a perovskite layer. In the sequential deposition of the PbI2 and CH3NH3I (MAI) layers, the MAI's intercalation into the PbI2 layer is critically important to obtain a high-quality perovskite without any unreacted precursor material. An incomplete conversion of PbI2-MAI into perovskite may be a problem for device performance. 55 However, for improving the stability of PSCs, some researchers have used a PbI2 interfacial nanolayer in their device configuration. [56][57][58][59][60] In this work, we employed a modified sequential deposition method for fabricating organometal halide perovskite thin films. For this purpose, solvent-antisolvent vapor techniques were adopted as a method of morphology engineering. Five antisolvents, chloroform (CF), chlorobenzene (CB), 1,2-dichlorobenzene (DCB), toluene (Tol), and diethyl ether (Et2O), were tested, which may act as extractors of the solvent, dimethyl sulfoxide (DMSO). The properties of the CH3NH3PbI3 thin films were then investigated as a function of antisolvent species, including UV-vis light absorption, micro-/nano-structural morphologies, crystal structures, photoluminescence (PL) emission, and surface properties through water contact-angle measurements. In this study, it was observed that when a perovskite layer is well crystallized, the surface polarity of the film persists for a longer time, i.e., it shows enhanced stability toward water or its vapor.
Materials and methods
Analytical-grade, high-purity reagents were used in all syntheses. All solvents and antisolvents were purchased from Fine Chemicals Ltd. Indium tin oxide/fluorine-doped tin oxide (ITO/FTO) coated glass substrates were purchased from TECHINSTRO Chemicals Ltd. PbI2 precursors were purchased from Tokyo Chemical Industries (TCI) and synthesized using a hydrothermal method. 27 CH3NH3I (MAI) was synthesized by reacting methylamine (aqueous, 40 wt%) and hydroiodic acid (aqueous, 57 wt%) in an ice bath for 2 h with stirring. The solvent was then evaporated using a rotary evaporator, and the precipitate was collected, washed three times with Et2O, and dried at 60 °C for 24 h in a vacuum oven. The resulting product, MAI, was used without further purification. To obtain a CH3NH3PbI3 precursor, the synthesized PbI2 and MAI were deposited on top of a poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS)-coated substrate using a modified sequential deposition technique: (a) PbI2/DMSO deposition, (b) MAI/IPA deposition at a 1:1 mole ratio, and (c) solvent-antisolvent exposure, in which the solvent is DMSO and the antisolvents are CF, CB, DCB, Tol, and Et2O.
Thin-film preparation
To study the effect of the solvent, DMSO was used to prepare a 1 M PbI2 solution (461.78 mg ml⁻¹ of DMSO), which was annealed at 80 °C for 12 hours. ITO-coated glass substrates were used to deposit the samples and were sequentially washed with detergent, DI water, and ethanol in an ultrasonic bath. A hole-transporting material, PEDOT:PSS, was deposited on top of the ITO glass substrate. The PbI2 precursor solution was then filtered through a 2 μm sterile polytetrafluoroethylene (PTFE) membrane filter. PbI2/DMSO was spin-coated on top of the PEDOT:PSS layer and heated at a temperature of 100 °C. Then, the as-synthesized MAI in isopropyl alcohol (IPA) solution was spin-coated on top of the PbI2 layer. After deposition of the MAI, the thin film was exposed to DMSO vapor at 80 °C for 10 minutes, and the crystallizable perovskite layer was then exposed to the different antisolvent vapors at 70 °C for 10 minutes. The optical, structural, morphological, and surface properties of the organometal halide perovskite thin films were then investigated accordingly.
Characterization
UV-visible spectra of the thin films were measured on a Shimadzu UV-2600-series double-beam diffuse-reflectance spectrophotometer over the range of 350-800 nm, fitted with deuterium and halogen lamps as sources. Transmission and reflection modes were recorded simultaneously. ITO glasses were used as a background reference for the thin films. The X-ray diffraction (XRD) patterns of the as-prepared perovskite samples were characterized by a Philips X'pert PRO-240 mm diffractometer provided with an integrated germanium detector and a Cu-Kα radiation source at λ = 1.54060 Å, operating at an applied voltage of 45 kV with a current intensity of 40 mA. The equatorial scans in continuous mode were taken from 2θ = 4° to 80° at a step of 0.017° with a scan step time of 24.4 seconds. To probe the electron transition behavior of the samples, photoluminescence (PL) spectroscopy was performed on an F-7000 fluorescence spectrophotometer (Hitachi, Japan) using a xenon lamp at an excitation wavelength of 532 nm, with emission recorded from 600 nm to 850 nm at a scan speed of 1200 nm min⁻¹. The morphologies of the perovskite films were investigated by field-emission scanning electron microscopy (FE-SEM; Hitachi SU8000 series, Japan) at an accelerating voltage of 5.0 kV. Energy-dispersive X-ray spectroscopy (EDX) was performed to probe variations in the elemental distribution of the thin films, and mapping analysis was done to obtain elemental maps at the nanoscale. A contact angle goniometer (KRUSS GmbH, DSA25; Germany) was used to record and analyze the effect of the antisolvents on the surface energies of the perovskite thin films. To determine the contact angle, the edge of the water droplet was detected and fitted by a polynomial fitting approach. Measurements were taken at time intervals of 40 ms over a period of 30 seconds. In the text, the dimension of (cal cm⁻³)^1/2 is used for the solubility parameter.
Results and discussion
Fig. 1(a) outlines the film fabrication process: the MAI/IPA layer is deposited on top of PbI2/PEDOT:PSS/ITO, and finally the solvent (DMSO) and antisolvent (CF, CB, DCB, Tol, Et2O) vapors are sequentially exposed to the perovskite layer. The properties of the solvent and antisolvents are summarized in Table 1. Here, the solubility parameter (δ), with the dimension of (cal cm⁻³)^1/2, is in the order of 14.5 (DMSO) > 10 (DCB) > 9.5 (CB) > 9.3 (CF) > 8.9 (Tol) > 7.4 (Et2O). For solvent-antisolvent vapor engineering, the solvent and antisolvent should be miscible, whereas the perovskite and antisolvent should be immiscible. During the film-formation process, if the number of nucleation sites is reduced, the crystal and grain size of the perovskite may increase, resulting in a high-quality film with few grain boundaries. For this purpose, the solvent (DMSO) molecules should be quickly extracted from the wet DMSO/perovskite film with the help of the antisolvent. 61,62 Furthermore, it is notable that although perovskite is hygroscopic and hydrophilic, the measured water contact angle was reported to be very high (i.e., significantly hydrophobic). This paradox was resolved by recognizing that hydrophobic PbI2 is formed at the interface of water and CH3NH3PbI3. 63 In other words, the measured water contact angle is not that of CH3NH3PbI3, but that of PbI2 (i.e., the result of perovskite degradation). Here, of course, the contact angle data may also reflect the morphologies of a film, including grain boundaries. Fig. 1(b) shows the chemical structures of the solvent and antisolvents. Here the solubility parameter (δ) is equal to the square root of the cohesive energy density (CED), i.e., δ = (Û_vap/V̂)^1/2, where Û_vap is the molar heat of vaporization and V̂ is the molar volume. 64 Furthermore, two small organic molecules (here, solvent and antisolvent) are expected to be miscible because of a large entropic gain, although there is an enthalpic cost from the apparent dissimilarity in solubility parameters. Hence, ΔG_mix = ΔH_mix − TΔS_mix < 0, in which ΔG_mix, ΔH_mix, and ΔS_mix denote the Gibbs free energy, enthalpy, and entropy of mixing, respectively, and T is temperature. On the other hand, for the intermolecular interactions between antisolvent and perovskite, the relation should be ΔG_mix > 0, facilitating the drying of a wet perovskite film. Fig. 2(a) shows the UV-vis absorption spectra of perovskite films as a function of antisolvent species. As shown in Fig. 2(a), although the overall shape of the absorption is similar, the absorption edge, i.e., the optical bandgap (Eg), differs slightly due to the non-identical ordering states of the films. Here, the absorption data were replotted using the Tauc model, 65 (αhν)^(1/n) = b(hν − Eg), where α is the absorption coefficient, b is a constant (disorder parameter), h is Planck's constant, and ν is the frequency of light. The value of n is 1/2 for a direct-bandgap semiconductor and 2 for an indirect bandgap. 66 Hence, n = 1/2 can be used because CH3NH3PbI3 belongs to the former. As shown in Fig. 2(b), the plot of (αhν)² vs. hν results in optical bandgaps of ~1.61-1.63 eV. As an example, the absorption edge is 770.19 nm (Eg = 1.61 eV) for the Et2O vapor condition, whereas it is 760.74 nm (Eg = 1.63 eV) for the 'None' condition, i.e., a perovskite sample not exposed to any solvent/antisolvent vapor. Here, the smaller bandgap indicates that the perovskite semiconductor has a better-organized structure, as observed in stereoregular polymer semiconductors through the red-shift in the absorption spectra. [67][68][69] Note that if the perovskite becomes a single-crystalline wafer, the bandgap was reported to be much smaller, about 1.36 eV, corresponding to a light absorption onset at 910 nm. 70 This trend indicates that the allowed energy states of an electron increase with decreasing defect density in the crystalline lattice forming a periodic potential. In other words, the energy band widens and the bandgap decreases as the quality of the perovskite film is improved. Furthermore, if there are any defects in the perovskite, the typical trap energies are known to be shallow because of its defect-tolerance property. [71][72][73] Hence, based on the optical data, the ordering of the perovskite materials is in the order of Et2O > Tol > DCB > CB > CF > 'None'. Interestingly, without any solvent-antisolvent vapor treatment, the perovskite sample exhibits the smallest optical absorption, indicating that the vapor treatment is a useful technique for organizing the perovskite films. For Et2O, properties such as δ = 7.4 (cal cm⁻³)^1/2 and bp = 34.6 °C should be helpful for extracting DMSO from the wet DMSO/perovskite film. Fig. 4 shows the elemental mapping images of perovskite films for three representative cases: (a) 'None', (b) Tol, and (c) Et2O. Here, the mapping data follow the morphologies of the samples according to the SEM images (Fig. 3). Accordingly, the Et2O-treated perovskite film shows a uniform distribution of organic/inorganic elements, whereas the Tol-treated one exhibits some voids/pinholes, as shown in Fig. 4. Fig. 5(a) shows XRD patterns of the perovskite films as a function of antisolvent species at room temperature.
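The band-gap extraction described above can be illustrated with a short numerical sketch: build the Tauc quantity (αhν)² for a direct-gap material, fit the linear rise above the edge, and extrapolate to zero. The "spectrum" below is synthetic, constructed with an edge near 1.61 eV for illustration; it is not the measured data.

```python
import numpy as np

# Sketch of a Tauc-plot band-gap estimate for a direct-gap film:
# plot (alpha*h*nu)^2 vs. h*nu (n = 1/2) and extrapolate the linear
# rise to zero. The spectrum is synthetic, built with an edge near
# 1.61 eV for illustration.

H_EV = 4.135667e-15  # Planck constant, eV*s
C = 2.998e8          # speed of light, m/s

wavelength_nm = np.linspace(600, 800, 201)
hnu = H_EV * C / (wavelength_nm * 1e-9)            # photon energy, eV

eg_true = 1.61                                     # assumed edge, eV
alpha = np.sqrt(np.clip(hnu - eg_true, 0.0, None)) / hnu
tauc = (alpha * hnu) ** 2                          # direct transition

# Fit the linear region just above the edge, extrapolate to tauc = 0.
mask = (tauc > 0.05 * tauc.max()) & (tauc < 0.6 * tauc.max())
slope, intercept = np.polyfit(hnu[mask], tauc[mask], 1)
print(f"estimated Eg = {-intercept / slope:.2f} eV")   # ~1.61 eV
```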
Importantly, CH3NH3PbI3 is a polymorphic material, exhibiting an orthorhombic crystal structure at T < 162.2 K, tetragonal at 162.2 K < T < 327.4 K, and cubic at T > 327.4 K. 74 Indeed, based on the data in Fig. 5(a), the calculated lattice parameters are a = b = 8.87 Å and c = 12.65 Å, confirming that the perovskite has a tetragonal structure at ~298 K, in agreement with the literature. 75 Interestingly, in Fig. 5(a), it is noticeable that the 'None/CF/CB/DCB' conditions display an unreacted PbI2 peak at 2θ ≈ 13°, 59 whereas the Et2O and Tol conditions do not exhibit such a peak. This observation indicates that, in the modified sequential deposition process, the PbI2 compounds react completely with MAI when DMSO-Tol or DMSO-Et2O is used as the solvent-antisolvent couple. This is because Et2O [δ = 7.4 (cal cm⁻³)^1/2 and bp = 35 °C] and Tol [δ = 8.9 (cal cm⁻³)^1/2 and bp = 111 °C] are relatively nonpolar and volatile, allowing the wet DMSO/perovskite film to dry quickly (i.e., the mixed DMSO-Tol or DMSO-Et2O molecules evaporate quickly from the hygroscopic perovskite). This rapid crystallization results in a complete reaction between PbI2 and MAI. Furthermore, based on the most intense peak, at the (110) crystallographic plane in Fig. 5(a), the crystallite size of each perovskite film could be estimated. The result is displayed in Fig. 5(b). Importantly, the trend of the crystallite size variation is in line with the UV-vis absorption data. However, one exception was observed in the 'CF' condition, which has volatile characteristics (bp = 61 °C). Recall that the boiling point is in the order of 189 °C (DMSO) > 180 °C (DCB) > 131 °C (CB) > 111 °C (Tol) > 61 °C (CF) > 35 °C (Et2O). Table 2 shows the crystallite size of the (110) crystallographic plane, for which the d-spacing is 0.623 nm. Here the crystallite size (t) was calculated based on Scherrer's equation, t = Kλ/(B cos θ), 76,77 where K is the shape factor, λ (= 0.154 nm) is the wavelength of the X-rays, and B is the full width at half maximum (FWHM) at the diffraction angle θ. The d-spacing was calculated based on Bragg's law (λ = 2d sin θ). Fig. 6 shows PL spectra of the perovskite films as a function of antisolvent species, in which the peak was observed at 792.6 nm ('None'), 792.0 nm (CF), 792.2 nm (CB), 792.5 nm (DCB), 792.6 nm (Tol), and 792.2 nm (Et2O), indicating that the PL peak positions have no direct relationship with the optical bandgap (Eg) shown in Fig. 2(b). However, the PL intensity correlates directly with the Eg from the UV-vis absorption data. For example, when Eg is 1.61 eV (the most red-shifted sample), the PL intensity is highest, indicating that when the crystallite size is large in a well-organized morphology, the radiative recombination process occurs abundantly, resulting in the highest PL intensity. In other words, when the morphology has many defects, as in the 'None' or 'CF' conditions, the probability of nonradiative recombination is increased, resulting in a weak PL intensity, as shown in Fig. 6.
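A compact numerical sketch of the Bragg and Scherrer estimates used above follows. The 2θ position is chosen near the (110) reflection so that the d-spacing reproduces the quoted 0.623 nm; the FWHM value is an illustrative assumption chosen to land near the ~89.5 nm crystallite size, and K = 0.9 is the commonly used shape factor (the text does not state K explicitly).

```python
import numpy as np

# Sketch of the Bragg and Scherrer estimates. The 2-theta position is
# chosen near the (110) reflection so d reproduces the quoted 0.623 nm;
# the FWHM is an illustrative assumption, and K = 0.9 is the commonly
# used shape factor.

def bragg_d_spacing(two_theta_deg, lam_nm=0.154):
    """First-order Bragg law: d = lambda / (2 sin theta)."""
    theta = np.radians(two_theta_deg / 2.0)
    return lam_nm / (2.0 * np.sin(theta))

def scherrer_size(fwhm_deg, two_theta_deg, k=0.9, lam_nm=0.154):
    """Crystallite size t = K*lambda / (B cos theta), B in radians."""
    b = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return k * lam_nm / (b * np.cos(theta))

two_theta = 14.2  # degrees, (110) reflection of tetragonal CH3NH3PbI3
print(f"d-spacing        = {bragg_d_spacing(two_theta):.3f} nm")       # 0.623
print(f"crystallite size = {scherrer_size(0.089, two_theta):.0f} nm")  # ~90
```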
Finally, to understand the surface polarity of the perovskite films depending on the solvent-antisolvent vapor exposure, the water contact angle (θc; here, the subscript 'c' stands for contact angle) was measured (see Fig. 7 and 8). It should be borne in mind that when water is dropped on the surface of a perovskite film, a nanoscale PbI2 film is known to form immediately at the interface between water and perovskite through degradation of CH3NH3PbI3. 63 However, despite this PbI2 formation, the stability of the perovskite films could still be studied. This is because polycrystalline morphologies contain many defects, such as grain boundaries, through which water molecules can easily penetrate, resulting in a change of the surface polarity of a film.
The raw contact-angle data at step number 17 are displayed in Fig. 8 as an example. Fig. 7(a) shows the contact angle change as a function of step number, in which each measurement was taken at time intervals of 40 ms over 30 s. In Fig. 7(a), the first striking observation is that, with increasing step number, the contact angle decreased, indicating that the polarity of the perovskite film was changed through the water-induced degradation effect. Note that in our previous work, 27 the contact angle and surface energy for pure PbI2 films (with DMSO used as the processing solvent) were 130° and 6.3 mJ m⁻², respectively. However, in this work, the perovskite film (on which PbI2 is formed, in a water/PbI2/CH3NH3PbI3 configuration) shows a water contact angle of about 120° and an average surface energy of ~11.5 mJ m⁻² (see Step 1 in Table 3). Hence, the water contact angle of a perovskite film should be affected by the perovskite's degradation (PbI2), its morphologies (including grain boundaries), and other factors.
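The text does not state which model converts θc into γsv. One common single-liquid choice is Neumann's equation of state, which, with standard water parameters, happens to reproduce the reported pairing of ~120° with ~11.5 mJ m⁻²; the sketch below solves it numerically and should be read as an assumption about the method, not a statement of the authors' procedure.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: converting a water contact angle into a solid surface energy
# with Neumann's equation of state. This is an assumption about the
# method (the text does not name its model); gamma_lv and beta are
# standard literature values for water.

GAMMA_LV = 72.8   # mJ/m^2, water surface tension near room temperature
BETA = 1.247e-4   # (m^2/mJ)^2, empirical constant

def residual(gamma_sv, theta_deg):
    lhs = np.cos(np.radians(theta_deg))
    rhs = -1.0 + 2.0 * np.sqrt(gamma_sv / GAMMA_LV) \
          * np.exp(-BETA * (GAMMA_LV - gamma_sv) ** 2)
    return lhs - rhs

def surface_energy(theta_deg):
    """Solve the equation of state for gamma_sv in mJ/m^2."""
    return brentq(residual, 0.1, GAMMA_LV, args=(theta_deg,))

for theta in (120.0, 90.0, 60.0):
    print(f"theta = {theta:5.1f} deg -> gamma_sv = {surface_energy(theta):5.1f} mJ/m^2")
# theta = 120 deg gives ~11.6 mJ/m^2, close to the Step-1 average quoted above.
```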
The change of the water contact angle with time was smaller for Et2O and Tol than for the other cases, indicating that when the perovskite materials are well crystallized (recall Fig. 3), the stability of the films (i.e., their water resistivity) is significantly improved under humid conditions. The next observation is that at steps 17 and 18, the contact angle was saturated, as shown in Fig. 7(b). Considering the golden triangle of solar cells (efficiency, stability, and cost), 78 this stability-enhanced perovskite film should be important, providing general insight into the necessity of a single crystal 70 without any grain boundaries as an ideal condition, provided there is practical processability.
Conclusion
The morphologies and surface properties of CH3NH3PbI3 thin films were studied by varying the solvent-antisolvent vapor treatment conditions, for which the solvent was dimethyl sulfoxide (DMSO) and the antisolvents were chloroform (CF), chlorobenzene (CB), dichlorobenzene (DCB), toluene (Tol), and diethyl ether (Et2O). The major findings are as follows. First, according to the UV-vis absorption data, the optical bandgap of the perovskite films ranged from 1.61 eV (Et2O) to 1.63 eV ('None': without any solvent-antisolvent vapor treatment). Second, according to the SEM images, when the antisolvent was Et2O or Tol, the morphologies and crystal structures of the perovskite films were improved. Third, when Et2O or Tol was used as the antisolvent, the precursor materials (PbI2 and CH3NH3I) reacted completely (i.e., without any PbI2 residue) according to the XRD patterns. Fourth, according to the PL emission data, when the crystallite size was large (t = 89.5 nm for both the 'Et2O' and 'Tol' conditions), the PL intensity was higher than in the other conditions (DCB, CB, CF, and 'None'). Fifth, by measuring the water contact angle as a function of antisolvent species, the surface energy (γsv) of each perovskite film was estimated. Initially, the average γsv for all samples was 11.53 ± 0.64 mJ m⁻². However, when the contact angle data were saturated at step number 17, the γsv values differed depending on the antisolvent condition: γsv = 25 mJ m⁻² (Et2O) and γsv = 54.3 mJ m⁻² (None), indicating that the high-quality films (exposed to Et2O) have greater stability toward water than the others. Hence, the solvent-antisolvent vapor technique, if well utilized, should be useful for enhancing the stability of perovskite layers. Finally, our future work may include device performance studies extending the current study, leading to the processing-structure-property-performance relationship of perovskite solar cells.
Conflicts of interest
The authors declare no competing financial interest.

Table 3. Water contact angle (°) and surface energy (γsv, mJ m⁻²) of organometal halide perovskite thin films at step numbers 1, 17, and 18.
Successful use of closed-loop allostatic neurotechnology for post-traumatic stress symptoms in military personnel: self-reported and autonomic improvements
Background Military-related post-traumatic stress (PTS) is associated with numerous symptom clusters and diminished autonomic cardiovascular regulation. High-resolution, relational, resonance-based, electroencephalic mirroring (HIRREM®) is a noninvasive, closed-loop, allostatic, acoustic stimulation neurotechnology that produces real-time translation of dominant brain frequencies into audible tones of variable pitch and timing to support the auto-calibration of neural oscillations. We report clinical, autonomic, and functional effects after the use of HIRREM® for symptoms of military-related PTS. Methods Eighteen service members or recent veterans (15 active-duty, 3 veterans, most from special operations, 1 female), with a mean age of 40.9 (SD = 6.9) years and symptoms of PTS lasting from 1 to 25 years, undertook 19.5 (SD = 1.1) sessions over 12 days. Inventories for symptoms of PTS (Posttraumatic Stress Disorder Checklist – Military version, PCL-M), insomnia (Insomnia Severity Index, ISI), depression (Center for Epidemiologic Studies Depression Scale, CES-D), and anxiety (Generalized Anxiety Disorder 7-item scale, GAD-7) were collected before (Visit 1, V1), immediately after (Visit 2, V2), and at 1 month (Visit 3, V3), 3 (Visit 4, V4), and 6 (Visit 5, V5) months after intervention completion. Other measures only taken at V1 and V2 included blood pressure and heart rate recordings to analyze heart rate variability (HRV) and baroreflex sensitivity (BRS), functional performance (reaction and grip strength) testing, blood and saliva for biomarkers of stress and inflammation, and blood for epigenetic testing. Paired t-tests, Wilcoxon signed-rank tests, and a repeated-measures ANOVA were performed. Results Clinically relevant, significant reductions in all symptom scores were observed at V2, with durability through V5. There were significant improvements in multiple measures of HRV and BRS [Standard deviation of the normal beat to normal beat interval (SDNN), root mean square of the successive differences (rMSSD), high frequency (HF), low frequency (LF), and total power, HF alpha, sequence all, and systolic, diastolic and mean arterial pressure] as well as reaction testing. Trends were seen for improved grip strength and a reduction in C-Reactive Protein (CRP), Angiotensin II to Angiotensin 1–7 ratio and Interleukin-10, with no change in DNA n-methylation. There were no dropouts or adverse events reported. Conclusions Service members or veterans showed reductions in symptomatology of PTS, insomnia, depressive mood, and anxiety that were durable through 6 months after the use of a closed-loop allostatic neurotechnology for the auto-calibration of neural oscillations. This study is the first to report increased HRV or BRS after the use of an intervention for service members or veterans with PTS. Ongoing investigations are strongly warranted. Trial registration NCT03230890, retrospectively registered July 25, 2017.
Background
Advanced understanding and treatment for post-traumatic stress disorder (PTSD) will require a paradigm that appreciates its complexity and holds a promising solution for its extensive burden of suffering. Conventionally, PTSD is classified and treated as a behavioral disturbance that can follow a traumatic event, and its main symptom clusters pertain to re-experiencing the trauma, avoidance and generalized numbing, negative cognitions and mood, and heightened arousal [1]. However, while military service members with PTSD are beset with increased psychosocial risks, including compromised role functions [2], substance abuse [3], and suicidality [4], studies also show an increased risk for cardiovascular and metabolic diseases [5,6] as well as all-cause mortality [7]. Care for individuals with traumatic stress symptomatology should thus entail attention to both behavioral and physical health status. Moreover, although therapies based on re-exposure to trauma have been designated as evidence-based treatments [8], there is concern about high dropout rates associated with this approach [9], as well as a lack of impact on sleep disturbances [10]. In addition, despite the undeniable suffering of American veterans of the Vietnam War, which contributed to recognition of PTSD as a Diagnostic and Statistical Manual of Mental Disorders (DSM) clinical disorder, there are also concerns that the medicalization of combat-related stress can lead to unhelpful consequences related to stigmatization [11,12].
To improve modeling of PTSD (or post-traumatic stress, PTS, more broadly), a promising starting point may be the recognition of the brain as the organ of central command. The physiological paradigm of allostasis (stability through change) notes that the brain directs set points for biological function on the basis of perceived needs for organism-level survival [13,14]. Stated another way, it is the brain that lets environmental stressors "get under the skin" [15]. Within the allostasis paradigm, the sympathetic and parasympathetic divisions of the autonomic nervous system serve as principal pathways for bidirectional communication and coordination between the brain and peripheral physiology. For example, in the setting of an acute threat, the brain orchestrates these divisions to effect an instantaneous redirection of metabolic resources away from the anabolic process of digestion toward the catabolic process of mobilization. Allostasis thus fully predicts that over time, exposure to traumatic stresses is likely to entail both behavioral and physical health disturbances. For therapeutics, allostasis points to the potential for multi-system symptom reduction through interventions that are expressly designed to facilitate the brain's role as the organ of central command [16]. Moreover, through its emphasis on context-dependent heightened stress responsivity, the allostasis paradigm supports destigmatization of PTS-related phenomenology. There is no single normal mode of brain function; rather, there are ranges for set points [17] that are subject to modification based on neuroplasticity. The functional set point for a given capacity (vigilance, for example) may or may not be adaptive ("pathological"), depending on the environment.
Given its crucial and distributed role for connecting the brain, body, and behavior, the autonomic nervous system merits special attention in the allostasis paradigm. Autonomic regulation of the cardiovascular system can be characterized by measuring heart rate variability (HRV) and baroreflex sensitivity (BRS). HRV and BRS indicate the physiological capacity to produce dynamically varied responses to the changing needs of an environment, and prospective studies show that decreased HRV is a risk factor for incident cardiovascular disease [18] and all-cause mortality [19]. Furthermore, depressed HRV is seen generally across behavioral disorders [20], specifically in military personnel and veterans with diagnosed PTSD [21][22][23][24][25], and as a pre-deployment predictor of new post-deployment PTSD diagnoses or symptom severity [26,27].
An open question in studies of PTS and autonomic dysregulation pertains to how or why HRV is depressed in persons who have been exposed to traumatic stress. An epigenetic and neural oscillatory explanation is provided by the bihemispheric autonomic model (BHAM) [28]. The BHAM begins by recognizing that there is hemispheric lateralization in the management of the autonomic nervous system, with the right and left sides having primary responsibility for the sympathetic and parasympathetic divisions, respectively. The BHAM proposes that trauma-related sympathetic hyperarousal may be an expression of maladaptive right temporal lobe activity, whereas the avoidant and dissociative features of the traumatic stress response may be indicators of a parasympathetic "freeze" response that is significantly driven by the left temporal lobe. An implication of the BHAM is that a successful allostatic (i.e., brain-based, top-down) intervention may facilitate the reduction of symptom clusters associated with autonomic disturbances through the mitigation of maladaptive asymmetries.
The objective of this report is to document changes in self-reported symptoms and autonomic and functional measures after use of a closed-loop acoustic stimulation neurotechnology by a series of active-duty service members or recent veterans with military-related symptoms of traumatic stress. The neurotechnology strategy (HIRREM®; Brain State Technologies, Scottsdale, Arizona) is aligned with the allostasis paradigm through its brain-focused strategy for the auto-calibration of neural oscillations [16]; through its attention to hemispheric asymmetry, it is also designed to leverage insights described in the BHAM. We hypothesized that use of the neurotechnology would be followed by reductions in self-reported PTSD-related symptomatology as well as improvements in HRV and BRS. For a subset of the initial subjects, we also conducted exploratory analyses of changes in biochemical and epigenetic markers related to stress or inflammation. Changes in brain network connectivity demonstrated through analysis of whole-brain, resting-state magnetic resonance imaging (MRI) evaluations are reported elsewhere [29,30].
Population and subject recruitment
This single-site, ongoing, IRB-approved pilot study (ClinicalTrials.gov registration NCT03230890) is being carried out in the Department of Neurology at the Wake Forest School of Medicine, Winston-Salem, North Carolina, USA. Initial eligibility screening is conducted through an online questionnaire followed by a phone conversation. To be considered for inclusion, individuals must be active-duty military service members or recent veterans with service since 2001 with symptoms of military-related traumatic stress, including insomnia, poor concentration, sadness, irritability, or hyper-alertness, with or without a history of traumatic brain injury (TBI). Participants are required to have either a formal diagnosis of PTSD, a referral from a military medical provider confirming active PTS symptoms, or prior or current treatment for the same. For those participants who are special operations service members, the study deliberately does not use a symptom inventory threshold score as an eligibility criterion because of the under-reporting of symptoms among these individuals (personal communication, Naval Special Warfare medical officer). If contact is established through self-referral in the absence of a formal PTSD diagnosis, a score of 50 points on a screening PCL-M is required. Potential participants have been identified by referrals from military medical providers, as well as the Care Coalition and Preservation of the Force and Family, which both support the special operations community of the United States Armed Forces. Several participants have joined through self-referral after word of mouth from other participants or review of open studies on the research program webpage on the Wake Forest Baptist Health website. Based on advice from military personnel and recognition that the potential stigma associated with a diagnosis of PTSD might limit recruitment, study flyers and related materials focused on symptoms and did not include use of the term "PTSD." Exclusion criteria are the inability to provide informed consent, inability to attend all study visits or sit comfortably in a chair, bilateral total hearing loss, known seizure disorder, or an ongoing need for the use of benzodiazepines, opiates, anti-psychotic medications, selective serotonin reuptake inhibitors (SSRIs) or selective norepinephrine reuptake inhibitors (SNRIs), prescribed sleep medications including zolpidem or eszopiclone, stimulant medication, or thyroid hormones. Those with ongoing or anticipated regular use of recreational drugs, alcohol, or energy drinks during the intervention and in the 4 weeks following intervention completion, or a lack of internet or smart phone access, were also excluded. With the knowledge of and under the direct management of their medical provider, participants could titrate off what would otherwise have been considered exclusionary medications or recreational substances prior to enrollment.
Intervention schedule
Beginning on a Monday morning, and following informed consent, baseline (Visit 1, V1) outcome measures are collected (details below), including self-reported symptom inventories, physiological and functional measures, an assessment of brain electrical activity, blood and saliva samples for biomarker or epigenetic testing, and a whole brain, resting-state MRI scan. Participants then receive a series of closed-loop acoustic stimulation sessions (HIRREM) over a period of 12 days. The initial two sessions are given on the afternoon of the first day following the completion of all baseline data collections. Thereafter, participants receive two sessions daily, with a break between sessions. Typically, no sessions are given on Saturday (day 6), many participants receive a single afternoon session on Sunday (day 7), and a final, single morning session is given on the second Friday (day 12), prior to the repeated outcome measures.
All outcome measures are repeated prior to departure on day 12 (Visit 2, V2), except that for scheduling purposes and to ensure a similar time of day of sampling, blood and saliva collection followed the morning session on day 11. Symptom inventories are collected remotely via online surveys at 1, 3, and 6 months following intervention completion (V3, V4, V5, respectively). Brief informal interviews are conducted with participants in person during V2 data collection, and narrative comments are sought at subsequent data collections by either phone or email.

Self-reported symptom inventories

The primary symptom inventory is the PTSD Checklist - Military version (PCL-M) [31]. Seventeen items are rated on a Likert scale with a composite score range of 17 to 85. A score of 50 or higher is correlated with a probability of military-related PTSD [32], although cutoff scores as low as 30 to 34 have been suggested for active-duty soldiers seen in primary care populations [33]. A reduction of ≥ 10 points in the PCL-M has been suggested to be a clinically significant change [34]. The Insomnia Severity Index (ISI) [35] is a 7-question measure, with responses from 0 to 4 for each question, that yields scores ranging from 0 to 28. A score of 15 or greater is considered to indicate moderate or greater insomnia severity, and 8 to 14 indicates subthreshold insomnia. A reduction of at least 6 to 7 points has been suggested as the minimally important clinical difference for insomnia symptom reduction [36,37]. The Center for Epidemiologic Studies Depression Scale (CES-D) [38] is a 20-item survey assessing affective depressive symptomatology to screen for the risk of depression. Scores range from 0 to 60, and a score of 16 or greater is commonly used as a clinically relevant cut-off [39]. The Generalized Anxiety Disorder 7-item scale (GAD-7) [40] is a seven-item screening tool for anxiety that is widely used in primary care. The clinical threshold to consider treatment is 8, and a statistically reliable change is 4 or greater. Subjects with a history of mild traumatic brain injury or concussion also complete the Rivermead Post-Concussion Questionnaire (RPQ) [41], a 16-item survey that assesses the severity of common post-concussion symptoms on a scale of 0 to 4, with a total score range from 0 to 64 (least to highest symptom severity).
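To make the scoring conventions above concrete, the following minimal Python sketch computes a PCL-M composite and flags a clinically significant change; the function names and the example responses are hypothetical illustrations, not study code or study data:

def pclm_score(item_responses):
    """Composite PCL-M score: 17 items, each rated 1-5, giving a range of 17-85."""
    assert len(item_responses) == 17 and all(1 <= r <= 5 for r in item_responses)
    return sum(item_responses)

def clinically_significant_reduction(baseline, follow_up, mcid=10):
    """A reduction of >= 10 points on the PCL-M has been suggested as clinically significant."""
    return (baseline - follow_up) >= mcid

# Hypothetical example, not participant data
baseline = pclm_score([4] * 10 + [3] * 7)  # 61
print(baseline, clinically_significant_reduction(baseline, 48))  # 61 True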
Autonomic cardiovascular regulation
Continuous recordings of blood pressure (BP) and heart rate (HR) are obtained from noninvasive finger arterial pressure measurements and electrocardiogram for 10 min with subjects resting supine and breathing freely. These recordings follow the completion of the symptom inventories and functional testing. Systolic, diastolic, and mean arterial BP, as well as beat-to-beat R-to-R interval (RRI) files generated via the data acquisition system (BIOPAC acquisition system and AcqKnowledge 4.2 software, Santa Barbara, CA) at 1000 Hz, are analyzed using Nevrokard SA-BRS software (by Nevrokard Kiauta, d.o.o., Izola, Slovenia). All recordings are visually inspected, and the first 5 min of usable tracings are analyzed. Recordings with dropped beats or gross motion artifacts are excluded from analysis. Assessments include multiple measures of heart rate variability (HRV) in both time and frequency domains, baroreflex sensitivity (BRS), and blood pressure [42].
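As a rough illustration of the time-domain HRV measures reported later (SDNN and rMSSD), the sketch below derives both from a series of RR intervals; it is a simplified stand-in and does not reproduce the Nevrokard software's processing, artifact handling, frequency-domain spectra, or BRS sequence analysis:

import numpy as np

def time_domain_hrv(rri_ms):
    """Illustrative time-domain HRV measures from beat-to-beat RR intervals in ms."""
    rri = np.asarray(rri_ms, dtype=float)
    sdnn = rri.std(ddof=1)                        # SDNN: SD of normal-to-normal intervals
    rmssd = np.sqrt(np.mean(np.diff(rri) ** 2))   # rMSSD: RMS of successive differences
    return sdnn, rmssd

# Hypothetical 5-min segment of RR intervals centred near 850 ms
rng = np.random.default_rng(0)
print(time_domain_hrv(850 + rng.normal(0, 40, size=350)))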
Functional testing
Reaction testing uses a drop-stick clinical reaction time apparatus. It is constructed from a meter stick covered in friction tape with gradations. The modified meter stick is fixed to a weighted rubber cylinder. The apparatus is placed between the thumb and index finger of the subject and released at a random time during a countdown. The subject catches the apparatus, and the distance it has fallen is measured. Following two practice trials, subjects perform 8 trials, and the mean distance value is used for analysis [43]. Grip strength evaluation is done using a hydraulic hand dynamometer (Baseline Hydraulic Hand Dynamometer). The greatest force generated during three trials is used for analysis [44].
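Although the analysis here uses the mean catch distance directly, drop-stick distance corresponds to an approximate reaction time through the standard free-fall relation; the conversion below is textbook kinematics, not a step taken from the study protocol:

import math

def drop_distance_to_reaction_time(distance_m, g=9.81):
    """Free-fall kinematics: d = 0.5 * g * t**2, so t = sqrt(2 * d / g)."""
    return math.sqrt(2.0 * distance_m / g)

# A mean catch distance of 0.20 m corresponds to roughly 0.20 s
print(round(drop_distance_to_reaction_time(0.20), 3))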
Biomarkers of stress and inflammation and epigenetic measures
During the study, funding became available to permit limited exploratory analysis of post-interventional changes in markers of stress and inflammation in 15 subjects and for epigenetic measures in 8 subjects. Blood-based measures included angiotensin II (Ang II), angiotensin 1-7 (Ang 1-7), epinephrine, norepinephrine, C-reactive protein (CRP), vasopressin, interleukin 1 (IL-1), interleukin 6 (IL-6), and interleukin 10 (IL-10); saliva measures included cortisol and alpha-amylase. For epigenetic testing, DNA was isolated from whole blood samples to quantify DNA methylation at individual CpG sites. Microarray assays were used to determine the methylation proportion for each site (beta value) based on the ratio of the fluorescence intensity of the methylated probe versus the combined methylated and unmethylated probes.
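The beta value described above can be written as beta = M / (M + U), where M and U are the methylated and unmethylated probe intensities; the small offset in the sketch below is a common convention in Illumina-style array processing and is an assumption here, since the text does not specify the platform's exact formula:

def methylation_beta(meth_intensity, unmeth_intensity, alpha=100.0):
    """Beta value for one CpG site: methylated intensity over total intensity.
    The offset alpha (commonly 100 in Illumina-style pipelines) is an assumption,
    added to stabilize the ratio at low intensities."""
    return meth_intensity / (meth_intensity + unmeth_intensity + alpha)

print(methylation_beta(4000, 1000))  # ~0.78, i.e., mostly methylated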
Closed-loop allostatic neurotechnology intervention
The process and procedures for the provision of closed-loop allostatic neurotechnology by a technologist in an office setting have been discussed in detail previously [16]. An initial assessment of brain electrical activity entails two-channel recordings from at least 6 paired locations on the scalp (F3/F4, C3/C4, T3/T4, P3/P4, FZ/OZ, O1/O2; also, typically, FP1/FP2 and CB1/CB2) with the participant at rest and while carrying out a task, using sensors and amplifiers that sample at 256 Hz. At each scalp location, data are recorded for 1 minute each with eyes closed, eyes partially open as a transitional state of arousal, and eyes open while carrying out a specific mental task (e.g., reading numbers or performing mental calculations). Trained technologists evaluate assessment data to choose protocols for the initial intervention session.
Protocols for each session include recording brain electrical activity, generally through two channels, with scalp sensors placed at homologous regions of the hemispheres according to the 10-20 International EEG system. Software algorithms analyze specific ranges of the brain electrical frequency spectrum in real time, identify dominant frequencies based on proprietary mathematical formulae, and translate those frequencies into acoustic stimuli (audible tones of variable pitch and timing) which are presented to participants through standard earphones (Creative EP-630 or Sony Stereo Headphones MDR-EX58V) with as little as an eight-millisecond delay. The volume (decibels) of the acoustic stimulation is adjusted for each participant in accordance with their preference.
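The translation formulae are proprietary, so the following Python sketch shows only the generic closed-loop idea (estimate a dominant frequency within a band from a short EEG window, then echo it back as an audible tone), with all parameter values chosen arbitrarily for illustration; it is not the HIRREM algorithm:

import numpy as np

def dominant_frequency(eeg_window, fs=256.0, band=(4.0, 12.0)):
    """Dominant frequency (Hz) within a band for one windowed EEG segment."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(spectrum[in_band])]

def frequency_to_tone(f_dominant, scale=40.0):
    """Map a dominant EEG frequency to an audible pitch; the scaling factor is arbitrary."""
    return f_dominant * scale

# Hypothetical 1-s window with a 10 Hz component plus noise
t = np.arange(0, 1.0, 1.0 / 256.0)
window = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
f = dominant_frequency(window)
print(f, frequency_to_tone(f))  # ~10.0 Hz mapped to a ~400 Hz tone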
Each session (typically 90-180 min) consists of 4 to 10 protocols, ranging from 5 to 40 min per protocol, and each protocol is intended to address a specific anatomical location and frequency range. Some protocols are completed with eyes closed and some with eyes open, with the participant being asked to relax while sitting or reclining comfortably in a zero-gravity chair. After the initial session, specific protocols and protocol durations for successive sessions are chosen based on brain electrical data from the participant's preceding session, which, for purposes of technologist review, are aggregated in broad-band frequency ranges. Although exact mechanisms await confirmation, it appears that with rapid updates regarding its own electrical activity, intended to support frequency-matching or resonance between the acoustic stimulation and oscillating brain networks, the brain is supported towards auto-calibration and self-optimization. As a closed-loop process, no conscious or cognitive activity is required, yet the brain pattern is observed to shift on its own terms towards improved balance and, often, reduced hyperarousal.
Statistical analysis
A repeated-measures ANOVA was performed to evaluate changes in symptom inventory scores between baseline and each follow-up visit. For other comparisons, two-tailed paired t-tests were performed to evaluate pre- to post-HIRREM changes. In consideration of the sample size, the non-parametric Wilcoxon signed-rank test was used to corroborate the t-test findings. Analyses were performed using SAS (SAS Institute, Cary, NC).
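The analyses were run in SAS; purely for illustration, the same pairing of a two-tailed paired t-test with a Wilcoxon signed-rank check can be expressed in a few lines of Python (the scores below are made-up placeholders, not study data):

import numpy as np
from scipy import stats

# Hypothetical pre/post scores for one outcome; placeholders only
pre = np.array([62, 55, 70, 48, 66, 59, 73, 51])
post = np.array([41, 44, 52, 39, 50, 47, 55, 42])

t_stat, t_p = stats.ttest_rel(pre, post)   # two-tailed paired t-test
w_stat, w_p = stats.wilcoxon(pre, post)    # non-parametric corroboration
print(f"paired t: p={t_p:.4f}; Wilcoxon signed-rank: p={w_p:.4f}")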
Results
Twenty-seven individuals were screened, and 18 met eligibility criteria, provided informed consent, and enrolled in the study. Of the 9 who were excluded, 7 had schedule or training cycle conflicts that did not permit travel to the study site, and 2 did not meet criteria with respect to a formal diagnosis of PTSD, active symptoms, or treatment of PTS. The mean age of the cohort was 40.9 (SD 6.9) years. There were 17 men, and the cohort was largely Caucasian (17 Caucasian, 1 Asian). Three recent veterans were enrolled, while the other 15 participants were on active duty. Self-reported health conditions are listed in Table 1, and therapies previously used for PTS symptom remediation are listed in Table 2. Of the 11 individuals who reported previous use of a psychoactive or sleep-related medication, 10 had made recent adjustments to their regimen (withholding or discontinuing a medication that would entail exclusion) under the guidance of their medical provider. Participants received a mean of 19.5 (SD 1.1) HIRREM sessions, with 2779 min (SD 315) of protocol time, over the 12-day intervention period. There were no adverse events and no drop-outs. One participant temporarily returned to his military base midway through the intervention period to be closer to his social support network and to address some active-duty responsibilities. Nine HIRREM sessions were provided to him by one of the study investigators (CLT) at his location using a mobile configuration of the HIRREM intervention (laptop instead of desktop computer). Table 3 provides the military service history for each participant, including their duration of traumatic stress symptoms and number of recognized traumatic brain injuries (TBIs) as well as selected notes that pertain to their experience with the study during and after the intervention.
Symptom scores for PTSD, insomnia, depressive mood, and anxiety are shown in Fig. 1. Through the first two follow-up visits, 83% of subjects reported PCL-M scores that were at least 10 points lower than their baseline (at V2, 9 of the subjects reported reductions of at least 10 points on their PCL-M, and at V3, an additional 6 reported reductions of at least 10 points compared to their V1 score). Over the same visits, 78% of subjects reported ISI scores that were at least 7 points lower than their baseline (seven subjects at V2 and an additional 7 subjects at V3). For the 15 individuals with a history of TBI or concussion, there were also durable reductions in concussion-related symptomatology (V2 RPQ change −11.8, SD 14.1, P < 0.01; subsequent visits not shown). Figures 2 and 3 show V1 and V2 values of HRV, BRS, and blood pressure measures. V1 and V2 values are also shown for grip strength (Fig. 4) and reaction testing (Fig. 5). Of the biochemical measures that were assessed, there were trends for reductions in C-reactive protein (−37%, P = 0.06), the angiotensin II to angiotensin 1-7 ratio (−24%, P = 0.19), and IL-10 (−12%, P = 0.14). Epigenetic markers showed no statistically significant changes.
Discussion
This report documents outcomes for a series of active-duty military service members and veterans with symptoms of military-related PTS, predominantly special operations warfighters or support personnel, who participated in the use of a closed-loop, allostatic, acoustic stimulation neurotechnology. On average, there were robust and durable reductions in the symptoms of PTS, insomnia, depressive mood, and anxiety. At the first post-intervention data collection, there were marked increases in HRV and BRS, and there were trends for improvements in physical functional performance and markers of stress or inflammation. There were no adverse events, and all participants completed their course of intervention sessions along with all follow-up data collections.
The present findings are consistent with outcomes reported after the use of closed-loop allostatic neurotechnology by civilians with self-reported PTS, who were mostly women with non-military trauma [45], or athletes with sports-related concussions [46]. Together, these studies concur with the idea that real-time monitoring and modulation of brain activity (closed-loop strategies) can support advanced remediation of neurological and psychiatric disorders, sleep enhancement and, potentially, performance optimization [47-50]. The authors are aware of only two other studies that have reported quantitative HRV effects after the use of any type of intervention by military personnel or veterans with PTSD. HRV decreased with the use of escitalopram [51] and showed no change after the use of mindfulness meditation [52].
Limitations to the generalizability of these findings include the modest sample size and the absence of a control group. Improvements in reaction testing may have been related to practice effects, which have been documented with the drop-stick procedure [53]. The use of numerous types of psychoactive medications, as well as alcohol or recreational drugs, was an exclusion to enrollment, and it is unknown how these co-interventions or influences might affect the outcomes of future studies. Although the improvements demonstrated may have been influenced by subjective expectations, positive social interactions with study personnel, or other "placebo" components, it seems unlikely that these non-specific factors were the fundamental drivers. HRV is an objective physiological measure, and meta-analysis has found that placebo effects in clinical trials tend to be limited to continuous subjective outcomes [54]. In addition, the durability of the symptom score improvements appears inconsistent with the interpretation that changes were due to statistical randomness, regression to the mean, or natural history of disease. Given the protracted duration of symptoms, and the numerous other therapies that had been tried previously, spontaneous recovery over a few weeks to months would also be considered unlikely.

Fig. 2 Values for heart rate variability, baroreflex sensitivity, and blood pressure, before and after intervention. Error bars are standard error of the mean (SEM). **, P < 0.01; ***, P < 0.001 vs Visit 1 (V1). RRI, R-to-R interval; SDNN, standard deviation of the normal-to-normal beat interval; rMSSD, root mean square of the successive differences; Seq ALL, sequence all; V1, baseline study visit; V2, immediately after intervention completion

Fig. 3 Spectral power values for heart rate variability before and after intervention. Error bars are standard error of the mean (SEM). *, P < 0.05; **, P < 0.01 vs V1. RRI, R-to-R interval; HF, high frequency; LF, low frequency; V1, baseline study visit; V2, immediately after intervention completion
Various aspects of the intervention point to its promise as an innovative modality for the remediation of the effects of traumatic stress for active-duty military service members, veterans, and other populations. The reductions in insomnia symptoms are noteworthy given the intractability of sleep complaints in PTSD [55]. Sequelae of TBI can complicate PTSD-specific interventions [56], yet the numerous TBIs reported by the subjects did not appear to hinder their participation, and there was a reduction in TBI-specific symptomatology. The noninvasive methodology is encouraging with respect to safety, feasibility, and scalability considerations. Moreover, support from the US Army Research Office [57] has allowed the development of a self-use configuration of the core technology (Braintellect®-2; Brain State Technologies, Scottsdale, Arizona), with sensor locations at prefrontal and temporal scalp locations only. This device may further facilitate the development of population-based strategies that leverage precision-guided, allostatic neurotechnology, and its standalone use has been proposed as a potential strategy to enable the primary prevention of PTSD through the optimization of sleep quality [58].
Conclusions
A series of active-duty military personnel and veterans with symptoms of military-related traumatic stress used a closed-loop acoustic stimulation methodology to support the auto-calibration of neural oscillations. Subsequently, participants showed robust improvements in autonomic cardiovascular regulation and durable reductions in PTS-related symptomatology, including insomnia, with no adverse events or dropouts. This study is the first to show an increase in both HRV and BRS, which are significant indicators of the capacity of the brain to exert dynamic and adaptive regulation of peripheral physiology, following an intervention provided to military personnel or veterans with PTS. The composite intervention profile points to the promise of allostatic neurotechnology for system-level PTS management. Ongoing investigations are strongly warranted.
Researchers' hate-love relationship to performance measurement systems in academia – a Foucauldian perspective
Purpose – The purpose of this paper is to describe and theorize the type of hate-love relationship to performance measurement systems (PMSs) that individual researchers tend to develop in academia. To this end, the paper draws upon Foucault's writings on neoliberalism to analyse PMSs as neoliberal technologies holding certain qualities that can be expected to elicit such ambivalent views. Design/methodology/approach – The paper is based on a qualitative interview study of researchers from three Swedish universities, who were asked to reflect upon questions related to three overall themes, namely, what it means to be a researcher in contemporary academia, the existence and use of PMSs at their universities, and if/how such PMSs affected them and their work as researchers. Findings – The empirical findings show that the hate-love relationship can be understood in terms of how PMSs are involved in three central moments of governmentality, where each such moment of governmentality tends to elicit feelings of ambivalence among researchers due to how PMSs rely on: a restricted centrifugal mechanism, normalization rather than normation, and a view of individual academics as entrepreneurs of themselves. Originality/value – Existing literature has provided several important insights into how the introduction and use of PMSs in academia tend to result in both negative and positive experiences and reactions. The current paper adds to this literature through theorizing how and why PMSs may be expected to elicit such ambivalent experiences and reactions among individual researchers.
Introduction
The use of performance measurement systems (PMSs) to govern academic life has attracted a large and sustained interest in the accounting literature over the past decades (for overviews, see Argento et al., 2020; Guarini et al., 2020; O'Connell et al., 2020a). In one important (and large) part of this literature, such systems are typically portrayed as holding certain qualities that turn them into a clear threat to academic practices (Agyemang and Broadbent, 2015; Martin-Sardesai et al., 2017; Parker, 2011). For example, one such threatening quality is that they help to increase the power of managers, administrators, publishers, funders, etcetera, at the expense of researcher autonomy (Argento et al., 2020; Gebreiter and Hidayah, 2019; Guthrie et al., 2019; Martin-Sardesai et al., 2017; Mingers and Willmott, 2013; Parker, 2002). Another quality refers to their "reductive" nature, which is typically seen as having a largely narrowing, homogenizing and stagnating effect on research (Gendron, 2008; Hopwood, 2008; O'Connell et al., 2020a). Finally, they are often seen as replacing the intrinsic motivation of researchers with extrinsic incentives, which has proven to have largely demoralizing and demotivating effects on (at least some) researchers (Kallio and Kallio, 2014; Martin-Sardesai et al., 2020; Narayan, 2016; Lewis, 2014).
Interestingly though, and despite that the literature is replete with examples of how PMSs are associated with such negative qualities in academia, another part of this literature points to how PMSs are also associated with other qualities that make them considerably more appealing. One such quality relates to how PMSs function as concrete tools that allow universities to work with and achieve what is considered important goals for them (Söderlind and Geschwind, 2019). From such a perspective, PMSs are believed to contribute towards a number of "generally desirable effects" (O'Connell et al., 2020a, p. 1181), such as increasing the overall research productivity (Guarini et al., 2020, p. 113), ensuring that "academics produce something with the time they are given" (Alvesson and Spicer, 2016, p. 33) and propelling them to focus on quality improvements (O'Connell et al., 2020a). In fact, and as suggested by Argento et al. (2020, p. 2), PMSs provide academics with a form of rules for what to do, and when such rules "are explicitly communicated and enforced, academics know how to behave and what to prioritize" (see also Aguinis et al., 2020). Another reason is related to the individual benefits that are (at least sometimes) associated with PMSs. For example, Argento et al. (2020, p. 2) point to how "[s]ome academics may feel oppressed by the duty to be accountable, while others may have become more independent and entrepreneurial, and specifically welcome pressures to produce research outputs". In a similar manner, Chatterjee et al. (2020, p. 1221) stress that "[f]or some, assessment may provide an opportunity to demonstrate that they can achieve something others find important" (see also van Helden and Argento, 2020).
Notwithstanding the many different insights made with regard to how some qualities of PMSs seem to trigger negative reactions towards them in academia, while other qualities seem to result in the opposite, the potential ability of PMSs to concurrently provoke both negative and positive reactions has attracted rather scant attention in the literature. Indeed, a few scholars have started to talk about how PMSs trigger a form of bittersweet or hate-love relationship among individual academics (Gendron, 2008; Knights and Clarke, 2014; Parker, 2012; van Helden and Argento, 2020). For example, Knights and Clarke (2014) talked about the bittersweet experiences of life in academia in times of new public management, while Parker (2012) and others (Gendron, 2008; van Helden and Argento, 2020) have pointed to researchers' hate-love relationship with journal lists and rankings. However, we lack more systematic and thoroughgoing theoretical analyses of what it is about PMSs that allow them to foster this type of ambivalent relationship to them. Arguably, this is surprising given that such analyses could further our understanding of what it is about PMSs that make them so powerful in academia, despite all the criticism that has been raised against them.
The overall ambition with this paper is to provide such an analysis. More specifically, the purpose is to describe and theorize the type of hate-love relationship to PMSs that individual researchers tend to develop in academia. To this end, we draw upon Foucault's (2007, 2008) writings on neoliberal forms of government to theorize three central moments of governmentality – i.e. specific instances or arenas through which neoliberal forms of government are played out (Michael, 2009; Walker et al., 2008) – that PMSs are involved in, and which help explain how and why individual academics tend to develop this type of hate-love relationship to them. In short, we analyse how PMSs help (re)construct: the relationship between individual researchers and their governing parties by means of a restricted centrifugal mechanism; the relationship between individual researchers and their peers by means of normalization rather than normation; and individual researchers' relationships to themselves by means of the entrepreneurial self as an ideal.
Using empirical evidence from a qualitative study of researchers at three Swedish universities, we demonstrate how these three central moments of governmentality are key for understanding the type of hate-love relationship to PMSs that has been identified in extant literature. In so doing, this paper arguably contributes to extant literature in several ways. Overall, it contributes to the small but growing literature that has drawn attention to the hate-love relationship as such (Knights and Clarke, 2014; Parker, 2012; van Helden and Argento, 2020). Moreover, and related, such a focus allows us to go beyond the notion that some aspects or qualities of PMSs are seen as triggering negative views and reactions while other qualities are seen as triggering positive views and reactions. In contrast, we explore how the very same qualities of PMSs can feed and form highly ambivalent views on them. Finally, in so doing, we draw upon a hitherto largely unexplored facet of neoliberal technologies in the accounting literature, namely, the ways in which the technologies as such are premised on a number of contradictory terms – such as governing without governing, the concurrent production and consumption of freedom, etcetera (Foucault, 2008) – which can in and of themselves be expected to feed and form a hate-love relationship.
In the remaining parts of the paper, we first provide a more detailed discussion of how and why PMSs, when seen as neoliberal technologies, can be expected to trigger ambivalent views among individual researchers (Section 2). Based on this theoretical backdrop, we then outline how the empirical study was designed and conducted (Section 3), followed by a presentation of the empirical findings (Section 4). In the final section, we draw conclusions and discuss some important implications of our study (Section 5).
Performance measurement systems as neoliberal technologies of government
As suggested above, we draw upon Foucault's writings on neoliberalism to theorize PMSs as neoliberal technologies of government. Based on this, we first outline the historical background to, and general features of, neoliberal governmentality (Section 2.1). In an ensuing section, we elaborate on how PMSs can be seen as deeply involved in three central moments of this type of governmentality (Section 2.2).
Neoliberalism as a form of governmentality
Issues of how neoliberal ideas were (and still are) mobilized as a way of organizing social life were attended to by Foucault in a series of lectures during the late 1970s [1]. During these lectures, Foucault immersed himself (among other things) in the notion of "Governmentality" [2]. In fact, based on a genealogy of the different types of "mentalities" that had been used historically for governing human behaviour, he identified the emergence of a new one in the mid-18th century. One that largely differed from previously ruling forms such as the sovereign and disciplinary ones, in that it focused neither on the protection of a geographical territory nor on the disciplining of individual bodies, but on the security of a whole population (see also Burchell et al., 1991; Chiapello, 2017). As many times before, Foucault paid attention to how such a shift in focus – in the sense that one now tried to understand the workings of a "collectivity" of living beings rather than the individuals per se – required a particular form of knowledge. One that would allow those who governed to not only understand the population as a "field of relations between people and people, people and things, people and events" (Rose et al., 2006, p. 6) but also how to govern such relations.
As has been discussed many times before in the accounting literature (Chiapello, 2017; Cooper, 2015), Foucault identified the political economic ideas of neoliberalism as forming the basis of this new type of governmentality. As part of a liberal doctrine, these ideas were grounded in the somewhat paradoxical notion of "freedom" as a rationale for governing (Foucault, 2008; Rose et al., 2006). That is, there was a general belief that to be effective, the governing of a population needed to allow (and indeed require) people to behave as "free individuals", where freedom meant the ability to act autonomously, to make one's own choices, to be self-reliant, etc. This rested on the assumption that it was only through unleashing the power of the "liberal subject" that the interests and benefits of the individual could be aligned with those of the population as a whole. Importantly though, according to these ideas, freedom (and acting in the name of freedom) was no longer seen as something "given by nature" (which had been the case in classical liberalism). Rather, if one wanted to bring "together the mass effects characteristic of a population", making sure that they reached a form of equilibrium and that the security of the "whole" was protected from its internal dangers (Foucault, 2003, p. 249), one was convinced that such effects had to be arranged, orchestrated or regulated into being (see also Hopwood, 1992).
The neoliberal way of adhering to this problem of governing the freedom of individuals – i.e. to govern without governing (Miller and Rose, 1990) – was to find ways of acting on the conditions of individuals. That is, rather than intervening directly on the individuals – i.e. through marking or disciplining them – the idea was to act on the "milieu" within which they exercised their freedom (Foucault, 2007; see also Munro, 2012; Rose and Miller, 1992). To create a "realm of action" within (and through) which individuals could, on the one hand, be allowed to act freely – e.g. through being innovative, to incorporate new ideas, to expand and to develop ever-wider circuits (Foucault, 2007). On the other hand, though, such a realm had to allow for the freedom to be controlled – e.g. through "sifting the good and the bad, ensuring that things [were] always in movement, constantly moving around, continually going from one point to another, but in such a way that the inherent dangers of this circulation [were] cancelled out" (Foucault, 2007, p. 93). And importantly, this "double-edged" solution was found in the notion of a market. Or, more precisely, in the setting up of individuals in competitive relations to each other, as such relations would not only allow the individuals a certain form of freedom, but would also require them to take care of themselves and their self-interests if they wanted to stay competitive and survive on the market.
As will be argued below, this notion of the market as a superior mode of organizing social life (Harvey, 2006; Hopwood, 1992; Mennicken and Miller, 2012; Mudge, 2008) is particularly useful for theorizing how and why individual academics develop a hate-love relationship to PMSs. The line of reasoning is as follows. Firstly, through transforming and recoding academic practices into calculable terms, PMSs contribute to provide a particular form of knowledge (cf. Miller, 2008; Rose and Miller, 1992) that makes comparisons of different practices possible, which, in turn, allows competition to emerge (Miller, 2001, 2008; Townley, 1995; Wickramasinghe et al., 2021). Hence, from such a perspective, PMSs can be seen as important technologies through which the abstract neoliberal ideal of the market is articulated and manifested in an empirical setting such as academia (Kallio et al., 2017; Mennicken and Miller, 2012; Miller, 2008). Secondly, in producing and displaying performance numbers in the name of comparison and competition, PMSs "mediate" (Miller and Power, 2013) and help (re)construct individual researchers' relationships to their governing parties, to their peers and to themselves in ways that produce positive and, at the same time, negative effects for them. Below, we discuss each such (re)construction as a central moment of governmentality that PMSs are involved in.
2.2 Performance measurement systems and three central moments of governmentality

2.2.1 Performance measurement systems and the (restricted) centrifugal mechanism. The first moment of governmentality relates to how competitively oriented technologies help (re)construct the relationship between the governing and the governed based on what Foucault (2007) refers to as a centrifugal mechanism of control. That is, a mechanism designed to allow for the colonisation of ever wider areas of social life by a form of market coordination. Again, the underlying assumption of this is that human activities in general, and educational ones in particular, are best orchestrated when left to the rationality of the market (Foucault, 2007; Hamann, 2009; Mudge, 2008). That is, rather than allowing any overall orientation or logic of such activities to be established a priori by, for example, a strong political governance, it is assumed that the outcomes of such activities will be optimized when individuals are truly "liberated" so that they may use their rational and calculative abilities to compete with each other (Lynch, 2006). In this sense, then, the centrifugal mechanism can be seen as one that allows things to "take their course" (Foucault, 2007, p. 64), yet ensuring that people always do what they can to develop themselves and their competitive abilities. When those who govern academic practices use PMSs in this sense, the latter ones are expected to work as a form of market mechanism. That is, and largely in contrast to a disciplinary form of power – which can be seen as "'centripetal' in the sense that it 'concentrates, focuses, and encloses' to regulate and 'prevent everything'" – PMSs are expected to work as a centrifugal force "in that they continuously expand and integrate new elements and 'let things happen'" (Moisander et al., 2018, pp. 379-380).
In the accounting literature, the notion of how a centrifugal mechanism can incite positive feelings among researchers has received rather limited attention (but see e.g. Jones, 1992; Wickramasinghe et al., 2021). However, in a few cases, it has been emphasized how PMSs can contribute towards feelings of an increased freedom or autonomy among individual academics. For example, Argento et al. (2020) suggested that although the effects may vary between researchers, some may indeed become more independent. In a similar manner, Lewis (2014) found that although many researchers felt that their professional autonomy was reduced by the PMS, this was not true for everybody. On the contrary, a substantial number of researchers suggested that the system did have a positive impact on their autonomy.
In line with this, we propose that individual academics can be expected to appreciate the ways in which PMSs can work as such a "centrifugal force", in the sense that they contribute towards the setting up of markets in academia. And importantly, in doing so, they do not coerce or pressure academics into particular forms of behaviour. They do not decide a priori, for example, how research should be conducted or how individual researchers should become competitive. On the contrary, they provide them with a certain form of freedom or autonomy to do research without constraints or the interference from others, as long as they are prepared to take responsibility for the outcomes of their "free" acts (Chiapello, 2017; Cooper, 2015). The reason being, as suggested by Rose and Miller (1992, p. 174), that from such a perspective, "[p]ower is not so much a matter of imposing constraints upon citizens as of 'making up' citizens capable of bearing a kind of regulated freedom".
However, we also propose that individual academics can be expected to form considerably more negative feelings related to the centrifugal character of PMSs, due to how the setting up of quasi-markets in academia requires that some "rules of the game" or "criteria for competition" have to be instituted. That is, since markets constitute non-natural phenomena (in the eyes of the neoliberals), they must be actively instituted and maintained by means of various forms of interventions – including the very criteria by which they are brought into being in the first place (Foucault, 2007; Rose and Miller, 1992). And importantly, such criteria – regardless of whether they refer to the number of publications, citations, the amount of external funding, etcetera – inevitably constrain or consume the type of freedom that they aim to produce. Or, as suggested by Foucault (2008), they help construct a productive/destructive relationship with freedom, as they not only provide the battleground on which competition is to be played out but also inevitably establish its limitations. As a result, the type of freedom or autonomy that the centrifugal mechanism aims to produce – and which can be expected to be associated with positive feelings among academics – constitutes somewhat of an illusion (Beime et al., 2021; Morrissey, 2015) or a cultural myth (Grealy and Laurie, 2017).
As suggested in the introduction, this particular theme – i.e. how PMSs can provoke negative reactions because of how they narrow down and constrain academic practices – has received considerable attention in the accounting literature. In fact, when experiencing that one needs to conduct research in line with the heteronomous criteria set up by PMSs (just to be able to publish, attract research funding or be promoted), academics typically experience that their autonomy is circumscribed (Argento et al., 2020; Guarini et al., 2020; Martin-Sardesai et al., 2017; Tourish and Willmott, 2015). In fact, a well-rehearsed argument in the critically oriented accounting literature is that PMSs reduce (Kenny, 2017), diminish (Grossi et al., 2020) or undermine (Parker, 2011) academic autonomy. The premise is, as suggested by Parker (2011, p. 445), that out of "fear of offending key stakeholders and present or potential funding sources, or by fear of impact on their own performance evaluation, job contracts, tenure and career prospects within the university", academics feel that they have to sacrifice their autonomy and adapt to that which evidently works (Englund and Gerdin, 2019; Kallio et al., 2021).
Taken together then, we suggest that the ways in which PMSs work as a restricted form of centrifugal force – i.e. one that both produces and consumes the autonomy of individual researchers – constitutes a first argument for why individual academics can be expected to develop a hate-love relationship to such technologies.
2.2.2 Performance measurement systems and the principle of normalization. The second central moment of governmentality is related to how PMSs help (re)construct the relationships between individual researchers and their peers by means of a principle of normalization rather than one of normation (Foucault, 2007). That is, rather than departing from any pre-existing or pre-defined norm (as disciplinary apparatuses would), they regulate such relationships through an ongoing construction of the (ab)normal by means of transforming individual performances into numbers, aggregating these into a "population" and establishing what is (ab)normal for such a population (e.g. through calculating its statistical average and variance).
Two aspects of such a normalizing principle are arguably important for understanding the type of hate-love relationship between individual academics and PMSs that we are interested in. One aspect relates to the ways in which normalization takes "the empirical norm as a starting point, which serves as a regulative norm and allows for further differentiations and variations" (Lemke, 2011, p. 47). That is, it starts out in the domain of the empirical (e.g. in the actual performances of researchers) in the sense that quantified knowledge of what is normal in this domain (e.g. in terms of the average number of publications for a certain type of research/er) serves as the basis for the formation of a norm (Foucault, 2007). A norm that, instead of stating a priori that a researcher should publish a certain number of articles or attract a certain amount of funding, departs from what is "normally occurring" and what can be considered acceptable spans of variation around this normal (Foucault, 2007). Another aspect relates to the ways in which the principle of normalization brings the individual into a particular form of relation to the "population" (i.e. to one's peers). The premise is that what is (ab)normal – and whether oneself is considered (ab)normal – is no longer decided by the government or the "central office". On the contrary, it is constantly being reproduced by each and everyone involved in the game of competition (including oneself). As a result, one can always see oneself in relation to one's peers, and one knows that all the dots in the diagram or all the numbers in the spreadsheet (showing, e.g. the distribution of publications or citations among researchers) are made up of actual performances.
In the accounting literature, it has been pointed to how this type of establishment of how an individual's performances relate to those of others can work as an important source of motivation and inspiration, not least through visualizing the performances of those in the top; the highly successful ones; the academic stars; those who will function as heroes or idealized others (Knights and Clarke, 2014; Tucker and Tilt, 2019). The premise is that such visualizations help to "make up people" (Hacking, 2007) as important role models or ideals. Or, as suggested by Knights and Clarke (2014, p. 343), the highly successful "star" can function as an idealized other, a revered intellectual. In line with this, and the writings on normalization in general, we suggest that individual academics can be expected to appreciate the ways in which this type of normalization means that it is the actual academic practices that form the ground for what becomes the norm. Moreover, they can be expected to react positively on the ways in which PMSs help construct a form of "empirically based role-model".
In contrast to such writings, though, we also propose that the ways in which PMSs work to normalize academic practices can be expected to provoke negative reactions. One important reason for this is that normalization turns the norm into something continuously unfolding and relative, in the sense that it is constantly being reproduced. As a result, the notion of "good research" or a "good researcher" is transformed from something essential (i.e. from something that has an inherent character or something that you may be) to something performative (i.e. to something that you continually do). Or, as suggested by Bauman (2000, p. 29), it transforms the norm into something that "can exist only as an unfulfilled project".
In the existing literature, it has been pointed to how such a relativization of the norm not only increases the work-intensification and stress among individual researchers (Parker, 2011) but also that it can lead to jealousy (Tucker and Tilt, 2019), rivalry (Gendron, 2015) and cynicism (Kallio and Kallio, 2014). Guthrie et al. (2019, p. 13) pointed to how this type of relativization tends to intensify processes of social comparisons, and when people have a "fundamental need for status recognition" in and through such processes, it can lead to rivalry. Or, as suggested by Chatterjee et al. (2020), it can lead to research staff, and even whole universities, being torn apart (see also ter Bogt and Scapens, 2012).
Taken together then, we suggest that this second moment of governmentality can also be expected to provoke both positive and negative reactions among individual researchers. The premise is that PMSs help construct an empirically based relationship between the individual and the population – in this case, between individual researchers and their peers – that can be expected to work both as a motivator (e.g. through providing empirically based role-models) and as a demotivator (e.g. through intensifying feelings of envy and rivalry).
2.2.3 Performance measurement systems and the entrepreneurial self. Third and finally, PMSs not only help construct the relationships that researchers form to their governing parties (cf. the centrifugal force) and their peers (cf. the normalizing principle) but also to themselves. In fact, when theorized as neoliberal technologies, PMSs both presume and help "make up" (Hacking, 2007) a particular form of subject, namely, an entrepreneurial self (Chiapello, 2017; Cooper, 2015). On the one hand, this is premised on the fact that the market mechanism relies upon individuals working as "the instrument, relay, or condition for obtaining something at the level of the population" (Foucault, 2007, p. 65). On the other hand, though, and more specifically, it also relies on the individuals conducting themselves in a very particular way, namely, as active economic agents driven by their (self-)interests. That is, it relies upon individuals who are prepared to invest in themselves, who are committed to competition and who are prepared to live their lives as an enterprise – a true Homo economicus (Foucault, 2008; see also Cannizzo, 2015; Cooper, 2015). Or, as suggested by Foucault, a subject who is prepared to be an "entrepreneur, an entrepreneur of himself" (Foucault, 2008, p. 226; see also Wickramasinghe et al., 2021). In fact, unless people are prepared to dress themselves in such an entrepreneurial armour, and contribute towards the ongoing circulation of capital, the inherent dangers of the population cannot be cancelled out (Foucault, 2007).
Arguably, individual academics can be expected to appreciate the ways in which the image of such an entrepreneurial self carries a number of highly seductive virtues and values (cf. Davies, 2003; Morrissey, 2015). One aspect of this relates, of course, to the ways in which PMSs presuppose and help cultivate the type of autonomy, freedom and choice referred to above (cf. the "centrifugal mechanism"). However, apart from offering a form of market space within which individuals can exercise their freedom, they also offer a promise of control (Cooper, 2015), the fulfilment of desires (Cannizzo, 2015) and success (Clarke and Knights, 2015), when manoeuvring within such a space. The premise is that rather than working only as centrifugal and normalizing mechanisms at the level of the population, PMSs also function as tools for working with, and improving, one's skills and abilities at the level of the individual (so that one can become and stay competitive in the market space). Or, as suggested by Foucault (1994, p. 146; see also Mennicken and Miller, 2012), they become important "technologies of the self" that people can mobilize to "effect, by their own means, a certain number of operations on their own bodies, their own souls, their own thoughts, their own conduct, and this in a manner so as to transform themselves, modify themselves, and to attain a certain state of perfection, happiness, purity, supernatural power".
In the accounting literature, several scholars have pointed to how PMSs function as such tools that allow individual researchers to work with and achieve what was above referred to as a number of "generally desirable effects" (O'Connell et al., 2020a, p. 1181). As suggested in the introduction, such effects can relate to the overall research productivity (Guarini et al., 2020, p. 113), ensuring that "academics produce something with the time they are given" (Alvesson and Spicer, 2016, p. 33) and propelling them to focus on quality improvements (O'Connell et al., 2020a; Parker, 2013). Along these lines, for example, Northcott and Linacre (2010, p. 44) pointed to how at least some of their respondents saw merits in the formal research assessment exercise, as it had helped them to get "'going as a research community', 'encouraged publishing', been 'very important to encourage quality' and provided 'advice regarding publication outlets [that is] likely to be clearer and possibly more honest and helpful for junior academics'".
However, it also seems reasonable to suggest that some of the qualities associated with, and constitutive of, the entrepreneurial self will be less well received among individual academics. Again, one reason for this relates to the ways in which aspects of the centrifugal mechanism and processes of normalization tend to reduce the autonomy of researchers. However, it can also be expected that individual researchers will react more negatively to certain qualities of the neoliberal subjectivity per se. The premise is that the neoliberal self is preconceived as a highly self-interested creature (Cannizzo, 2015; Lynch, 2006) that is incentivized by one thing and one thing only, namely competition. Moreover, and related to the ways in which quasi-markets require certain criteria by which competition is to be played out, the type of subject that PMSs try to hail into existence is seen as consisting of a number of measurable and definable attributes that can be calculated with, improved and even optimized (cf. Cannizzo, 2015). While such qualities of the subject can be expected to be met with scepticism in and of themselves, it is perhaps the effects that they tend to bring with them that can be expected to result in most disapproval, namely, the tendency of the entrepreneurial self to become strongly focused on, for example, the numbers as such rather than the substance. This is something that has been extensively discussed in the accounting literature (Guarini et al., 2020; Hopwood, 2008; Messner, 2015; Northcott and Linacre, 2010; O'Connell et al., 2020a). For example, several scholars point to how such a focus on the measurable and definable attributes results in a view on academic work as a "hyper-individualized exercise" (Chatterjee et al., 2020, p. 1221), as a game that is, and has to be, played based on the "self-interest of academics in securing and accumulating capitals" (Pianezzi et al., 2020, p. 574) and as being about reaching certain performance targets so as to maximize individual benefits and career progression (Parker, 2011). A type of focus that is not only seen as "narrowing the types of research undertaken and the types of methods chosen" (O'Connell et al., 2020b, p. 1297; see also Guthrie et al., 2019; Martin-Sardesai et al., 2019; van Helden and Argento, 2020), but also as a form of "outsourcing of meaning" (Alvesson and Spicer, 2016, p. 38), whereby many researchers end up doing things which they do not necessarily identify or sympathize with. As a result, they tend to experience internal moral conflicts (Pianezzi et al., 2020, p. 579), anxiety (Argento et al., 2020) and doubts whether they are doing the right things (Berg et al., 2016).
Based on this, we suggest that the ways in which PMSs contribute to construct individual academics as entrepreneurial selves can also be expected to provoke a hate-love relationship to such systems. The premise is that the entrepreneurial self carries not only a number of seductive virtues and values but also ones that threaten more traditional academic ideals.
All in all, we argue that a theorization of how PMSs are involved in the three central moments of governmentality referred to above – whereby they help (re)construct individual researchers' relationships to their governing parties, their peers and to themselves – can further our understanding of how it may be that individual academics develop a hate-love relationship towards such systems. Before empirically substantiating these theoretical arguments, we will provide further details on how we designed and carried out the empirical study.
Research context and data collection
The empirical material presented in this study relies on interview data collected in a Swedish context, and as with all qualitative data, it primarily reflects the contextual conditions in which it emerged. However, when comparing the Swedish context with findings from studies conducted in other countries, it seems that Swedish universities are increasingly governed in ways that resemble a more general neoliberalisation of academic settings around the world. In fact, just like many other countries (Dobija et al., 2019; Krücken et al., 2018; Raudla et al., 2015), Sweden has experienced major reforms over the past decades, with the introduction of various forms of market-based control. For example, there has been a substantial increase in external and competition-based research funding, where resources to an increasing extent are allocated based on the performance (as in number of publications, citations and amounts of external funding received) of universities and their researchers in relation to the performance of other universities and researchers (Hammarfelt et al., 2016). Moreover, this type of competition-based funding of Swedish academia has typically trickled down to the internal PMSs of individual universities, where the measurement of external research grants won, number of publications, etcetera, works as an important competition-based incentive system when it comes to such things as position appointments, salary trends, publication bonuses and the allocation of funding within the universities (Englund and Gerdin, 2020; for similar developments in other countries, see Agyemang and Broadbent, 2015; Dobija et al., 2019; Guarini et al., 2020; Raudla et al., 2015).
Within the Swedish context, three universities were selected: one of the top five, one of the top 10 and one of the top 20 universities in Sweden according to the Times Higher Education World University Ranking (which is based on performance indicators such as those mentioned above). The reason for selecting universities based on their ranking was that it enabled us to capture a broader picture of the phenomenon under study, in the sense that relative positional status in terms of rank may influence how universities and individual researchers alike relate to and feel about the quantitative developments which are taking place (Gendron, 2015; Guarini et al., 2020; Kallio et al., 2021).
In the selected universities, we conducted a total of 20 interviews during the fall of 2018 and the spring of 2019 [3]. In more detail, respondents were selected from one of three different faculties: Business, Science and Engineering; Law, Psychology and Social Work; or Humanities, Education and Social Sciences. Also, in terms of seniority, they were selected based on varying levels of tenure, ranging from assistant to full professors. The reason for this particular design was that extant literature gave us reason to believe that how individual academics experience PMSs can vary depending on, for example, aspects in the local context, the faculty to which they belong and their academic position (Archer, 2008; Chatelain-Ponroy et al., 2018; Guarini et al., 2020; Lewis, 2014) [4]. Table 1 below illustrates the number of respondents divided by faculty and seniority.
The interviewees were asked to reflect upon questions related to three overall themes, namely, what it means to be a researcher, the existence and use of PMSs in academia, and if/how such PMSs affected them and their work as researchers. Linked to each such overall theme, the interview guide consisted of a number of general and rather "open-ended" questions. For example, we asked them questions such as "What does it mean to be successful as a researcher in your view?", "What kind of research is prioritized in your department?" and "How does your own research fit with the expectations from others, such as external funders?". And importantly, depending on where the interviewee would turn the conversation related to such questions, we would raise a number of related issues, ask her or him to elaborate and ask for concrete examples. While studying our own practices and "colleagues" may have affected the interviews negatively (e.g. through making us "blind" to certain issues), we believe our personal experiences and understandings of the field as researchers were advantageous in this case. For example, such pre-understandings arguably enabled us to have deeper discussions with the interviewees and to ask relevant follow-up questions (Ahrens and Chapman, 2006; Gendron, 2008; Agyemang and Broadbent, 2015). All interviews were recorded and subsequently transcribed verbatim.
Data analysis
As we began to analyse the empirical material, we drew upon rather broad categories from extant literature to identify various ways in which PMSs triggered negative reactions among researchers (Argento et al., 2020; Gebreiter and Hidayah, 2019; Guthrie et al., 2019; Parker, 2002; Mingers and Willmott, 2013; Kallio and Kallio, 2014; Martin-Sardesai et al., 2017, 2020; Narayan, 2016; Lewis, 2014). This resulted in a large number of quotations in which the interviewees felt that they had to comply with the criteria set up by the PMSs in different ways to receive funding, to secure an individual career or to manage their own self-esteem.
However, as we organized and coded these quotations in NVivo (a software package for qualitative analysis), we made two important observations. Firstly, we noted that while our respondents certainly had their doubts about, and gave voice to a number of concerns with, the PMSs, the material was also replete with examples where they expressed a more positive view of such systems. Secondly, there were no marked differences between respondents depending on academic position, discipline or the university they came from, at least not to the extent that such classifications could "explain" how and why individual researchers perceived the PMSs in a particular way. Based on these two observations, we decided to redirect our analytical focus somewhat, concentrating on the concurrent occurrence of negative and positive sentiments towards PMSs among the researchers, regardless of where they came from or their degree of seniority.
With this emerging focus, we (re)read all transcripts and attempted to abstract from the empirical material a number of categories that could help explain why our interviewees seemed to hold these ambivalent views of the PMSs. This work resulted in several emerging categories, most of which pointed to aspects or qualities of the system itself that seemed to trigger both negative and positive reactions. For example, our interviewees referred to the system in negative ways because of how it was perceived to restrict or narrow down the type of research they conducted. However, such narrowing down also had clear positive connotations because of how it provided them with a form of guidance. In a similar manner, the material suggested that the system was associated with negative feelings because of how it created a form of demotivating external pressure, at the same time as such pressure was seen as positive because of how it ensured progress and prevented laziness.
To further elaborate on and theorize these empirically generated categories, we consulted the literature once more in search of a framework that could offer further theoretical guidance. In doing so, our attention was drawn to Foucault (2007, 2008) and his writings on neoliberal forms of government, as these writings offered not only a conceptual apparatus through which our preliminary categories could be further developed and theorized but also a way of talking about the categories as different, yet interrelated, facets of PMSs as neoliberal technologies (depending on whether they related to the restricted centrifugal mechanism, the principle of normalization or the entrepreneurial self). During this work, we also went back to the extant accounting literature on PMSs in academia in order to relate (and differentiate) our emerging findings to previous insights.
All in all, therefore, the analytical work underlying the current paper can best be characterized as an iterative process, in which we have gone back and forth between the empirical material, the emerging findings and the extant literature (Ahrens and Chapman, 2006; Bazeley, 2013). This has enabled an in-depth analysis of what it is about PMSs that makes them so powerful in academia, and the development of a number of concepts describing the character as well as the effects of the different moments of governmentality in which PMSs are involved. In the following section, we present the results of these analyses. Two things should be noted regarding this presentation. Firstly, when presenting quotes from our interviewees, we denote the interviewees as I1-I20 for reasons of anonymity. Secondly, to allow each researcher to have a voice within the text, without losing sight of the overall empirical story, we have built the story on a combination of quotes from individual interviews. While this certainly risks a form of decontextualization of the individual quotes, the large number of quotes presented means that, taken together, several different facets of the different interviews are indeed reflected in the story.
Empirical findings
As suggested in the methods section, Swedish universities have, in common with universities in many other countries, experienced major reforms in how they are governed. Whether referred to as the marketization of universities, the spread of new public management ideas or a new managerial era (Martin-Sardesai et al., 2017; O'Connell et al., 2020b), such reforms have resulted in an increased importance of various forms of performance measures, at both the national and the local level. In the universities under study here, such performance measures mainly covered three areas, namely, how individual researchers performed when it came to the number of publications, the number of citations and the amount of external research funding attracted. Or, as suggested by some of our respondents, "the measurement systems we have today, they focus on the number of publications, international publications and to be cited" (I13), "over the last five years there has become this enormous pressure, that you're measured on the number of citations you have and the number of publications" (I6), "you need to publish [...] and above all you should attract funding from these bigger funders" (I3) and "publication indexes, the number of publications and funding are growing more and more important" (I5).
Although the relative importance of the individual measures seemed to vary somewhat between universities and scientific disciplines, they were all reported and followed up systematically. For example, our respondents explained how individual researchers or research leaders would "collect everything that we have done this year" (I4), "sometimes during the autumn you report on your publications, your conferences, presentations, applications for funding" (I13), "you account for everything the research group has done once a year [...] It's external funding, publications and so on" (I17) and then "we talk about it in our different meetings, we follow up on [...] like the last time, it was individual researchers and how much we publish, the number of research applications we have submitted [...] and in the yearly activity report of course, where we report the number of citations, publications and so on" (I9).
Apart from allowing internal follow-ups and discussions, the reported numbers also served as an important basis for both material and non-material rewards. In fact, and as suggested above, the universities as such "get funding based on what we publish, the number of citations, bibliometrics and so on" (I18), and this performance-based logic then trickles down and forms the ground for a performance-based logic within each university. For example, such a logic included publication bonuses, in the sense that "we get a publication bonus" (I9), where "you get a certain amount of money based on what you've written, and the amount of money also depends on which journal it is" (I18). Apart from such bonuses, the performance measures also constituted an important basis for salary revisions. For example, respondents described how "those who have published more get more paid, so of course it's a salary incentive" (I3) or how "the more I have [in terms of publications], the better my negotiating position is when discussing salary" (I10).
Moreover, it was clear that these types of measures were highly important not only when it came to such things as positions (e.g. when ranking candidates for a position, "articles are valued the most", I20) and individual career development ("to become an associate professor you have to publish articles, that's the only thing that counts", I4) but also when it came to who would get attention and be celebrated within the universities: "there are these informal structures [that guide] what should be celebrated" in this sense, so that "when someone has received external funding or released a publication, it is noted in the internal newsletter" or by "the head of department [...], where she can say that 'this year, it is gratifying to see that [the department] has been successful', and then she mentions those who have attracted external funding" (I2).
From these and other examples, two things stood out as important. Firstly, it was clear that the performance measures worked to operationalize and articulate the neoliberal ideal of competition (Mennicken and Miller, 2012). That is, it was through this type of measures, and the ways in which they were used to draw attention to and reward certain performances, that competition was set in motion. Secondly, in doing so, it was clear that competition is not played out on the basis of who you are as an 'individual', but rather on the basis of the numbers you are able to provide. You do not enter the competitive arena as a whole individual. Instead, you throw in your numbers. Or, as suggested by our respondents, "we have a system in our 'world of research' that presumes that you are successful in a very particular way. It's implicit in the system that you should have a certain type of funding, you should publish in certain journals, you should have a certain number of publications" (I18). "It's very seldom that we talk about qualitative objectives. Instead, it's often 'the number of [...]'" (I9), which means that you "through numbers try to establish how good or bad something is" (I14).
When we talked to our respondents about how they experienced the fact that their numbers seemed so important for how they were made up as researchers, we received somewhat ambivalent reactions. That is, while they would oftentimes corroborate a common finding in the extant literature, namely that the introduction and use of performance measures in academia tend to provoke various forms of negative reactions among researchers, we also found evidence of considerably more positive views among the same respondents. In fact, while they often started out from a critical stance on the PMS (e.g. because they felt that "you can't measure quality in this way, with this type of indicators", I1, or that you cannot compare different types of research "and say which one is best", I4), their reasoning would often also include more positive elements (e.g. "I think it's important that we follow up on what we do", I9, and, generally speaking, "competition is good for us", I19).
While such positive and negative sentiments were sometimes expressed by different individuals and sometimes in different parts of the interview with one and the same individual, they were also often expressed as truly ambivalent views. When it came to the numbers as such, for example, one of the respondents talked about how he had "a somewhat ambivalent view. I'm not all critical, as some are [...] Of course there is something positive about this quantitative [focus]. We can't stop producing statistics [on how people perform], but you need to have a more problematizing view [on the numbers]" (I14). One important reason for this felt "need for numbers" is that "we need to measure quality in some way" (I10) and "it's important that we know what we are doing, and that we feel that it's transparent so that we can see what others are doing [through the numbers]" (I9). Yet, the same respondents added that "I'm not sure whether this is the right way or not" (I10) and that it is important that "the documents do not only serve the purpose to compare and say that 'you have four, and you have five'" (I9). On the contrary, it was stressed that "you need to be cautious when you use it [i.e. the statistics] [...], comparing apples with apples and pears with pears, realizing that this is just one out of many ways to measure quality, but then I think it's okay" (I14). Another important reason for such a felt need for numbers, as suggested previously, is that they allow competition to be played out, and as suggested by many respondents, "competition is good" (I19, I15) and "a certain amount of competition works just fine" (I16) because it ensures that "we get the best research, the type of research that is best for our society" (I19). At the same time, though, the very same researchers added that "the way it works today [with a strict focus on the numbers] is not good" (I19) because "competition is not always good for the development of new ideas and new knowledge" (I16).
As suggested by these (and other) quotations, there were clearly some mixed emotions, where pure criticism was mixed with ideas like "to ensure that research contributes [to something], we must have a system of control" (I4) or "if there was no competition, one would still have to measure and evaluate quality in some way" (I10). Drawing upon Foucault's writings on neoliberal governmentality, we describe and discuss below how such ambivalent views may be understood in terms of how PMSs function as neoliberal technologies that rely on a restricted centrifugal mechanism (Section 4.1), a principle of normalization rather than normation (Section 4.2) and a view of individual academics as entrepreneurs of themselves (Section 4.3).
Performance measurement systems and the (restricted) centrifugal mechanism
A first aspect of PMSs that seems to provoke the type of ambivalent reactions referred to above relates to how they work through a centrifugal market mechanism (Foucault, 2007). As suggested in the theory section, this constitutes a central moment of governmentality that is interesting because of how it frames the relationship between the governing and the governed in terms of freedom.
Largely in line with our theorization (see Section 2.2.1), we find that our interviewees appreciate the ways in which such a mechanism does not coerce or pressure people into particular forms of behaviour. On the contrary, it seems to foster feelings of the often sought-for, and appreciated, notion of academic freedom or researcher autonomy. At the same time, though, we see that because the setting up of neoliberal (quasi-)markets in academia requires that some "rules of the game" or "criteria for competition" are instituted, such feelings of freedom or autonomy are largely circumscribed. The premise is that such criteria (regardless of whether they refer to the number of publications, citations or the amount of external funding) inevitably constrain or "consume" the feelings of freedom that markets are expected to elicit (Foucault, 2008).
To illustrate their ambivalent reactions, one of the respondents concluded that "you can talk about these conditions [of the competitively oriented system] endlessly, and there are a lot of aspects that I'm critical of, and many aspects that are good" (I1), while another one emphasized the individual freedom underlying the logic of the system, although in an ambivalent way: "On the one hand, you can say that everyone has the same chance to apply for money, which points to a form of democratization. At the same time though [...], it results in that a lot of those who apply do not get any funding, so [there is a line somewhere] and when you cross that line, then it turns into a form of collective irrationality" (I14). Below, we elaborate in more detail on the reasons for such ambivalent views.
4.1.1 Performance measures create a zone of protection that produces freedom. One way of understanding researchers' more positive views on the centrifugal mechanism is that it is associated with the creation of a "zone of protection" (Alston and Malherbe, 2009). The premise is that, through rendering visible only some aspects of what researchers do, the performance measures (and their users) do not "reach into" and "grab hold of" every aspect of the research process. On the contrary, performance measures typically work as a form of "results control" that only points out what to aim for, not how to do it. As suggested by one of the respondents, this produces a (however illusory) feeling of freedom: "they just set the targets [...] this and that amount is what you should do, but they never say how; that's up to us" (I1). In a similar manner, other respondents emphasized that "they are only interested in the CVs and the list of publications" (I5) and "I mean, it's not like the boss asks you during the year how you're doing when it comes to funding and publications" (I3); it is more that "in December every year, we're asked to report our publications, conferences, presentations, funding applications etcetera" (I13).
As a result, the measurement system leaves plenty of room for manoeuvring or self-regulation when it comes to how, when and why to achieve such aims. In the interviews, this room for manoeuvring was stressed as important, in the sense "that you need to give people some space" (I4). In fact, one of the respondents talked about how "it's quite nice that I'm left alone, and that I can do my own thing" (I14), which was related to the fact that although the numbers are there for the governing to see, nobody knows in detail what they do as researchers. This feeling of being able to design and conduct research as they deem fit, without the interference of their superiors, was seen as particularly prominent among those who could provide strong numbers (typically more senior researchers). For example, it was stressed how the zone of protection became stronger for those who had proved that they were able to publish their results or attract research funding. In a general sense, this was expressed in terms like "I think you have to prove yourself. It's always like that, if you prove yourself, then you get the freedom to do more" (I19). Sometimes, however, it was linked more specifically to the ability to publish articles, in the sense that "the more articles you get published, the better you're off, not least as a professor, but also as an associate professor" (I7), and sometimes to the ability to attract external funding (or both). The premise is, as suggested by one of the respondents, that "funding depends on measurable success. So those who show that they can publish will get funding" (I5) and "when you're good at attracting money to the university and to research, it almost gives you free rein to do whatever you like [...] I mean, the vice chancellor and the central [administration], then they say that 'if you're good, we won't interfere that much'" (I19). Or, as summarized by one of those who had recently managed to secure a large research grant: "It felt good, everyone gave me a pat on the back and told me how good I was. It gives you a sense of independence because it gives you freedom. If you don't have money, you become dependent on every penny that you can get from somewhere [...but] when you get a large amount of money you get the chance to do something, to realize your ideas. That's a lovely feeling" (I12).
4.1.2 Performance measures create a zone of protection that consumes freedom. While the focus of PMSs on particular "outcomes" or "results" can create feelings of freedom among researchers, as suggested in the previous section, the particular measures included in such a system inevitably mean that some "rules of the game" are instituted. And importantly, regardless of whether such rules refer to the number of publications, citations, the amount of external funding or some other measure, they inevitably constrain or "consume" the very type of freedom that they aim to produce.
When giving voice to how performance measures consumed their freedom in this sense, several respondents referred to how a relationship to the governing party that is based on performance measures inevitably results in various forms of constraints. As nicely summarized by one of the respondents: "If we were self-governing, it might have been different, but this type of reporting [based on certain performance measures] is done by all units at the university, and all units want to look good [in the eyes of their superiors] [...] so it becomes a treadmill that you can't get off" (I2). This treadmill typically results in a feeling that you are free in one sense, but not in another: "You have a number of degrees of freedom, but you are always aware that it [i.e. your work] has to result in something. So, it's like you're free to do what you want, but it should result in something specific. And it's like, I don't know how to explain it, but it's like in the end, you tend to do that which results in something [e.g. a journal publication]" (I15).
In line with this last quote, we could see how such constraints on their freedom resulted in feelings that "research isn't free" (I7) and that "you're incredibly constrained by what you think that they [e.g. funders and editors] want to see" (I10), leading respondents variously to describe researchers as "chess players" who try to anticipate every move of the governing party (I10), as individuals who "only make safe bets" (I11), who "go for the low-hanging fruits" (I1) and as "small fish swimming in shoals" (I12). Regardless of the particular metaphor used, though, the implication was that performance measures have a tendency to "streamline the thinking of researchers" (I3), whether respondents talked about "a form of opportunism when applying for money [...] where you turn to where the money is" (I1) and how "we adapt ourselves to get money" (I2), or about investing (only) in projects and studies where you know beforehand that you "can produce several articles" (I9). Or, as nicely summarized by one of the respondents: "You know that, okay, these articles need to have a certain format, so then I have to do research in a certain way. I must reach a result in a certain time frame, I must wrap it up in a certain way, which means that I need to study something that is doable within that time frame and that can be wrapped up in that way. In the end, this means that I must think twice about the kind of research that I do, and that has affected my choices in research tremendously. I don't feel free as a bird in my choices of research" (I10).
This feeling of having your wings clipped, of not being "free as a bird", was particularly noticeable among more junior researchers, who had not yet been able to provide the protecting numbers referred to above. In fact, for those who end up on the wrong side of the empirically established norm, such numbers can have the very opposite effect. That is, rather than protecting researchers from the interference of their superiors, they can attract the attention of the governing parties, open up for problematization and even hinder individual researchers from exercising their "freedom". In the material, we could see this in the sense that those who had not yet reached, for example, a sufficient number of publications or a senior position were clearly pressured by the need to perform. You then know that to take the next step on the academic ladder, to win another research grant or to strengthen your position in the next follow-up, you need the numbers: "That's what it takes [to become an associate professor]. If you haven't published enough articles, you don't have a chance. That's how it is!" (I4). "You can't put forward an article every third year and say, 'now I'm published'" (I2), because then they "would start to wonder: 'what is that person doing?'". Or, as summarized by yet another respondent: "Still, if I just sat for four years and didn't [...] if nothing came out, if I just had a lot of high-risk projects and didn't receive any research grants, then I think they would have a discussion with me about what I do as a researcher and how I contribute to the research environment" (I9).
This risk of not being able to enjoy the type of freedom that outcome-oriented performance measures offer was further accentuated by the fact that when you enter the competitive arena and are unable to provide good numbers, you run the risk of ending up in a negative spiral. That is, without good numbers in the first place, it is very hard to improve your numbers, since you need funding to improve your numbers, but to get the money, you need to have good numbers. Or, in the words of our respondents: you need "funding, because then you're a bit more independent in a way" (I10), and then you can "strengthen your CV" (I9). However, and again, "funding depends on measurable success", which means that it is only "those who show that they can publish [who] will get funding" (I5). The result seems to be an increased risk of ending up in "a negative spiral [...]. Being a good researcher isn't just about measuring publications and that, but that's what we often do, and that can be negative for certain persons, because it makes it tough for them to get into research at all [after getting their PhD]" (I8). And importantly, when you end up in such a negative spiral, it is easy to get caught in an endless struggle to improve your numbers. As suggested by one of the respondents, this means that you do not have the luxury to choose: "You have to get along with people and create your own small network so that you become invited as a co-author and get half a point for the publication so that you can apply for a lectureship. That's not possible if you don't have a lot of publications. So yes, publish or die! If you don't publish, you're out!" (I20).
Or, as reflected upon by another respondent: "I need to specialize a bit more [...] in an area where I can strengthen my CV, so that it becomes easier to apply [for money] and build networks and relationships [...] so that I can apply for money [with other people]. It's about making time for research. [...] I know that I have to qualify myself in my area so that I can receive further funding, and then I can't run on this ball over here, or try to develop this, because that's too risky and those applications will probably not be granted [...] so you need to think strategically" (I9).
Again, though, this was a situation that clearly differed between those who could provide good numbers and those who could not. This became evident when, for example, more junior researchers were compared with more senior ones: "For a young researcher it is crucial [to attract external funding], but for an old owl like me it doesn't matter. Those who have become part of this system [and become full professors], they get more freedom to choose the kind of questions [that they see as relevant]" (I16). "However, if you're early in your career it can be necessary [to adapt] just to get published, maybe to become an associate professor, but I can drop that now [since I'm a full professor]. But of course, if I didn't get anything published, then it would be better to publish something with lower quality [than publishing nothing at all]" (I14).
Performance measurement systems and the principle of normalization
A second central moment of governmentality that helps to provoke the type of ambivalent reactions referred to above relates to the ways in which PMSs help (re)construct the relationship between individual researchers and their peers through a principle of normalization rather than normation (Foucault, 2007). That is, rather than working through any pre-existing or pre-defined norm around the particular outcomes that they draw attention to, such systems normalize their "inhabitants" through an ongoing "empirical" construction of the (ab)normal, by bringing the individual researcher into a particular form of relation to the "population" (i.e. to one's peers).
An overall effect of this type of normalization is that you tend to know, and value, your peers through their numbers, at least those who appear a bit distant to you. In fact, through measuring how people perform on a number of performance criteria (such as the "number of publications", "publications in certain outlets", "the amount of research grants won" and "the number of PhDs supervised") and combining these into a "neoliberal spreadsheet" (McRobbie, 2015), the technologies help to "make up" (Hacking, 2007) one's peers as different types of performance-oriented subjectivities. That is, rather than being colleagues or peers, they become competitors and rivals, high- and low-performers, heroes and warning examples and so on. Or, as nicely summarized by one of the respondents: "I think that we do focus on content [i.e. on the research done], but sometimes it easily becomes a focus on 'okay, one article here, four there, two there, the amount of external funding' [...] and it says something that I know more about their [i.e. the colleagues'] numbers than I know about their research questions" (I9).
When talking to our respondents about their views on how PMSs contribute to constructing the relationship between individual researchers and their peers in this sense, there were clearly some mixed emotions. On the positive side, it was noted how this allows "performances", rather than rumour or cronyism, to determine, for example, who gets a position or who is awarded a research grant. "As an underdog, it's an advantage that the numbers are there, because they ensure that you don't just focus on the rumour that someone has, but rather on what you have done [i.e. how you have performed]. So, they bring some good things with them [i.e. the numbers] and not just bad things" (I6). In a similar manner, it was noted how a performance-based principle is "both good and bad.
[However] If the person that brings in a lot of money starts to 'give himself airs', thinking that he can behave however he likes, then it's not good" (I1), or when people start to manipulate their numbers just to reach fame and fortune: "They [i.e. the numbers] are manipulable. But at the same time, they are the best we've got. [For example] your impact can be manipulated by self-citations or hidden self-citations [...] or through publishing controversial statements that everyone must refer to. But still, they are the best we've got. I can't imagine another way to measure success than this type of measures" (I6).
Below, we elaborate in more detail on the underlying reasons why PMSs provoke this type of ambivalent reaction when it comes to how they help (re)construct the relationship between individual researchers and their peers.
4.2.1 Performance numbers constitute peers as role models. When closing in on the more positive aspects of the principle of normalization, our respondents pointed to how it helped to construct an environment where some researchers (and types of research) become clear "role models" for others. Again, this is largely premised on the fact that when individuals are constructed and compared based on how they perform according to particular criteria, there will always be those who come out better or worse than others. And, when looking at how our respondents constructed such a "distribution according to ranks" (Foucault, 1977, p. 181), it was clear that they were not only occupied with their own "ranking position" but also with how the ranks per se visualized a number of "heroes" or "research stars" (Tucker and Tilt, 2019) who could work as "role models" for those who had not yet reached "the top". Those who have good numbers are "looked up to [...] [because] it is important" (I19). "When you sit at a meeting and you can see: 'look at that, oh gosh, five publications last year, that was amazing' [...] although you can of course view that in different ways" (I9). Or, as suggested when it came to external funding: "You may not be able to say something about the quality or outcome [of the applied-for project], but to get money [from one of the large funders], that gives you a different status I would say" (I3).
The premise is that when your numbers are displayed in this sense, signifying that you have, for example, won a research grant or managed to publish a large number of articles, others can see that "this is a skilled researcher" (I8). "That person might not be God or so, but you realize that that person is really good. It's not like I fall on the floor when he passes by, but you do notice when people are successful and that's good for them and it's good for us" (I5). Or, as exemplified by another respondent: "We have [Name of a researcher] working with us. She is in a really hot area, and we're lucky that she moved here, she's really good, she has publications in [the high-esteemed journal] Science and so on" (I12). As suggested by these and other quotations, the numbers are seen to signify success, skills and quality, in the sense that "when you [for example] receive funding in competition with others, then you are seen as more successful" (I18).
When single researchers gain this status through the numbers that they are able to provide, it can be highly motivating for others. "When you see others succeed you think: 'Yes', you get stoked" (I18). "It's inspiring for all of us, even though it can be tough for those who don't get [funding], of course. So, it's mixed feelings" (I17). Or, as summarized by yet another respondent: "That person (who gets funding) gets status. It can't be ignored because it's encouraged by the management. It's in the culture, and it's in the incentives. You succeed and you get a good reputation, and that's how it works. It's not a bad thing since it serves as a 'carrot' for others to make the effort" (I1).

4.2.2 Performance numbers constitute peers as rivals. Again, though, the ways in which PMSs contributed to constructing the relationship between individual researchers and their peers in this sense were also reflected upon in a considerably more critical manner. One aspect of this critique related to a questioning of the particular ways in which the systems constitute heroes. For example, respondents noted how "I don't think you can judge whether someone is successful or not, depending on how much they've published [...]. The measure doesn't say anything except that you're good at getting articles out" (I8), "Basically, the larger the number of articles you publish, the better you are off [...]. This is, simply put, pure stupidity and it doesn't benefit the quality of research at all" (I7), and that "Many of us agree that people publish too much and with too poor quality. [...] they do so because that's what counts, you have to do it because your career depends on it. Unless you join the race, you can't become associate or full professor and you can't get more research funding" (I11). Or, as nicely summarized by one of the respondents: "The risk is that those persons [i.e. the 'heroes'] are seen as successful regardless of how important or relevant their research actually is. But, they stand out! If they really are successful, then it's fantastic, but if it's just that they are there to be seen [...] or if they just build up a facade, then it's not good at all" (I5).
Interestingly, though, it was not just this type of questioning of what the numbers did, or did not, signify that surfaced in their reasoning. On the contrary, and despite such critique, it was clear that the PMSs did affect how they constructed their peers. The premise is, as suggested by some of the respondents, that if and when you accept that the system works according to the principle of normalization, "where it's all about putting your best foot forward and have as many publications as possible, then you end up in a competitive situation where you start to worry about how good others are compared to you" (I1), and then you easily end up in a situation where you know, deep down, that "it is important to be happy when things go well for someone", but due to the competitive climate, you become "quite bad at confirming each other" (I14) and "you get like [...] cynical" (I15).
From such a perspective, we could see how at least some researchers questioned both other people's motives, for example, when peers were seen as "alpha males" who become "all narcissistic: 'look at me, look at what I've done' [...] I'm quite sure that there are many [researchers] who find it important to be portrayed as successful researchers, based on different metrics and stuff" (I5), and the type of research they do. For example, one of the respondents put it as follows: "There are some [researchers], where you wonder: 'what are they doing during the days?' [...] I mean, I have to work really hard to reach the next level and then you see someone already on that level who underperforms according to me, and then you [...] it's not good, but you start to see that person, thinking 'well, you never do anything, you just talk and talk'. So, you easily find yourself holding a grudge against people" (I15).
Performance measurement systems and the entrepreneurial self
Third and finally, our analyses suggest that the ambivalent reactions can also be understood in terms of another central moment of governmentality, namely, how PMSs help (re)construct the relationship that the individual forms to herself. In fact, the type of competitive milieus and normalization practices referred to above both produce and presume a particular form of subject: an entrepreneurial self. That is, a subject who invests in herself so as to be able to participate in, and contribute towards, the ongoing circulation of capital (Foucault, 2007, 2008); one that is defined by, and articulated through, the idea(l)s of economic rationality and efficiency. Or, as one of the respondents put it: "also when it comes to research you must be an entrepreneur, know what gives the highest return in the form of publications, but also the time that it takes to conduct the research and publish it within a certain time. [...] You feel this pressure; publications must come out and external funding must be brought in" (I12).
On the one hand, such an "image" of the neoliberal researcher clearly attracted several positive reactions among our respondents. For example, several of them stressed the benefits of people investing in themselves so as to ensure a form of continuous improvement at the individual level and a continuous development of research at the aggregate level (or, put the other way around, to prevent laziness and stagnation among individual researchers). On the other hand, though, it was also emphasized that this type of relationship to yourself, where you feel that you constantly have to invest in and continuously improve yourself, can nurture various forms of self-doubt. Based on these two largely conflicting sides of the same coin, the reactions among our respondents could best be described as mixed emotions. When emphasizing the more positive side of the coin, it was stressed that competition is good "because when there is competition you have to think through your projects" (I18). However, when considering the type of inner stress and self-doubt that were also associated with competitive milieus, they emphasized instead that competition "could be a bit lower [...] It is a bit too extreme now" (I18).

4.3.1 Performance measures trigger self-investment. When emphasizing the more positive aspects of how PMSs encourage and require that you invest in yourself as a researcher, many of our interviewees pointed to how the systems keep them on their toes, keep them alert and make them move "forward": "[Competition] forces you to look in different directions, looking for those topics that are really 'hot' and going for that money. [...] It's more demanding of course [to apply for research funding in competition], but if you had more time for research included in your position as such, you would probably need other mechanisms to make sure that people reach an acceptable level [of quality in their research], otherwise there is a risk that people become too comfortable" (I12).
Or, when it comes to publications, it makes you think about: "[...] publication strategies and your publications. What's our view on quality? How do we think in terms of where we want to publish, where should we publish, and where can we publish? [...] What do I have to do strategically? What choices do I have to [...] to look the best or to come as far as possible?" (I10).
When justifying this type of strategic, self-oriented consideration, several respondents referred to how it could be seen as a natural "counter-performance" in relation to the position you hold or the type of funding that you have been granted. For example, it was noted that "if I do research, then of course I should publish. You could of course discuss how it should be done and where to publish, but of course it should result in a form of output" (I1) or that "you can't receive funding for just anything, there has to be a counter-performance. If you're going to a conference, you should write an article [...] [and that is important] because an article in the next follow-up will result in points for us and our research group" (I17).
Above all, though, this type of strategic orientation towards yourself was seen as a natural ingredient in the type of competitive milieus set up by the PMSs. That is, when one is measured, and the resulting measurement is set in relation to those of others (e.g. through counting and comparing the number of publications or citations), it becomes in the interest of the individual researcher to provide good numbers. Put differently, it becomes in the interest of everyone who wants to reach tenure, secure research funding or climb the academic ladder to manage her performances so that she is able to compete with her colleagues. As put by our respondents, it makes you aware that "you must work with yourself. It's part of the structure. The need to perform pervades the whole university" (I2), not least because "you don't want to have that paper that has three citations, or no citations at all. You want to get to a point where you start to become relevant [i.e. where people start to cite you]" (I10). And again, this was seen as important because it "helps people keep the quality [in what they do]" (I12); "It's good that things are put in a larger context so that you do research on relevant things and don't get stuck in your tracks for 20 years because you yourself find it interesting" (I19).

4.3.2 Performance measures trigger self-doubt. As suggested above, though, the perceived "positive pressure" to invest in yourself as academic capital also had another side, namely, that it triggered feelings of uncertainty, insufficiency and self-doubt. In the material, such feelings were typically associated with the constancy with which the respondents were compared and evaluated, and with the ways in which their "workable self" only existed as "an unfulfilled project" (Bauman, 2000, p. 29). To illustrate the former, respondents described how "you're evaluated all the time [...] you compete against each other when it comes to funding and everything" (I15) and "you always compare yourself with others [...] I know that it isn't fair to compare myself [with more senior colleagues], but that's what you do" (I19). As a result, several respondents pointed to how you end up in a situation where you feel that "it's never enough" (I14) and "we're always supposed to improve ourselves" (I2), because "we know that everything is connected. If you don't publish a lot then it affects your chances of getting funding the next time, and if we don't get any money, then you won't publish anything. It's a never-ending story" (I20). Such constant pressure to perform, in turn, seemed to result in feelings that "you always need to look ahead, you always need to think about the future; how should I formulate my next project? You need to start with that halfway into your current project" (I6). Or, as expressed in frustration by another respondent: "You know, as soon as you have defended your thesis, they start to talk about, what is your plan, how are you going to become an associate professor. And you just, oh my god, give me a minute, like that. [...] It is always like that and it's a lot. I think that [this kind of pressure] comes from everywhere" (I2).
Moreover, several respondents pointed to how this perceived endless pressure to invest in yourself risks leading you, as a researcher, to adopt a form of performer orientation, i.e. where you become highly focused on, and try to deliver, what the system wants from you. Based on this, they expressed serious doubts about whether this was the right way to go and whether they were doing the right things. Such doubts concerned, for example, how far they could and should stretch themselves when it came to publishing or applying for funding. "Since it is a meritocracy, you really want to look good when you compare yourself [with others], but that's no good for your psyche. It results in that you do overhasty things and that you take shortcuts, just to move on" (I15). You risk ending up with "the wrong focus" (I13), where you "only think of your career" (I1), where you feel that "you have to join the race, because otherwise you can't become associate professor, professor, or get funding" (I11), where "you avoid high-risk projects" (I9), where you "don't tell your whole story, but cut it into small pieces [to get many publications out of one]" (I4) or where you start thinking "how much can I deviate from mainstream and still be considered popular? How much can I deviate, where do I cross the line and deviate too much so that it becomes too weird, too twisted, too unconventional?" (I10). The reason being that "when you have a short list of publications, then it's like, then you take whatever you can to make the list a bit longer" (I2); you want "to get on the train and move fast-forward" (I10); a type of instrumental focus that typically "raises ethical issues. I mean in terms of, like, is it okay to go for a salami-slicing tactic? [...] Is it okay for a good researcher to behave in that way?" (I10).
Apart from provoking doubts regarding the direction in which the PMSs were guiding them, it was clear that this type of pressure to invest in yourself also resulted in a form of inner stress and more personal doubts. Generally speaking, such feelings were associated with the fact that "you're evaluated and exposed to competition all the time" (I20), that "there aren't any real rest periods during the year anymore" (I17) and that it is "stressful to be around [high-performers] because they're breathing down my neck" (I15). As a result, several respondents emphasized that "the level of stress among academics in general, is very high" (I7), "you become stressed" (I20), "of course you feel a form of stress" (I1) and that "it leads to a lot of stress when you have this focus on what is measurable. I think a lot of people get ill" (I9). Or, as vividly explained by one of the respondents, it becomes like a "honey trap, and when you look too deep in the jar, when you push yourself too hard, then it's not unusual to get burn-out syndromes in academia" (I1). Interestingly, though, although many respondents pointed to the risk of such serious and far-reaching consequences, it was clearly a sensitive issue to talk about. One important reason for this, as explained by one of the respondents, is that when you find yourself in a competitive situation, you are supposed to address such issues mainly by yourself. It is always a "balancing act to open up to someone and talk about these things without touching upon the most inner things, namely 'self' [...] issues like 'doubts' whether I'm good enough" (I10).
Summary
To summarize, our empirical study of how researchers from three Swedish universities made sense of the increased reliance on PMSs for governing academic affairs draws attention to how PMSs are involved in three central moments of governmentality. As such, they help (re)construct: the relationship between the governing and the governed as one that revolves around the (restricted) centrifugal mechanism; the relationship between individual researchers and their peers as one that builds on the principle of normalization; and the relationship that individual researchers form to themselves as one that builds on the notion of an entrepreneurial self. Elaborating empirically on how PMSs help reconstruct these three different, yet interrelated, relationships, we show how and why PMSs can be expected to provoke highly ambivalent feelings among researchers. Table 2 summarizes these empirical findings.
Conclusions and implications
An increasingly common empirical finding in studies of the use of PMSs in academia is that individual researchers develop a form of hate-love relationship to such control systems (Gendron, 2008; Knights and Clarke, 2014; Parker, 2012; van Helden and Argento, 2020). In extant literature, this hate-love relationship has primarily been understood as an effect of PMSs having different qualities, where some qualities tend to provoke feelings of "hate" while other qualities underlie feelings of "love". In the former case, for example, PMSs are depicted as "evil forces" that help increase the power of the governing parties at the expense of researchers (Argento et al., 2020; Parker, 2002; Mingers and Willmott, 2013), and as such, they tend to have largely narrowing and homogenizing effects on research (Gendron, 2008; Hopwood, 2008; O'Connell et al., 2020a). In the latter case, in contrast, PMSs are often depicted as useful tools that allow universities to reach a number of important goals (Söderlind and Geschwind, 2019), such as increasing the productivity (Guarini et al., 2020, p. 113) and quality of research (O'Connell et al., 2020a).
While the existing literature oftentimes departs from an either/or view of these qualities, whereby they are typically analysed separately and as different parts of the control system, this paper set out to analyse the ability of PMSs to concurrently provoke both negative and positive reactions. To this end, we argued theoretically, and then showed empirically, how PMSs are involved in three central moments of governmentality (Michael, 2009; Walker et al., 2008), where each such moment contributes towards a (re)construction of the relationships that individual researchers form to their governing superiors, to their peers and to themselves. Moreover, and importantly, PMSs help (re)construct these relationships in largely contradictory terms, in the sense that they: produce feelings of freedom at the same time as they consume such freedoms; construct peers as role models at the same time as they construct them as rivals; and provide tools for self-improvement at the same time as they cultivate feelings of self-doubt.
A main conclusion of this paper is that it is these contradictory terms that underlie the hate-love relationship that individual researchers form to PMSs in academia. Hence, we suggest that the highly ambivalent feelings and mixed emotions that our interviewees gave voice to when talking about the control systems can be understood neither by focusing separately on the "negative" qualities of PMSs nor on their "positive" qualities, but rather by attending to how such largely different aspects of PMSs presuppose, and work through, one another. Put differently, it is through working as a system that integrates "what is cold, impassive, calculating, rational, and mechanical in the strictly economic game of competition" with values that are seen as "warm", and where the latter "are presented precisely as antithetical to the 'cold' mechanism of competition" (Foucault, 2008, p. 242), that PMSs trigger highly ambivalent reactions.
In fact, as neoliberal technologies, PMSs are not designed to work either as negative or as positive forces; they are not designed to motivate either as a stick or as a carrot. Instead, they integrate these into one and the same quality, which is precisely what the centrifugal market force, the principle of normalization and the notion of an entrepreneurial self do. For example, the centrifugal market force governs through the "warm" value of freedom at the same time as there can be no such thing as governing without governing. As a result, it consumes freedom at the very moment it produces it. In a similar manner, the principle of normalization offers the "warm" values of success, winners and role models. However, such notions have no existence without the very opposites that constitute them. As a result, this principle produces failure, losers and warning examples at the very moment that it makes up individuals and their performances as part of a larger whole. Finally, the notion of an entrepreneurial self offers "warm" values such as self-investment and self-improvement which, in and of themselves, presume the insufficiency of the existing. As a result, it produces aspects of inadequacy and imperfection at the very moment that it makes up individuals as human capital.
Based on this, i.e. because of how the warm and cold values are an inherent part of, and presuppose each other in, all three central moments of governmentality, we suggest that the most reasonable expectation is that individual researchers will form a hate-love relationship to PMSs. In fact, just as we cannot expect individual researchers to enjoy every single aspect of how PMSs help (re)construct the relationships that they form to their governing superiors, to their peers and to themselves, neither can we expect them to comply with such systems completely against their will. Instead, what we can expect, and what our empirical findings strongly suggest, is that they develop ambivalent feelings towards the systems, whereby they can be highly critical of some aspects at the same time as they defend, and even find merit in, other aspects. Arguably, our findings related to such a hate-love relationship to PMSs, and the underlying reasons for it, suggest several implications for the longstanding and largely polarized debate on PMSs in academia.
Implications
A first implication of our findings of a hate-love relationship to PMSs relates to our understanding of the ongoing neoliberalization of academia. Arguably, the perceived positive effects of PMSs explored above (see Items 1-3a in Table 2) point in the direction of a form of successful neoliberalization of academia, in the sense that various aspects of the systems were clearly accepted or appreciated among our respondents. For example, and again, there seemed to be a widespread acceptance of how PMSs contribute towards an academic climate where you can earn your freedom by performing well according to the performance criteria set up by such systems. Moreover, there also seemed to be a widespread acceptance of the fact that such criteria helped constitute academic role models and functioned as tools for self-investment and self-improvement. Arguably, such perceived positive effects can be seen as a sign of successful neoliberalization in the sense that the interviewees have come to appreciate, and want, what the system wants from them.
Perhaps even more interestingly, though, we suggest that the perceived negative effects also point in the direction of a form of successful neoliberalization of academia (see Items 1-3b in Table 2). The reason for this is that a control system that relies not only upon the type of "warm" values referred to above but also upon several "cold" values can indeed be expected to attract negative attitudes among those who are being governed. In fact, the very point about the type of cold mechanism of competition discussed above is that we are expected to feel that we must keep within the confines and boundaries of the playing ground, that our colleagues constitute rivals with whom we have to compete and that the current version of ourselves is always inadequate and incomplete. The premise is that, together with the warm values that are expected to provoke feelings of there being a space of manoeuvring within which one can strive towards progress and success, such feelings are expected to further "push" individual researchers in this very direction. That is, to make sure that they mobilize their freedom in a calculated and regulated way, that they are spurred to become better than others and that they are compelled never to rest.
From such a perspective, we suggest that the power and success of PMSs should neither be "measured" by the number of people who fully accept their underlying ideals nor by the number of people who ideologically resist them. Rather, they should be evaluated by the number of people who subject themselves to the technologies (see Butler and Spoelstra, 2012, for a similar argument). That is, the number of people who engage with the technologies as a means to improve and become better than others, regardless of whether they do so out of love, out of hate or out of an intermingled combination of both. In fact, you do not have to believe in, identify with, or cherish the systems to contribute to their success. The only thing that really matters is whether your actions contribute to the upholding of the systems, and they do so at the very moment that you engage in the numbers, even if it is in a hate-love form. This is arguably an important difference between the logic underlying neoliberal governmentality and the one underlying, for example, disciplinary power (Foucault, 2007; see also Wickramasinghe et al., 2021). According to the latter, it is important that each and every one of us learns how to act in accordance with a norm that is stated a priori. Or, as Foucault (2007) suggests, "A good discipline tells you what you must do at every moment" (p. 68), thereby working in a highly normative fashion that takes as its main point of departure "a sphere complementary to reality" (p. 69). In contrast, the neoliberal form of governmentality "tries to work within reality, by getting the components of reality to work in relation to each other, thanks to and through a series of analyses and specific arrangements" (p. 69). And importantly, in doing so, it accepts and even embraces the notion of difference. In fact, and to use the wording of Foucault (2008, p. 259), the neoliberal form of governmentality tries to optimize a "system of differences". That is, a system that is constituted by, and feeds from, differences or variations, in the sense that it leaves the academic playing ground open "to fluctuating processes", for "minority individuals and practices" to exist (p. 259) and for different ideas and attitudes to grow. As a result, positive and negative reactions to PMSs in academia are what can be expected. In fact, we cannot expect, and we probably will not ultimately find, a big army of cheerleaders supporting every aspect of the use of PMSs in academia. However, as long as individuals subject themselves to the systems in their actions, this is arguably of less importance when evaluating the power and success of PMSs as control systems. The premise is, we suggest, that it is the effects at the level of the population that constitute the ultimate proof of success when it comes to neoliberalism, and such effects do not depend on everyone being completely mesmerized by the systems all the time. On the contrary, such effects are equally dependent on individuals feeling pressured, and forced to act in certain ways, by the systems.
Our findings also suggest a second implication, namely, that when individual researchers develop a hate-love relationship to PMSs, this poses a serious threat to research quality. Indeed, such potential negative effects of PMSs on research quality are hardly new as such. On the contrary, when focusing on the negative sentiments among our respondents, our findings largely corroborate previous research: when individual researchers' relationships to their surroundings and to themselves become characterized by endless attempts to improve their performative image, attention risks being diverted away from what research is really about (Gendron, 2008; Hopwood, 2008; O'Connell et al., 2020a). Not only because researchers become strongly focused on the very criteria by which competition is played out (see Item 1b in Table 2) and instrumentally oriented towards beating their rivals (see Item 2b in Table 2), but also because it can diminish or extinguish the passion for conducting research at all (see Item 3b in Table 2).
Importantly, though, we suggest that the other side of the coin constitutes an equally serious threat to research quality. That is, the aspects of PMSs that individual researchers appreciate (including the "zone of protection" that such systems are perceived to produce, the type of role models they make up and the means for self-improvement they offer; see Items 1-3a in Table 2) also contribute to this threat. The underlying argument is that a common denominator of all these positive perceptions is that they are grounded in an acceptance of competition as a (superior) mechanism for organizing social life. And importantly, with this comes an acceptance of the very premises through which "the rationality of the market, the schemas of analysis it offers and the decision-making criteria it suggests" are extended into the academic domain (Foucault, 2008, p. 323; Mennicken and Miller, 2012). That is, an acceptance of the very building blocks through which markets are instituted and maintained in academia, and of the type of reductiveness and instrumentality that is typically associated with such building blocks.
The reason for this, as has been argued many times before (Rose and Miller, 1992), is that from a neoliberal perspective, the market does not have an existence that is independent of its own constitutive criteria (Foucault, 2008). On the contrary, it constitutes a non-natural phenomenon that must be brought into being and maintained by means of various forms of intervention, including the transformation of research and researchers into performance numbers (Rose and Miller, 1992). In the current study, it was quite striking that our respondents often voiced concerns regarding the reductive and highly instrumental character of such performance numbers, and the ways in which the more "technical" aspects of PMSs were felt to lead them astray. Regardless of this type of critique, though, it was clear that in accepting the notion of competition as a governing mechanism, they struggled to articulate a viable alternative to "the numbers" (for similar findings, see Butler and Spoelstra, 2012). The reason, of course, is that it is hard to imagine how you can have one without the other. From a research quality perspective, though, this tiny little detail (i.e. that you cannot have competition without instituting the very criteria by which competition is to be played out) is pivotal. As we see it, it seriously increases the risk of a final triumph of form over substance, in the sense that it not only makes researchers more focused on the "criteria" per se but also inevitably leads to an "emptying" of those contextual aspects that make research meaningful. A risk that is typically articulated and reinforced through examples such as researchers coming to see funding as a lottery (where submitting several applications is seen to increase your chances of winning), seeing publication as a matter of being at the right place at the right time, or describing themselves as just two articles short of becoming professor, earning an annual bonus or being a 4 × 4 researcher (Alvesson and Spicer, 2016; Butler and Spoelstra, 2012). From such a perspective, both the perceived negative and positive effects of PMSs thus constitute a serious threat to our notion of quality in research.
Finally, we suggest that the hate-love relationship to PMSs found in this study has a third type of implication, namely, an increased risk of ill health among individual researchers. As argued above, this is a type of risk that has been associated with PMSs many times before, including the risk of stress (Parker, 2011), jealousy (Tucker and Tilt, 2019), rivalry (Gendron, 2015) and cynicism (Kallio and Kallio, 2014).
When considering the perceived negative effects of PMSs that our respondents gave voice to, our findings largely confirm such previous insights. For example, our findings point to how feelings of stress and self-doubt can be grounded in the fact that freedom is not for everyone (rather, you need to prove yourself according to the criteria set up by the PMS to potentially earn your freedom), that your colleagues are constructed as potential threats to your own progress and success, and that you always have to turn yourself into something which you are not in order to prove the value of your academic self (see Items 1-3b in Table 2). Arguably, the underlying mechanism, and common denominator, of all these negative perceptions can be traced to how the main impetus of competitively oriented control systems is inequality. That is, as stressed by Cooper (2015, p. 15), the point of departure of such systems is that inequality, rather than equality, functions as "the medium and relation of competing capitals. When we are configured as human capital, equality ceases to be our presumed natural relation with one another". Instead, it is assumed that through being kept in a state of "equal inequality" (Foucault, 2008), our instincts and minds will be sharpened towards competition (Lazzarato, 2009). Apart from constituting an important breeding ground for rivalry, jealousy and cynicism among peers, such an underlying logic of inequality can be expected to increase the level of stress that at least some researchers experience. The reason is that it produces feelings of uncertainty and of impossibility, as we must constantly improve to withstand the competition.
Importantly, though, we suggest that the perceived positive effects are equally (or even more) important for understanding why PMSs can be expected to increase the risk of, or strengthen the level of, stress and ill health among individual researchers. That is, regardless of whether one directs attention to the type of freedom that individual researchers associate PMSs with, their positive views on having clear role models or their positive views on progress and self-improvement (see Items 1-3a in Table 2), there is another side to such positive perceptions in the sense that they all rely on a form of acceptance of the responsibility for constituting yourself as an entrepreneurial self. That is, an acceptance of yourself as a subject that is both capable of, and responsible for, caring for yourself as human capital (McNay, 2009; Wickramasinghe et al., 2021). A shouldering of the responsibility to prove yourself in performative terms: to compete for your own freedom, to always strive towards the unachievable and to never be satisfied with the current version of yourself.
Arguably, this is one of the most deceptive qualities of PMSs as neoliberal technologies, since one of the most appreciated aspects of such systems is the type of freedom and autonomy that they allegedly offer (cf. Item 1a in Table 2). Importantly, though, with this type of freedom and autonomy follows a responsibility to care for your own interests and to make sure that you can withstand the competition (cf. Items 2-3a in Table 2). Moreover, because individuals are constituted as free and autonomous, they (as individuals) are the only ones who can shoulder this responsibility. Any contextual aspects (such as the particularities or traditions associated with doing research in a specific discipline) or structural constraints (such as the different conditions that may be related to gender, degree of seniority or non-permanent positions) must be suppressed in order for this type of subject to make sense. Hence, if you succeed, you do so as an individual. However, if you fail, there is no one to blame but yourself. The reason is that from the perspective of autonomous and free individuals, failure can only be understood as an inability to care for yourself as an entrepreneurial self, i.e. the very opposite of a responsible and autonomous self (McNay, 2009). Moreover, and importantly, when you start to accept and see merits in certain aspects of PMSs, this should increase your propensity to both accept and actively assume this type of responsibility. A responsibility that fosters feelings that you must invest in yourself as human capital, regardless of whether you find it meaningful or not. And importantly, it seems to us that a more willing and enthusiastic shouldering of this type of responsibility should, generally speaking, increase the level of stress, due to how it places the subject more in the future than in the past or the present, and more in what it is not than in what it is. That is, rather than accepting individuals on their own premises, this type of control system assumes that the value and desirability of individuals increase when they take a form other than their own (cf. Cooper, 2015).
Taken together, our findings thus suggest that the effects that PMSs produce in academia are best understood in terms of difference and concomitance. As control systems, PMSs should neither be seen as sometimes producing positive feelings and sometimes negative feelings, nor as producing positive feelings among some researchers and negative feelings among others. Instead, we suggest that they produce ambivalence, mixed emotions and feelings of hate and love at the same time. The reason for this, as addressed in this study, is that the main characteristics of PMSs are premised on contradictory terms. In fact, that is the very point of the systems: to feed and form ambiguities, double messages and illusions. They provide a form of freedom, but that freedom is always constrained in various ways. They provide images of individuals to look up to, yet those very individuals are constituted as threats to your own success. They provide tools for self-improvement, but no matter how much work you do on yourself, there is always room for more. This type of multifaceted ambivalence is, we suggest, what makes PMSs so powerful in academia.
Whence and whither
The findings presented in this paper are based on a qualitative study of a few individual researchers at three Swedish universities. As is often the case in qualitative research, the ambition has been to address and reflect upon some of the contextual specificities of these particular settings. Aspects that help render the settings meaningful both for the interviewees per se and for us as researchers. On the one hand, such an approach could, of course, be seen as a potential limitation of our study, as the results typically reflect only the actual conditions and experiences of an infinitesimal part of the overall academic community. On the other hand, the central moments of governmentality identified and elaborated upon here are not tied to, or unique to, the particular settings under study. On the contrary, they should be useful as a conceptual apparatus for analysing and understanding other contexts as well. In fact, the centrifugal force, the principle of normalization and the notion of an entrepreneurial self constitute general qualities that can be used to analyse both the design and use of different types of control systems, both inside and outside academia.
To further explore the interplay between such general concepts and the contextual specificities of various settings, we suggest the following topics for future research. A first important topic would be to mobilize the concepts drawn upon, and suggested, here to study other (academic) settings. Apart from adding to the list of empirical settings that could potentially be understood by means of these concepts, such studies could fruitfully involve further conceptual refinement, not least when it comes to those very aspects of PMSs that produce positive and negative effects. Moreover, it could be asked whether there are important similarities/differences in the hate-love relationships that individuals form to PMSs in different contexts and in the different relationships (i.e. in the relations that individuals have to their governing bodies, to their peers and to themselves). A second topic that could be interesting to explore further is the potential interplay between the negatives and positives, the "hates" and the "loves", as identified in the current study. For example, do the positive connotations that certain aspects of PMSs have affect or "compensate" for other aspects that have considerably more negative connotations? If so, this could, for example, be important for understanding the lack of, or will to, genuine resistance towards PMSs in academia. Third and finally, it could be interesting to explore whether, and if so how, the (re)constructed relationships addressed in this study interplay. For example, does the relationship that one forms to the governing party based on a centrifugal mechanism affect one's views on peers and oneself? Moreover, does the construction of one's peers as rivals affect the relationship that one forms to oneself or to the governing party?
Notes

1. Lectures which have been translated and published in several books, including "Security, Territory, Population: Lectures at the Collège de France 1977-1978" (see Foucault, 2007) and "The birth of biopolitics: Lectures at the Collège de France 1978-1979" (see Foucault, 2008).

2. Governmentality was "understood in the broad sense of techniques and procedures for directing human behavior" (Foucault, 1997, p. 82), or, put differently, as "an activity that undertakes to conduct individuals throughout their lives by placing them under the authority of a guide responsible for what they do and for what happens to them" (Foucault, 2007, p. 471; see also Burchell et al., 1991).

3. These interviews were collected as part of a larger research programme with the aim of exploring the effects of an increased reliance upon various forms of PMSs in academia. Within the project, some 50 interviews were conducted in total, 20 of which were conducted with university administrators and 10 of which were conducted with researchers following a different interview guide than the interviews drawn upon in this paper.

4. It could be argued that the expected differences would have been even greater had we included respondents from, for example, the medical faculty or the faculty of arts. However, such faculties were not part of all universities under study.

Table 1. Interview excerpt (I14): "My feeling is that these [numbers or indicators] are mostly for the dean and the head of department, so that they can see what we do. [...] We don't have a lot of interaction with the management, so nobody really knows [what I do], for good and for bad. [...] But I'm quite grateful that no one interferes that much, because that's what research is like."

Table 2. Summary of empirical findings. Interview excerpt (I9): "I think [research] is both fun and challenging [in a positive sense [...]] but a tough, pressuring part can be that you're supposed to build your reputation in this world, create the right connections, build your networks [...]. It's an important part if you want to become one of these really successful researchers. It doesn't have to be, I mean you could do good research anyhow, but it's an important part as I see it, that you should [...] [invest in yourself]."
Book reviews
The essay consists of seven chapters, which treat successively of: 1, the number of the testicles; 2, descent of the testicles; 3, diseases of the scrotum; 4, tunica vaginalis; 5, hydrocele; 6, diseases of the testicles; 7, diseases of the spermatic cord.
The first chapter commences with a truism, viz. that "a man is naturally provided with two testicles, which are lodged in the scrotum;" and it might be supposed that the thesis would exclude much else. But the author has contrived to collect details of some cases of monorchides and triorchides, which are pathologically and physiologically interesting.
" In a few individuals, the number of testicles has been found greater or less than the usual standard, more frequently less, and sometimes entirely wanting. The total want of testicles in the scrotum is a circumstance very alarming to parents, from the apprehensions which they naturally entertain respecting the virility of their child. These apprehensions, however, are for the most part groundless, as the absence of testicles from the scrotum arises merely from their having failed to descend from the abdomen before birth; there being undoubted instances of men without the vestige of testicles in the scrotum having become the fathers of numerous families. And in dissecting the bodies of persons of this peculiar formation of parts, one or two testicles, of a full size, ancl perfect in all respects, have almost invariably been found in the abdomen, so that a dissection in which the testicles were entirely wanting, both in the scrotum and in the abdomen, is a very rare occurrence. " A few cases of monorehides of a somewhat equivocal character^ are found in the records of medicine. They are modified by different circumstances, which constitute three varieties. "In the first variety, the solitary testicle was divided in the middle by a deep fissure, the lobes on each side were as large as a full grown testicle, and each was provided with a spermatic chord, which ran up to the same side of the body. The fissure, the lobes, and the duplication of parts, obviously result from the partial coalescence of two testicles. "The second variety has undoubtedly the same origin, only the coalescence is more general, and the incorporation more complete. The single testicle was much larger than in an ordinary full-sized testicle, equable in its surface, without any deep fissure dividing it into lobes, and provided with two spermatic chords, running to the different sides of the body. From the simple structure of the testicles, it is easily conceivable how both might be incorporated, without destroying their function as secreting organs. " The third variety agrees with the other two, in having one testicle and two spermatic chords, while it differs from them in the circumstance of both spermatic chords running to the same side of the body. The origin of this variety is not so obvious, though, like the other two, it is not productive of any inconvenience to the individual." (P. 1.) " The annals of medicine contain many cases of reputed trior-4 Mr. Russell on the Testicle. 35 chides, though it is not possible to lay down any general rules for the diagnosis, since, from the nature of the peculiarity, their true character can be ascertained only by the result of a special investigation. Various authors mention the peculiarity of three testicles as hereditary in certain families. " Although there is no natural limit to the number of testicles, the existence of more than three is an exceedingly rare occurrence.
There are, indeed, many cases recorded of persons with four testicles, but the fact has not been verified by dissection. It has been said that persons have appeared with five, or even six testicles. In the person who had six testicles, four were of the natural size, and two much smaller. I shall not, however, dwell on a discussion for which the data are too defective to lead to any positive conclusion.
" With regard to the amorous propensities and generative faculties of persons with supernumerary testicles, the report of authors is in general favorable to their being more powerful than in other men.
But as those authors often indulge in a playsome humour, their evidence must be taken with a certain degree of reserve. For, in the investigation of such a subject, it is hardly philosophical to mention the case of a monk with three testicles, who was so salacious as to have indomitable passions, which prevented him irom keeping his vow of chastity; or that of a land-grave, with a like peculiarity, who was allowed a concubine as a reasonable indulgence to a man of his amorous complexion, who could not remain satisfied with the use of a single woman."* (P. 7.) We think our author places rather too implicit reliance upon some of the cases referred to as recorded in "the Annals of Medicine." That the testicles may be lobulated, and that, from peculiar circumstances, they may become united, will be found to be only in accordance with the established laws of epigenesis; but the occurrence of a third or fourth distinct testicle, (not a lobule, or lobules, detached or attached,) and especially the passage of the two spermatic cords to the same side, are such wide wanderings, as to require the most direct and circumstantial evidence to warrant their truth, and to convince the world that some errors in the examinations have not vitiated the accounts; especially as neither Morgagni, nor Haller, nor Meckel, ever discovered a third testicle in the dissections of reputed triorchides," although other authorities, of less weight, have put several supposed eases on record.
These, however, have most probably been detached lobules, mistaken for distinct testicles.
With regard to the Descent of the Testicles, the subject of the second chapter, the author observes that nothing is known respecting the cause of occasional retardation; and he continues, "It is wholly unconnected with any imperfection in the confined testicle, since, upon an average of observations, the retained testicle is as fully formed and as large as those which have descended into the scrotum." (P. 12.) This is certainly contrary to our experience: we have never met with a case in which the descent of the testicle was delayed, in which its development was not retarded, if not entirely arrested. Cases may occur in which, from preternatural impediments in the canal, the descent may be occasionally delayed or prevented; but, in general, we believe that the want of development is closely connected with the persistence of the testicles in the abdomen.
The following practical remarks are good; reports of such cases of error we have occasionally published in our Journal.
" When a testicle is arrested in its progress through the inguinal canal, it produces a swelling in the groin, which is readily mistaken for an inguinal hernia. Both complaints occupy the same place in the inguinal canal, both proceed from the protrusion of a viscus from the abdomen, and, if the medical attendant be not aware that the patient has no testicle in the same side of the scrotum, he regards the case as an inguinal hernia. Under this mistake, a fruitless attempt is made to reduce the hernia, and, when this attempt fails, a bandage is applied, which, by exciting pain, leads to a more accurate examination of the symptoms, and to the subsequent discovery of the true nature of the case; for, when an arrested testicle is the cause of the swelling, there is greater sensibility to pressure, and a peculiar sensation which characterizes the feeling of a testicle. The difficulty of the diagnosis is occasionally increased, by the complication of two complaints, which the late descent of the testicle contributes much to favour. In the first place, when the patient has passed the age of puberty, the large size of the testicle widens the inguinal canal to a preternatural degree. In the second place, the surrounding parts have their disposition to contract which existed in early life greatly impaired, so that, from the concurrence of those two causes, there is an opening left into which some of the abdominal viscera may easily enter, and produce a hernia of the congenital form." (P. 12.) After quoting the very curious case recorded by Mr. Hutcheson, in which a sailor had imposed upon the examining surgeons many times, by elevating his testicles into the inguinal passages, and thus simulating a double hernia, those curious deviations from the ordinary course of descent are referred to, in which the testicle, instead of descending through the inguinal canal, accompanies the femoral vessels in their progress under Poupart's ligament, making its appearance at the bend of the thigh. Arnaud gives several instances of this singular variety. The most instructive case is detailed at considerable length. The following are the chief points worthy attention.
" An officer, about forty years of age, consulted Mr. Arnaud respecting a swelling in the bend of the thigh, which was taken for a hernia. Upon an accurate examination of the case, however, Mr.
Arnaud satisfied himself that the swelling was not a hernia, but a misplaced testicle. He adduces three reasons in support of his opinion. 1st, That the officer had not a testicle in the same side of the scrotum. 2d, That the swelling had the form and consistence of a testicle, the appearance of the spermatic chord alone being sufficient to distinguish the case from a case of crural hernia. 3d, That pressure produced exactly the same sensation on this as on the other testicle." (P. 23.) Our author gives to the most commonplace truths an interest, by the appositeness of his illustrations, so that we are tempted to multiply our extracts.
" The arrival of the testicles in the scrotum does not produce any change in the constitution. They are of a small size, and are not endowed with much sensibility during infancy; when full grown, they are rarely equal in size, a circumstance which ought to be known by all practical surgeons; otherwise their ignorance may lead to very distressing consequences. Fabricius Aquapendente gives a most instructive instance of this in the case of a young man, v-'ho, upon observing his testicles to be unequal in size, became alarmed, and consulted a rupture-doctor about his supposed disease.
The quack pronounced the case to be very alarming, and advised the immediate extirpation of the testicle. The patient, however, being unwilling to submit to so severe an operation without farther advice, consulted Aquapendente, who relieved his fear, by satisfying him that the supposed disease was nothing more than a natural inequality in the size of the testicles, a difference which almost constantly takes place.
"The testicles likewise are in general suspended at unequal distances from the pubis." (P. 24.) In the third chapter, on the Diseases of the Scrotum, there is much instructive matter, collected from various sources; but, as it chiefly consists of extracts from the works of Acrel, Titley, and others, it will not afford us many quotations: we will take, however, the following. " The scrotum, is more predisposed to mortify than most parts of the body. It occasionally mortifies at the termination of tedious exhausting fevers, and on attacks of erysipelas. Such cases are always severe, though not fatal, excepting under circumstances particularly unfavorable. The whole of the scrotum is sometimes completely destroyed, and afterwards completely regenerated, even to the production of the hair. This, however, very rarely occurs. Even the perfect regeneration of the skin is by no means a constant termination. In those cases which I have seen, the naked surface of the testicle, or of the tunica vaginalis, was, after the cure, covered only with a thin pellicle, which adhered to the subjacent parts, and did not possess any mobility. This pellicle or cicatrix is often so limited in extent, as to confine the testicles to one situation, and sometimes even to subject them to an inconvenient degree of pres-sure.
In one case, the constriction was so great as to create the most excruciating pain, which rendered the life of the patient miserable, and induced him to submit to the removal of a testicle. But as this partial operation did not procure complete relief, he soon after requested to have the other testicle also removed. This was an extreme case.
But when the tendency to constriction once begins there is no method known of arresting its progress, nor of palliating its effects." (P. 64.)

"The most singular disease of the scrotum is the growth of a tumor of enormous size. In a memorable case of the kind, Ger. Ephr. 1692, the tumor attained the weight of more than 200 lb., a weight considerably greater than the weight of a well grown man of ordinary stature. These tumors, in general, begin insensibly without pain, and are not perceived till they attract notice by an obvious swelling. In a few cases they are the consequences of a blow, or their commencement is marked by a slight attack of pain, which is temporary, and does not return during the course of the complaint. Their progress is gradual and regular, and they may often be traced back for fifteen or twenty years. They do not occasion any inconvenience, excepting what arises from their bulk and weight. They are not only free from pain, but endued with very low powers of sensibility, since neither the application of caustic nor the introduction of setons excite any troublesome degree of irritation. A friend of mine, who practised some time in the West Indies, informed me that the rats sometimes fed upon these enormous tumors, while the patient lay in a most helpless condition, and was unable to defend himself from their attacks. The tumors bore being handled with considerable roughness, without the patient suffering from this rude treatment, excepting when the pressure was made on the part of the surface corresponding to the situation of the testicle; then, indeed, the patient complained of pain, as the testicle still retained its natural sensibility, or even possessed it in an unusual degree. The growth of such immense swellings does not affect the constitution, nor produce any symptom of debility. It does not, in all cases, even impair the function of generation, as Delpech particularly mentions that neither the penis nor testicles had lost any thing of their natural faculties. In this respect, however, the symptoms are not uniform, since, in some cases, the functions of the testicles seem to be suspended or impaired. In the case mentioned by Mr. Corse Scott, the patient had not had any connexion with a female for ten years before the time Mr. Scott saw him. In the case described by Dr. Titley, the patient had lascivious desires and erections, but no emissions. While Dr. Wells states that the patient's health remained unimpaired, while his virile powers gradually diminished, as the scrotal tumor increased.

"It is necessary to investigate these particulars with great care, since the expediency of saving or of removing the testicle often depends upon the result of this investigation.

"This very singular disease of the scrotum belongs to the warmer climates of the globe, the East and West Indies, and the correspondent latitudes of Africa. It is endemic, and very prevalent among the Bambara nation, on the coast of Guinea, among whom the misfortune of having a monstrous testicle is regarded as a mark of nobility. When the patient goes out to ride, the testicle is supported on a bowl placed on the pummel of the saddle; and when of the largest size, supported on a sheet passed over the shoulders, and dragged along the ground, when he attempts to walk.

"I know of only two well authenticated cases of this disease having originated in Europe. One occurred in the practice of Mr. Liston, Surgeon to the Royal Infirmary, Edinburgh; and the other in that of Mr. Delpech, of Montpelier. There is a third case, by Mr. Hall, of Manchester, probably of the same kind, though, as the symptoms are not decidedly marked, I have not included it in the number of well authenticated cases." (P. 68.)

It appears, from Mr. Russell's account, that the chimney-sweeper's cancer is rarely met with in Edinburgh, although of such frequent occurrence in London; and, as he observes, from the Parisian surgeons being silent on the subject, it is probable that it seldom occurs in Paris.
The chapter on the Tunica Vaginalis is short, and will not afford a single extract; but the following, which treats of Hydrocele, and in which the diagnoses are canvassed, and the merits of the different modes of cure discussed, will afford us matter for several.

"[When a hydrocele] is tense, and of long standing, it is frequently accompanied with occasional acute lancinating pains, resembling those of a scirrhous testicle. Under these imposing symptoms a hydrocele has been removed as a case of scirrhous testicle, when a subsequent dissection has shewn the testicle to have been quite sound, and castration unnecessary. So calamitous a mistake points out the expediency of, in all cases, dividing the tunica vaginalis before proceeding to operate." (P. 111.)

"Another cause of deception is presented by a singular modification of congenital hernia. In this case, the lower portion of the omentum in contact with the testicle became soft from mortification: this preternatural softness misled the surgeon, who, not being prepared to expect any such change, conceived the case to be a hydrocele. The mistake, however, did not occasion any serious inconvenience." (P. 117.)

"The tendency of a hydrocele to increase is not circumscribed within any determinate limits. It sometimes attains a most enormous bulk.
Mr. Cline drew off six quarts of fluid from a hydrocele on the person of Mr. Gibbon, the celebrated historian. But by far the largest on record is one mentioned by Mursinna, which was twenty-seven inches in its long axis, and seventeen in its transverse.
The enormous size of this hydrocele almost exceeds the bounds of credibility; I shall therefore give the measurement in the author's own words: 'Diese Geschwulst betrug in ihrer grössten Länge, von oben bis unten, drey Viertheil einer Elle, und in der Mitte, im Durchschnitt von einer Seite zur andern, eine halbe Elle weniger einem Zoll.'" (P. 124.) [That is, three quarters of an ell in its greatest length from top to bottom, and, across the middle from one side to the other, an inch less than half an ell.]

We extract the following, as being a good summary of the results of the different modes of treatment pursued in cases of hydrocele, and as affording a view of the opinion arrived at by the author, after so long a practice and such considerable experience.

"Whatever may be the cause of the hydrocele, the particular case under consideration may be either idiopathic or symptomatic of some other affection. Mr. Pott relates a very instructive case, in which the hydrocele was evidently dependent on a fit of gout, as the swelling disappeared along with the departure of the gout. And Sir E. Home relates three cases symptomatic of an irritation in the urethra, in which the hydrocele disappeared upon the cure of the strictures.

"Hydrocele, though a troublesome complaint, and very annoying from its unwieldy bulk, is rarely painful, and never dangerous. It occurs at all periods of life. Infants are occasionally born with hydrocele; but in them, or in children at an early age, the hydrocele often admits of a spontaneous cure, or its cure may be promoted, and insured almost to certainty, by the application of stimulating embrocations. But hydrocele, in patients of advanced years, is a chronic, stationary complaint, which does not usually undergo any favorable change spontaneously. In a few cases, indeed, the accumulated fluid is completely removed by absorption.

"Besides a spontaneous cure of hydrocele by the natural powers of the system, an accidental cure is sometimes obtained by the rupture of the containing parts. Dr. Douglas relates two cases of tense hydroceles, in both of which the tunica vaginalis gave way upon a slight inflexion of the body. The effused fluid escaped into the surrounding cellular membrane, from which it was speedily absorbed. A like rupture is sometimes occasioned by external violence. In either case the cure is complete for a time, but not always permanent. In one case, in which the parts were greatly distended, not only the tunica vaginalis, but the integuments of the scrotum also gave way, in consequence of a great exertion. By this means the whole fluid was completely evacuated, and a permanent cure obtained.
"The very frequent occurrence of hydrocele has afforded ample ?Pportunity to try various methods of cure, and to ascertain their respective values. " The only mode of cure now employed consists in evacuating the fluid by an operation. This operation is more or less simple, according to the object which the surgeon has in view. If his sole object be to evacuate the fluid, without using any precaution to prevent a return of the collection, he has only to make an opening into the cavity of the tunica vaginalis, by which the fluid escapes. The mere evacuation of the fluid, however, produces only a temporary, or what is in general termed a palliative, cure. It is, however, so easy, and for a time relieves the patient so completely, that *t is often employed as a matter of convenience. It is simple and easily performed; but may prove troublesome or dangerous, by imprudence or mismanagement. If the surgeon is not sufficiently on bis guard, he may wound the testicle, or the artery of the spermatic cord, which, by occasioning an unrestrainable haemorrhage, has led to the loss of the testicle; or if, by undervaluing the risk of irritation, an attack of inflammation has been excited subsequent to the operation, the consequences have proved fatal. " When the surgeon has a higher object in view, by using means to prevent a relapse, the cure is termed radical. This object is attainable in two ways: either by restoring the healthy action of the parts, or by obliterating the cavity of the tunica vaginalis. For this Purpose, six different modes of operating have been employed. The temporary irritation of the tunica vaginalis by the canula or by a bougie; the introduction of a seton; the excision of the tunica vaginalis; the application of caustic; the injection of a stimulating fluid into the cavity of the tunica vaginalis; or a longitudinal division of the tunica vaginalis through its whole length. The first four methods are now almost universally abandoned. I shall therefore confine my remarks to a comparison between the merits of the cure by injection, and the cure by incision. But, whichever method is preferred, it is desirable to operate before the hydrocele has attained a large size; since, when the hydrocele is very large, the inflammation, by spreading over a more extensive surface, produces more 413. No.Q5, New Series. o 42 CRITICAL ANALYSES. violent symptoms. To avoid this inconvenience, it is usual to evacuate the fluid, watch the progress of the subsequent collection, and, when the hydrocele has attained a convenient size, proceed to perform the radical cure.
"The cure by incision is by far the most ancient, and the most generally employed; the cure by injection has been more recently introduced. Between sixty and seventy years ago, Mr. Sabatier, in the Memoirs of the Academy of Surgery of Paris, published an excellent dissertation on the cure of hydrocele, explaining particularly the cure by injection. About twenty years after, Sir James Earle published an essay, strongly recommending the cure by injection. His recommendation produced a powerful impression on the minds of the British surgeons, so that the cure by injection became the favourite operation. It has the advantage of being more easily performed, and of subjecting the patient to a shorter confinement. But, though the consecutive symptoms are in general more mild, yet a greater proportion of deaths happen in consequence of the cure by injection than of the cure by incision.* It is likewise more uncertain as to its efficacy, as the hydrocele sometimes returns more than once. The possibility of these frequent returns affords a proof that the cavity of the tunica vaginalis is not obliterated. Indeed, the frequency of the secondary effusion, immediately after the operation, leads to the same conclusion. The cure by injection, therefore, must depend upon a change in the action of the parts, not upon an obliteration of the vaginal cavity. This opinion has the support of Mr. B. Bell and Mr. Ramsden. 43 which occasionally prove fatal. This translation of the secondary affection to an organ of a different class from the one primarily affected, is a very singular deviation from the laws which regulate vicarious affections. Another remarkable peculiarity of this affection is the injurious effects of free evacuations, a practice naturally applied to a case characterized by all the symptoms of active inflammation. Yet it seems a fact well established by those who have bad experience in this disease, that the copious detraction of blood brings on those dangerous attacks upon the brain, which admit of relief only by a discharge from one of the organs originally affected. This practice, with the most satisfactory result of relieving the brain from oppression, has been put directly to the test ot experiment, by the application of blisters behind the ear, and to tne region ?f the parotid gland. Blisters, indeed, have never, so far as I know, been applied to the testicles, though, from the striking analogy of circumstances, there is great encouragement to try the Practice: for the spontaneous resolution of this sympathetic affection of the testicles is accomplished by a copious discharge from the surface of the scrotum; while the suppression of this discharge, either by exposure to cold, or by the application of repellent medicines, induces a translation of the attack to the brain, accompanied by the usual disastrous consequences. " Besides the above peculiarities, there is a distressing tendency in this sympathetic affection to cause a decay of the testicle. In these unfortunate cases, the decrease of the swelling does not stop when the testicle has been reduced to its natural size, but continues uninterruptedly till the substance of the testicle is completely wasted, nothing remaining but an empty bag, very sensible to pressure, or to any kind of irritation. There are very few cases in which there is only a partial reduction of size. I recollect but one instance of this variety.* M. 
Richter gives a very curious history of a kind of rheumatic swelling of the testicle, in which the cure was effected by the swelling subsiding below the natural size of the testicle; which, however, afterwards regained its healthy size. But this is a recovery which the patient has little reason to expect. The only well-authenticated case of this is given by kaviard, who, in performing an operation for the radical cure of hydrocele, found the testicle so completely shrunk as to be concealed between the folds of the tunica vaginalis. Upon the cure of the hydrocele, however, the testicle regained its original size." (P. 152.) Wasting of the testicles, however, is occasionally met with, not as a symptom of any other cognizable disease, but as an idiopathic affection of the organ itself. " Fortunately," however, (as our author observes,) " the decay of the testicle is not a disease of frequent occurrence, so that the information on the subject lies scattered over the works of surgical * " Dr. Hamilton's paper upon Mumps." ii authors, few of whom have ever seen more than two or three cases of the disease. Baron Larrey is the only person I know who has had practice in it upon anything like an extensive scale, and his account of the disease is exceedingly interesting. After the return of the army from the Egyptian expedition, many soldiers complained of the disappearance of the testicles, without any venereal affection. The testicles lost their sensibility, became soft, diminished gradually in size, and seemed to be dried up. The attack, in general, began in one testicle at a time. The patient did not perceive this decay till the testicle was reduced to a very small size; it approached the inguinal canal, and was about the shape and size of a horsebean. It was indolent, and of a firm consistence. The spermatic cord itself diminished in size, and partook of the atrophy. When both testicles were affected, the patient was deprived of the faculty of procreation, of which he was apprised by the absence of all desire, and by the laxity of the parts of generation. This loss influences all the interior organs. The inferior extremities become lean, and totter under them; the countenance becomes discoloured, the beard thin, the stomach loses its tone, the digestion is impaired, and the intellectual faculties deranged. Several soldiers with this infirmity were invalided. " This complaint is ascribed to the excessive heat of the climate, the fatigues and privations of war, and, above all, to the use of spirits made from dates, in which different species of Solanum were infused. The ancients are said to have procured the atrophy of the testicles by the continued application to the scrotum of the inspissated juice of hemlock. " When the atrophy is complete, art does not offer any resource; but, at its commencement, the distressing consequences may be prevented by the use of vapour-baths, dry friction over the surface of the body, urtication of the thighs, refreshing stomachic remedies, and good diet. A person may be secured against this accident by abstaining from the immoderate use of women and spirituous liquors. Since the return from Egypt, Larrey had occasion to treat this malady in many soldiers of the imperial guard, who brought it upon themselves by like excesses. In one person, this malady had, in a very short time, attained an extreme degree of malignity, insomuch as to make both testicles disappear almost entirely. 
The patient, who heretofore was of a robust constitution, with a thick beard and prominent features, lost all character of virility, and presented the appearance of an effeminate being; his beard was thin, his voice exceedingly feeble and shrill; his genitals without action, and incapable of generating. All means of cure proved ineffectual. "Similar symptoms have been produced by deep wounds upon the nape of the neck." (P. 156.) Will phrenology throw any light upon this latter consequence? Will deep wounds in the neighbourhood of the cerebellum impair its presumed functions in regulating the sexual desires, and, of course, the normal state of the subservient generative organs?
The remaining portion of the volume is occupied with observations on Neuralgic Affections of the Testicles, on Fungus Haematodes, and various other malignant and nonmalignant changes which occur in these organs; and the work concludes with a short dissertation on the Diseases of the Spermatic Cord. Our notice, however, has extended to a greater length than we purposed, and therefore we abstain from making any further quotations, and close our analysis by recommending the work to the perusal of our younger brethren, especially students, who will find much valuable information condensed into a small compass, and detailed in concise terms.

We have no doubt that every practitioner of mature age and experience must have met with cases of hysteria (as they are called), or of anomalous nervous maladies, particularly in [...] "with their prevalence among the same people in later times. In this country these complaints prevail, at the present day, to an extent unknown at any former period, or in any other nation." (P. 22.)

It cannot be doubted, by any person who draws his conclusion from an attentive examination of different classes of society, that the modern system of education, which tends to the cultivation of the cerebral faculties in a degree disproportioned to the exercise of the bodily powers, is one of the chief causes of the nervous susceptibility becoming excessively developed; the prejudicial effects of which are more evident after puberty, and during the succeeding years of life, when the mental sensibilities are more directly called into action. Civic life developes the nervous temperament and predisposition to disorder in a much greater degree than a residence in the country.
" Women, in consequence of the greater delicacy of their physical organization, and the high degree of nervous sensibility with which they are endowed, joined to their more sedentary mode of life, are easily and more strongly affected by agreeable or painful impressions; and, consequently, are much more subject to nervous diseases than men, who are comparatively exempt from various complaints to which the female sex are liable; but who, as life advances, become more subject to mental and nervous diseases, in consequence of their being more exposed to numerous sources of cerebral excitement, in the worry and turmoil of the world." (P. 25.) " The influence of moral causes in the production of nervous disorder has not been hitherto sufficiently considered; hence this disorder is often attributed either to the effects which it occasions, of to any disease with which it may happen to co-exist. It is only in cases of mental aberration that due weight is ascribed to these causes in producing the disease. M. Georget, who has ably exposed the fallacy .of some of the theories which refer nervous disorders exclusively to irritation or derangement of other organs than the brain, says, in alluding to this subject, ' If the stomach be irritated by improper food, many diseases may be occasioned; in like manner, moral causes may produce disorder of the sensitive and intellectual functions. The only occasions on which importance is attached to disturbance in the functions of the brain is when there is complete derangement; an individual may have insomnia, cephalagia, moral, intellectual, or muscular weakness, but if he is able to reason and follow certain ideas, it is said that the brain performs its functions healthily; but, on the other hand, if an individual experience loss of appetite and slight distaste for food, he is considered to be labouring under gastric disorder.'* " Persons suffering from various nervous disorders frequently live to an advanced age, and retain an appearance of health, which would be incompatible with the existence of long continued disease of the * " De la Physiologie du Systeme Nerveux.'*?Paris. parts to which the symptoms are referred. When disease or functional derangement of an organ remote from the brain co-exists with nervous disorder, it can generally be recognised by its characteristic symptoms; and, though rarely the original cause of the cerebral affection, it may react on the brain, so as to keep up and aggravate the nervous symptoms: where a high degree of susceptibility exists, it also frequently proves an exciting cause of relapses." (P. 29.) Nervous disorders, for the most part, are not accompanied by fever, nor followed by serious consequences. They vary greatly in their symptoms and progress, and are often of long duration; the symptoms being either constantly present, or recurring at regular or irregular periods. Some are productive of much suffering; while others do not occasion pain, and are felt merely as an inconvenience. .In some cases, the symptoms frequently shift their situation, affecting various parts simultaneously, or in succession; at other times, they are concentrated towards some part in particular. Although, ln general, capable of being greatly alleviated by medical treatment, they sometimes resist all the efforts of art, and either cease spontaneously, or become merged into some other disease.
The general observations Mr. Lee offers upon Hysteria are correct, as far as they go; he does not, indeed, profess to enter very deeply into the subject. He is opposed to the opinion of those, and that justly, who ascribe ail hysterical complaints to some uterine derangement, and he thinks that the evidence that hysteria is essentially a disorder of the brain may be deduced from observation of its causes, symptoms, and of the remedial measures which are productive of the greatest benefit. regions, with sense of weight in the latter part; the legs and arms are debilitated to such a degree that she cannot sustain herself, she sheds tears, and is attacked by convulsions. On recovering, she is weakened and depressed, feels general indisposition for some hours, and sometimes experiences great pain until the menstrual secretion is established.
When this has continued two or three days without intermission, she is entirely relieved, and recovers her health and strength until the following monthly period, when similar symptoms recur.
Bleeding, leeches, baths, and vegetable diet relieve the patient, without curing her entirely."

" A girl, æt. sixteen, in whom menstruation was regularly performed, was suddenly seized with convulsive attacks, which returned two or three times a week; she was aware an hour previous to the attack of its being about to recur. It commenced with slight shivering, and a sensation of cold vapor rising from the abdomen to the head; then succeeded loss of consciousness, with convulsive movements of the limbs. After trying various antispasmodic remedies without effect, a purgative was administered, which occasioned copious alvine evacuations, and the passage of two lumbrici; several others were subsequently passed, after which the patient was cured, and had no recurrence."

"The remedies which are most effectual in mitigating and removing the symptoms, are those which have an immediate action on the nervous system. Such direct effects as are witnessed from the influence of the mind in the prevention of an attack, and on the progress and duration of hysteria, could only occur in a disease of cerebral origin." (P. 57.)

Mr. Lee passes much too rapidly over Epilepsy and Chorea, and we do not discover, in his comments upon these subjects, anything that could profitably arrest our attention, or be worthy our readers' notice.
In the second part, Mr. Lee treats on some special affections of voluntary motion and sensation, and on hypochondriasis.
Deranged Muscular Action, depending on Cerebral Excitement.
Fixed contraction of various muscles may be occasioned by the operation of the above-named causes, or it may supervene on some other form of nervous disorder. The degree of contraction varies according to the nature of the part implicated and other circumstances; it is sometimes so strong as to require the employment of much force to overcome the resistance. The efforts of the patient to overcome the rigidity are in some cases unavailing, but generally the patient retains more or less power of moving the part.

" The muscles of the superior extremity appear to be more frequently the seat of fixed contraction than those of the inferior extremity. The flexors of the elbow are often affected, this joint being maintained in a state of flexion or semiflexion while the patient is awake; but if the attention can be abstracted, the part will often become partially relaxed, and may be moved with comparative facility. The joint may sometimes be easily extended, but reverts to the flexed condition as soon as the extending force is withdrawn; at other times, the contraction is attended with pain, which any attempts at extension aggravate. In some cases, the skin of the part is morbidly sensitive to the touch. The fingers are also frequently contracted, the hand being held firmly clenched, particularly when attempts are made to open it, or when the patient's attention is otherwise directed to the complaint. This state is not unfrequently combined with contraction of the flexors of the elbow, or with other nervous symptoms.
" The muscles which raise the lower jaw are occasionally implicated, producing lockjaw more or less complete. When the muscles are so rigidly contracted as to prevent the introduction of substances into the mouth, considerabl e embarrassment is occasioned; no apprehension need however be entertained of the patient suffering Materially from hunger or thirst, as when these sensations require to be allayed, sufficient relaxation will take place to allow liquids to be introduced.
" Wry-neck is sometimes produced from the muscles on one side of the neck becoming the seat of the contraction. Other parts of the body are also liable to this affection.
"The disorder is not in general of long duration, but relapses n?t unfrequently occur. In all its varieties, the affected parts become relaxed during sleep, and may then be freely moved. Ihis cjrcumstance will serve to distinguish the more obstinate cases from Slmilar complaints of a purely local and more permanent character.' The following case is given.
" An unmarried female, eet. twenty, was admitted into St.
George's Hospital, in July 1827, having, two months previously, fallen and hurt her left elbow and hip. Considerable pain and discoloration of the elbow were caused by the accident, but subsided after the employment of a liniment. When received into the hospital, the elbow-joint was in a state of semiflexion, and the fingers and thumb firmly closed. While the patient was awake, manual attempts to overcome the contraction caused a kind of hysteric paroxysm. She complained of pain extending from the elbow to the wrist: this was aggravated by moving the forearm, and by lightly pinching up or tapping the skin. The sensibility of the skin in other parts of the body was also morbidly increased, but her general health was not impaired. Mr. Brodie, whose patient she was, prescribed the application of the spirit lotion of the hospital to the elbow, and the following medicine: Tinct. Valer., Ammon., Vini Aloes aa ʒi. sexta quaque hora ex aqua.
" The patient feeling relieved by these means, they were continued, with the occasional employment of the shower-bath, for about a month; at the expiration of which period, the pain having entirely subsided, and the patient having regained the use of the elbow and hand, (the contraction recurring only for a short time at distant intervals,) she was placed on the out-patients' list." (P. 77.) The case next related occurred in an hospital at Florence, and is a good illustration of the mistaken views which even experienced practitioners not unfrequently take of nervous disorders.
"Dec. 10, 1830. Three months ago, a girl, set. seventeen, in whom menstruation was occasionally irregularly performed, but healthy in other respects, on descending into a close cellar, fainted, and fell to the ground. In falling, she struck her neck against some projecting body; abscess formed in the situation of the injury, was opened, and healed at the expiration of six weeks. Some days before her admission to the hospital, she lost the use of her left arm, and, shortly after, that of the left leg. The extremities of the right side subsequently became paralytic, and she was brought to the hospital in this state in the beginning of November. The intellect, the functions of respiration and digestion, continued unimpaired, as did those of the detrusor urinse and sphincter ani muscles. The case was considered to be inflammation of the spinal marrow.
Repeated bleeding, the application of leeches and blisters along the spine, low diet, the exhibition of strychnine, and the formation of a sore by caustic in the situation of the previous abscess, produced no amelioration. " A fortnight ago, she suddenly heard of the death of a near relation; and from that time constant movements of the limbs succeeded to the state of paralysis in which they had previously lain.
These movements have continued ever since; the arms are incessantly beating against the breast, the thighs and legs alternately bent and extended with violence. Though pale, her countenance does not indicate the existence of organic disease; the intellectual and vital operations are not impaired; she answers questions readily; the tongue is clean, the pulse weak. The prognosis delivered by her physician is unfavourable. She takes no medicine, but leeches are occasionally applied along the spine. " I ascertained that the motions of the limbs, though constant during the day, did not prevent her from sleeping at night, at which period they ceased. They were more violent when any one approached the bed, and conversed with her, but, when she was not conscious of being observed, their violence was lessened, and they occasionally ceased altogether for a few seconds. The patient had the power of so far interrupting the movements as to put her hand to her head, when directed so to do, and to point to anything she wanted.
" From a consideration of the peculiarities of the case, and of its long duration, I was led to infer that, though the paralysis might have been occasioned by some irritation of the nerves, consequent on the healing of the abscess, the present symptoms did not indicate disease of the spinal marrow; that the complaint was essen-tially nervous, having great analogy with chorea, the motions of the limbs being, as in that disease, in some degree kept up by habit. This view of the case I communicated to her physician, who did me the honour to ask my opinion.
" Dec. 24th. The depletory measures have been discontinued, and the quantity of food increased, since the 14th. The patient has had, during the last two days, several hysterical symptoms; such as tremulous motions of the eyelids, loss of voice, occasional fits of laughter. The movements of the limbs are less violent, and at times cease altogether; she sleeps well, and her appetite is good. " Dec. 30th. The patient, having been allowed a more full diet, is much improved in appearance; the movements are now almost entirely confined to the hands, and cease if her attention can be drawn off from her complaint. u From this time she recovered rapidly, and was dismissed from the hospital in January." (P. 83 filbert, had existed three years, but only became painful after an attack of fever, six months previous to her admission. The pain was not constant, but came on at intervals, more or less distant, being always increased by mental agitation. The skin covering the breast and surrounding parts was exquisitely sensitive when touched, the pain remaining several hours after the manual examination. Leeches and cold lotions were employed, without advantage. The patient's general health was good, and menstruation regular. She was ordered a belladonna plaster to the painful breast, and the following draught every six hours. R. Mist. Camphorse 51., Decoct. Aloes comp. ^iij., Tinct. Humuli 3i.
These measures relieved the pain and morbid sensibility of the skin; they were continued for about three weeks, when, these symptoms having entirely subsided, and the tumor being somewhat diminished in size, the patient was discharged.

" Case II. A young girl, æt. twelve, was received into St.
George's Hospital, in October 1829, having a tumor, of the size of a walnut, in the left breast, which came on attended by fever two years previously, and attained the size of a small orange, but had since gradually subsided. At the time of her admission, the tumor was occasionally painful; the pain recurring at irregular periods, extending down the arm of the same side, and lasting about ten minutes each time. The skin of the breast and its neighbourhood was morbidly sensitive to the touch. These symptoms had, however, existed only since the month of June. The general health was good, and the sleep undisturbed. The employment of antiphlogistic means had not been attended with advantage.
" She was ordered to take, every morning and noon, Vini Ferri et Vini Aloes aa 5U ex aqua; and the following embrocation to be rubbed on the breast twice a-day: R. Linim. Camph. 3iss., Tinct. Opii iij. M. fiat embrocatio. At the expiration of a week, the symptoms were greatly alleviated, and she was transferred to the ?ut-patient's list." (P. 134.) Pain and preternatural sensibility of the skin often affect the back, especially some part along the course of the spine, and the disorder is liable to be mistaken for spinal disease, if attention be not paid to the diagnostic marks. This, Mr. L. remarks, is the more likely to happen as the affection is frequently attended by debility, more apparent than real, of the lower extremities, particularly if the patient have been long confined to the recumbent position. aggravated by pressing on the part, or by lightly patting and pinching-up the skin. The .morbid sensibility is not however confined to the part to which the pain is referred, but is in general more extensively diffused over the back, and occasionally affects at the same time other parts of the body, the patient complaining of pain being induced by the slightest touch. The pain is not always confined to the same point, but shifts its situation, and is often absent altogether for a short period. The patient sometimes complains of weakness in the loins when attempting to stand or sit upright. The symptoms, however, vary at different times, the patient being on some days better, on others worse. The general health is not materially impaired, and the sleep is not disturbed by the complaint, which is occasionally of long duration, without any increase in the severity of the symptoms. " The diagnosis between this disorder and spinal disease is easy. In spinal disease, the pain, when present, is fixed to one point, and requires pretty firm pressure on the part to increase it; while, in the nervous affection, it is more variable, and shifts its situation.
As disease of the spine advances, the patient's appearance becomes altered, fever supervenes in most cases, the limbs become paralytic, abscess frequently shews itself in the loins or groin, and projection of one or more of the spinous processes will be perceptible when the dorsal vertebræ are affected. The morbid sensibility of the skin to the touch is not, in general, present in spinal disease: I have however seen in one case this symptom co-exist with disease of the spine, in which there was manifest projection of the spinous processes of two dorsal vertebræ, and a paralytic state of the lower limbs; the skin of the back, breast, and abdomen, was acutely sensitive. The patient was a female who had formerly been subject to hysteria. (P. 136.)

" A young woman, whose countenance indicated good health, was admitted into St. George's Hospital, in the beginning of April 1829, complaining of acute pain in the small of the back, which had existed, with occasional intermissions of three or four days, for three years and a half. It was sometimes transferred to other parts of the back, and was aggravated at her monthly periods; menstruation was otherwise easy and regular. She had been salivated by mercury, and confined to the horizontal position for twelve months, during part of which time a caustic issue was kept open on either side of the spine.

" Though her general health was unimpaired, she was unable to stand without support. Pressure on any part of the spine occasioned pain. The skin covering the spine, and in other parts of the body, was morbidly sensitive to the touch. She could move the legs freely when lying or sitting down, and her sleep was undisturbed. She was ordered to take, three times a-day, Spirit. Ammon. comp. ♏x., Misturæ Camph. ʒx.; and was allowed to get about the ward on crutches.

" At the expiration of a fortnight, she considered herself to be better, and had gained more strength in the legs. The medicine was continued, and a large blister applied to the loins.

" In the beginning of May, the pain and sensibility of the skin were greatly mitigated; the patient made great objection to the blister, which she requested might not be repeated: a fresh blister was nevertheless applied, and the same medicine continued.

" A third blister was applied after a few days; and, in the middle of May, the patient was dismissed the hospital cured, being able to walk without any other assistance than a stick." (P. 139.)
|
v3-fos-license
|
2018-04-03T01:28:11.875Z
|
2017-10-30T00:00:00.000
|
206456704
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsami.7b06728",
"pdf_hash": "68621fc5594125b9fdaefba084454fc1a69b4e71",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43257",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "68621fc5594125b9fdaefba084454fc1a69b4e71",
"year": 2017
}
|
pes2o/s2orc
|
Revealing the Bonding Environment of Zn in ALD Zn(O,S) Buffer Layers through X-ray Absorption Spectroscopy
Zn(O,S) buffer layer electronic configuration is determined by its composition and thickness, tunable through atomic layer deposition. The Zn K and L-edges in the X-ray absorption near edge structure verify ionicity and covalency changes with S content. A high-intensity shoulder in the Zn K-edge indicates strong Zn 4s hybridized states and a preferred c-axis orientation. 2–3 nm thick films with low S content show a subdued shoulder, indicating less contribution from Zn 4s hybridization. A lower energy shift with film thickness suggests a decreasing bandgap. Further, ZnSO4 forms at substrate interfaces, which may be detrimental for device performance.
Experimental Methods

X-ray photoelectron spectroscopy (XPS) was performed using a PHI VersaProbe Scanning XPS Microprobe, which uses Al Kα radiation of 1486 eV in a vacuum environment of 5 × 10⁻⁷ Torr. The data are not shown here, as they were published in prior work. 1 XRD on the samples was performed in a PANalytical X'Pert PRO X-ray diffraction system.
The grazing incidence angle ω was set to 0.5°, and Cu Kα radiation with a wavelength of 1.540598 Å was used. Relevant data from previous work are summarized below.
ALD of Zn(O,S) films
The substrates were loaded into a customized flow-type ALD system. The substrate temperature was maintained at 160 °C. Diethylzinc (DEZ), distilled H2O, and a gas mixture of 3.5% H2S in N2 were used as precursors to deposit Zn(O,S). The pulse time for all precursors was 0.1 s; their residence time in the reaction chamber was 2 s, to ensure complete saturation of the available surface sites. After each precursor pulse, 45 s were allotted for byproduct evacuation. Conventionally, ZnO and ZnS films were deposited in a laminar fashion, pulsing DEZ into the reaction chamber before its oxidation with either H2O or H2S. Both ALD films are well characterized in the literature. 2
XANES for Zinc K-edge measurements
At BL 11-2, the samples were positioned at 45° from both the incident X-ray beam and the detector. The X-ray slits were set to approximately 1 mm × 10 mm. A 30-element Ge solid-state detector array was used to gather the total fluorescence yield (TFY) data.
XANES for Zinc LIII, LII-edge measurements
Total electron yield (TEY) spectra were gathered at BL 10-1. To measure the sample drain current for the TEY data, all samples were loaded on the sample stick with carbon tape and were electrically grounded with carbon paste.
Zinc K and L edge spectra
All spectra were background-subtracted and normalized in the energy region from 9690 to 9700 eV.
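As a rough illustration of this normalization step, the following is a minimal numpy sketch; the 9690-9700 eV window comes from the text, while the pre-edge fitting window and array names are assumptions, not the exact procedure used here.

```python
import numpy as np

def normalize_xanes(energy, mu, pre_edge=(9620.0, 9650.0), norm=(9690.0, 9700.0)):
    """Subtract a linear pre-edge background, then scale so the mean
    absorption in the normalization window (from the text) equals 1."""
    pre = (energy >= pre_edge[0]) & (energy <= pre_edge[1])
    slope, intercept = np.polyfit(energy[pre], mu[pre], 1)   # linear background fit
    mu_sub = mu - (slope * energy + intercept)               # background-subtracted
    post = (energy >= norm[0]) & (energy <= norm[1])
    return mu_sub / mu_sub[post].mean()                      # unit post-edge level
```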
The Zn LIII,LII-edge XANES total electron yield (TEY) spectra of thick samples (300 cycles) deposited on SiO2 are shown in Fig. S2a. The same spectra are replicated in the main text, except that only the LIII-edge is shown there. The LIII-edge was the main focus of our analysis, as the LII-edge is both weaker in intensity and does not start at the same energy for all the samples. In Fig. S2b, the derivative spectra cover a longer range than what is shown in the main text. The most important region of the derivative spectra is the first peak, and hence a shorter range is plotted in the main text. The sharpness and intensity of the first peak are an indicator of ionicity at the metal edge: with increasing sulfur content, the ionicity increases. While there are a few differences later in the spectra, the spectra also suffer from a worse S/N ratio there.
In Fig. S3a, the derivatives of the thick (darker curves) and thin (lighter curves) samples are plotted together. The thicker samples all have an earlier onset of the first peak compared with their thinner counterparts, which suggests that the band gaps of the thicker films are smaller than those of the thinner films. It can also be noted that fewer peaks appear for the thinner samples, which is expected given that long-range order has not yet developed.
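A short sketch of how such derivative spectra and onset comparisons can be computed is given below; the 10% threshold is an arbitrary illustrative choice, not the criterion used in this work.

```python
import numpy as np

def derivative_spectrum(energy, mu_norm):
    """First derivative of a normalized spectrum, rescaled to unit maximum."""
    d = np.gradient(mu_norm, energy)
    return d / np.abs(d).max()

def first_peak_onset(energy, deriv, frac=0.1):
    """Energy where the derivative first exceeds a fraction of its maximum;
    a crude proxy for the absorption onset used to compare band-gap trends."""
    idx = np.argmax(deriv >= frac * deriv.max())
    return energy[idx]
```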
The derivatives of the thick and thin films at the Zn LIII-edge are plotted in Fig. S3b. The 10% samples overlap substantially, and indeed their XANES spectra look similar. The thin 20% and 33% samples have a sharper first peak, which may stem from the presence of zinc sulfate.
The spectra shown in Fig. S4 are similar to Fig. 4 in the main text; the difference is that the LII edge is also provided here, and the derivative spectra cover a wider range.
In Fig. S5, as in our previous work on thin films (O K-edge), a comparison is shown of the films deposited by the conventional nanolaminate method and by our Diff'd method. The spectra are quite similar to each other even though the ALD pulse sequences were drastically different. This is a strong indication that the films should behave similarly, as one would expect their geometric and electronic structures to be quite similar. It also illustrates that more investigation is necessary into the growth of ALD thin films, particularly ternary sulfide films.
GIXRD measurement
The crystallographic data for the samples were obtained by GIXRD; the data are shown in our previous work. Below is the curve for ZnO, which is necessary to explain the large A peak present in the Zn K edge of our ZnO sample, a peak that is otherwise far less prominent in literature spectra. It can be seen in Fig. S1 that the ZnO grown by ALD has a strongly preferred orientation favoring the (200) plane. Due to this preferred orientation, with the measurement configuration used we probe a substantial sigma character of the Zn 4s orbitals that contributes to the A peak, and hence the A peak we observe is much greater than what is reported in the literature.
Simulations
We demonstrate the nature of the bonding through multiple-scattering simulations of ZnO and ZnS clusters. Theoretical XANES spectra are simulated using the FEFF9 code, 8,9 based on Green's-function multiple-scattering theory, in which the parameters SCF (which specifies the radius of the cluster used for full multiple scattering during the self-consistency loop) and FMS (which computes full multiple scattering within a sphere of radius r centered on the absorbing atom) were varied. We first obtained spectral agreement between simulated and experimental wurtzite ZnO and sphalerite ZnS at the Zn K edge.
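To make the SCF and FMS cards concrete, here is a hypothetical Python snippet that writes a schematic FEFF9 input deck; the radii, coordinates and potential list below are illustrative placeholders rather than the values used for the simulations in this work.

```python
# Writes a skeletal feff.inp for a Zn K-edge calculation. The SCF card sets
# the self-consistency cluster radius and the FMS card the full
# multiple-scattering radius, as described above; all numbers are examples.
feff_inp = """TITLE  Zn K-edge, wurtzite ZnO cluster (illustrative)
EDGE    K
S02     1.0
CONTROL 1 1 1 1 1 1
SCF     5.0              * SCF radius (Angstrom), varied in the study
FMS     8.0              * FMS radius (Angstrom), varied in the study
XANES   4.0
POTENTIALS
*   ipot   Z   element
    0      30  Zn        * absorbing atom
    1      30  Zn
    2      8   O
ATOMS
*   x        y        z        ipot
    0.00000  0.00000  0.00000  0    * absorber at the origin
    0.00000  1.87560  0.32650  2    * a nearest-neighbour O (placeholder)
END
"""

with open("feff.inp", "w") as fh:
    fh.write(feff_inp)
```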
LDOS
The LDOS was calculated with FEFF9 for all the simulated XANES spectra that have been shown. The partial density of states (pDOS) was then aligned to the Fermi level by shifting the total density of states (tDOS) so that the onset of the band gap was at 0 eV. Next, the experimental curves were aligned by shifting them so that their highest intensity was in line with the highest intensity of the pDOS of the absorbing anion.

Figure S6: (a) The Zn LIII,LII-edge XANES TEY spectra of Zn(O,S) thin films of varying S composition deposited on a silicon wafer (samples S10_300, S10_20, S20_300, S20_20, S33_300, S33_21; axes: normalized derivative absorption vs. energy (eV)). The 10% sample looks much like its thicker counterpart except for a stronger shoulder. Both the 20% and 33% samples are also similar to their thicker films, except the shoulder now appears as a peak. We speculate that some zinc sulfate might be present at the interface, as we observed in our previous work. (b) The normalized derivative spectra for the samples in (a). As seen in the main text, the sample with the highest sulfur content has the strongest and sharpest first peak in the derivative, meaning it is the most ionic; it is followed by the 20% and the 10% samples, respectively.

Figure S7 (caption fragment): "… differences, suggesting that the films have similar electronic and geometric structures and hence should behave similarly."

Figure S8: (a) The Zn LIII-edge XANES TEY spectra of Zn(O,S) films of varying S composition deposited using our diffusion-facilitated deposition method (Diffd), demarcated by "n", on a silicon wafer. The 10% sample, which is half as thick, looks very much like the thick samples grown the conventional way (Figure 3). The 33% sample, also only half as thick as the thick sample, looks quite different from it. It appears the S-rich films have a wide range of tunability (oxidant ratio and thickness). The 50% thin sample looks most similar to the 33% thin film grown by either deposition strategy, but its shoulder and main peaks are less pronounced. (b) Zn LIII,LII-edge XANES TEY spectra, the same as in (a) but also showing the Zn LII edge.
|
v3-fos-license
|
2024-03-09T06:17:38.702Z
|
2024-02-01T00:00:00.000
|
268275944
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "f8e4287c614b0899d25f94b45579db0d2e22b184",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43258",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "300137ead919d3a1481818b588d32dad9c56bbaa",
"year": 2024
}
|
pes2o/s2orc
|
Perceiving a need for dietary change in adults living with and beyond cancer: A cross‐sectional study
Abstract Background Many people living with and beyond cancer (LWBC) do not meet dietary recommendations. To implement a healthier diet, people LWBC must perceive a need to improve their diet. Methods Participants included people diagnosed with breast, prostate or colorectal cancer in the UK. Two binary logistic regression models were conducted with perceived need for dietary change as the outcome (need to improve vs. no need). Predictor variables included demographic and clinical characteristics, receipt of dietary advice, and either body mass index (BMI) or adherence to seven relevant World Cancer Research Fund (WCRF) dietary recommendations. Results The sample included 5835 responses. Only 31% perceived a need to improve their diet. Being younger (odds ratio [OR] = 0.95, 95% confidence interval [CI] = 0.94–0.95), female (OR = 1.33, 95% CI = 1.15–1.53), not of white ethnicity (OR = 1.83, 95% CI = 1.48–2.27), not married/cohabiting (OR = 1.32, 95% CI = 1.16–1.52) and having received dietary advice (OR = 1.63, 95% CI = 1.43–1.86) was associated with increased odds of perceiving a need to improve diet. This association was also seen for participants with two or more comorbidities (OR = 1.31, 95% CI = 1.09–1.57), those not meeting the recommendations for fruit and vegetables (OR = 0.47, 95% CI = 0.41–0.55), fat (OR = 0.67, 95% CI = 0.58–0.77) and sugar (OR = 0.86, 95% CI = 0.75–0.98) in the dietary components model, and those who had a higher BMI (OR = 1.53, 95% CI = 1.32–1.77) in the BMI model. Conclusions Most of this sample of people LWBC did not perceive a need to improve their diet. More research is needed to understand the reasons for this and to target these reasons in dietary interventions.
[24][25][26][27][28] While compliance with fruit and vegetable recommendations is the most commonly reported indicator of dietary quality,23,24 Winkels and colleagues also reported poor adherence to the recommendations for meat, fibre and energy-dense nutrient-poor foods in 1196 people diagnosed with colorectal cancer.27 Similar findings were observed in a small sample of women diagnosed with breast cancer, although better adherence to the red meat recommendations was found.28 However, low adherence rates may be explained by limited knowledge about what constitutes a healthy diet. Despite a documented desire for dietary support and advice in this population,29,30 a recent scoping review by Johnston and colleagues reported that the dietary needs and preferences of people LWBC do not align with their access to guidance and information.31 People LWBC may not recognise that their diets are suboptimal,30,32 with data from qualitative research suggesting that a belief that one's diet is already healthy enough is a barrier to instigating healthful dietary changes in people LWBC.30 There is a documented 'optimism bias' in dietary perceptions, where people overestimate the healthfulness of their diet.33,34 This misperception is exhibited in the general population,35,36 and in clinical populations, including people LWBC,37,38 and may mean that individuals do not believe in, nor identify a need to, improve their diet.34,35 This has been observed in a nationally representative sample of people LWBC, where Xue et al. reported low conformance between perceived and actual dietary quality, with 56% of participants being categorised as incorrectly optimistic about their diets.38 Many sociodemographic, clinical and dietary factors might contribute to these misperceptions of dietary quality.35 In Xue et al.'s study, each 10-year increase in age was associated with greater odds of being incorrectly optimistic about dietary quality,38 while different age groups had a differential effect on being incorrectly optimistic in Variyam et al.'s study of the general population.36 While there are only a few studies examining predictors of misperceptions of dietary quality,36,38 previous research is inconclusive regarding associations between dietary quality perception (whether correct or not) and demographic factors such as age38-41 and sex.37,38,40 While Batis et al. and Gago et al. observed similarities in the mean ages of participants across differing perceptions of dietary quality,40,41 Sullivan et al. reported that adults who perceive their diet to be of higher quality tend to be older than those who report lower self-ratings of dietary quality.39 However, these discrepancies may be attributed to differences in how perceived dietary quality is measured across studies, either dichotomously40 or using rating scales.39,41 Further research suggests that individuals with higher education levels tend to self-report better dietary quality,39 but the accuracy of this is compromised by Xue et al.'s observation that this group is also more likely to demonstrate an optimism bias.38 There is more consistent evidence that people with overweight and obesity tend to perceive their dietary quality to be poorer than those in healthier weight categories.39,40
Similarly, when asked if they consider their diet to be healthy, people who rate their dietary quality as healthier (without investigation of the accuracy of this perception) tend to demonstrate significantly higher intake of fruits and vegetables, fish, and fibre37,40 and lower intake of sugary and salty foods.40 However, there has been little research on whether similar factors are associated with a perceived need to change diet.
The importance of investigating perceptions surrounding a need for dietary change is underscored in Michie et al.'s capability, opportunity and motivation model of behaviour (COM-B).42 As part of their psychological capability, the individual must first be able to evaluate their current behaviour against the potential benefits of modifying this behaviour. Recognising that there are areas for improvement in their diet quality, and therefore perceiving a need to improve diet, is then a contributing factor in determining whether a change in dietary behaviour will take place, and can influence the behaviour directly, or indirectly via motivation. The current study therefore aimed to identify the proportion of adults LWBC perceiving a need to improve diet and to investigate factors influencing this perception, including demographic and clinical characteristics, body mass index (BMI) and intake of specific dietary components.
| Design
Secondary analysis was conducted on the cross-sectional data collected during the pre-trial stage of the Advancing Survival after Cancer Outcomes Trial (ASCOT).43 ASCOT is a randomised controlled trial investigating a brief lifestyle advice intervention in people diagnosed with breast, prostate or colorectal cancer that began recruitment in 2015.43 The intervention targeted nine health behaviours, including physical activity, diet, alcohol and smoking, where a change in a calculated composite health behaviour risk index was the primary outcome of the trial. The 'Health and Lifestyle After Cancer Survey' was used to identify initial interest in the ASCOT trial and comprised questions about demographics, clinical characteristics and health behaviours in people LWBC.
| Procedure
Eligible participants for the 'Health and Lifestyle After Cancer' questionnaire were identified via 10 participating NHS hospital sites across London and Essex. Patients who had received a diagnosis of breast, colorectal or prostate cancer between 2012 and 2015 were mailed the survey pack. These dates were chosen as the survey was used to identify interest in the ASCOT trial, where participants were only patients who had completed primary curative treatment.43 Packs were sent out between February 2015 and November 2017, and returned questionnaires were accepted until 4 January 2018. Ethical approval was obtained through the National Research Ethics Service Committee South Central-Oxford B (reference number 14/SC/1369).
| Participants
Participants in this study were over 18 years old, had received a primary diagnosis of breast, prostate or colorectal cancer at the participating hospital sites, were able to provide consent for themselves and were without cognitive impairment. Although the sample primarily comprised patients diagnosed between 2012 and 2015, the final sample in the analysis included patients diagnosed outside of these dates (range: 1994-2017). The recorded date of diagnosis was that of their most recent diagnosis of breast, prostate or colorectal cancer, as some participants had received more than one diagnosis. Exclusion criteria (patient deceased or deemed inappropriate to contact) were intentionally limited, to maximise the reach of the study and to minimise the burden of survey administration at sites.
| Demographic characteristics
Participants were asked to self-report age (in years), sex (male, female), marital status (married or living with partner, separated, divorced, widowed, single), and education level (ranging from 'no formal qualifications' to 'Masters/PhD/Postgraduate Certificate in Education (PGCE) or equivalent'). Marital status was collapsed into two categories due to small numbers in the non-married/cohabiting groups (married/cohabiting; separated/divorced/widowed/single). Education level was collapsed into four categories (no formal qualifications, General Certificate of Secondary Education (GCSE)/vocational, A-level, degree or higher). A-levels are equivalent to school-leaving qualifications such as the High School Diploma. Ethnicity information was collected via 15 possible responses to the question 'Which of these best describes your ethnic group?' and collapsed into two categories (white, any other ethnicity), as numbers in the other ethnicity categories were low (9.5%).
| Clinical characteristics
The questionnaire asked participants 'Which of these types of cancer have you been diagnosed with?' (breast, prostate, bowel [colorectal], other) and whether it had spread to any other parts of their body. Cancer type was recorded as the most recent of the three cancer types reported by participants. Time since diagnosis (in months) was based on the date of participants' most recent diagnosis of breast, prostate or colorectal cancer and the date the questionnaire was received back at the university. Treatment type was assessed by 'What treatment(s) have you had for this cancer? Please tick all that apply' (surgery, radiotherapy, chemotherapy, hormone therapy, active surveillance, none, not sure), meaning some participants selected multiple treatment types. As many participants specified biological therapy under the 'other' category, we created an additional category specifically for this treatment type. BMI was calculated from self-reported height and weight. Participants were classified as underweight (BMI < 18.5), healthy (BMI ≥ 18.5 and < 25), overweight (BMI ≥ 25 and < 30) or obese (BMI ≥ 30).44 The 'underweight' category was combined with the 'healthy' category due to low numbers (1.36%). To assess the number of comorbid conditions, participants were asked to tick all that applied in response to 'Have you ever had any of the following health problems?': osteoporosis, diabetes, asthma, emotional or psychiatric illness, stroke, Parkinson's disease, Alzheimer's disease or dementia, lung disease, arthritis, angina, heart attack, heart murmur, irregular heart rhythm, any other heart trouble, another cancer, or other health problems not listed. Number of comorbid conditions was collapsed into four categories (0, 1, 2, ≥3).
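A short sketch of the BMI derivation and category collapsing just described (the cut-offs come from the text; the function itself is ours):

```python
def bmi_category(height_m: float, weight_kg: float) -> tuple[float, str]:
    """BMI from self-reported height and weight, binned as in the text;
    'underweight' is merged into 'healthy' because of low numbers."""
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return bmi, "healthy/underweight"   # includes BMI < 18.5
    if bmi < 30:
        return bmi, "overweight"
    return bmi, "obese"
```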
| Dietary advice
Participants were presented with the question 'In the time since you were first diagnosed with cancer, did a health professional ever recommend any of the following?' (yes, no). The options included 'Eating more fruit and vegetables', 'Avoiding foods or drinks high in fat, sugar or salt', 'Eating less red or processed meat' and 'Reducing the amount of alcohol you drink'. If participants answered yes to any of these, they were classified as having received dietary advice.
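The classification rule above amounts to a simple any-of test, sketched here with placeholder field names:

```python
ADVICE_ITEMS = (
    "more_fruit_veg", "avoid_fat_sugar_salt",
    "less_red_processed_meat", "reduce_alcohol",
)

def received_dietary_advice(responses: dict[str, str]) -> bool:
    """True if the participant answered 'yes' to any of the four advice items."""
    return any(responses.get(item) == "yes" for item in ADVICE_ITEMS)
```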
| Dietary components
Seven relevant dietary components were assessed using an adaptation of the Dietary Instrument for Nutrition Education Food Frequency Questionnaire (DINE FFQ), asking about participants' current diet.45 The DINE FFQ has been validated in the general population45 and was chosen after a review of validated food frequency questionnaires and a review of how diet was assessed in previous studies of people LWBC.43 Based on a review of the National Diet and Nutrition Survey and the Low Income Diet and Nutrition Survey, some food items were updated to ensure that the UK diet was reflected in the items. The adapted DINE FFQ therefore included more ethnically diverse foods that are presently available, as well as enabling estimation of the relevant WCRF diet components.43 The items used to estimate intake of each component have previously been described for this dataset.46 The seven dietary components in this study were selected based on the WCRF/AICR recommendations and in line with the national United Kingdom (UK) recommendations46,47 and included the recommendations for the intake of fibre (≥30 g per day),5,48 fruit and vegetables (≥5 portions [400 g] per day),5 red meat (<500 g per week),5 processed meat (none),5 fat (<33% of calories from fat per day),5,49 sugar (<5% of calories from free sugars per day),5,48 and alcohol (≤14 units per week).5,50 Adherence to each of these recommendations was operationalised using a scoring system implemented and described previously for this trial,46 sketched in code after the next subsection.

| Perception of the need for dietary change

Participants' perception of the need for dietary change was assessed using the question 'Which of the following best describes you at the present time?'. Response options included 'I think I should have a healthier diet'; 'I don't think I need to change my diet'; 'Don't know'. This item was custom-made for the study.
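As a rough illustration of the adherence scoring described above for the seven WCRF-aligned components, the following sketch applies the stated cut-offs; the field names are placeholders, and the published scoring system may differ in detail.

```python
def wcrf_adherence(d: dict[str, float]) -> dict[str, bool]:
    """Meets/does-not-meet flags for the seven cut-offs listed in the text."""
    return {
        "fibre":          d["fibre_g_day"] >= 30,            # >=30 g/day
        "fruit_veg":      d["fv_portions_day"] >= 5,         # >=5 portions/day
        "red_meat":       d["red_meat_g_week"] < 500,        # <500 g/week
        "processed_meat": d["processed_meat_g_week"] == 0,   # none
        "fat":            d["pct_energy_fat"] < 33,          # <33% of calories
        "sugar":          d["pct_energy_free_sugar"] < 5,    # <5% of calories
        "alcohol":        d["alcohol_units_week"] <= 14,     # <=14 units/week
    }
```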
| Missing data
Multiple imputation in SPSS was used to handle missing data and to reduce bias introduced by incomplete cases. The imputation model included all variables in the analysis. Missing data analysis found that 65.3% of cases had missing data, with 4.8% of 332,595 values missing for the included variables. Little's MCAR test determined that these values were not missing completely at random. Imputation was performed on the dietary perception variable (1.1% missing); the demographic and clinical variables age (0.6% missing), sex (0.3% missing), ethnicity (0.5% missing), marital status (0.3% missing) and highest education level (9.4% missing); the height (2.3% missing) and weight (4.4% missing) variables used to calculate BMI; the cancer spread variable (13.4% missing); and time since diagnosis (0.6% missing). Number of comorbidities contained no missing data. Each dietary advice variable was included: eating more fruit and vegetables (9.7% missing), avoiding foods high in sugar and fat (10.7% missing), eating less red and processed meat (11.9% missing), and reducing alcohol (14.9% missing). All scale item variables assessing dietary intake were included in the imputation (41 variables; 4.6% missing values) before recalculation of total intake scores to determine meeting/not meeting the WCRF recommendations. The standard five imputations were conducted with 10 iterations per imputation.51 After running the imputation model twice and running the analyses with both datasets, the results were similar and considered to converge, and five imputations were therefore considered adequate. Results from the first imputation model are reported.
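The study used SPSS's multiple-imputation routine; purely as a hedged illustration, a rough Python analogue of "five imputations with 10 iterations each" could look like the following scikit-learn sketch, which is an assumption and not the authors' pipeline.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_m_times(X: np.ndarray, m: int = 5, max_iter: int = 10) -> list[np.ndarray]:
    """Return m completed copies of X (rows: participants, columns: variables).
    sample_posterior=True draws imputations stochastically, so the m copies
    differ, mimicking multiple imputation."""
    return [
        IterativeImputer(max_iter=max_iter, sample_posterior=True,
                         random_state=seed).fit_transform(X)
        for seed in range(m)
    ]
```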
| Statistical analysis
All statistical analyses were performed using SPSS version 26. Two separate binary logistic regression models were conducted on the imputed dataset to determine factors that influence perceptions of need for dietary change (need to improve vs. no need to change). As BMI may mediate any association between dietary intake and perceptions of the need for dietary change, and Tennant et al.52 advise against controlling for potential mediators, separate models investigated the role of dietary components in perceiving a need to improve diet, and of BMI in perceiving a need to improve diet. A mediation analysis could not be conducted, as temporal ordering could not be determined in this cross-sectional data.53 Missing values were imputed at this level. Respondents who selected 'don't know' were coded as missing before conducting the regression analyses, since it was not appropriate to impute where true values were given, and it was not possible to include this group in the regression model due to small numbers (8%). Demographic and clinical characteristics, including age, gender, ethnicity, marital status, education, cancer spread, time since diagnosis, number of comorbidities and receipt of dietary advice, were entered into the model simultaneously with either the seven relevant dietary components (fibre, fruit and vegetables, red meat, processed meat, fat, sugar and alcohol) or BMI. Cancer type was not included in the regression models due to collinearity between cancer type and sex. Supplementary stratified analyses by cancer type were conducted in addition to the main analyses. Both regression analyses were repeated with the non-imputed data to explore whether findings were similar in the original data.
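A minimal sketch of one of the two logistic regressions, using statsmodels in place of SPSS; column names are placeholders, and pooling across the five imputed datasets (e.g., by Rubin's rules) is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_need_to_improve(df: pd.DataFrame, predictors: list[str]):
    """Binary logistic regression; outcome 1 = perceives a need to improve diet."""
    X = pd.get_dummies(df[predictors], drop_first=True).astype(float)
    X = sm.add_constant(X)
    model = sm.Logit(df["need_to_improve"].astype(float), X).fit(disp=0)
    ci = model.conf_int()  # columns 0 and 1 hold the lower/upper bounds
    or_table = pd.DataFrame({
        "OR": np.exp(model.params),       # exponentiated coefficients
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
    })
    return model, or_table
```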
| Sample characteristics
Of 13,500 surveys mailed to potential participants, 5835 were returned (43% response rate). Most participants perceived no need to change their diet (3511; 60%), while 1793 (31%) perceived a need to improve their diet, and 468 (8%) participants reported not knowing. Only 63 (1%) participants had missing data for this variable.
Figure 1 presents adherence to the WCRF recommendations according to perceptions of need for dietary change in the original data. Overall, 1.2% of the sample met all seven WCRF recommendations. In those perceiving no need to change their diet, 1.4% met all the recommendations.
| Characteristics associated with perceiving a need for dietary change
Tables 2 and 3 present the binary logistic regression analyses examining the associations of demographic characteristics, clinical characteristics, receipt of dietary advice and either BMI (Table 2) or dietary components (Table 3) with perceptions of need for dietary change, where perceiving a need to improve diet was the reference group. Associations between perceptions of need for dietary change and demographic and clinical characteristics, and receipt of dietary advice, were similar in both models. Odds ratios and confidence intervals are reported from the model investigating dietary components. Stratified analyses by cancer type demonstrated broadly similar findings across the three cancer types and are presented in the supporting information. Specifically, every year increase in age was associated with 5% lower odds of perceiving a need to improve diet (OR = 0.95, 95% CI = 0.94-0.95). Female participants had greater odds of perceiving a need to improve their diet than males (OR = 1.33, 95% CI = 1.15-1.53). Compared to participants of white ethnicity, participants of any other ethnicity had greater odds of perceiving a need to improve diet (OR = 1.83, 95% CI = 1.48-2.27). Participants who were not married/cohabiting had greater odds of perceiving a need to improve their diet (OR = 1.32, 95% CI = 1.16-1.52). Education level was not associated with perceptions of need for dietary change.
| Clinical characteristics
The number of comorbid conditions was significantly associated with perceptions of need for dietary change.
Compared to participants with no comorbidities, participants with two (OR = 1.31, 95% CI = 1.09-1.57) or three or more comorbidities (OR = 1.43, 95% CI = 1.16-1.75) demonstrated greater odds of perceiving a need to improve their diet. Cancer spread and time since diagnosis were not associated with perceiving a need to change diet.
| Receipt of dietary advice
Participants who had received dietary advice were more likely to perceive a need to improve their diet (OR = 1.63, 95% CI = 1.43-1.86).
| Body mass index
Participants classified as overweight had about 50% greater odds of perceiving a need to improve their diet compared to participants in the healthy/underweight category (OR = 1.53, 95% CI = 1.32-1.77). The odds of perceiving a need to improve diet were more than doubled in participants with obesity (OR = 2.73, 95% CI = 2.31-3.24).
| Dietary components
Participants who met the WCRF/AICR recommendations for fruit and vegetables (OR = 0.47, 95% CI = 0.41-0.55), fat (OR = 0.67, 95% CI = 0.58-0.77) and sugar (OR = 0.86, 95% CI = 0.75-0.98) demonstrated lower odds of perceiving a need to improve their diet, compared to those who did not meet recommendations. Meeting the recommendations for fibre, red meat, processed meat and alcohol was not associated with perceiving a need to change diet.
| Analysis with original data
Descriptive statistics for participants who provided responses on all key variables (completers), compared to those who had missing data (non-completers), alongside results from the logistic regression analyses with the complete-case data, are presented in the supporting information, showing similar patterns to the imputed data. A comparison of people who were included in the analyses with those who answered 'don't know' is also provided in the supporting information.
| DISCUSSION
In this sample of 5835 people LWBC, only 31% perceived a need to improve their diet, while 60% did not. In both models, individuals who were younger, female, not of white ethnicity, not married/cohabiting, and had received some dietary advice were more likely to perceive a need to improve their diet. Without accounting for dietary intake, participants with a higher BMI were more likely to perceive a need to improve their diet. On the other hand, where BMI was not included in the model, participants who reported having two or more comorbidities were more likely, and those who met the WCRF recommendations for fruit and vegetables, fat and sugar were less likely, to perceive a need to improve their diet. This study supports previous research reporting an association between age, sex, ethnicity and marital status and diet-related perceptions,38 but diverges from studies reporting associations between education and dietary perception.39 The association between older age and a reduced likelihood of perceiving a need to improve diet aligns with previous reports of higher self-ratings of dietary quality in older age groups,38,39 while the greater likelihood of females perceiving a need to improve diet in the dietary components model diverges from reported similarities across men and women in perceptions of dietary healthfulness.40,41 This discrepancy may be attributed to the difference in asking participants to consider whether they need to improve their diet, which is distinct from rating the healthfulness of their diet.38,40 In their study of people LWBC, Xue et al. asked participants to rate how healthy their diet was on a scale of one to five, while Batis et al. asked participants a closed binary question of whether they considered their diet to be healthy.38,40 These assessments differ from the current study's question, which specifically asked whether improvement to diet is needed, and suggest that although men and women may assess the quality of their diets similarly, this does not translate equally across sexes into perceiving a need to change. While previous research has demonstrated sex differences in changes to diet in people diagnosed with colorectal cancer, with a higher prevalence of dietary changes alone reported by males and a higher prevalence of both dietary changes and supplement use reported by females,54 the present study's findings on perceiving a need to change are novel. Our results support previous findings in the general population that women place higher importance on diet and healthy eating than males,55 and extend these findings to people LWBC. In the dietary components model, participants experiencing more comorbidities were more likely to believe in a need to improve their diet. Heuchan et al. found a similar association between the number of comorbidities and perceiving a need to lose weight, and suggested that other illness care pathways may help to raise awareness of weight status.56 This may also be the case for dietary information; accordingly, experiencing comorbid conditions may mitigate the effects of an unmet need for dietary advice that is reported in people LWBC.30,57
Similarly, this study extends findings of an association between BMI and perceptions of dietary quality by demonstrating that people LWBC with overweight or obesity are more likely to perceive a need to improve their diet. However, given that people with overweight and obesity are also at heightened risk of comorbidities, future research should seek to better understand the relationships between comorbidities, BMI, actual dietary intake and dietary change perceptions.58 Additionally, stigma may be an important mediating factor in the relationship between BMI and dietary change perceptions. Oversimplification of the causes of obesity means that the majority of people in the UK believe that individuals with obesity are themselves fully responsible for their dietary intake and heavier weight.59,60 Internalisation of this belief could drive people with higher BMIs to perceive a need to improve their diet regardless of actual diet quality. Future research should aim to examine the potential mediating role of BMI on perceiving a need to change diet by using prospective designs to further investigate the causal pathway underlying the association between dietary intake and perceiving a need to change diet.
Previous research has identified perceiving one's diet to already be healthy enough as a formidable barrier in trying to encourage people to eat a healthier diet,61,62 and although people tend to be aware of nutritional guidelines, they appear not to perceive these as relevant to their own diet.61 This could reflect misevaluations of their own diet, or even a lack of belief in the role of diet in improving outcomes.32 Xue et al. reported low conformance between perceived and actual dietary quality in people LWBC, where 56% of participants overrated the healthfulness of their diet, and overrating was associated with an increased intake of 'empty calories' from added sugars and fats.38 The current study shows more promising accuracy in dietary evaluations by demonstrating that participants correctly perceived a need to improve diet where the recommendations for sugar, fat, and fruit and vegetables were not met. This may be attributed to large-scale messaging about low-fat and low-calorie alternatives and the importance of 5-a-day targets.63 However, this finding also underscores the need to increase knowledge of the other dietary recommendations in people LWBC, as it indicates that people rely on fruit and vegetable, sugar and fat consumption when evaluating dietary quality, without consideration of other important dietary components. Not knowing what constitutes a healthy diet is well documented in the literature, as are reports of inadequate provision of dietary information after diagnosis within the cancer care pathway.31,64 In this study, 58% of participants reported not receiving any dietary advice from their healthcare professionals. Receiving dietary advice was associated with perceiving a need to improve diet, suggesting that this may be an effective way to heighten awareness of dietary quality in this population.[30][31] Interestingly, adherence was highest for red meat and alcohol consumption, but these were not related to perceiving no need to change diet, suggesting that people follow these guidelines without equating them to having a healthy diet.65 Future dietary interventions should aim to promote the consideration of all dietary components together in choosing a healthy diet, particularly to reduce instances where people inaccurately believe that their diet is already healthy enough based only on meeting the more widely known recommendations.
Strengths of this study include the large sample size of 5835 people LWBC, the use of multiple imputation to reduce the impact of bias related to missing information,66 and the novel exploration of the factors that influence perceiving a need to improve diet. However, these findings may not be directly generalisable to other cancer populations. Although findings were broadly similar among our subgroups, participants with colorectal cancer demonstrated some differences related to education level and a stronger association between ethnicity and perceiving a need to improve diet than in breast and prostate cancer participants. Future research should aim to provide a more comprehensive investigation of these differences. Other limitations include that this sample was not ethnically diverse, with 90% of participants identifying as white. Additionally, there were insufficient numbers in the 'don't know' group to be included in the regression analyses, and there were some differences between this group and those who gave a 'yes' or 'no' answer to the dietary perception variable. Those who answered 'yes' or 'no' were younger and comprised a higher proportion of people of white ethnicity and people who were married/cohabiting. This group also demonstrated a higher level of education, fewer comorbidities, a higher proportion of people meeting the recommendations for fruit and vegetables, red meat and sugar, and a lower proportion of people with an obese BMI. The results of this study should therefore be considered within this context and might misrepresent participants who are, for example, less educated, a group that may already be at higher risk of disease.67,68 Furthermore, although similar findings were observed for both the imputed and non-imputed data, completers tended to be younger and more educated, had fewer comorbidities, met more dietary recommendations and more often reported having received dietary advice than non-completers. Another limitation was that self-report was used to assess diet and BMI, which may be prone to biases such as social desirability, where people respond to questions in a way that presents them in a more favourable light than what is objectively accurate.65,69,70 For instance, people tend to under-report and over-report certain foods based on perceived healthiness.65,70 However, face-to-face methods of assessment are more costly and do not allow for the same sample size to be acquired.[71][74] In this study, 43% of people sent the initial letter responded, and the results should be considered with the acknowledgement that this sample may not be representative of all people LWBC.
| CONCLUSION
Despite a large proportion of people LWBC not meeting dietary recommendations, only 31% perceived a need to improve their diet. The results of this study can therefore be considered to align with previous reports of misperceptions of dietary quality.33,38 Qualitative research may help explain what is driving these perceptions: Beeken et al.'s interviews with people LWBC revealed that if people LWBC have already made some changes to their diet, they may not perceive a need to continue to make changes, despite still not meeting the recommendations. Targeted interventions may be required to improve the accuracy of perceptions among certain groups, including older people and men, as these groups were less likely to perceive a need to improve their diet. Education around the different dietary factors that contribute to a 'healthy diet', including red meat and alcohol intake, may also encourage perceptions that are more accurate. Improving the accuracy of perceptions about diet alongside behaviour change interventions could help individuals LWBC to improve their dietary intake and enhance their long-term health.
TABLE 1: Sample characteristics according to perception of diet in the original data (n = 5835). Perception of diet, n (valid %)a: need to improve (n = 1793, 30.2); no change needed (n = 3511, 60.2); don't know (n = 468, 8).

TABLE 3: Pooled (five imputations) multivariate logistic regression analysis for the association between dietary components and perceiving a need to improve diet (n = 5356)a.

Abbreviations: 95% CI, 95% confidence interval; A-level, General Secondary School Advanced Level; GCSE, General Certificate of Secondary Education; OR, odds ratio; WCRF/AICR, World Cancer Research Fund and American Institute for Cancer Research recommendations. *Statistical significance at p < 0.05. a 'Don't know' cases excluded from analysis. b 0.997 rounded up.
|
v3-fos-license
|
2024-01-01T05:28:37.449Z
|
2023-12-27T00:00:00.000
|
266687723
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2023JB027917",
"pdf_hash": "efc88974c55e718542f2a58048a35f5b33eca55c",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43259",
"s2fieldsofstudy": [
"Geology",
"Environmental Science"
],
"sha1": "d9bea6255af421b053b2ecbe1d128f6557a00366",
"year": 2024
}
|
pes2o/s2orc
|
Quantifying Magma Overpressure Beneath a Submarine Caldera: A Mechanical Modeling Approach to Tsunamigenic Trapdoor Faulting Near Kita‐Ioto Island, Japan
Submarine volcano monitoring is vital for assessing volcanic hazards but challenging in remote and inaccessible environments. In the vicinity of Kita‐Ioto Island, south of Japan, unusual M ∼ 5 non‐double‐couple volcanic earthquakes exhibited quasi‐regular recurrence near a submarine caldera. Following the earthquakes in 2008 and 2015, a distant ocean bottom pressure sensor recorded distinct tsunami signals. In this study, we aim to find a source model of the tsunami‐generating earthquake and quantify the pre‐seismic magma overpressure within the caldera's magma reservoir. Based on the earthquake's characteristic focal mechanism and efficient tsunami generation, we hypothesize that submarine trapdoor faulting occurred due to highly pressurized magma. To investigate this hypothesis, we establish mechanical earthquake models that link pre‐seismic magma overpressure to the size of the resulting trapdoor faulting, by considering stress interaction between a ring‐fault system and a reservoir of the caldera. The trapdoor faulting with large fault slip due to magma‐induced shear stress in the submarine caldera reproduces well the observed tsunami waveform. Due to limited data, uncertainties in the fault geometry persist, leading to variations of magma overpressure estimation: the pre‐seismic magma overpressure ranging approximately from 5 to 20 MPa, and the co‐seismic pressure drop ratio from 10% to 40%. Although better constraints on the fault geometry are required for robust magma pressure quantification, this study shows that magmatic systems beneath calderas are influenced significantly by intra‐caldera fault systems and that tsunamigenic trapdoor faulting provides rare opportunities to obtain quantitative insights into remote submarine volcanism hidden under the ocean.
• Non-double-couple earthquakes with seismic magnitudes of 5.2-5.3 recurred in the vicinity of a submarine caldera near Kita-Ioto Island
• A mechanical model of trapdoor faulting based on tsunami data of the 2008 earthquake infers pre-seismic overpressure in a magma reservoir
• Uncertainty in fault geometry varies our estimate of pre-seismic overpressure (5-20 MPa) and co-seismic pressure drop ratio (10%-40%)
Supporting Information: Supporting Information may be found in the online version of this article.
Tsunami Signal From a Volcanic Earthquake at Kita-Ioto Submarine Caldera
Kita-Ioto Island is an uninhabited island in the Izu-Bonin Arc, to the northwest of which a submarine caldera with a size of 12 km × 8 km is located, hereafter called Kita-Ioto caldera (Figures 1a-1c). While no historical eruption on the island has been reported, past submarine eruptions occurred at a submarine vent called Funka Asane on a major cone within the caldera structure (Figure 1c). According to Japan Meteorological Agency (2013), the latest eruptions of Funka Asane were reported between 1930 and 1945, and its volcanic activity has recently been inferred from sea-color changes and underwater gas emission near the vent (Ossaka et al., 1994). In March 2022, Japan Meteorological Agency (2022) reported ash-like clouds near Kita-Ioto Island and suggested the possibility of an eruption, but it is not clear whether the clouds were caused by an eruption or by meteorological factors. Thus, the volcanic activity of the submarine caldera has not been well understood.
Active volcanism of Kita-Ioto caldera shows unique seismic activity characterized by shallow earthquakes near the caldera repeating every 2-5 years, in 2008, 2010, 2015, 2017, and 2019, in addition to one in 1992 (Figure 1c; Table S1 in Supporting Information S1). As represented by the focal mechanism of the 2008 earthquake in Figure 1c, these six earthquakes reported in the Global Centroid Moment Tensor (GCMT) catalog (Ekström et al., 2012) similarly had seismic magnitudes of M w 5.2-5.3 and non-double-couple moment tensors with large compensated-linear-vector-dipole (CLVD) components (Figure S1 in Supporting Information S1). Such earthquakes at shallow depth in volcanic or geothermal environments are often called vertical-CLVD earthquakes (e.g., Sandanbata, Kanamori, et al., 2021; Shuler, Nettles, & Ekström, 2013), which can be categorized into two types: vertical-T CLVD earthquakes with a nearly vertical tension axis and vertical-P CLVD earthquakes with a nearly vertical pressure axis. In recent caldera studies, vertical-T earthquakes were observed in caldera inflation phases (Bell et al., 2021; Glastonbury-Southern et al., 2022; Jónsson, 2009; Sandanbata, Kanamori, et al., 2021; Sandanbata, Watada, et al., 2021), whereas vertical-P earthquakes coincided with caldera collapse and formation (Gudmundsson et al., 2016; Lai et al., 2021; Michon et al., 2007; Riel et al., 2015; Rodríguez-Cardozo et al., 2021). The earthquakes near Kita-Ioto caldera fall into the vertical-T type, implying their association with caldera inflation. Yet, the mechanisms of shallow vertical-CLVD earthquakes are often indistinguishable from the seismic characteristics alone, due to the weak constraint on parts of the moment tensor components (M rθ and M rϕ) (Kanamori & Given, 1981; Sandanbata, Kanamori, et al., 2021) and a tradeoff between the vertical-CLVD and isotropic components (Kawakatsu, 1996). These ambiguities leave room for different interpretations of the earthquake mechanism, such as fault slip in calderas, deformation of a magma reservoir, or volume change due to heated fluid injection, as previously proposed for similar vertical-CLVD earthquakes (Shuler, Ekström, & Nettles, 2013, and references therein).
Following the earthquake that occurred at 13:10 on 12 June 2008 (UTC), a tsunami-like signal was recorded by an ocean-bottom-pressure (OBP) gauge, with a sampling interval of 15 s, at the station 52404 of the Deep-ocean Assessment and Reporting of Tsunamis (DART) system (Bernard & Meinig, 2011), ∼1,000 km away from the caldera (Figure 1a). Figure 1d shows the OBP data, which we obtain by removing the tidal component from, and applying the bandpass (2-10 mHz) Butterworth filter to, the raw record for 12,000 s after the earthquake origin time. The OBP data demonstrate that clear oscillations with a maximum pressure of ∼2 mm H 2 O started ∼5,000 s after the earthquake origin time. Our calculation using the Geoware Tsunami Travel Time software (Geoware, 2011) estimates a tsunami travel time from the caldera to the station consistent with this onset, indicating that the recorded oscillations are tsunami signals from the earthquake.
Methodology
In this section, we describe the methodology to construct a 3-D mechanical model of trapdoor faulting and to apply it to the tsunami data of the 2008 Kita-Ioto caldera earthquake. Through the application, we attempt to reproduce the tsunami data and estimate the sub-caldera magma overpressure that drove the tsunamigenic earthquake.
Mechanical Model of Trapdoor Faulting
We consider a 3-D half-space elastic medium of the host rock with an intra-caldera ring fault and a horizontal crack filled with magma (Figure 2). The ring fault and the horizontal crack are discretized into small triangular meshes, or sub-faults and sub-cracks (with N F and N C meshes, respectively). The crack is assumed to have a finite inner volume and to be filled with compressible magma. Note that we do not consider viscoelasticity or heterogeneous rheology of the host rock; these limitations are discussed later in Section 6.5.3.
We assume that trapdoor faulting is driven by magma overpressure in the crack, as follows: before trapdoor faulting, continuous magma input into the crack gradually increases the inner pressure and volume and causes elastic stress in the host rock, accumulating shear stress on the ring fault; when the shear stress on the fault overcomes its strength, trapdoor faulting takes place. In the following, we model trapdoor faulting as a dislocation model that combines sudden and interactive processes of dip-slip on the fault with stress drop, deformation (vertical opening/closure) of the crack with volume change, and pressure change of the magma in the crack. Note that some previous studies used the terminology of trapdoor faulting to refer to only the fault part (e.g., Amelung et al., 2000), while we consider it as the composite process involving both the fault and the magma-filled crack.
Pre-Seismic Elastic Stress in the Host Rock
As a reference state, we consider that the magma pressure p 0 in the crack is in equilibrium with the background stress σ 0 in the host rock due to the lithostatic and seawater loading, and that the background differential stress is zero. If we take the stress in the host rock as positive in compression, the background stress at an arbitrary position in the reference state is expressed as:

σ 0 ij = (ρ h z + ρ s H) g δ ij, (1)

where ρ h and z are the host rock density and the arbitrary depth in the host rock, respectively, ρ s and H are the seawater density and the thickness of the overlying seawater layer, respectively, g is the gravitational acceleration, and δ ij is the Kronecker's delta. The magma pressure in the reference state is expressed as:

p 0 = (ρ h z 0 + ρ s H) g, (2)

where z 0 is the depth of the horizontal crack.
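For concreteness, the reference-state quantities of Equations 1 and 2 can be evaluated directly. The following sketch is illustrative only; the density and seawater-thickness values are taken from the Figure 4 caption, the crack depth from Section 4.2, and the function names are ours:

```python
# Reference (background) state of Equations 1-2: isotropic lithostatic
# plus seawater loading, with compression taken as positive.
RHO_H = 2600.0   # host-rock density [kg/m^3] (Figure 4 caption)
RHO_S = 1020.0   # seawater density [kg/m^3] (Figure 4 caption)
H_SEA = 400.0    # approximate thickness of the seawater layer [m]
G = 9.81         # gravitational acceleration [m/s^2]

def background_stress(z):
    """Isotropic background stress at depth z below the seafloor [Pa]."""
    return (RHO_H * z + RHO_S * H_SEA) * G

# Equation 2: reference magma pressure at the crack depth z0 (Section 4.2)
z0 = 2000.0
p0 = background_stress(z0)   # ~5.5e7 Pa, i.e. ~55 MPa
```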
We assume that long-term magma input into the crack increases the magma overpressure and opens the crack vertically, and that the resultant crack deformation changes the stress in the host rock. Thus, shear stress accumulates on the fault, which eventually causes trapdoor faulting. Magma pressure in the pre-seismic state, just before trapdoor faulting, is assumed to be spatially uniform within the crack and expressed as p = p 0 + p e, where p e is the pre-seismic magma overpressure. If we denote the spatial distribution of the crack opening in the pre-seismic state as w pre, the equilibrium relationship between the normal stress on the surfaces of sub-cracks and the inner magma pressure reduces to:

σ pre = P w pre = p e 1, (3)

where σ pre is the N C × 1 column vector of the pre-seismic normal stress on sub-cracks, P is the interaction matrix, with a size of N C × N C, that maps the tensile opening of sub-cracks into the normal stress on sub-cracks, and 1 is the N C × 1 column vector of ones. The distribution of the crack opening in the pre-seismic state can be obtained from the second equality of Equation 3. Then, the pre-seismic shear stress along the dip direction on the surfaces of sub-faults (denoted as τ pre) created by the magma overpressure p e can be expressed as:

τ pre = Q w pre, (4)

where Q is the interaction matrix, with a size of N F × N C, that maps the tensile opening of sub-cracks into the shear stress on sub-faults. With Equation 3, Equation 4 can be rewritten as:

τ pre = [Q P −1 1] p e. (5)

The part in the bracket, Q P −1 1, represents the shear stress on the surfaces of sub-faults caused by the crack opening due to unit magma overpressure. If we denote it as t, Equation 5 can be rewritten as:

τ pre = t p e. (6)
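In this matrix form, the pre-seismic state follows from a single linear solve. The sketch below assumes the interaction matrices P and Q have already been assembled (e.g., with a triangular-dislocation code as in Section 4.2); the variable names and sign conventions follow the reconstruction of Equations 3-6 above and are illustrative assumptions:

```python
import numpy as np

def preseismic_state(P, Q, p_e):
    """Pre-seismic crack opening and fault shear stress (Equations 3-6).

    P   : (N_C, N_C) maps sub-crack opening -> normal stress on sub-cracks
    Q   : (N_F, N_C) maps sub-crack opening -> shear stress on sub-faults
    p_e : pre-seismic magma overpressure [Pa]
    """
    ones = np.ones(P.shape[0])
    # Equation 3: P @ w_pre = p_e * 1  ->  pre-seismic opening distribution
    w_pre = np.linalg.solve(P, p_e * ones)
    # Equation 4: shear stress on sub-faults from the opened crack
    tau_pre = Q @ w_pre
    # Equation 6: shear stress per unit overpressure, t = Q P^-1 1
    t_unit = Q @ np.linalg.solve(P, ones)
    return w_pre, tau_pre, t_unit
```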
Occurrence of Trapdoor Faulting
Trapdoor faulting is caused by a sudden drop of the shear stress accumulated on the fault. The motion involves dip-slip of the fault and deformation (opening/closure) of the crack. To determine the motion of trapdoor faulting, we here derive two boundary conditions on the surfaces of the ring fault and the horizontal crack. Assuming that the shear stress along the dip direction on the fault decreases by a stress drop ratio α due to trapdoor faulting, the boundary condition on the surface of the fault can be expressed as:

Δτ = R d + Q Δw = −α τ pre, (7)

where Δτ is the N F × 1 column vector of the shear stress change on sub-faults during trapdoor faulting, d is the N F × 1 column vector of dip-slip on sub-faults, and Δw is the N C × 1 column vector of the opening change of sub-cracks. R and Q, with sizes of N F × N F and N F × N C, map dip-slip of sub-faults and tensile opening of sub-cracks, respectively, into the shear stress on sub-faults (Q is the same as that in Equation 4).
Sudden stress change in the host rock due to dip-slip of the fault interactively accompanies deformation (opening/closure) of the crack, and the resultant normal stress change on the crack induces horizontal movement of the inner magma. For simplicity, we assume that the magma movement finishes and the magma pressure becomes spatially uniform in the crack quickly. Under this simplification, the boundary condition on the surface of the horizontal crack is derived from the equilibrium relationship between the normal stress on sub-cracks and the inner magma pressure, as follows:

Δσ = P Δw + U d = Δp 1, (8)

where Δσ and Δp are the N C × 1 column vector of the normal stress change on sub-cracks and the scalar of the magma pressure change during trapdoor faulting, respectively. P and U are the interaction matrices, with sizes of N C × N C and N C × N F, that map the tensile opening of sub-cracks and dip-slip of sub-faults, respectively, into the normal stress on sub-cracks (P is the same as that in Equation 3).
The magma pressure change Δp during trapdoor faulting can be related to the crack volume change ΔV through the mass conservation law, as follows:

β m V 0 Δp = Δm/ρ m − ΔV, (9)

where Δm is the magma mass influx, ρ m is the magma density, and β m is the compressibility of magma. Since previously observed trapdoor faulting occurred within less than ∼10 s (Geist et al., 2008; Sandanbata et al., 2022, 2023), we can disregard the magma mass influx during trapdoor faulting and reduce Equation 9 to:

Δp = −ΔV/(β m V 0) = −(a T Δw)/(β m V 0), (10)

where a is the N C × 1 column vector of the areas of sub-cracks.
By substituting Equations 6 and 10 into Equations 7 and 8, respectively, we obtain the following equations:

R d + Q Δw = −α t p e, (11)

which can be rewritten as:

R d + Q Δw + α p e t = 0, (12)

P Δw + U d + (1 a T/(β m V 0)) Δw = 0, (13)

where Equations 12 and 13 represent N C + N F equations with N C + N F unknown values (d, Δw), if we priorly assume the pre-seismic magma overpressure p e, the stress drop ratio α, the source geometry determining the interaction matrices, and the parameters β m and V 0. In this study, the source geometry and the parameters are assumed as described in Section 4.2. Also, the stress drop ratio is simply assumed as α = 1; in other words, the pre-seismic shear stress on the fault completely vanishes due to trapdoor faulting. In this case, Equation 12 reduces to:

R d + Q Δw + p e t = 0. (14)

By solving Equation 14 with Equation 13 for (d, Δw), we can determine the motion of trapdoor faulting generated by pre-seismic magma overpressure p e. Also, we can estimate the co-seismic changes of magma pressure and crack volume due to trapdoor faulting by substituting Δw into Equation 10, and the stress drop by substituting (d, Δw) into Equation 7.
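With α = 1, Equations 13 and 14 combine into one (N F + N C) × (N F + N C) linear system for the slip and the crack-opening change. A minimal sketch under the same reconstructed notation (R and U as in Equations 7 and 8, a the vector of sub-crack areas):

```python
import numpy as np

def trapdoor_faulting(P, Q, R, U, a, t_unit, p_e, V0_beta):
    """Solve Equations 13-14 for complete stress drop (alpha = 1).

    Returns fault slip d, crack opening change dw, magma pressure
    change dp, and crack volume change dV.
    """
    N_F, N_C = Q.shape
    ones = np.ones(N_C)
    # Fault rows (Eq. 14):  R d + Q dw = -p_e t
    # Crack rows (Eq. 13):  U d + P dw + (1 a^T / (V0 beta_m)) dw = 0
    A = np.block([[R, Q],
                  [U, P + np.outer(ones, a) / V0_beta]])
    b = np.concatenate([-p_e * t_unit, np.zeros(N_C)])
    x = np.linalg.solve(A, b)
    d, dw = x[:N_F], x[N_F:]
    dV = a @ dw              # crack volume change (Equation 10)
    dp = -dV / V0_beta       # co-seismic magma pressure change
    return d, dw, dp, dV
```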
Model Setting
The source geometry employed for the main results is shown in Figure 2. A partial ring fault follows an ellipse with a size of 3.6 km × 2.6 km on the seafloor; the center is at (141.228°E, 25.4575°N), and its major axis is oriented N60°E. The fault is on the NW side of Kita-Ioto caldera with an arc length of 90° and dips inward with a dip angle of 83°; this fault setting on the NW side is based on our moment tensor analysis, which suggests a ring fault oriented in the NE-SW direction (see Text S1 in Supporting Information S1 for details). The fault's down-dip end connects to a horizontal crack at a depth of 2 km. The crack is elliptical in shape, 15% larger than the ellipse traced along the fault's down-dip end. After discretizing the source geometry into sub-faults and sub-cracks, the four interaction matrices (P, Q, R, and U) between sub-faults and sub-cracks are computed by the triangular dislocation (TD) method (Nikkhoo & Walter, 2015), assuming a Poisson's ratio of 0.25 and Lamé constants λ = μ = 5 GPa.
The product V 0 β m controls how the magma-filled crack responds to stress perturbation by faulting, as explained by Zheng et al. (2022). For the main results, we assume the crack volume V 0 and the magma compressibility β m to be 1.5 × 10 10 m 3 (corresponding to a crack thickness of ∼500 m) and 1.0 × 10 −10 Pa −1 (a typical value for degassed basaltic magma; e.g., Kilbride et al., 2016), respectively; thereby, V 0 β m = 1.5 m 3 /Pa. This product value is similar to Zheng et al. (2022)'s estimates for a magma reservoir of Sierra Negra caldera.
We emphasize that the model setting above, which is used to obtain the main results shown in Section 5, is just an assumption. The location of the ring fault cannot be constrained from the earthquake information of the GCMT catalog, since the solutions can contain horizontal location errors of up to ∼40 km (Hjörleifsdóttir & Ekström, 2010; Pritchard et al., 2006). The bathymetry data, containing several cones found on the NW side of the caldera floor (Figure 1c), may suggest the existence of a fault system, given that such structures often form over a sub-caldera ring fault (e.g., Cole et al., 2005), but this is not decisive information. Also, we have no constraint on the magma compressibility and the reservoir depth. In Section 6.1, we will test the sensitivity to these possible uncertainties in the model setting.
Constraint From the Tsunami Data of the 2008 Kita-Ioto Caldera Earthquake
We apply the mechanical model of trapdoor faulting to the tsunami data of the 2008 Kita-Ioto caldera earthquake. Utilizing the linear relationship between (d, Δw) and p e through Equation 14, we estimate the pre-seismic magma overpressure p e causing the earthquake by constraining the magnitude of trapdoor faulting from the tsunami data.
For estimation of p e, we prepare a model of trapdoor faulting due to unit pre-seismic magma overpressure p e = 1 Pa, which we call the unit-overpressure model, and then simulate a tsunami OBP waveform at the station 52404 from the model (see the methodology in Section 4.4). We denote the synthetic waveform as η unit(t) and consider it as the tsunami OBP amplitude due to unit overpressure, whose unit is (mm H 2 O/Pa). Because of the linearity of the tsunami propagation problem we employ, the amplitude of the tsunami waveform is linearly related to the magnitude of trapdoor faulting, and thereby to the pre-seismic magma overpressure p e through Equation 14. Therefore, the synthetic tsunami waveform from trapdoor faulting due to an arbitrary p e can be expressed as η(t) = p e η unit(t). Supposing that the tsunami signal from the 2008 earthquake recorded in the OBP data is η obs(t), we can estimate the pre-seismic magma overpressure from:

p e = A obs/Â, (15)

where A obs and Â are the root-mean-square (RMS) amplitudes of η obs(t) and η unit(t) (in units of [mm H 2 O] and [mm H 2 O/Pa]), respectively. The time window for calculating the RMS amplitudes is set so that it includes major oscillations in earlier parts of the observed waveform (see the gray line in Figure 1d).
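Since the synthetic waveform scales linearly with p e, Equation 15 reduces the overpressure estimate to a ratio of RMS amplitudes over the fitting window. A sketch with assumed variable names:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a waveform segment."""
    return np.sqrt(np.mean(np.asarray(x) ** 2))

def estimate_overpressure(obs, unit_synth, window):
    """Pre-seismic overpressure from the amplitude ratio (Equation 15).

    obs        : observed OBP waveform [mm H2O]
    unit_synth : synthetic waveform for p_e = 1 Pa [mm H2O / Pa]
    window     : slice or boolean mask selecting the fitting window
    """
    return rms(obs[window]) / rms(unit_synth[window])   # p_e in Pa
```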
Tsunami Waveform Simulation
A tsunami waveform from the unit-overpressure model is synthesized as follows. Assuming (d, Δw) of the unit-overpressure model, we compute the vertical seafloor displacement by the TD method and convert it to vertical sea-surface displacement by applying the Kajiura filter (Kajiura, 1963). We then simulate the tsunami propagation over a time of 12,000 s from the sea-surface displacement over Kita-Ioto caldera, generated instantly at the earthquake origin time, by solving the linear Boussinesq equations (Peregrine, 1972) in the finite-difference scheme of the JAGURS code (Baba et al., 2015). The simulation is done with a two-layer nested bathymetric grid system, composed of a broad-region layer with a grid size of 18 arcsec (∼555 m) derived from JTOPO30 data, and a caldera-vicinity-region layer with a grid size of 6 arcsec (∼185 m), obtained by combining data from the M7000 series and JTOPO30. The computation time step is 0.5 s, such that the Courant-Friedrichs-Lewy (CFL) condition is satisfied. The outputted 2-D maps of sea-surface wave heights, every 5 s, are converted into maps of OBP perturbation by incorporating the reduction of tsunami pressure perturbation with increasing water depth (e.g., Chikasada, 2019). The synthetic waveform of OBP perturbation at the station 52404 is obtained from the OBP maps.
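The seafloor-to-sea-surface conversion (the Kajiura filter) is a wavenumber-domain low-pass with response 1/cosh(kH); the same factor also describes the decay of tsunami pressure with water depth. A minimal sketch for a uniform water depth (the actual computation uses variable bathymetry, so this is only illustrative):

```python
import numpy as np

def kajiura_filter(uz, dx, dy, depth):
    """Sea-surface displacement from vertical seafloor displacement.

    Applies the 1/cosh(k H) low-pass (Kajiura, 1963) in the 2-D
    wavenumber domain, assuming a uniform water depth [m].
    """
    ny, nx = uz.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)   # angular wavenumbers
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    spec = np.fft.fft2(uz) / np.cosh(k * depth)   # low-pass in k-space
    return np.real(np.fft.ifft2(spec))
```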
The linear Boussinesq equations employed above do not include the effects of the elastic Earth, the seawater compressibility, and the gravitational potential change, and are less accurate for computation of higher-frequency waves due to the error of dispersion approximation (Sandanbata, Watada, et al., 2021).Hence, we apply a phase correction method for short-period tsunamis (Sandanbata, Watada, et al., 2021) to improve the synthetic waveform accuracy by incorporating the effects (i.e., elastic Earth, compressible seawater, and gravitation potential change) and by correcting the approximation error.
Source Model of the 2008 Kita-Ioto Caldera Earthquake
Under the model setting explained in Section 4.2 (Figure 2), we obtain a trapdoor faulting model for the 2008 Kita-Ioto caldera earthquake that explains the OBP tsunami data (Figure 3). The pre-seismic magma overpressure p e constrained from the OBP tsunami data is 11.8 MPa. Figures 3b and 3c show the spatial distributions of the ring-fault slip and the crack opening/closing during trapdoor faulting. Large reverse slip, with a maximum of 8.9 m, occurs on the ring fault, near which the crack opens by up to 5.5 m on the inner side and closes by up to 2.7 m on the outer side. In the SE area, the crack closes broadly, with a maximum value of 0.86 m. In total, the crack volume increases by ΔV = 0.0030 km 3. The co-seismic magma pressure change Δp is −1.97 MPa during trapdoor faulting, meaning that the magma overpressure drops by 16.7% relative to the pre-seismic state and creates additional storage capacity for magma. The response of the magmatic system to faulting may have postponed the eruption timing; on the other hand, the post-seismic magma overpressure is estimated to remain at a high level (∼9.8 MPa) even after trapdoor faulting.
The obtained trapdoor faulting model is predicted to cause large asymmetric caldera-floor uplift, thereby generating a tsunami efficiently. The large seafloor displacement is concentrated near the fault, with a maximum uplift of 5.6 m and outer subsidence of 2.8 m (Figure 3d). The sea-surface displacement is smoothed by the low-pass effect of seawater, resulting in seawater uplift of 3.6 m within the caldera rim with exterior subsidence of 1.1 m (Figure 3e). Figure 3f compares the synthetic tsunami waveform from the model with the OBP tsunami signal recorded at the station 52404, which demonstrates good waveform agreement, including later phases that are not used for the amplitude fitting. In addition, the spectrogram analysis confirms quite similar tsunami travel times and dispersive properties of the synthetic and observed waveforms (Figures 3g and 3h). These results support the reasonability of our mechanical model for the 2008 Kita-Ioto caldera earthquake.
Pre-Seismic State Just Before Trapdoor Faulting
From the mechanical model, we consider how trapdoor faulting is caused by the inflated crack. In the pre-seismic state just before trapdoor faulting, the crack has inflated with a vertical opening of 12.1 m at maximum due to the pre-seismic magma overpressure p e (Figure 4a). The inner volume has increased by 0.21 km 3 relative to that in the reference state. This pre-seismic crack opening generates the shear stress τ pre on the fault, which takes its maximum value of 11.6 MPa (Figure 4b); this value corresponds to the stress drop during trapdoor faulting, because we assume that the stress totally vanishes co-seismically.
In a simple earthquake paradigm of stick-slip motion, which assumes that slip occurs when the shear stress overcomes the static frictional stress (e.g., p. 14 of Udías et al. (2014)), the fault requires friction to remain stationary until faulting occurs. The total normal stress on the fault σ n0 is the sum of the effect of the crack opening (Figure 4c) and the lithostatic and seawater loading σ lit + σ sea, as shown in Figure 4d (see the caption). By taking the ratio of the area-averaged values of τ pre and σ n0, the static friction coefficient on the ring fault can be estimated as 0.31. The frictional fault system may enable the caldera system to accommodate the high magma overpressure without fault slip until trapdoor faulting. Note, however, that sophisticated modeling approaches including a realistic fault friction law will be needed to investigate the dynamic initiation process.
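The friction estimate itself is an area-weighted stress ratio; a sketch with assumed array names (per-sub-fault pre-seismic shear stress, total normal stress, and mesh areas):

```python
import numpy as np

def static_friction(tau_pre, sigma_n0, areas):
    """Area-averaged shear / normal stress ratio on the ring fault.

    tau_pre  : pre-seismic shear stress on sub-faults [Pa]
    sigma_n0 : total (compressive) normal stress on sub-faults [Pa]
    areas    : sub-fault mesh areas [m^2], used as weights
    """
    tau_avg = np.average(tau_pre, weights=areas)
    sig_avg = np.average(sigma_n0, weights=areas)
    return tau_avg / sig_avg   # ~0.31 for the main model
```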
Deformation and Elastic Stress Change in the Host Rock
Our model demonstrates how trapdoor faulting deforms the host rock and changes its elastic stress. With the model outputs, we compute the displacement, stress, and strain fields in the host rock along an SE-NW profile across the caldera (see the dashed line in Figure 3c) by the TD method; the pre-seismic state is from w pre, the co-seismic change is from (d, Δw), and the post-seismic state is the sum of the pre-seismic state and the co-seismic change. We also calculate the shear-strain energy from the stress and strain fields (e.g., Saito et al., 2018). When we decompose the stress tensor in the host rock as:

σ ij = (σ kk/3) δ ij + σ′ ij, (16)

where σ′ ij is the deviatoric component, the shear-strain energy density W in the elastic medium can be expressed as:

W = σ′ ij σ′ ij/(4μ). (17)

Note that the shear-strain energy density is zero in the reference state (p = p 0), where the deviatoric stress is assumed to be zero. Using Equation 17, the shear-strain energy densities in the pre- and post-seismic states, W pre and W post, can be calculated with the deviatoric shear stress. The co-seismic change in the shear-strain energy density is obtained by:

ΔW = W post − W pre. (18)

Figures 5a-5c show displacement in the host rock along the SE-NW profile. In the pre-seismic state (Figure 5a), since the fault accommodates no slip, the host rock deforms purely elastically from the reference state due to the opening crack, causing large uplift of the caldera surface of 8.8 m at maximum at the caldera center. During trapdoor faulting, the co-seismic displacement is concentrated along the fault (Figure 5b). The inner caldera block uplifts by 5.7 m at maximum, while the outer host rock moves downward by 3.2 m. The fault motion accompanies crack opening beneath the NW side of the caldera block, whereas slight downward motion is seen in the SE part of the caldera block, which can be attributed to the elastic response to magma depressurization. Figure 5c shows the displacement in the post-seismic state, where the upward displacement is confined within the caldera block, with cumulative uplift of 9.9 m at maximum from the center to near the fault, while notable deformation is not found outside the fault. As shown in Figure 5d, the pre-seismic seafloor displacement takes its uplift peak in the center, while after trapdoor faulting the seafloor becomes almost flat on the NW side near the fault. This indicates that, over a long term including the pre-seismic inflation and trapdoor faulting, the caldera undergoes a block-like motion with a clear boundary cut by the fault.
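Equation 17 is straightforward to evaluate on a grid of stress tensors; a sketch using the rigidity of Section 4.2:

```python
import numpy as np

MU = 5.0e9  # rigidity mu [Pa] (Section 4.2)

def shear_strain_energy_density(sigma):
    """Shear-strain energy density W (Equation 17).

    sigma : (..., 3, 3) array of stress tensors [Pa]
    Removes the isotropic part and returns W = sigma'_ij sigma'_ij / (4 mu),
    which is zero in the reference state where deviatoric stress vanishes.
    """
    mean = np.trace(sigma, axis1=-2, axis2=-1) / 3.0
    dev = sigma - mean[..., None, None] * np.eye(3)
    return np.einsum('...ij,...ij->...', dev, dev) / (4.0 * MU)
```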
In terms of the stress and the shear-strain energy, trapdoor faulting can be considered as a process that releases the shear-strain energy accumulated in the host rock. Figures 5e-5g show the shear-strain energy density with the principal axes of the stress field in the host rock along the same SE-NW profile. In the pre-seismic state, the shear-strain energy density is concentrated around the crack edge, or near the fault (Figure 5e). The plunge of the maximum compressional stress near the fault is ∼50° in the middle of the fault, which preferably induces reverse slip on a steeply dipping fault. During trapdoor faulting (Figure 5f), the shear-strain energy density near the fault on the NW side dramatically decreases. Eventually, in the post-seismic state (Figure 5g), the shear-strain energy density almost vanishes near the fault. Note that, on the other hand, the shear-strain energy density is only slightly reduced on the SE side in response to co-seismic magma depressurization and remains high even after trapdoor faulting. We speculate that the remaining shear-strain energy may be released by other causes, such as aseismic fault slip, subsequent trapdoor faulting, or viscoelastic deformation of the host rock, which are not incorporated in our modeling; we will discuss the limitations of our models in Section 6.5.
Model Uncertainties
Our source model has been constructed with the model setting described in Section 4.2. However, since a single tsunami waveform recorded at a distant location has low sensitivity to the source details, we do not have enough data to constrain the sub-surface structure and magma properties. Hence, our model outputs vary depending on the prior assumptions of the model setting.
Depth of a Horizontal Crack
The depth of the horizontal crack, or the magma reservoir, significantly influences our pre-seismic magma overpressure estimation. When a deeper crack is assumed at a depth of 4 km below the seafloor (Figure 6), the estimated magma overpressure p e is 22.26 MPa, almost a factor of two larger than our main result assuming a depth of 2 km (Figure 3). The obtained model with a 4-km-deep crack explains the tsunami data well, even better than that with a 2-km-deep crack (compare the waveforms and spectrograms in Figures 3f-3h and 6f-6h), implying a preference for the deeper crack model. When a crack is located deeper in the crust, the magnitude of the crack opening per unit magma overpressure becomes smaller because the crack is farther from the free-surface seafloor (Fukao et al., 2018). This lowers the shear stress on the fault generated per unit magma overpressure, and thereby larger pre-seismic magma overpressure is required to cause a similar-sized earthquake and tsunami. Despite the large difference in pre-seismic magma overpressure, the estimated co-seismic parameters for the 2008 earthquake, such as the magnitudes of fault slip, crack deformation, and changes in magma pressure and crack volume, do not change largely.
Arc Length of a Ring Fault
The arc length of the ring fault is also an important factor affecting our modeling. As shown in Figure 7, when we assume a ring fault with an arc length of 180°, or a half-ring fault, on the NW side, the pre-seismic magma overpressure p e is estimated as 4.84 MPa, less than half of the value from our main results assuming an arc length of 90° (Figure 3). This large difference can be attributed to two main causes. First, the average fault slip is known to be proportional to the fault length when the stress drop is identical (Eshelby, 1957); therefore, a longer ring fault causes large slip efficiently, compared to a shorter arc length. Additionally, trapdoor faulting with a longer fault uplifts a larger volume of seawater over a broader area (compare Figures 7e and 3e), making its tsunami generation efficiency higher.
Although smaller magma overpressure (p e = 4.84 MPa) is estimated in the case with a ring-fault arc angle of 180°, we emphasize that the co-seismic magma pressure change Δp is as large as −1.99 MPa. The magma overpressure efficiently drops by 41.1% from the pre-seismic state, in contrast to the ratio of only 16.7% in the case of an arc length of 90° (see Section 5.1). The difference arises from the fact that fault slip along a longer segment induces crack opening in a broader area and increases the inner volume more, resulting in more efficient pressure relief. The two models with different ring-fault arc lengths produce very similar tsunami waveforms at the station 52404 (compare Figures 7f and 3f), indicating the difficulty in distinguishing the arc length from our data set. However, these results provide an important insight that the magma pressure drop ratio strongly depends on the fault length ruptured during trapdoor faulting, suggesting the importance of investigating the intra-caldera fault geometry for robust quantification of magma pressure change due to faulting.
Other Uncertainties
We discuss the effects of the product V 0 β m, which controls how the magma-filled crack responds to stress perturbation by faulting. The effects in extreme cases are discussed by Zheng et al. (2022); when V 0 β m → 0, the crack involves no total volume change (ΔV → 0), while the magnitude of the magma pressure drop becomes the largest; on the other hand, when V 0 β m → ∞, the net volume change of the crack is at maximum, while no pressure change occurs (Δp → 0). In previous studies of the 2018 Kilauea caldera collapse and eruption sequence, the estimated product ranges over 1.3-5.5 m 3 /Pa (Anderson et al., 2019; Segall & Anderson, 2021). We assumed V 0 β m = 1.5 m 3 /Pa for our main results, which is close to the lower end of the range. To examine the model variations, we alternatively perform the source modeling assuming V 0 β m = 6.0 m 3 /Pa, near the upper limit of the range estimated in the case of Kilauea. For the larger V 0 β m, the area of the crack opening becomes broader, while the magnitude of the closure on the other side becomes smaller (Figures S4a-S4c in Supporting Information S1; compare them with Figures 3a-3c). The sea-surface displacement is thereby broader (Figure S4e in Supporting Information S1), exciting long-period tsunamis more efficiently, which arrive as the earlier waveform phases used for the amplitude fitting (Figure S4f in Supporting Information S1). Thus, in this case, our estimate of the pre-seismic magma overpressure, p e = 9.11 MPa, becomes slightly smaller than the main result (p e = 11.8 MPa); on the other hand, we estimate a smaller magma pressure drop (Δp = −1.27 MPa) and a larger crack volume increase (ΔV = 0.0076 km 3). These suggest that, within a plausible range of V 0 β m, the variations of our estimates are insignificant.
It is uncertain on which side of the caldera the ruptured fault is located. Based on our moment tensor analysis (Text S1 in Supporting Information S1), the fault ruptured during the 2008 earthquake can be estimated to be oriented mainly in the NE-SW direction, allowing us to assume two different fault locations, on either the NW or SE side of the caldera; for our main results, we chose the model with a fault on the NW side. Here, we alternatively assume a fault on the SE side to obtain another source model, and consequently estimate the pre-seismic magma overpressure p e as 15.36 MPa (Figure S5 in Supporting Information S1). Despite the fault location difference, the tsunami data are explained well by the model with a SE-sided fault (Figure S5f in Supporting Information S1). The change in the estimated magma overpressure can be attributed to the effects of tsunami directivity and complex bathymetry in the source region on the wave amplitude of the tsunami arriving at the station. Thus, our limited data set is not sufficient to determine the fault location well, but the uncertainty in fault location influences our estimates insignificantly.
Comparison With Previous Studies
Our quantification of pre-seismic magma overpressure before trapdoor faulting in Kita-Ioto caldera (p e = 4-22 MPa) is of the same order of magnitude as those estimated geodetically for the subaerial caldera of Sierra Negra. Gregg et al. (2018) applied a thermomechanical finite element method (FEM) model to long-term geodetic data and estimated that magma overpressure of ∼10 MPa in the sill-like reservoir induced a trapdoor faulting event that occurred ∼3 hr before the eruption starting on 22 October 2005. Another trapdoor faulting event on 25 June 2018 (M w 5.4) also preceded the 2018 eruption of Sierra Negra, by 10 hours; Gregg et al. (2022) applied the thermomechanical FEM approach to the long-term deformation and suggested that a similar magma overpressure of <∼15 MPa had accumulated to cause the failure of the trapdoor fault system. Zheng et al. (2022), on the other hand, quantified the co-seismic magma pressure change by trapdoor faulting associated with an m b 4.6 earthquake on 16 April 2005. By modeling the interaction between the intra-caldera fault system and the sill-like reservoir, Zheng et al. geodetically estimated that the trapdoor faulting event, with a maximum fault slip of 2.1 m, reduced magma overpressure by 0.8 MPa; the slightly smaller pressure change, relative to our estimation (|Δp| = 1-3 MPa) for the 2008 Kita-Ioto earthquake, may be explained by the discrepancies in the earthquake size or the length of the ruptured fault. Sandanbata et al. (2023) compiled the seismic magnitudes and maximum fault slips of trapdoor faulting events and demonstrated their atypical earthquake scaling relationship; in other words, trapdoor faulting accompanies fault slip larger by an order of magnitude than that of similar-sized tectonic earthquakes. The source models presented in this study for the 2008 Kita-Ioto caldera earthquake also accommodate large fault slip ranging from 5 to 10 m at maximum, which is clearly larger than empirically predicted for M w 5.3 tectonic earthquakes; for example, the empirical maximum slip for an M w 5.3 earthquake is only ∼0.1 m, following Wells and Coppersmith (1994). This indicates the efficiency of intra-caldera fault systems in causing large slip, possibly due to their interaction with magma reservoirs and shallow source depths (Sandanbata et al., 2022).
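For the comparison with tectonic earthquakes, the empirical maximum displacement can be computed from a Wells and Coppersmith (1994)-type regression of the form log10(MD) = a + b M w. The coefficients below are the all-slip-type values as we recall them and should be checked against the original paper:

```python
def wc94_max_displacement(mw, a=-5.46, b=0.82):
    """Empirical maximum surface displacement [m] for magnitude mw.

    All-slip-type regression of Wells & Coppersmith (1994):
    log10(MD) = a + b * Mw  (coefficients assumed; verify against source).
    """
    return 10.0 ** (a + b * mw)

print(wc94_max_displacement(5.3))  # ~0.08 m, versus 5-10 m for trapdoor faulting
```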
Long-Period Seismic Waveforms
For validation from a different perspective, we consider long-period seismic excitation by the mechanical source model that we have obtained based on the tsunami data. For this analysis, we follow the methodology used in Sandanbata et al. (2022, 2023); the detailed procedures are described in Text S2. We here briefly summarize the method. We first approximate the trapdoor faulting model (Figure 3a) as a point-source moment tensor M T by summing partial moment tensors of the ring fault M F and the horizontal crack M C (Figures 8a-8c). We then compute long-period (80-200 s) seismic waveforms from the moment tensor M T by using the W-phase package (Duputel et al., 2012; Hayes et al., 2009; Kanamori & Rivera, 2008) and compare the synthetic waveforms with broad-band seismic data from F-net and global seismic networks. In Figure 8d and Figure S6 in Supporting Information S1, we show synthetic seismic waveforms from the moment tensor (Figure 8a), which reproduce the observed seismograms well. This supports that our trapdoor faulting model is plausible in terms of seismic excitation, as well as tsunami generation.
We note that the theoretical moment tensor obtained from our model (Figure 8a) differs from the GCMT solution; our theoretical solution has a seismic magnitude of M w 5.6 and is characterized by large double-couple and isotropic components, while the GCMT solution has a smaller magnitude of M w 5.3 and a dominant vertical-T CLVD component (Figure 1c). The difference can be explained by the very inefficient excitation of long-period seismic waves by specific types of shallow earthquake sources (Fukao et al., 2018; Sandanbata, Kanamori, et al., 2021). As demonstrated in Figure S7 in Supporting Information S1, major parts of the long-period seismic waves of the trapdoor faulting model arise from the limited moment tensor components that constitute a vertical-T CLVD moment tensor, equivalent to M w 5.2 (Figure S7b in Supporting Information S1), whereas the contributions from the horizontal crack M C, and from the M rθ and M rϕ components in M F, are negligibly small. Hence, the GCMT solution determined from the long-period seismic waveforms becomes a vertical-T CLVD moment tensor with a smaller magnitude than the theoretical moment tensor of our model. The gap between theoretical and observed moment tensors of trapdoor faulting is discussed in more detail by Sandanbata et al. (2022).
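The point-source approximation amounts to summing the two partial moment tensors and converting scalar moment to magnitude. The sketch below uses the standard M w definition; the Frobenius-norm scalar moment is one of several conventions for non-double-couple sources and is our assumption here:

```python
import numpy as np

def moment_magnitude(M):
    """Mw from a 3x3 moment tensor [N m].

    Assumes the convention M0 = ||M||_F / sqrt(2), one of several
    definitions for non-double-couple sources, and the standard
    Mw = (2/3) * (log10(M0) - 9.1).
    """
    m0 = np.linalg.norm(M) / np.sqrt(2.0)
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

# Point-source tensor of the trapdoor faulting model (Figure 8a):
#   M_T = M_F (ring fault) + M_C (horizontal crack)
# mw_total = moment_magnitude(M_F + M_C)   # ~5.6 for the main model
```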
Tsunami Generation by Other Kita-Ioto Caldera Earthquakes
We have surveyed OBP data from the station 52404 to determine whether there were any tsunami signals following the other Kita-Ioto caldera earthquakes (Figure S1 in Supporting Information S1), apart from that in 2008. Available data were found only for the event on 15 December 2015 (Figure 9a), for which a clear tsunami signal was recorded in the OBP data with a 15-s sampling interval (Figure 9b). On the other hand, we were unable to obtain OBP data to confirm tsunami signals from the earthquakes in 1992, 2010, 2017, and 2019. The station 52404 had not yet been deployed as of the 1992 event. For the other events, the bottom pressure recorders have been lost, preventing access to their 15-s sampling-interval data. Although low-sampling-rate data (15-min interval) sent via satellite transfer are available, they are not useful for confirming tsunami signals with dominant periods of 100-500 s.
We further investigate the tsunami signal from the 2015 earthquake in comparison with that from the 2008 event. Note that the station location (20.7722°N, 132.3375°E) as of 2008 had shifted about 20 km northward to a new location (20.9478°N, 132.3122°E) as of 2015. To examine the similarity between the two earthquake events, we simulate a tsunami waveform at the station location as of the 2015 event from a model similar to that of the 2008 event. We assume the model setting with a deeper crack at a depth of 4 km, based on that presented in Section 6.1.1 (Figure 6). Since the GCMT catalog reports a smaller seismic moment for the 2015 event, we adjust the source model by assuming a smaller pre-seismic overpressure of p e = 16.41 MPa, scaled down from the 22.26 MPa estimated for the 2008 event. Although the observed tsunami waveforms from the two earthquakes look different (compare the waveforms in Figures 6f and 9b), the trapdoor faulting model, based on the tsunami data from the 2008 earthquake, also explains that from the 2015 earthquake overall (Figure 9), simply by changing the station location. The non-negligible waveform difference at the two nearby locations can be attributed to the focusing/defocusing effect of complex bathymetry along the path (Figure S8 in Supporting Information S1; see the figure caption for details). This suggests that the 2015 earthquake was caused by trapdoor faulting, in a similar way to the 2008 earthquake. The similarity is further supported by our moment tensor analysis (see Text S1 in Supporting Information S1). Thus, we confirmed tsunami signals from both events. Therefore, we propose that the quasi-regularly repeating earthquakes with similar magnitudes and vertical-CLVD characters reflect the recurrence of trapdoor faulting in Kita-Ioto caldera, as observed in the three calderas of Sierra Negra, Sumisu, and Curtis, where trapdoor faulting events have recurred (Bell et al., 2021; Jónsson, 2009; Sandanbata et al., 2022, 2023).
Limitations of Our Mechanical Trapdoor Faulting Model
Our mechanical model of trapdoor faulting has been developed under some simplifications to focus on the essential mechanics. In this subsection, we discuss some factors simplified or ignored in our model, which may influence our results.
Stress Drop Ratio
The stress drop ratio during earthquakes has been controversial in general. Some studies reported complete or near-complete stress drop during tectonic earthquakes (Hasegawa et al., 2011; Ross et al., 2017), while the stress drop ratio can be partial and vary from earthquake to earthquake (Hardebeck & Okada, 2018). For intra-caldera earthquakes, several recent studies estimated stress drop during caldera collapses (Moyer et al., 2020; T. A. Wang et al., 2022), but our knowledge of the stress drop ratio in calderas is poor, and the ratio may vary from caldera to caldera.
We have avoided this problem by simply assuming complete stress drop as an extreme case (Equation 14, obtained by assuming α = 1 in Equation 12); this assumption can influence our estimation of the pre-seismic magma overpressure p e. Because d and Δw are determined by the stress drop on the fault, not directly by the pre-seismic magma overpressure (Equations 12 and 13), if a partial stress drop ratio α (0 < α < 1) is instead assumed in Equation 12, the size of trapdoor faulting due to the same pre-seismic magma overpressure becomes smaller in proportion to α, and so does the tsunami amplitude. In this case, magma overpressure larger by a factor of 1/α is required to explain the observed tsunami amplitude. Hence, the complete stress drop assumption provides a lower-limit estimate of the pre-seismic magma overpressure in the model setting. On the other hand, estimates of co-seismic parameters, such as the fault slip d and crack opening Δw, and the changes of magma pressure Δp and crack volume ΔV, do not change regardless of our assumption of the stress drop ratio α, since they are constrained from the tsunami amplitude.
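The effect of the stress drop ratio on the overpressure estimate is thus a simple rescaling; a sketch:

```python
def overpressure_for_partial_drop(p_e_complete, alpha):
    """Pre-seismic overpressure needed for a partial stress drop ratio.

    Given the estimate under complete stress drop (alpha = 1), matching
    the observed tsunami amplitude requires p_e / alpha, since the
    faulting size scales with alpha * p_e.
    """
    if not 0.0 < alpha <= 1.0:
        raise ValueError("alpha must lie in (0, 1]")
    return p_e_complete / alpha

print(overpressure_for_partial_drop(11.8, 0.5))  # 23.6 MPa for alpha = 0.5
```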
Pre-Slips and Earthquake Cycles
We have attributed the shear stress that generates trapdoor faulting to an inflating crack alone and neglected other factors that may also cause stress on the fault. First, different segments of the intra-caldera ring fault may have undergone microseismic or aseismic slip prior to the occurrence of M w ∼ 5 trapdoor faulting. In Sierra Negra caldera, high microseismicity was observed along the western segment of the intra-caldera fault, leading to trapdoor faulting on the southern segment before the eruption (Bell et al., 2021; Shreve & Delgado, 2023). Similarly, during the 2018 eruption and summit caldera collapse sequence of Kilauea, large collapse events accompanying M w ∼ 5 earthquakes were located on the southeastern and northwestern sides of the summit caldera, while high microseismicity was found on other segments (Lai et al., 2021; Shelly & Thelen, 2019). T. A. Wang et al. (2023) further suggested non-negligible effects of intra-caldera fault creep in the inter-collapse period on large collapses of Kilauea. Such high microseismicity or creep on other fault segments, adjacent to the ruptured segment of trapdoor faulting, may impose additional shear stress.
Additionally, the recurrence of trapdoor faulting can play an important role in the stress accumulation on the fault. Similar earthquakes have repeated near Kita-Ioto caldera (Figure S1 in Supporting Information S1), strongly suggesting recurrence of trapdoor faulting, as supported by the tsunami signal from the 2015 earthquake (see Section 6.4). If similar earthquakes repeat on the same segment of the fault and the stress drop is only partial, the remaining stress may influence subsequent trapdoor faulting events. Also, assuming that the earthquakes occur on different segments of the ring fault, an event on one segment increases the shear stress on its adjacent segment. Thus, in the presence of additional shear stress from pre-slip or creep on different segments or from previous trapdoor faulting events, the ring fault may be ruptured by smaller pre-seismic magma overpressure. For better understanding of the physics of trapdoor faulting, further studies of the earthquake cycle in calderas are crucial.
Other Factors
Other factors simplified in our model, such as the magma reservoir geometry and the viscoelastic and heterogeneous rheological properties of the host rock, may influence the mechanics of trapdoor faulting. While we have modeled the magma reservoir simply as an infinitely thin crack that lies horizontally, the reservoir should have a finite thickness, and its geometry may not be flat, as estimated for that beneath Sierra Negra caldera (Gregg et al., 2022). The host rock has also been simplified as a homogeneous elastic medium, but viscoelastic effects and the thermal dependency of the rheological properties may impact the deformation and the stress and strain states in hot volcanic environments. For example, Newman et al. (2006) showed that the viscoelastic effect significantly reduces the magma overpressure estimated from surface deformation data at Long Valley caldera, compared to that based on a purely elastic model. The viscoelastic effect can be more important in the stress accumulation process, particularly during a long-term caldera inflation phase. Additionally, the temperature dependency of the host-rock rheology is shown to have an impact on the stress accumulation process in the host rock, affecting estimates of the timing of host-rock failure and eruption (Cabaniss et al., 2020; Zhan & Gregg, 2019). For further studies, it would be critical to incorporate these effects on the deformation and the stress-strain accumulation in the host rock, as done by previous studies employing the FEM modeling approach (e.g., Gregg et al., 2012; Le Mével et al., 2016; Zhan & Gregg, 2019).
Conclusions
We have presented a new mechanical model of trapdoor faulting that quantitatively links pre-seismic magma overpressure in a sill-like reservoir to the size of trapdoor faulting. We applied this model to a tsunami-generating submarine earthquake in 2008 around Kita-Ioto caldera to quantify the caldera's mechanical state. Our trapdoor faulting model explains well the tsunami signal recorded by a single distant ocean bottom pressure gauge, as well as regional long-period seismic waveforms. Although we acknowledge that other possible mechanisms (e.g., a fluid-flow or volumetric-change source in the magma reservoir) are not tested in this study, and that there is no direct observation of an active fault system in the caldera, our results suggest the plausibility of our hypothesis of submarine trapdoor faulting in Kita-Ioto caldera. This is also supported by the similarity to trapdoor faulting events found recently in better-investigated submarine calderas (Sumisu and Curtis calderas). Repeating vertical-T CLVD earthquakes and another tsunami signal following the 2015 earthquake suggest the recurrence of trapdoor faulting in Kita-Ioto caldera.
Our mechanical models enable us to infer the pre-seismic magma overpressure beneath the submarine caldera through quantification of the trapdoor faulting size. In an example case with a ring fault with an arc length of 90° and a horizontal crack at a depth of 2 km in the crust, as in the main model setting, we estimate that pre-seismic magma overpressure over ∼10 MPa caused the trapdoor faulting event and that the co-seismic magma pressure dropped by ∼15%. Yet, since uncertainty in the source geometry remains due to our limited data set, or a single tsunami record, these estimated values related to magma overpressure vary by a factor of one half to two, depending on the model setting; the pre-seismic magma overpressure ranges approximately from 5 to 20 MPa, and the co-seismic overpressure drop ratio from 10% to 40%. For example, a longer ring fault with an arc angle of 180° requires less magma overpressure to generate a similar-sized tsunami but more effectively reduces the overpressure; on the other hand, larger magma overpressure is estimated when the source has a crack at a greater depth of 4 km. The significant variations suggest that magmatic systems beneath calderas can be strongly influenced by the source properties of trapdoor faulting. Therefore, it is critical to study trapdoor faulting in active calderas and its source properties, which would help us obtain more robust estimates of magma overpressure or stress states, providing rare opportunities to achieve a comprehensive understanding of how inflating calderas behave in the ocean.
Figure 1 .
Figure 1. Vertical-T CLVD earthquakes near Kita-Ioto caldera. (a) Map of the southern ocean of Japan. Orange triangle represents the ocean-bottom-pressure (OBP) gauge of Deep-ocean Assessment and Reporting of Tsunamis (DART) 52404. (b) Map of the region near Kita-Ioto Island. (c) Bathymetry of the region near Kita-Ioto caldera, a submarine caldera with a size of 12 km × 8 km, near Kita-Ioto Island. Funka Asane is the summit of a cone structure within the caldera rim. Red circle represents the location of the 2008 Kita-Ioto earthquake with its moment tensor, whereas black circles represent locations of similar events; the earthquake information is from the Global Centroid Moment Tensor catalog (Ekström et al., 2012). The focal mechanism is shown as a projection of the lower focal hemisphere, and the orientation of the best double-couple solution is shown by thin lines. (d) Tsunami waveform recorded at the OBP gauge of DART 52404. Dashed gray line represents the tsunami arrival time estimated using the Geoware Tsunami Travel Time software (Geoware, 2011). Solid gray line represents the data length for calculating the root-mean-square amplitudes (Equation 15). This waveform is obtained by removing the tidal component from, and applying the bandpass (2-10 mHz) Butterworth filter to, the raw OBP data for 12,000 s after the earthquake origin time. Note that oscillations of OBP changes of a few mm H 2 O are recorded after the estimated arrival time, indicating tsunami signals.
Figure 2 .
Figure 2. Source structure for the mechanical model of trapdoor faulting, viewed from the top (left) and the southeast (right). Gray lines are plotted at every 200 m of water depth.
Figure 3 .
Figure 3. Mechanical trapdoor faulting model of the 2008 Kita-Ioto earthquake. (a) Mechanical model viewed from the southeast, represented by dip-slip d of the ring fault and vertical deformation Δw of the crack. Red color on the ring fault represents reverse slip, while red and blue colors on the horizontal crack represent vertical opening and closure, respectively. (b, c) Spatial distributions on (b) the ring fault and (c) the horizontal crack. In (b), the fault is viewed from the caldera center, and the azimuth from the caldera center to an arbitrary point on the fault is measured clockwise from the midpoint of the fault. In (c), the dashed line represents the profile shown in Figure 5. (d, e) Vertical displacement of the seafloor (d) and sea surface (e) due to the model. Red and blue colors represent uplift and subsidence, respectively, with white lines plotted every 1.0 m. Black lines represent water depth every 100 m. (f) Comparison between a synthetic tsunami waveform from the model (red line) and the observed ocean-bottom-pressure waveform (blue line) at the station 52404. Solid gray line represents the data length for calculating the root-mean-square amplitudes (Equation 15). (g, h) Spectrograms of the (g) synthetic and (h) observed waveforms. In (f-h), the black dashed line represents the tsunami arrival time.
Figure 4 .
Figure 4. Pre-seismic state of the fault-crack system just before trapdoor faulting. (a) Distribution of the crack opening, w pre. (b) Critical shear stress along the dip-slip direction on the ring fault, τ pre. (c) Normal stress on the ring fault induced by the critically opening crack. In (b, c), blue and red colors represent compressive and extensional normal stress, respectively. (d) Total normal stress on the ring fault, σ n0 = σ crack + σ lit + σ sea; here, σ lit = ρ h g z, where ρ h, z, and g are the host rock density (2,600 kg/m 3), the depth of each mesh, and the gravitational acceleration (9.81 m/s 2), respectively, and σ sea = ρ s g H, where ρ s and H are the seawater density and the approximated thickness of the overlying seawater layer (1,020 kg/m 3 and 400 m), respectively.
Figure 5 .
Figure 5. Displacement and shear-strain energy density in the host rock, along the SE-NW profile shown in Figure 3c. (a-c) Displacement relative to the reference state (p = p 0): (a) the pre-seismic state just before trapdoor faulting, (b) the co-seismic change due to trapdoor faulting, and (c) the post-seismic state after trapdoor faulting. (d) Vertical seafloor displacement in each state shown in panels (a-c). (e-g) Shear-strain energy density W: (e) the pre-seismic state, (f) the co-seismic change, and (g) the post-seismic state. Color represents shear-strain energy density, and bars represent principal axes of compression projected on the profile, whose thickness reflects half the differential stress (σ 1 − σ 3)/2, where σ 1 and σ 3 are the maximum and minimum stress, respectively.
Figure 6 .
Figure 6. Same as Figure 3, but for a model with a horizontal crack at a depth of 4 km. See details in Section 6.1.1.
Figure 7. Same as Figure 3, but for a model with a longer ring fault with an arc angle of 180°. See Section 6.1.2 for details.
... 10^18 N m), we adjust the source model assuming a smaller pre-seismic overpressure of p_e = 16.41 MPa (= 22. ...
Figure 8. Long-period (80-200 s) seismic waveform modeling. (a) Moment tensor of the model, composed of partial moment tensors of (b) the ring fault and (c) the horizontal crack. (d) Comparison between synthetic waveforms (red lines) and the observations (black lines) at representative stations. In the inset figures, a large red circle and a blue star represent the station and the earthquake centroid, respectively. At the top of each panel, the network name, station name, record component, station azimuth, and epicentral distance are shown. Note that waveform comparisons for all the tested seismic records are shown in Figure S6 in Supporting Information S1.
Figure 9. Tsunami waveform data from the 2015 earthquake. (a) The Global Centroid Moment Tensor solution of the Kita-Ioto caldera earthquake on 15 December 2015. (b) Comparison between a synthetic tsunami waveform from a source model adjusted from the 2008 earthquake model (red line; see Section 6.4) and the observed ocean-bottom-pressure (OBP) waveform (blue line) at station 52404. (c, d) Spectrograms of the synthetic waveform (c) and the OBP waveform (d). In (b-d), the black dashed line represents the tsunami arrival time. Note that the location of station 52404 at the time of the 2015 earthquake was shifted by ~20 km southward from its location at the time of the 2008 earthquake (see text and Figure S8 in Supporting Information S1).
... and ... are the root-mean-square (RMS) amplitudes of the synthetic and observed waveforms, respectively (in units of mm H2O). Supposing that the tsunami signal from the 2008 earthquake is that recorded in the OBP data (denoted by ...), we can estimate the pre-seismic magma overpressure p_e from:
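The passage above appears to define p_e through the ratio of the RMS amplitudes of the observed and synthetic records. Assuming, as the wording suggests (the original symbols were lost), that the tsunami amplitude scales linearly with the source overpressure, a minimal sketch of such an estimate would be:

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    """Root-mean-square amplitude over the analysis window."""
    return float(np.sqrt(np.mean(np.square(x))))

# Hypothetical inputs: filtered OBP records over the RMS window (mm H2O).
obs = np.loadtxt("obp_observed_window.txt")
syn = np.loadtxt("obp_synthetic_window.txt")

P_MODEL = 20.0  # overpressure assumed for the synthetic, MPa (placeholder value)
p_e = P_MODEL * rms(obs) / rms(syn)  # linear amplitude-overpressure scaling (assumption)
print(f"estimated pre-seismic overpressure: {p_e:.2f} MPa")
```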
In-Flow Heterogeneous Triplet–Triplet Annihilation Upconversion
Photon upconversion based on triplet–triplet annihilation (TTA-UC) is an attractive wavelength-conversion strategy with increasing use in organic synthesis in the homogeneous phase; however, this technology has not yet been implemented with canonical solid catalysts. Herein, a BOPHY dye covalently anchored on silica is successfully used as a sensitizer in a TTA system that efficiently catalyzes Mizoroki–Heck coupling reactions. This procedure has enabled the implementation of in-flow reaction conditions for the synthesis of a variety of aromatic compounds, and mechanistic proof has been obtained by means of transient absorption spectroscopy.
The interest in multiphoton photoredox catalysis 1−6 has grown considerably in recent years, as it is a powerful tool to circumvent the thermodynamic and redox limitations of conventional photoredox catalysts. 7,8−11 It comprises a bimolecular system (sensitizer or donor + annihilator or acceptor); even though the annihilator is not directly excited, formation of its lowest triplet excited state is achieved through triplet−triplet energy transfer (TTEnT) from the primary sensitizer. Subsequently, the TTA event produces the fluorescent singlet excited state of the annihilator, which emits at a higher energy than that employed for excitation. This singlet state can then engage in electron/energy transfer processes, allowing the activation of substrates for organic synthetic purposes. 3,12,13 Of note, a more energetic source such as UV light would typically be required to generate the annihilator singlet state directly, which makes the TTA approach attractive and advantageous, since much lower-energy input radiation can be employed.
On the other hand, flow chemistry is considered an important tool to overcome typical limitations of batch synthesis such as slow heat and mass transfer, offering the possibility to shorten reaction times and, in some cases, to increase selectivities as well as to enable scale-up; in other words, to enable process intensification in a tightly controlled environment. Based on these advantages, continuous-flow photocatalysis represents an important milestone in the development of milder and more efficient synthetic processes to create new C−C and/or C−X bonds (X = O, N, S, ...). 14 In this way, continuous-flow photocatalysis has been successfully applied in the synthesis of organic pharmaceuticals, 15 the photodegradation of organic compounds, 16 and photoredox catalysis. 17 Despite the fact that different TTA upconversion systems have been developed, it appears surprising that exclusively homogeneous phases in batch conditions have been investigated so far, whereas the application of in-flow or heterogeneous TTA upconversion to photochemical transformations has not been targeted yet. We recently performed a coupling reaction using a photocatalytic TTA system under flow settings as a preliminary result. 18 In that study, although the desired products were successfully obtained, yields did not improve as much as those for the batch reaction, presumably due to decomposition of the sensitizer (a BOPHY dye).
A possible strategy to solve this issue might be the immobilization of the sensitizer, not only to evade its degradation but also to achieve its successful recovery and reuse. 19 In this context, silica-based materials are typical supports because of their facile surface functionalization, low cost, and inertness. 20 The construction of a silica shell covalently functionalized with a BOPHY dye could thus provide suitable conditions to accomplish chemical transformations via in-flow heterogeneous TTA upconversion. As a proof of concept, a C−C coupling reaction photocatalyzed by a heterogeneous TTA system under continuous-flow conditions is herein reported for the first time, as far as we know (Figure 1; a very recent review confirms the novelty of this approach). 21 Since the TTA process relies on molecular collisions between the triplet acceptors, high mobility of the acceptor is desired. Therefore, covalent bonds are used to immobilize the BOPHY dye (the sensitizer) on silica, rather than 9,10-diphenylanthracene (DPA, the acceptor).
The hybrid silica@BOPHY material was synthesized according to the following procedure (Scheme 1). We first performed the synthesis of the diiodoBOPHY-like derivative BOPHY-1 by following a previously reported procedure, resulting in an orange solid. 22 Then, we proceeded with the synthesis of the iodoBOPHY-like derivative BOPHY-2 bearing a reactive alkene group. In this respect, substitution of one iodine atom by a styrene moiety was accomplished through a metal-catalyzed reaction, yielding 84% of the desired product. Finally, BOPHY-2 was bonded to 3-mercaptopropyl-functionalized silica gel to form the corresponding hybrid material silica@BOPHY-2. Elemental analysis revealed the percentage of organic matter anchored to the solid, which was found to be 5.3 wt % of the solid photocatalyst. The molecular structure of silica@BOPHY-2 was characterized by Fourier-transform infrared spectroscopy (FTIR), magic-angle-spinning solid-state nuclear magnetic resonance (MAS-SS-NMR), and steady-state absorption (see details in the Supporting Information (SI)). From the UV−vis spectra (Figure 2), it was clear that the absorption band in the visible region with a maximum at 460 nm for silica@BOPHY-2 in the solid phase matched perfectly that observed for BOPHY-2 in solution.
Once the BOPHY dye was immobilized on silica, the next step was to check whether our hybrid silica@BOPHY-2 material could serve in a TTA system for photocatalytic purposes under in-flow heterogeneous conditions. Based on previous successful results on the photocatalyzed Mizoroki−Heck reaction for triarylethylene fabrication using TTA−UC technology, 23 we decided to attempt these challenging C−C couplings by in-flow heterogeneous TTA upconversion (Scheme 2). Here, we placed an anaerobic solution of 4-bromobenzaldehyde, 1,1-diphenylethylene, and DPA in a glass bottle. It was delivered to a Pyrex glass holder containing the hybrid silica@BOPHY-2 material by a Fisher continuous-flow pump at 100 rpm through Tygon tubing (ID = 1.6 mm). A blue laser pointer (λexc = 445 ± 10 nm) was directed at the hybrid material. The outlet stream was collected again in the glass bottle so that the photoreaction could evolve continuously (setup photograph in the SI).
Optimal conditions involved low reagent loadings and visible-light irradiation with the low-cost laser pointer (Table S1 in the SI). The result with the hybrid material was very satisfactory, yielding 60% of the isolated product together with a reaction selectivity of 100%. Control experiments clearly demonstrated that the presence of DPA was essential for this photochemical procedure (see Table S1, entries 5−7 in the SI). It is important to note that the same reaction in the homogeneous phase under similar conditions (but in this case using 1.2 mol % of BOPHY-2 in solution; see procedure A in the SI), either in batch or flow conditions, gave only 21% and 19% of the desired product, respectively, validating the proposed methodology as a process intensification. More importantly, silica@BOPHY-2 retained its photocatalytic activity over the 3 reuse cycles tested (see Table S1, entry 3 in the SI). The weak reduction of the silica@BOPHY-2 activity could be explained in terms of adsorption, whereby some amount of the starting material would be adsorbed onto the heterogeneous catalyst in the first run, somehow affecting the next cycles. 24

Scheme 2. In-Flow Heterogeneous Photocatalytic Mizoroki−Heck Reaction

These results showcased a metal-free, in-flow catalytic system for the Mizoroki−Heck reaction, which is not easy to find in the literature. 25,26 Then, we tried to couple aryl chlorides, which are much more difficult to engage in Mizoroki−Heck reactions than bromides and iodides, even for palladium catalysts. 27,28 Indeed, thiophene chloride derivatives coupled well under these metal-free, heterogeneous photocatalytic reaction conditions with several diphenylethylenes as starting materials (Scheme 3). In all cases, the selectivity of the process was found to be 100%, since observation of the reduced product was negligible.
To shed light on the mechanistic aspects of the above-mentioned process, which involves a photoinduced electron transfer, transient absorption spectroscopy (TAS) was carried out. Dye BOPHY-2 was chosen as a suitable sensitizer for the bimolecular TTA system, since its structure was analogous to that used under the in-flow heterogeneous conditions, whereas DPA was utilized as the emitter. At the first stage, a solution of BOPHY-2 in a deaerated N,N-dimethylacetamide (DMA) and acetonitrile (ACN) mixture was selectively excited (λexc = 450 nm) by TAS in the μs domain. The T-T absorption band of BOPHY-2 (3BOPHY-2*) was observed at 700−750 nm (Figure S3 in the SI), in agreement with literature data, 29 with a triplet lifetime (τT) of 14 μs that fit well to a monoexponential decay (Figure 3A, black line). A gradual decrease of the 3BOPHY-2* triplet lifetime was observed in the presence of increasing amounts of DPA (Figure S6). Stern−Volmer analysis (Figure 3B) revealed a quenching rate constant of 4.6 × 10^9 M^-1 s^-1, indicating an efficient triplet−triplet energy transfer (TTEnT) process. To detect the resulting DPA delayed fluorescence (1DPA*) under our conditions, a deaerated DMA/ACN solution of a BOPHY-2/DPA mixture was submitted to TAS with excitation at 450 nm. Thus, the upconverted 1DPA* was observed, displaying the typical emission band (Figure 3C, black line). These results agreed with previously reported data for similar systems. 23

Quenching studies demonstrated the interaction between the high-energy delayed fluorescence 1DPA* and the aryl bromides through a single electron transfer (SET) process (Figure 3C). A gradual reduction of 1DPA* was clearly observed in the presence of increasing amounts of 4-bromobenzaldehyde. From the Stern−Volmer correlation, where KSV was estimated as 314 M^-1 (Figure 3C, inset) and the DPA singlet lifetime was τF = 6.96 ns, 8 the quenching rate constant (kq) was found to be 4.5 × 10^10 M^-1 s^-1, indicating that SET occurred at a diffusion-controlled rate. Besides, the triplet 3BOPHY-2* lifetime in the BOPHY-2/DPA system was not affected by the presence of the corresponding quencher (Figure 3A, blue and red lines), which supported the fact that dye BOPHY-2 was not acting as an activator of the reaction, discarding any SET from the excited BOPHY-2.

Scheme 3. In-Flow Heterogeneous Photocatalytic C−C Couplings

Figure 3. (A) Decays monitored at 700 nm after 450 nm laser excitation of BOPHY-2 (0.01 mM) in anaerobic ACN/DMA solution without (black) or with 0.1 mM DPA (blue) or with 0.1 mM DPA plus 12 mM 4-bromobenzaldehyde (red). (B) Stern−Volmer analysis used for the calculation of the corresponding quenching rate constant. (C) Emission spectra (λexc = 450 nm) of a BOPHY-2 (0.01 mM) and DPA (0.1 mM) mixture in anaerobic ACN/DMA solution recorded 2 μs after the laser pulse, in the presence of increasing amounts of 4-bromobenzaldehyde. Inset: Stern−Volmer plot to obtain kq(S1); experimental errors were lower than 5% of the obtained results.
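The quoted rate constants follow from standard Stern−Volmer arithmetic. The short sketch below reproduces them from the values reported in the text; variable names, and the illustrative quenched lifetime in the second example, are ours.

```python
# Singlet quenching of 1DPA* by 4-bromobenzaldehyde: k_q = K_SV / tau_F.
K_SV = 314.0     # Stern-Volmer constant, M^-1 (from the text)
TAU_F = 6.96e-9  # DPA fluorescence lifetime, s (from the text)
print(f"k_q(1DPA*) = {K_SV / TAU_F:.1e} M^-1 s^-1")  # ~4.5e10, diffusion-controlled

# Triplet quenching from lifetime data: tau0/tau = 1 + k_q * tau0 * [Q].
def kq_from_lifetimes(tau0: float, tau: float, conc: float) -> float:
    return (tau0 / tau - 1.0) / (tau0 * conc)

# e.g. a 3BOPHY-2* lifetime dropping from 14 us to ~1.9 us at 0.1 mM DPA gives
# a value of the order reported in the text (~4.6e9 M^-1 s^-1).
print(f"k_q(3BOPHY-2*) = {kq_from_lifetimes(14e-6, 1.9e-6, 1e-4):.1e} M^-1 s^-1")
```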
We propose a plausible mechanism, outlined in Scheme 4. Following the typical TTA-UC mechanism, BOPHY-2 is first photoexcited to its singlet excited state (1BOPHY-2*), followed by intersystem crossing (ISC) to the triplet excited state (3BOPHY-2*). Rapid triplet−triplet energy transfer (TTEnT) then occurs to quantitatively produce 3DPA*. Triplet−triplet annihilation (TTA) between two 3DPA* generates the upconverted 1DPA* fluorescence. This highly energetic species 1DPA* activates the substrate by SET, leading to the radical ion pair Ar−Br−• and DPA+•. 30 Fast scission of Ar−Br−• provides the aryl radical (Ar•), which is successfully trapped by the corresponding nucleophile Nu, giving rise to the radical intermediate Int-a. To restore DPA (see Figure S4), SET from Int-a to DPA+• occurs and the corresponding cationic intermediate is formed, which evolves to the final product after deprotonation.
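To see how such a cascade behaves kinetically, a toy rate-equation sketch of the sensitizer and annihilator triplet populations is given below. The TTEnT rate constant and sensitizer triplet lifetime are taken from the text; the remaining rate constants and concentrations are illustrative assumptions, not fitted values from this work.

```python
from scipy.integrate import solve_ivp

K_TTENT = 4.6e9  # 3BOPHY-2* + DPA energy transfer, M^-1 s^-1 (from the text)
TAU_3S = 14e-6   # 3BOPHY-2* lifetime, s (from the text)
K_3A = 1e3       # 3DPA* unimolecular decay, s^-1 (assumed)
K_TTA = 1e9      # 3DPA* + 3DPA* annihilation, M^-1 s^-1 (assumed)
DPA = 1e-4       # ground-state DPA concentration, M (assumed constant)

def rates(t, y):
    s3, a3 = y  # [3BOPHY-2*], [3DPA*] in M
    ds3 = -s3 / TAU_3S - K_TTENT * s3 * DPA
    da3 = K_TTENT * s3 * DPA - K_3A * a3 - 2.0 * K_TTA * a3**2
    return [ds3, da3]

# Initial triplet sensitizer population of 1 uM after the laser pulse (assumed).
sol = solve_ivp(rates, (0.0, 1e-4), [1e-6, 0.0], max_step=1e-7)
upconversion_rate = K_TTA * sol.y[1] ** 2  # 1DPA* generation is second order in 3DPA*
```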
Summarizing, we have developed a novel procedure based on a heterogeneous TTA system as a photocatalyst for the construction of new C−C bonds under flow conditions. A BOPHY dye was immobilized onto a silica support (silica@BOPHY) and acted as the sensitizer in the bimolecular TTA system. Then, a continuous-flow solution containing the other partner of the TTA system and the corresponding reactants was delivered through the hybrid silica@BOPHY material, which was submitted to visible-light irradiation. Product analysis revealed the formation of the desired products. Mechanistic studies by TAS indicated that the most plausible mechanism involves a SET process from the TTA system to the aryl halides. These results open the way to the design of new photocatalytic processes based on heterogeneous TTA systems.
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/
Developing the content of two behavioural interventions: Using theory-based interventions to promote GP management of upper respiratory tract infection without prescribing antibiotics #1
Background Evidence shows that antibiotics have limited effectiveness in the management of upper respiratory tract infection (URTI) yet GPs continue to prescribe antibiotics. Implementation research does not currently provide a strong evidence base to guide the choice of interventions to promote the uptake of such evidence-based practice by health professionals. While systematic reviews demonstrate that interventions to change clinical practice can be effective, heterogeneity between studies hinders generalisation to routine practice. Psychological models of behaviour change that have been used successfully to predict variation in behaviour in the general population can also predict the clinical behaviour of healthcare professionals. The purpose of this study was to design two theoretically-based interventions to promote the management of upper respiratory tract infection (URTI) without prescribing antibiotics. Method Interventions were developed using a systematic, empirically informed approach in which we: selected theoretical frameworks; identified modifiable behavioural antecedents that predicted GPs' intended and actual management of URTI; mapped these target antecedents onto evidence-based behaviour change techniques; and operationalised intervention components in a format suitable for delivery by postal questionnaire. Results We identified two psychological constructs that predicted GP management of URTI: "Self-efficacy," representing belief in one's capabilities, and "Anticipated consequences," representing beliefs about the consequences of one's actions. Behavioural techniques known to be effective in changing these beliefs were used in the design of two paper-based, interactive interventions. Intervention 1 targeted self-efficacy and required GPs to consider progressively more difficult situations in a "graded task" and to develop an "action plan" of what to do when next presented with one of these situations. Intervention 2 targeted anticipated consequences and required GPs to respond to a "persuasive communication" containing a series of pictures representing the consequences of managing URTI with and without antibiotics. Conclusion It is feasible to systematically develop theoretically-based interventions to change professional practice. Two interventions were designed that differentially target generalisable constructs predictive of GP management of URTI. Our detailed and scientific rationale for the choice and design of our interventions will provide a basis for understanding any effects identified in their evaluation. Trial registration Clinicaltrials.gov NCT00376142
Background
Despite the considerable resources devoted to promoting the use of new evidence by clinicians, translating clinical and health services research findings into routine clinical practice is an unpredictable and often slow process. This phenomenon is apparent across different healthcare settings, specialties and countries, including the UK, [1][2][3][4][5] other parts of Europe [4] and the USA [5,6], with obvious implications for the quality of patient care.
Many systematic reviews of implementation interventions show that various interventions (e.g. reminder systems, interactive educational sessions) can be effective in changing health care professionals' clinical behaviour [7][8][9][10][11] but a consistent message is that these are effective only some and not all of the time. Why interventions have such variable success is difficult to establish as few of the studies reviewed to date provide an underlying theoretical basis to explain how or why an intervention might work [12]. Without such understanding of an intervention's "active ingredients" and what factors modify its effectiveness, there is little to guide the choice of intervention other than intuition or the knowledge that a similar intervention has been empirically successful in a previous study [9].
Interventions to implement evidence-based practice are often complex. The framework for the investigation of complex interventions suggested by the Medical Research Council (MRC) [13] illustrates the current situation with implementation research (Table 1). To date, most implementation research studies aiming to change clinicians' behaviour have involved trials at the exploratory or definitive randomised controlled trial (RCT) stages of this framework, with few published studies providing evidence of preceding theoretical or modelling research. We aimed to address this gap in the current evidence-base through the development of a systematic intervention modelling process (IMP) for intervention development and evaluation that corresponds to each of the theoretical, modelling and experimental phases of the MRC Framework [14].
Incorporating research findings into clinical practice almost invariably necessitates a change in clinical behaviour. Based on the idea that clinical behaviour is a form of human behaviour, we applied psychological models of behaviour change that have been used to predict variation in behaviour in the general population to the clinical behaviour of healthcare professionals. There is growing evidence to support the use of such theories in this way [15][16][17]. Psychological theory also underpins many behaviour change techniques for which there is evidence of effectiveness in changing the behaviour in other settings. Knowledge of the target behaviour or its cognitive antecedents is used to guide the selection of relevant interventions. For example, if individuals' beliefs about their capabilities relevant to a given task predict their behaviour, then their behaviour may be changed if they work through a series of tasks graded in order of increasing difficulty. This technique has been demonstrated to strengthen beliefs about capabilities.
This paper describes the process we used to design two theory-based interventions to promote the evidence-based management of upper respiratory tract infection, by GPs, without prescribing antibiotics. To enable experimental modelling and evaluation of the interventions prior to their use in a definitive RCT (which also forms part of the IMP), the interventions were developed in the context of an "intervention modelling experiment" (IME) [16]. In an IME, key elements of an intervention are manipulated in a manner that simulates the "real world" as much as possible, but the measured outcome is an interim, or proxy, endpoint that represents the behaviour, rather than the actual behaviour itself. The evaluation of the interventions described here is reported in our partner paper [18].
Methods
The process for the choice and development of the interventions was through a series of systematic steps, summarised in Table 2.
Specification of the target behaviour/s
The consultation for upper respiratory tract infection (URTI) is one of the most frequent in general practice [19]. Research evidence has shown that antibiotics are of limited effectiveness in treating URTI [20][21][22]. However, GPs continue to manage patients with uncomplicated URTI by prescribing antibiotics [23,24]. In specifying our target behaviour, we used the "TACT" principle, a systematic way of defining behaviour in terms of its Target, Action, Context and Time [25]. For the behaviour, "managing patients presenting with uncomplicated URTI without prescribing antibiotics", the target is the patient, the action is managing without prescribing an antibiotic, the context is the clinical condition (uncomplicated URTI) and the time is during a primary care consultation.
Selection of the theoretical framework
Our choice of theoretical framework was guided by the findings of a previous study by the authors which explored the utility of a range of psychological models in identifying provider-level factors predictive of clinical behaviour [26]. This study found that three theories included constructs that predicted GPs' prescribing behaviour for URTI: Theory of Planned Behaviour (TPB) [27], Social Cognitive Theory (SCT) [28,29] and Operant Learning Theory (OLT) [30]. These theories explain behaviour in terms of factors amenable to change (e.g. beliefs, perceived external constraints); and they include non-volitional components that acknowledge that individuals do not always have complete control over their actions. They have also been rigorously evaluated in other settings, providing a sound scientific basis for the development of interventions.
According to the TPB, specific behaviours can be predicted by the strength of an individual's intention to enact that behaviour. Intentions are thus the precursors of behaviour and the stronger the intention, the more likely it is that the behaviour will occur. Intention is, in turn, influenced by the individual's attitudes towards the behaviour; their perceptions of social pressure to perform the behaviour ("subjective norms"); and the extent to which they feel able to perform the behaviour ("perceived behavioural control"). SCT considers self-efficacy (confidence that one is able to perform the behaviour), outcome expectancy (an individual's estimate that a given behaviour will lead to certain outcomes), risk perception and individuals' goals in explaining behaviour, including proximal goals (such as intentions). OLT proposes that behaviours that have contingent consequences for the individual are more likely to be repeated when the individual's "anticipated consequences" of their behaviour are favourable, and will become less frequent if their anticipated consequences are less positive. OLT also proposes that behaviours performed frequently in the same situation are likely to become habitual (automatic) [30].
These theoretical frameworks allow the identification of potential causal pathways underlying behaviour change (i.e. evaluation of thought processes that explain behaviour change). Within any subsequent evaluation of the impact of the intervention being developed, the measurement of potential mediators of behaviour change targeted by an intervention allows an understanding of the causal mechanisms involved in the change. This is one part of a "process evaluation".
Identification of constructs to target for change
In addition to guiding our choice of theoretical framework, we also used the findings of Eccles et al. [26], to identify which constructs to target with our interventions.
In that study, a random sample of GPs from Scotland was surveyed about their views and experiences of managing patients with uncomplicated URTI. Theory-based cognitions were measured by a single postal questionnaire survey during a 12 month period. Two interim outcome measures of stated intention and behavioural simulation were collected at the same time as the predictor measures. GPs' simulated behaviour was elicited using five clinical scenarios describing patients presenting in primary care with symptoms of an URTI. GPs were asked to decide whether or not they would prescribe an antibiotic, and decisions in favour of prescribing an antibiotic were summed to create a total score out of a possible maximum of five. Data on actual prescribing behaviour were also collected from routinely available prescribing data for the same 12 month period. Analyses explored the predictive value of theory-based cognitions in explaining variance in the behavioural data (Table 3).

In considering the most important constructs to target in this modelling experiment we selected constructs that were significantly correlated with GPs' actual behaviour (rates of prescribing antibiotics). There were five candidate psychological constructs: Intention (TPB); risk perception and self-efficacy (SCT); and anticipated consequences and evidence of habitual behaviour (OLT) (Table 3). Scores on these constructs were also significantly correlated with behavioural simulation scores. As Intention was also to be a dependent variable in the modelling experiment, it was not appropriate to directly target this construct. Habitual behaviour was also not selected as a target variable, as it is not a causal determinant but rather an attribute of behaviour, and is modified indirectly by targeting other causal aspects of behaviour. The remaining three constructs (self-efficacy, risk perception and anticipated consequences) were the theoretical constructs chosen as targets for our interventions.

Table 2. Steps in the development of the interventions:
1. Specify the target behaviour(s).
2. Select theoretical framework (for empirical investigation at baseline and to assess process).
3. Conduct a predictive study with a (preferably representative) sample drawn from the population of interest, to identify modifiable variables that predict the target behaviour(s) and their means/distributions. Based on the findings of this study, choose which variables to target. These variables are the proposed mediators of behaviour change.
4. Map targeted variables onto behaviour change techniques and select techniques that (a) are likely to change the mediator variables and (b) are feasible to operationalise.
5. Choose appropriate method(s) of delivery of the techniques.
6. Operationalise intervention components (techniques) in appropriate combination and order.
Note: As part of an iterative process, results from the implementation modelling experiment will provide information for feedback loops that address earlier points in this sequence. This feedback loop permits change, development or refinement of the intervention.

Table 3 footnotes: [25] * TPB attitudes and PBC constructs can be measured "indirectly" by asking individuals to report their specific beliefs, or directly by asking individuals to report at a more general level. ** The SCT risk perception questions were also used as a measure of OLT anticipated consequences. CB = Control Belief; BB = Behavioural Belief.
Mapping targeted constructs onto behaviour change techniques
In choosing the most appropriate behaviour change techniques for the target constructs, we first mapped the three target constructs onto the theoretical construct domains identified by Michie et al. (2005) [31] (Table 4). We then used a recently developed tool which further maps these theoretical construct domains onto behaviour change techniques [32]. This tool documents expert consensus on the use of 35 behaviour change techniques as appropriate interventions to change each construct domain. The techniques are supported by evidence of their effectiveness [33].
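As an illustration only (this representation is ours, not the authors'), the resulting two-step mapping can be captured as a simple lookup structure; the technique lists are those named in the Results section below.

```python
# Hypothetical encoding of the construct -> domain -> technique mapping.
CONSTRUCT_TO_DOMAIN = {
    "self-efficacy (SCT)": "beliefs about capabilities",
    "risk perception (SCT)": "beliefs about consequences",
    "anticipated consequences (OLT)": "beliefs about consequences",
}

DOMAIN_TO_TECHNIQUES = {
    "beliefs about capabilities": ["graded task", "rehearsal", "action planning"],
    "beliefs about consequences": [
        "persuasive communication",
        "provide information regarding behaviour, outcome and connection between the two",
    ],
}

for construct, domain in CONSTRUCT_TO_DOMAIN.items():
    print(f"{construct} -> {domain} -> {DOMAIN_TO_TECHNIQUES[domain]}")
```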
Choose an appropriate method of delivery
A paper-based method of delivery of the intervention was chosen because, given the geographical spread of the sample, a subsequent evaluation would be more efficient if the experiment could be administered by post.
Operationalising the intervention components
Different ways of operationalising the interventions as paper-based tasks were developed using an iterative process involving the study team members (MJ, JF, SH, EFK & ME). It was important to recognise that a paper-based format might be a relatively passive means of delivering the intervention components. Hence to limit this possibility, the interventions were operationalised to maximise the interactive nature of each intervention component.
Results
Two interventions were developed, directed at changing different constructs. The first intervention targeted the theoretical construct of self-efficacy (from SCT). This construct mapped on to the theoretical construct domain, "beliefs about capabilities". The main behaviour change technique selected was "graded task" [29]. The aim of this intervention was to increase GPs' beliefs in their capabilities of managing URTI without prescribing antibiotics. The graded task technique does this by promoting incrementally greater levels of "mastery" by building on existing abilities, demonstrating success at each level. Two further behaviour change techniques, "rehearsal" and "action planning", were additional components of this intervention. The "rehearsal" technique used the generation of alternative strategies as a way of rehearsing alternative actions that could be applied to the clinical situation. The "action planning" technique involved asking the participants to develop a plan of actions they intended to take when confronted by a clinical situation in which a patient presented with an URTI. Interventions are named according to the principal behaviour change technique used.
• Graded Task intervention (Additional file 1): Recipients were presented with five situations in which GPs would be required to manage a patient presenting with sore throat. The situations were derived from questionnaire items used in the predictive survey [17] and ranked in order of difficulty based on the responses to these questions by GPs. Starting with the easiest, respondents were asked to consider each of these situations in turn, and to indicate if they could confidently manage the patient without prescribing an antibiotic. The response format was "Yes," "Maybe" and "No". Thus the typical pattern of responses would be a series of successes ("yes") before a series of failures ("no") in response to more difficult situations. They were then asked to select the situation that they found the least difficult to achieve from those they had rated as "Maybe" or "No," and write the number of this situation in a box provided. If they had rated all of the situations listed as "Yes," they were asked to write down a related situation that they would find difficult to achieve.
Focusing on their selected situation, participants were then instructed to a) generate possible alternative management strategies for that situation and then b) to develop a plan of what they would do to manage this situation in the future.
The second intervention targeted the theoretical constructs of anticipated consequences (from OLT) and risk perception (from SCT). These constructs both mapped on to the theoretical construct domain "beliefs about consequences". The behaviour change technique selected was "persuasive communication." The aim of this intervention was to encourage GPs to consider some potential consequences for themselves, their patients and society of managing URTI with and without prescribing antibiotics. This intervention also incorporated elements of the behaviour change technique, "provide information regarding behaviour, outcome and connection between the two" (Table 4).
• Persuasive Communication intervention (Additional file 2): This intervention presented GPs with two sequences of five pictures illustrating some possible consequences of managing URTIs with or without antibiotics. The consequence illustrated in each fictitious situation depicted was created to reflect the content of questionnaire items used by Eccles et al. [26] to ask about risk perception and anticipated consequences, and the discriminant beliefs identified by Walker et al. [17] as predictive of GPs who do and do not intend to manage URTI without antibiotics. The first row of pictures represents "Dr A", who manages URTI by prescribing antibiotics, and the second row represents "Dr B", who manages URTI without prescribing antibiotics. To highlight the suggested consequences and to help recipients relate these possible consequences to each doctor's prescribing behaviour, questions were placed beneath each picture. Participants were not required to respond to these questions. However, to further enhance the interactive nature of this intervention GPs were asked to indicate on a bi-polar analogue scale a) the extent to which they try to be like Dr A or Dr B (i.e. their "intended" behaviour) and b) the extent to which they are actually like Dr A or Dr B (i.e. their "actual" behaviour).
Discussion
A major problem with implementation research to date has been the limited understanding of what interventions contain and how they are meant to work. Contributing to this is the frequently scant, or absent, reporting of the process of intervention development. In addition, few studies provide a theoretical basis for the choice and design of interventions to change clinical practice. We have developed an intervention modelling process (IMP) that corresponds closely to the theoretical and early modelling phases of the MRC Framework [13]: explicit stages of development that are currently lacking in implementation research. The systematic approach we have used here in the development of the content of two theory-based behavioural interventions forms the initial part of the IMP.
The contents of the interventions were designed to differentially target specific "determinants of behaviour change": theoretical constructs that were identified in a previous study as predictive of both the behaviour and the intention of GPs to manage URTI without prescribing antibiotics. This was achieved by linking these constructs to appropriate behaviour change techniques. The basis for our choice of target constructs is strengthened by the established predictive utility of the theoretical models we used in this process. Likewise, the behaviour change techniques used are also supported by a substantial evidence-base for their effectiveness across a range of settings [33,34]. Thus the final interventions are underpinned by a robust scientific rationale with which to explain "why and how" we expect each intervention to have its effect, and are placed within a sound theoretical framework that guides a process for their evaluation and refinement.
In general, poor reporting of intervention detail prevents replication. Such inadequate description of implementation interventions hinders the development of a cumulative science of implementation. We have tried to illustrate here the type of description of intervention components that will make it possible to replicate their essential features. By describing the interventions in terms of discrete and identifiable behaviour change techniques we are clearly differentiating between the key components of the intervention content (the proposed "active ingredients") and the method by which the intervention was delivered (i.e. as a paper-based task). Such differentiation makes it possible to investigate whether the same behaviour change techniques differ in effectiveness across other modes of delivery, whilst also offering the potential to explain differences in effectiveness across different settings. Routine reporting of detailed description, such as we provide here, would greatly enhance the replicability of implementation studies. The systematic approach used in this study was constrained in two ways. Firstly, the choice of target constructs was limited to those which predicted both simulated and actual prescribing behaviour. We applied this limitation because an evaluation of these interventions will be generalisable to the real clinical context only if there is close correspondence between the measures of intention, simulated behaviour and actual behaviour. However, external validation for our choice of target constructs is provided by Walker et al. 2001, as our target constructs are represented in the discriminant beliefs identified by these authors [17]. Secondly, the chosen mode of delivery (paper-based and postal survey) influenced both the choice of behaviour change technique and the construction of the intervention components. A secondary aim of this theory-based approach is to develop methods for "pre-testing" and optimising the potential effect of interventions (implementation modelling experiments) prior to their use at service-level. Hence, a final consideration was the feasibility of using the techniques in both a modelling experiment context and a service-level randomised controlled trial. Our choice of behaviour change techniques was thus further influenced by their adaptability to the real-world setting.
Conclusion
We have demonstrated that it is feasible to develop interventions to change professional practice that are underpinned by a robust, scientific rationale. Theoretical models, empirical data and evidence-based behaviour change techniques were integrated systematically to produce two interventions that aim to change clinical behaviour. This approach is a way forward towards creating a scientific evidence-base relating to the choice, development and delivery of effective interventions to increase evidence-based clinical practice.
Deciphering the developmental trajectory of tissue-resident Foxp3+ regulatory T cells
Foxp3+ TREG cells have been the focus of intense investigation for their recognized roles in preventing autoimmunity, facilitating tissue recuperation following injury, and orchestrating tolerance to innocuous non-self-antigens. To perform these critical tasks, TREG cells undergo deep epigenetic, transcriptional, and post-transcriptional changes that allow them to adapt to conditions found in tissues both at steady-state and during inflammation. The path leading TREG cells to express these tissue-specialized phenotypes begins during thymic development, and is further driven by epigenetic and transcriptional modifications following TCR engagement and polarizing signals in the periphery. However, this process is highly regulated and requires TREG cells to adopt strategies to avoid losing their regulatory program altogether. Here, we review the origins of tissue-resident TREG cells, from their thymic and peripheral development to the transcriptional regulators involved in their tissue residency program. In addition, we discuss the distinct signaling pathways that engage the inflammatory adaptation of tissue-resident TREG cells, and how they relate to their ability to recognize tissue and pathogen-derived danger signals.
Introduction
The immune system is capable of both effectively eliminating internal and external dangers and preventing exacerbated immune-mediated tissue pathology. These biological properties, coined disease resistance and disease tolerance, respectively, are complementary and require a controlled balance between pro-inflammatory and regulatory immune responses (1). This is particularly the case in mammalian hosts, where adaptive immunity allows antigen specificity to sustain long-lasting effector and memory responses that can become a potential threat to the function and homeostasis of an affected tissue long after the elimination of the danger. Amongst the mechanisms capable of controlling inflammation-generated pathology, a lymphocyte of thymic origin, the suppressor or regulatory T cell (T REG ), first described in the late 1960s (2), was shown to be particularly adept at immune suppression. These CD4 + T cells express Forkhead-Box P3 (Foxp3), a lineage-defining transcription factor that governs a large part of their transcriptional program through the repression of pro-inflammatory genes (e.g., Il2, Ifng) and the activation of genes essential for their suppressive functions (e.g., Il2ra (CD25), Ctla4, Lag3, Entpd1 (CD39), Nt5a (CD73), Il10, Tgfb1, Gzmb) (3, 4). In addition, some key signature genes are prominently expressed by these cells when compared to conventional T cells, including Ikzf2 (Helios), Tnfrsf18 (GITR), Nrp1 (Neuropilin 1), and Itgae (CD103) (5). In this capacity, T REG cells occupy a central position in the immune response, and are required to ensure tolerance to self-antigens (6, 7), innocuous allergens (8, 9), and commensal microflora (10), promote tissue function and regeneration (11), and prevent and control immunopathology (12).
In a mature immune system, T REG cells isolated from tissues encompass a pool of antigen-experienced CD45RA − CD69 + CD45RO + cells that differ in developmental origin, possess unique functions, and display distinct stages of activation (13). A prominent population of T REG cells found in all organs is that of tissue-resident T REG (TR-T REG ) cells, which differ from effector memory T REG (emT REG ) cells in that they display higher levels of the alpha E integrin (CD103) (14), lose CCR7 expression, and lose the ability to re-circulate to lymphoid organs (15). Despite the lack of a consensus on the markers to distinguish TR-T REG and emT REG cells in tissues, recent studies have been able to capture the high degree of transcriptional and post-transcriptional modifications that "precursor" TR-T REG cells acquire to localize to non-lymphoid organs, survive, and adjust their specialized functions in situ amidst unfavorable inflammatory, osmotic, or metabolic conditions (16). This program involves the expression of a set of core genes that are typically upregulated, albeit at different levels, by TR-T REG cells isolated from distinct organs, including the IL-33 receptor ST2 (17), RORα (18, 19), Icos (20, 21) and Gata3 (22-24). Amongst these differentially expressed proteins, ST2 was recently proposed to distinguish TR-T REG from emT REG cells (17). Moreover, while there is evidence that TR-T REG cells seed non-lymphoid organs, such as the lungs, as early as 8 days of life (25, 26), other TR-T REG cells, like visceral adipose tissue T REG (VAT-T REG ) cells, accumulate progressively with age (27), suggesting a highly dynamic developmental path that is largely organ-specific. Critically, there is novel evidence on the developmental trajectory that leads TR-T REG cells to fully establish in the tissue. For example, recent evidence highlights how the TCR repertoire is a central determinant of TR-T REG localisation (16, 28).
Currently, much remains to be understood regarding the origin of TR-T REG cells. Can TR-T REG cells be generated from emT REG cells after the resolution of inflammatory events (29), or do they constitute stable and distinct populations of T REG cells? Seeing that T REG cells found in tissues can originate from the thymic selection process (thymic-derived; tT REG ) or be generated through the induction of Foxp3 in naïve CD4 + T cells in the periphery (peripherally-induced; pT REG ), can both subsets be considered TR-T REG cells? Thus, a better understanding of the origin, function, and fate of TR-T REG cells is required before we can harness their therapeutic potential.
In this review, we describe the steps required for the generation of TR-T REG cells, starting from thymic selection and extending to TCR engagement in the periphery, the switch to distinct metabolic strategies, and the modulation of Foxp3 expression that enables the adoption of key epigenetic and transcriptional changes, which, in turn, lead to the expression of a program that is highly adapted to the target tissue (Figure 1). These processes involve signaling pathways that can, when in excess, hinder, either temporarily or permanently, the stability of their core transcriptional program, revealing mechanisms by which local inflammation guides the timing and potency of immune suppression. Finally, we attempt to guide the reader through the unique signaling events that can lead tissue-resident T REG cells to control type 1, type 2, and type 3-driven inflammation.
Origin of tissue-resident T REG cells
Commitment to the T REG cell lineage can occur at various stages of the T cell life cycle. During their development in the thymus, immature thymocytes are selected for the establishment of a functional TCR repertoire. Subsequently, self-reactive thymocytes are either clonally deleted or diverted into a regulatory cell fate as part of a process known as central tolerance. Despite this, a very small fraction of thymocytes stochastically escape central tolerance and must be kept in check by self-reactive thymic-derived T REG (tT REG ) cells, making the latter critical mediators of peripheral tolerance. Importantly, the events giving rise to tT REG cells require optimal TCR signals and a unique combination of cytokines. However, the peptide pool to which thymocytes are exposed during this selection process does not ensure complete tolerance towards innocuous non-self-antigens such as commensal bacterial peptides or allergens.
This type of peripheral tolerance often requires the in situ induction of peripheral T REG (pT REG ) cells that possess unique non-self TCR repertoires (30-32), conferring them non-redundant roles in maintaining homeostatic conditions at barrier sites like the lung and colon. In adoptive transfer models, pT REG cells are capable of suppressing local inflammation in both the colon and the lungs (32-34), but are less efficient at suppressing systemic inflammation (31). Indeed, the distinct transcriptional profiles of tT REG and pT REG cells indicate they favour different suppressive mechanisms that vary in effectiveness in a context-dependent manner (31). Yet, despite these potential differences, attempts at identifying markers in pT REG cells that are distinct from tT REG cells have so far failed (35, 36), rendering them mostly indistinguishable at barrier sites. While Helios and Neuropilin 1 (Nrp1) are highly expressed by tT REG cells (30, 37), neither Helios (36) nor Nrp1 (38) was found to be exclusively expressed by these cells. Thus, despite their distinct origin, TCR repertoire, and functions, pT REG cells cannot be distinguished from the pool of tT REG cells in mucosal tissues, and further investigation into features that demarcate each subset is warranted.
Thymic development of T REG cells
Thymic-derived T REG cells undergo the same early core processes of thymic selection as conventional CD4 + T cells (39, 40). Namely, newly seeded thymocytes undergo V(D)J recombination in the thymic cortex to generate productive TCR chains capable of self-MHC recognition. Upon successful TCR signaling, committed thymocytes migrate into the thymic medulla where they encounter medullary thymic epithelial cells (mTECs) that express the promiscuous transcription factors AIRE and Fezf2, allowing them to transcribe and present tissue-restricted antigens (TRAs) to developing thymocytes (41, 42). Here, thymocytes that are strongly reactive toward TRAs and other self-antigens are deleted, while weaker stimulation and the presence of certain cytokines such as TGF-β and IL-2 can skew their fate toward T REG cell differentiation (43-47).
Optimal TCR signaling is the predominant factor driving T REG cell lineage commitment in the thymus. TCR:peptide-MHC interaction triggers a series of phosphorylation events resulting in downstream activation of NFAT, AP-1, and NF-κB family transcription factors (48, 49). Together, these events lead to different T cell lineage specification in the thymus, as well as T cell survival, expansion, and effector function in the periphery. Expression of the orphan nuclear receptor Nur77 (Nr4a1) has been directly linked to TCR signaling strength, and its expression level is elevated in T REG cells compared to conventional T cells in a TCR-dependent manner (50). Unsurprisingly, since co-stimulatory molecules such as CD28 profoundly augment TCR signaling strength via NF-κB activation, they were found to play an essential role in tT REG cell differentiation (46, 51-53). Foxp3 transcription is intricately regulated by transcription factor complexes binding at its promoter and four conserved non-coding sequences (CNS), termed CNS0 to CNS3. Upon TCR stimulation, downstream activation of the NF-κB pathway results in the recruitment of c-Rel to the Foxp3 locus at CNS3, which acts as a Foxp3 transcriptional enhancer that is responsive to TCR signaling alone (54, 55). By dissecting each CNS region through targeted mutations, Zheng and colleagues demonstrated that CNS3 is the region that acts as a pioneer element in the generation of tT REG cells, while CNS1, a region known to bind TGF-β-induced SMAD factors, and CNS2, a region targeted by CREB and STAT5 signals, were not essential to the induction of Foxp3 in tT REG precursors (55), which still require cytokine signaling to become mature and functional tT REG cells (43-47).

Figure 1. The developmental trajectory of tissue-resident T REG cells involves a series of events starting from thymic selection to peripheral TCR engagement. In this figure, the trajectory of peripheral regulatory T (T REG ) cells is depicted, as currently defined by recent multi-omics approaches conducted in various lymphoid and non-lymphoid tissues. During thymic selection, precursor regulatory T cells (T REG P) expressing self-reactive T-cell receptors (TCR) give rise to a pool of naive CD45RA + CCR7 hi regulatory T (T REG ) cells. Once in circulation, these T REG cells encounter their specific antigen, triggering an activation cascade that results in a metabolic shift and chromatin remodeling. Subsequently, CD45RO + CD69 hi effector regulatory T cells (eT REG ) can either stay in lymph nodes as central memory cells (cmT REG ) or migrate to tissues, where they become tissue-resident (TR-T REG ) or effector memory regulatory T cells (emT REG ). While thymic-derived TR-T REG cells comprise a large portion of T REG cells in tissues, T REG cells located in the gut, for example, include peripherally-induced regulatory T cells (pT REG ). The absence of clear markers poses a challenge in distinguishing between these two populations in situ. In addition, while TR-T REG cells isolated from various tissues typically display a conserved phenotype marked by the expression of ICOS, ST2, Helios, and GATA3, a significant portion of T REG cells in the gut exhibit a distinctive RORγt-driven phenotype. Interestingly, there is accumulating evidence that T REG cells lacking Helios expression may be more driven to express RORγt, suggesting a possible segregation between TR-T REG cells derived from the thymus and those induced in the periphery.
Cytokines, particularly common γ-chain (γc) cytokines, are critical for T REG cell development. IL-2 is known to be essential for commitment to the T REG cell lineage (56, 57), as well as its maintenance (58). IL-2 signaling mediates STAT5 binding to the distal enhancer CNS0 as well as the promoter of Foxp3 (56, 59), and sustains the constitutive expression of Foxp3 through CNS2 binding (57, 59). Not only does STAT5 directly facilitate Foxp3 transcription, but Foxp3 also binds to the IL-2 receptor alpha chain (IL-2Rα) gene as a transcriptional activator (60). Completion of this feed-forward loop via paracrine IL-2 signaling is obligatory for T REG cell development and homeostasis. Other STAT5-activating γc cytokines have also been linked to T REG cell development, albeit mostly as a compensatory mechanism for impaired IL-2 signaling (45). In addition, TGF-β has also been implicated in tT REG development. While either of its downstream transcription factors, SMAD2 or SMAD3, can directly regulate Foxp3 transcription (61, 62), deletion of the SMAD binding site in the Foxp3 locus predominantly affects the induction of pT REG , but not tT REG cells (62, 63). Yet, deletion of the TGF-β receptor TβRI during thymocyte development results in severely reduced T REG cell numbers and defective T REG cell function (64). Nonetheless, a recent study might reconcile these paradoxical discoveries. SMAD3/4 can trigger a PKA-dependent signaling cascade that causes the cessation of TCR signaling (65). Thus, the role of TGF-β in tT REG differentiation could most likely be attributed to its effects on TCR signaling rather than direct transcriptional regulation of Foxp3.
The role of thymic selection events in the genesis of tT REG and pT REG cells
In recent years, accumulating evidence shows that the nature of TCR signaling during thymic selection influences T REG cell responses to signals long after thymic egress. Notably, TCR engagement during thymic selection is a critical step in the establishment of a CpG hypomethylation pattern that characterises the epigenetic background of tT REG cells (66). Numerous studies have identified two distinct tT REG precursor (T REG P) populations thought to develop into CD25 + Foxp3 + tT REG cells (47, 67-69). The more common CD25 − Foxp3 low and less abundant CD25 + Foxp3 − T REG P cells were shown to have distinct TCR repertoires with affinity for auto-antigens (67). In the thymus, the two T REG P populations display different cytokine and TCR-signaling requirements (47). Importantly, CD25 + T REG P-derived T REG cells are specifically capable of suppressing experimental autoimmune encephalitis (EAE), whereas Foxp3 low T REG P-derived T REG cells cannot (67), suggesting a functional bias within the T REG population. For example, murine T REG cells from the colonic lamina propria that express the same TCRα/β sequence have related transcriptional programs (70), illustrating the close relationship between the TCR and the transcriptional fate of antigen-experienced memory T REG cells.
Interestingly, while the relationship between TCR specificity and the establishment of TR-T REG cells is not entirely understood, experimental examples suggest that the TCR repertoire generated during thymic selection is critical to the destination of both tT REG and naïve T cells. For example, T REG cells transgenic for a VAT-T REG -derived TCRα/TCRβ gene rearrangement preferentially migrate to adipose tissue and differentiate into VAT T REG cells (28). Yet, while these observations suggest TR-T REG cells possess a largely self-specific TCR repertoire, earlier work in viral infection mouse models demonstrated that antigen-experienced T REG cells with predominantly non-self TCR repertoires are generated during tissue injury and become activated upon re-infection (13, 71), suggesting they also contribute to the TR-T REG pool. In addition, in transgenic mice possessing a fixed TCR-β sequence isolated from a Foxp3 + RORγt + colonic T REG cell, T cells upregulate Foxp3 in the mesenteric lymph node prior to expressing RORγt in the colon (72). As such, both self- and non-self-reactive TCR repertoires are key drivers in the generation of TR-T REG cells.
The role of IL-2 and TGF-β
While the strength of TCR signaling acts as the predominant driving force for tT REG cell differentiation, cytokines play a more influential role in the periphery, both in maintaining tT REG homeostasis and in generating pT REG cells. The signals that lead to the generation of pT REG cells involve chronic suboptimal TCR signaling (73-75) together with cytokines such as TGF-β and IL-2, which generate Foxp3-expressing T REG cells in vitro (76, 77) and in tissues (78-80). In addition, TGF-β has been shown to strongly promote Foxp3 induction through its downstream transcription factors (SMAD2 or SMAD3) that target CNS1 (61). Consequently, a deletion of CNS1 predominantly affects the induction of pT REG , but not tT REG cells (63). Lastly, pT REG cell induction via TGF-β can be further augmented by DC-derived retinoic acid in the lamina propria as well as by short-chain fatty acid metabolites of commensal bacteria (81, 82), ensuring the establishment of tolerance at mucosal surfaces. While these examples of signals that promote pT REG induction are part of a complex signaling system that merits its own review, they share the common outcome of facilitating Foxp3 expression in tissue-resident T cells, and underscore the importance of this transcription factor in forming the regulatory program of tissue-resident CCR7 low CD69 + CD45RO + T REG cells.
The epigenetic and transcriptional trajectory of T REG cells
The factors that regulate the differentiation of TR-T REG cells remain to be fully understood. Miragaia and colleagues demonstrated through single-cell RNA-seq analysis of lymphoid and non-lymphoid (colon and skin) T REG cells that these tissue-specific adaptations originate from events happening in the respective draining lymph node (19). By tracing TCR clonotypes from draining lymph nodes to their respective tissue, the authors were able to establish a pseudo-space relationship detailing the series of events that drive the generation of specialized T REG cells. They established that T REG cells are activated, switch to a glycolytic metabolism, and cycle rapidly prior to acquiring genes involved in migration to the tissue (19), revealing conserved stages in the generation of TR-T REG cells. As such, this seminal work provided confirmation that progressive transcriptional changes guide the generation of eT REG cells that become TR-T REG cells, and highlighted how, despite tissue-specific differences, these cells share a series of epigenetic modifications that allow them to migrate, survive, and function at specific non-lymphoid sites.
The importance of peripheral TCR engagement in the generation of TR-T REG cells
The engagement of the TCR of naïve T REG cells is an important prerequisite for the development of tissue-specialized T REG cells (83, 84), as it promotes a signaling cascade that elicits the expression of key regulatory genes underlying the suppressive activity of T REG cells (85). Additionally, TCR engagement can induce epigenetic and transcriptional changes in T REG cells, some of which are directly influenced by Foxp3, while others act independently (66). People affected by loss-of-function mutations in STIM1 or ORAI1, proteins involved in store-operated calcium entry (SOCE), exhibit a loss of peripheral tolerance despite some cases displaying normal T REG numbers in circulation (86, 87). Similarly, impairing the normal Ca 2+ influx during TCR engagement by deleting the proteins that form the Ca 2+ release-activated Ca 2+ (CRAC) channels (STIM1 and STIM2) in mice specifically prevents the differentiation of activated T REG cells into follicular and tissue-resident memory T REG cells and generates a cascade of inflammation leading to multiorgan autoimmunity (88).
Aerobic glycolysis in the activation and clonal expansion of T REG cells
Another critical factor involved in the differentiation and clonal expansion of activated T REG cells is the adoption of aerobic glycolysis. This was notably demonstrated in the skin, where aerobic glycolysis by activated T REG cells is required prior to their migration (89). This may, at first glance, seem counter-intuitive, as there is ample evidence that mature T REG cells adopt fatty-acid oxidation (FAO) as a critical metabolic strategy to survive and suppress immune responses in tissues (90). Yet, while less efficient than oxidative phosphorylation (OXPHOS), adopting aerobic glycolysis is a critical step during T cell activation: it rapidly provides the energy needed for expansion and migration, all the while maintaining fatty acid and amino acid reserves for cell division and protein synthesis (91). This is further evidenced by the fact that the mammalian target of rapamycin complex 1 (mTORC1), which is required for aerobic glycolysis, is not necessary for the thymic or peripheral development of T REG cells, but is essential to their function and activation (92). Indeed, to avoid losing their suppressive program, T REG cells balance the intensity of the mTORC1 and mTORC2 pathways (93), a process that is critical during their differentiation. Importantly, however, increasing glycolytic metabolism in T REG cells temporarily deprives them of their suppressive capacity (90, 94), providing further evidence that the differentiation and clonal expansion of T REG cells are confined to a short window of time. As such, the maturational process leading T REG cells to become eT REG cells requires both TCR engagement and a shift in their metabolic strategy (Figure 1).
The role of Foxp3 in the specialization of memory T REG cells
The Foxp3-driven transcriptome of T REG cells comprises a T REG -specific gene signature and a gene set associated with an activation program shared with conventional T cells (95). A lymphoproliferative pathology had previously been observed in "Scurfy" mice, in which the X-linked Foxp3 gene carries a frameshift mutation that completely disrupts the transcription of Foxp3 (96), confirming the key role of Foxp3 in establishing the suppressive program of T REG cells. Point mutations in Foxp3 that interfere with its function are the cause of a frequently fatal pediatric hereditary syndrome called immune dysregulation, polyendocrinopathy, enteropathy, X-linked (IPEX) syndrome (97), featuring early-onset diabetes, severe diarrhea, and eczema, which closely mirrors the pathology of "Scurfy" mice. Restoring Foxp3 transcription in mice whose T REG cells were genetically engineered to block Foxp3 expression rescues them from severe autoimmunity, as it effectively reinstates their suppressive function (12). However, while Foxp3 is essential for the establishment of T REG cells, it does not determine, by itself, the entire epigenetic and transcriptional identity of mature T REG cells (5, 98, 99). Rather, Foxp3 ensures that inflammatory and non-inflammatory signals encountered in the periphery do not destabilise the core suppressive program of T REG cells (98, 100).
Evidence for the unique roles of Foxp3 in non-lymphoid tissues comes from the observation that functional single nucleotide polymorphisms (SNPs) in the human Foxp3 gene do not generate a homogeneous pathology (97), with multiple accounts of IPEX-related mutations having distinct functional consequences on T REG cells (101). By transposing human-isolated Foxp3 mutations into conserved murine Foxp3 motifs, Leon and colleagues confirmed that spontaneous multiorgan autoimmunity is largely attributed to mutations in the DNA-binding motifs, while mutations outside these motifs, notably in the N-terminal regions, lead to organ-specific dysregulation of T REG cell function (101). In particular, a K199del mutation situated in the zinc-finger (ZF) domain or the mutations R51Q or C168Y in the N-terminal regions are prone to generating symptoms of enteropathy and skin disorders, while a R337Q mutation in the DNA-binding Forkhead domain can, in addition to these symptoms, lead to the development of diabetes mellitus (101). In addition, a murine model mimicking an A384 mutation in Foxp3 was shown to specifically impair T REG cell function in the periphery, directly impairing the ability of Foxp3 to recognize target genes and altering the expression of BATF (102), a key transcription factor required for TR-T REG generation (103). As such, the ability of Foxp3 to interact with multiple partners is required to preserve the functional integrity of T REG cells in peripheral tissues.
Although there are elements suggesting that protein-protein interactions are critical to this process, we are currently limited in our understanding of how the different molecular complexes that partner with the N-terminal region of Foxp3 (104, 105), such as Tip60, Hdac7, Hdac9, Gata3, c-Rel, Foxp3 itself, Runx1 or Eos, influence the specialization of T REG cells. This is due in part to the difficulty of dissociating their functions during the early events leading to the differentiation of these cells from the events that happen later in the tissues. One such example is the interaction of Foxp3 with the chromatin-remodeling transcription factors TCF1 (encoded by Tcf7) and lymphoid enhancer binding factor 1 (Lef1) of the high-mobility group (HMG) family. In mice, the combined knock-out of both Tcf7 and Lef1 (Foxp3 CRE Tcf7 fl/fl Lef1 fl/fl ) does not perturb lymphoid T REG cells but hinders the capacity of colonic T REG cells to suppress DSS-mediated colitis (106). Mechanistically, the molecular complexes TCF1 and Lef1 form with Foxp3 allow T REG cells to control inflammation by repressing genes associated with excessive cycling and cytotoxic function (GzmB, Prf1, Ifng) and promoting genes associated with a T REG suppressive program (106). Bulk RNA-seq of murine mesenteric T REG cells deficient in TCF1 (Foxp3 CRE Tcf7 fl/fl ) shows enhanced expression of core genes (including Il2ra, Foxp3, Tgfb1 and Lef1), and a concomitant increase in both pro-inflammatory genes (including Il6ra, Ifngr2, Stat3) and genes involved in TCR activity compared to T REG cells from control mice (107). These data suggest that TCF1 helps maintain a core T REG program and suppresses the expression of pro-inflammatory genes during TCR engagement. Similarly, Lef1 is part of an activated T REG program (108), and in vitro gain-of-function experiments reveal it reinforces the expression of Foxp3 target genes (108). As such, these observations indicate that when Foxp3 is abundantly expressed, it interacts with both TFs to suppress pro-inflammatory gene expression and reinforce its own transcriptional profile (109). Yet, both murine and human activated (CD45RO + ) T REG cells display lower Tcf7 and Lef1 expression than conventional T cells (T CONV ) (110), as Foxp3 directly suppresses Tcf7 transcription and protein production, and reduces chromatin accessibility in regions targeted by TCF1 (95). As such, the highly-regulated chromatin-remodelling effects of TCF1 and Lef1 on T REG cells are likely required for their further differentiation and effector function. Furthermore, pseudo-time analysis of single-cell RNA-seq data from lymphoid and non-lymphoid activated T REG cells reveals Tcf7 and Lef1 to be particularly expressed by lymphoid T REG cells prior to their tissue migration (19), reinforcing the notion that TCF1 and Lef1 are involved during the early specialization events of T REG cells. For example, a T REG -specific depletion of Lef1 abolishes the generation of follicular T REG (T FR ) cells (107), suggesting Lef1 promotes the generation of these cells in a process similar to what is observed in follicular helper T cells (T FH ) (111). In addition, when compared to murine activated TCF1 - T REG cells, TCF1 + T REG cells display higher mRNA expression of transcription factors associated with helper T cells, including Gata3, Tbx21 and Rorc (107). Collectively, these examples highlight how changes in chromatin accessibility in T REG cells happen mostly after TCR engagement in the lymph node. Nonetheless, Lef1 and TCF1 are but a part of a wide network of known Foxp3-binding partners (104) whose roles in defining the specialisation of T REG cells remain ill-defined.
Epigenetic control of T REG differentiation
To effectively reach the tissue, T REG cells must undergo a series of epigenetic and transcriptional changes that ensure chromatin accessibility at key genes (112). Interestingly, direct comparison between human and murine T REG cells reveals evolutionarily conserved epigenetic mechanisms involved in defining the T REG cell program (110). Histone methylation is an important component of the processes that govern DNA accessibility and, ultimately, the T REG cell transcriptional signature. Importantly, while T REG cells undergo a series of chromatin remodeling events, they actively maintain CpG motif demethylation within the intronic enhancer CNS2 of the Foxp3 locus (55, 113, 114). Maintaining an open chromatin structure at CNS2 allows for the robust transcription of Foxp3 by multi-molecular complexes including Foxp3 itself, NFAT, c-Rel, STAT5, Runx1-CBFβ, CREB/ATF and Ets1 (114-118). Accordingly, a loss of any of these transcription factors, or methylation of CNS2, impairs the transcription of Foxp3 and, ultimately, the suppressive function of T REG cells in the periphery (114-119), confirming that sustained Foxp3 expression is critical for the stability of the transcriptional program of tissue-localised T REG cells.
Tagmentation-based whole-genome bisulfite sequencing of lymph node- and tissue-isolated murine T REG cells reveals that these cells undergo multiple rounds of DNA methylation changes before adopting a tissue-residency program, with up to 4000 genes involved in either gain or loss of methylation (120). The processes that govern the establishment of a T REG program by histone modifications have been elegantly reviewed by Joudi and colleagues (121). Globally, a delicate balance between DNA methyltransferases (DNMTs), ten-eleven translocation dioxygenases (TETs), histone acetyltransferases (HATs), and histone deacetylases (HDACs) governs the stability of the T REG cell transcriptional program (119, 121), but can be directly influenced by polarizing signals provided during TCR engagement.
Methylation of cytosines located in CpG-rich regions is largely governed by Dnmt1, Dnmt3a and Dnmt3b (122, 123). Interestingly, the conditional deletion of Dnmt1, but not Dnmt3a, in murine T REG cells causes a loss of peripheral tolerance by 3 to 4 weeks of life, even though the cells maintain their expression of Foxp3 (124). However, these T REG cells display enhanced expression of pro-inflammatory cytokines (IFNγ, IL-6, IL-12, IL-17, IL-22), chemokine receptors (CCR1, CXCR6), and transcription factors (Runx2, Stat3), highlighting the role of Dnmt1 as a non-redundant epigenetic silencer (124). During the S phase, Dnmt1 acts in partnership with the epigenetic regulator ubiquitin-like with plant homeodomain and RING finger domains 1 (Uhrf1) to govern the suppression of these gene loci (125, 126), making both Dnmt1 and Uhrf1 important therapeutic targets for the control of T REG stability and function. Yet, because T REG cells must acquire a set of genes associated with pro-inflammatory T cells, it remains to be understood how both regulators act during T REG cell generation. For example, pharmacological inhibition of PI3K through its PIP4K-associated kinase results in a specific decrease in Uhrf1 in human T REG cells but not T CONV cells (127), suggesting that the strength of TCR signaling plays a role in the way T REG cells govern DNA accessibility of pro-inflammatory genes. In addition, signaling by TGF-β leads to the phosphorylation and subsequent sequestration of Uhrf1 outside the nucleus (128), possibly preventing its partnering with Dnmt1.
On the other hand, the impact of histone acetylation and deacetylation on the epigenetic adaptation of T REG cells remains ill-defined. Foxp3 + T REG cells have been found to express histone acetyltransferases (HATs), including p300, Tip60 and CBP, as well as most members of the histone deacetylase (HDAC) family (129). Pan-HDAC inhibitors, for example, promote the acetylation of Foxp3 and the suppressive functions of T REG cells (130), confirming the importance of regulated histone acetylation in maintaining a T REG transcriptional program. Interestingly, HATs and HDACs are clearly involved in the helper differentiation of T CONV cells (131), and further investigation is required to understand how they govern the differentiation of T REG cells.
The roles of BATF and Irf4 in the generation of TR-T REG cells
During these early differentiation steps, some transcriptional regulators are particularly critical for the generation of TR-T REG over other emT REG subsets. At its core, the acquisition of a tissue-residency program by TR-T REG cells is closely tied to the expression of basic leucine zipper ATF-like transcription factor (BATF) and its downstream targets (16). Delacher and colleagues identified a BATF-dependent transcriptional program that notably drives the expression of the IL-33 receptor ST2 (120), a receptor specifically found on TR-T REG cells (17). A T REG -specific BATF deficiency in mice (Foxp3 CRE Batf fl/fl ; BATF -/- ) results in a multiorgan autoimmune disease with death beginning at 6 weeks of age (103). BATF -/- T REG cells fail to accumulate in the lungs, colon, liver, and spleen, and display reduced chromatin accessibility at genes involved in T REG survival in tissue, including Gata3, Irf4, Ikzf4, Ets1 and Icos (103). In addition, Foxp3 CRE Batf fl/fl mice generate exT REG cells that lose T REG -associated genes (Ctla4, Tgfb1, Foxp3) and adopt inflammatory genes (Rorc, Il6ra, Stat3) (103). Specifically, ATAC-seq of murine BATF WT and BATF -/- T REG cells reveals that BATF acts as a chromatin regulator, facilitating the expression of TR-T REG -associated genes, including Ctla4, Icos, Gata3, and Irf4, and preserving the demethylated state of the CNS2 region of Foxp3 (103), positioning BATF as an epigenetic guardian of T REG cells as they differentiate into specialized memory T REG cells.
Another transcription factor (TF) observed to be highly expressed by T REG cells following TCR engagement is interferon regulatory factor 4 (Irf4) (132). Foxp3 can directly promote the transcription of Irf4 (133) and of the BATF-JUN complex (134). In turn, Irf4 collaborates with BATF to further promote T REG activation, proliferation, and transcriptional differentiation (135). Ding and colleagues demonstrated that upon TCR engagement, T REG cells express the SUMO-conjugating enzyme UBC9 to specifically stabilise Irf4 function (136). While not affecting thymic development of murine T REG cells, a T REG -specific deletion of UBC9 causes an early and fatal inflammatory disorder at 3 weeks of age (136), mimicking the dynamics observed in scurfy mice (96). These T REG cells show defects in TCR activation, migration, and peripheral accumulation (136). However, such dramatic outcomes are not observed when Irf4 is knocked out in murine T REG cells, suggesting other factors may compensate for the loss of Irf4. Mice harboring a conditional knock-out (Foxp3 CRE Irf4 fl/fl ) develop spontaneous dermatitis, blepharitis, and lymphadenopathy by 5-6 weeks, and die by 3-4 months from a mostly T H 2-mediated autoimmune disease (133). Co-immunoprecipitation of Irf4 and Foxp3 shows that both TFs interact to facilitate the transcription of genes such as Icos, Il1rl1, Maf and Ccr8 (133). In addition, Irf4 allows T REG cells to exert their suppressive functions. For example, a knock-out or disruption of Irf4 expression in murine or human T REG cells impacts the expression of key suppressive genes, including Il10 (137). Moreover, while there is evidence Irf4 is an important contributor during the early transcriptional events involved in the specialisation of activated T REG cells, this TF is also readily detected in some populations of memory T REG cells in the tissue, suggesting its expression is maintained long after TCR engagement. Finally, BATF and Irf4 are particularly upregulated in relation to the strength of the TCR signal (138, 139), and, together, directly suppress Foxp3 transcription in T REG cells induced in vitro (139). Collectively, these observations imply that BATF and Irf4 hinder Foxp3 transcription during the early events that define eT REG formation (Figure 2).
The unique properties of TR-T REG cells
As discussed above, the pool of T REG cells residing in tissues is highly dependent on the organ and is composed in adults of both TR-T REG and emT REG cells whose fate remains ill-defined. Moreover, while the establishment of a peripheral T REG population in mucosal tissues happens in a relatively short amount of time after birth, this is not the case for VAT T REG cells, which follow a more gradual accumulation (27), complicating our understanding of the events that govern TR-T REG accumulation. Notably, fate-mapping systems (Foxp3 eGFP-CreERT2 x ROSA26 STOP-eYFP ) in neonate mice reveal that T REG cells seed non-lymphoid organs like the lungs and liver in the first 8 days of life, persisting for up to 12 weeks with little renewal (25). Critically, exposure to an inflammatory event prior to day 8, but not after, significantly reduces the TCR diversity of liver and lung TR-T REG cells and causes long-lasting alterations to their transcriptional program (25), revealing how critical the neonatal period is to the establishment of tissue homeostasis. Here, the establishment of TR-T REG cells is heavily dependent on the acquisition of a core set of transcription factors. Single-cell RNA-seq (19), bulk RNA-seq (17), microarray and ATAC-seq (16, 112) analyses of T REG cells from visceral adipose tissue (VAT), lung, skin or colon reveal that the epigenetic and transcriptional landscape of these cells is primarily determined by the organ, with only a small set of core genes shared between them. In various non-lymphoid tissues, TR-T REG cells express a shared set of core genes, including Il1rl1 (ST2), Gata3, Tnfrsf4, Rora, Il10 and Gzmb (16, 19). On the other hand, colonic and skin-isolated T REG cells differ significantly in both transcriptional signature and DNA methylation profile, including increased expression in skin TR-T REG cells of Dgat2, a gene involved in lipid synthesis (16, 19), revealing that these cells acquire tissue-specific abilities that allow them to persist in their microenvironments.
Tissue-specific migratory properties of TR-T REG
Following TCR engagement and clonal expansion, the development of TR-T REG cells involves the adoption of migratory properties through the acquisition and loss of chemokine receptors and other adhesion molecules. Indeed, as they undergo deep transcriptional changes and rapid clonal expansion, these cells begin to express chemokine receptors that lead them to egress from the lymph node and migrate to a selected tissue. As with other T cells, activated T REG cells downregulate the surface expression of the L-selectin CD62L and upregulate the expression of the type I glycoprotein CD44 (132). Similarly, T REG cells from human tumors (140) and skin (141), as well as murine T REG cells isolated from multiple non-lymphoid organs (15), display low levels of CCR7, preventing their recirculation through lymphoid organs (142). However, the combination of chemokine receptors TR-T REG cells possess is specific to the tissue these cells travelled to. In adult mice, RNA sequencing of two distinct populations of T REG cells isolated from barrier tissues reveals that CCR7 - T REG cells possess an organ-specific chemokine receptor signature regardless of their expression of the IL-33 receptor ST2 (17), suggesting that the migration of TR-T REG cells to a given organ is determined by a shared group of chemokine receptors. This combination of chemokine receptors can also be appreciated in the seminal work by Miragaia and colleagues, who observed that skin-localised T REG cells preferentially expressed Ccr6, while colonic T REG cells displayed higher levels of Ccr1 and Ccr5; yet, both subsets showed similar levels of Ccr4, Ccr8 and Cxcr4 (19). Unfortunately, we have yet to determine which chemokine receptors are part of their migratory program and which are locally upregulated to provide further movement inside the tissues.
Core transcription factors of TR-T REG cells
Interestingly, while these experiments highlight the transcriptional diversity of TR-T REG cells, they also helped identify a core identity that governs their residency program (19). Some members of this list include transcriptional regulators that have been clearly associated with tissue residency in other T cell subsets, like tissue-resident memory CD8 + (T RM ) cells (143, 144), including Runx3 and Blimp-1 (145). In addition, murine and human TR-T REG cells also possess unique key markers, including the transcription factors Ikzf2, Gata3, and Rora.
FIGURE 2
The acquisition of a tissue-resident program requires a series of epigenetic and transcriptional changes that involve modulation of Foxp3 expression or activity. After thymic egress into the periphery, T REG cells are TCR-activated by self or non-self antigens, and undergo a series of epigenetic and transcriptional changes that guide their maturation into TR-T REG cells. While not entirely understood, this process seems to happen in a step-wise manner. First, TCR-engaged T REG cells upregulate key transcriptional programs driven in part by the transcription factor BATF, which, in conjunction with Foxp3, promotes the accessibility of Foxp3 and the expression of BATF-driven genes including Ctla4, Icos, Gata3 and Irf4. Key to the stability of their epigenetic landscape, T REG cells require Dnmt1 and its partner Uhrf1 to promote the methylation of CpG-rich regions and control the accessibility of inflammatory genes, including Ifng, Il6, Il12, Il17a, Il22, Ccr1, Cxcr6, Runx2 and Stat3. Foxp3 also partners with Lef1 to promote the expression of genes involved in its core program, including Foxp3, Il2ra and Tgfb1, and with TCF1 to suppress the expression of genes associated with inflammation like Il6ra, Ifngr2 and Stat3. Importantly, BATF and IRF4 can, in turn, suppress Foxp3 expression, a process that, while not fully understood, may enable the temporal accessibility of genes normally repressed by Foxp3. Once in the tissue, BATF enables the continued suppression of genes like Rorc (RORγt), Il6ra and Stat3. GATA3 promotes the transcription of Foxp3, but may be further involved in the expression of other GATA3-associated genes, like Il1rl1 (ST2). IRF4 is also required for the expression of core TR-T REG genes, including Icos, Il1rl1 and Il10. Moreover, there is evidence that Lef1 and Tcf7 (TCF1) mRNA expression is significantly decreased in TR-T REG cells, suggesting they are no longer required. Finally, BLIMP-1 expression is increased, and can actively inhibit the action of Dnmt3a, promoting the accessibility of key genes in T REG cells such as Foxp3. Consistently, murine models with Foxp3-conditional deletion of BATF, GATA3, IRF4, TCF1 or BLIMP-1 reveal how critical these regulators are for the function of TR-T REG cells.
Helios
An important transcription factor associated with TR-T REG cells is Helios. While the majority of T REG cells in circulation readily express Helios, siRNA-mediated silencing of Helios expression in human and murine T REG cells does not impede their survival and suppressive capacity in vitro (146, 147). On the other hand, the conditional deletion of Helios in murine T REG cells (Foxp3 CRE Ikzf2 fl/fl ) leads to the development of a progressive, rather than a scurfy-like, lymphoproliferative disease in adult mice (147), revealing that Helios is not required for the development of T REG cells, but rather for the preservation of T REG cell fitness at barrier tissues. Importantly, Helios potentiates the suppressive function of T REG cells by directly interacting with Foxp3 and promoting histone deacetylation (148), providing further evidence that Helios plays a supportive role in the program established by Foxp3.
However, not all lymphoid and tissue-resident T REG cells express Helios. Originally thought to be solely expressed by tT REG cells (30), it is now well appreciated that Helios expression in both murine and human Helios - T REG cells is inducible in vivo and in vitro, respectively (31, 149). Key features that differentiate splenic Helios + from Helios - T REG cells are the limited overlap between their respective TCR repertoires, and the expression of genes involved in the differentiation of specialized T H 17 cells, including Rorc, Il6ra and Il23r (31), suggesting a division of labor between two T REG subsets that may have far-reaching consequences for the tissue adaptation of TR-T REG cells. For example, Cruz-Morales et al. showed that colonic Helios + Gata3 + T REG cells differ greatly from Helios - RORγt + T REG cells in their requirement for CD28, but not MHC-II, to proliferate locally (20), providing a potential point of distinction between colonic Helios + TR-T REG and RORγt + emT REG cells. Nonetheless, further investigation into the role of Helios in the differentiation and maintenance of TR-T REG cells is required.
Gata3
Gata3 is member 3 of the GATA-binding transcription factor family, which comprises six known members. In T cells, it has been shown to govern T cell development, proliferation and maintenance (150), and is particularly important in promoting the transcriptional signature of helper type 2 T cells (T H 2) (151). Skin, gastrointestinal, visceral adipose tissue, and pulmonary TR-T REG cells were all shown to express Gata3 (22, 152), albeit with different intensities. This observation could be explained by the different activity states of these T REG cells, as Gata3 expression is significantly increased in both murine and human T REG cells upon TCR engagement (22). Interestingly, the signaling pathway that leads T REG cells to express this TF does not require IL-4, a cytokine associated with Gata3 expression in conventional T cells (153), and depends largely on exogenous IL-2 (22). Deletion of Gata3 in murine T REG cells does not lead to the development of spontaneous autoimmunity before 6 months of age (22), after which the mice develop intestinal pathology and dermatitis (104). This is because Gata3-deprived TR-T REG cells are not hindered in their development, but rather fail to respond to an inflammatory threat, displaying decreased tissue migration, proliferation, transcriptional stability, and suppressive capacity (22, 23, 104).
While not necessary for the maintenance of peripheral tolerance, Gata3 contributes to the functional adaptation of TR-T REG cells. Gata3 recognizes the CNS2 region of Foxp3 (23), promoting Foxp3 activity and stabilising the transcriptional program of T REG cells to avoid their conversion into pro-inflammatory T cells under stress (22). In addition, Gata3 partners with Foxp3 to form a complex that contributes to the regulation of a wide array of T REG -associated genes (104). Gastrointestinal, skin, pulmonary, and VAT TR-T REG cells express the IL-33 receptor ST2 (17, 24, 154), a known target of Gata3 in T cells (155). Unfortunately, while Gata3 is known to remodel the Il10 locus in CD4 + T cells (156), the link between Gata3 and IL-10 has yet to be established in TR-T REG cells. As such, there are many indicators that Gata3 is an important contributor to the tissue adaptation of T REG cells, and future investigation into the epigenetic, transcriptional, and post-transcriptional impact of this TF is warranted.
RORα
Another gene consistently found in RNA-seq data from TR-T REG cells is Rora. This gene codes for the retinoic acid receptor-related orphan receptor alpha (RORα), a transcription factor expressed in differentiated T cells, including T H 1, T H 2 and T H 17 cells (157). Unfortunately, we know very little about the role of RORα in TR-T REG cells. In T cells, Rora is expressed upon TCR activation, and is closely associated with the expression of their lineage-defining T H 1, T H 2 or T H 17 signature (158). Similarly, RORα plays a supporting role in the transcriptional signature of TR-T REG cells. For example, a Foxp3-conditional deletion of RORα does not alter the accumulation of skin-localised TR-T REG cells but allows immune responses to escape T REG control during skin treatment with MC903, a chemical inducer of atopic dermatitis (18). Thus, as with Gata3, RORα is not required during the transcriptional transformation of tissue-migrating eT REG cells, but rather for their function once in the tissue.
Blimp-1
The B lymphocyte-induced maturation protein-1 (Blimp-1) is a transcriptional regulator particularly expressed by T REG cells located in secondary lymphoid organs or non-lymphoid tissues (159). A conditional knock-out of Prdm1 (Blimp-1) in murine T REG cells (Foxp3 Cre Prdm1 fl/fl ) generates an increase in the accumulation of T REG cells, accompanied by a small increase in T CONV cell abundance that is insufficient to induce autoimmunity (159), confirming Blimp-1 is not essential to the generation, migration or even function of eT REG cells. Rather, Blimp-1 prevents the methylation of multiple genes, including CNS2 in the Foxp3 locus, by inhibiting the action of the methyltransferase Dnmt3a downstream of IL-6 (160). In doing so, Blimp-1 prevents the full conversion of colonic T REG cells to non-suppressive RORγt + eT REG cells (161), suggesting that the role of Blimp-1 is to preserve the transcriptional program of TR-T REG cells.
Tissue-specific survival mechanisms of TR-T REG cells
TR-T REG cells have shown a remarkable capacity to communicate with their immediate environment, adopting cytokine receptors, sensing local molecular changes, and establishing direct cell-to-cell contact with immune and non-immune cells (162). TR-T REG cells achieve this by adopting unique phenotypic characteristics, such as the ability to sense local danger signals and to compete in microenvironments with limited IL-2 availability, allowing them to maintain their identity in non-lymphoid organs.
IL-33
IL-33 is a cytokine of the IL-1 family of alarmins constitutively expressed by endothelial and epithelial cells (163) and by activated macrophages and dendritic cells (164). The IL-33 receptor ST2 is transcriptionally upregulated and detected on the surface of TR-T REG cells (17, 120). This is consistent with the fact that the expression of Il1rl1 (ST2) is closely associated with the expression of BATF and is part of the transcriptional signature elicited by DNA methylation changes in TR-T REG cells (16, 120). However, not all tissue-isolated T REG cells express ST2 in mice at steady state, nor do skin-, lung-, gut-, or VAT-isolated T REG cells express ST2 with the same intensity (17). As such, while suggested as a marker of TR-T REG cells (17), there is currently no clear evidence that ST2 expression is exclusive to TR-T REG cells, and further investigation into this receptor is warranted. Moreover, the importance of ST2 in the differentiation and function of TR-T REG cells remains ill-defined. For example, while IL-33 can directly promote the homeostatic expansion of T REG cells (24, 165), a Foxp3-specific conditional knock-out of ST2 (Foxp3 CRE Il1rl1 fl/fl ) does not impair T REG accumulation in the lungs (166). Rather, IL-33 orchestrates T REG -mediated suppression of local γδ T (166), T H 1, and T H 17 cells during tissue injury (24, 167). To complicate things, it is unclear whether these mechanisms depend entirely on the expression of ST2 by T REG cells (168). Indeed, innate immune cells can readily respond to IL-33 and provide proliferative signals that promote TR-T REG expansion and survival (169). As such, rather than providing a survival signal, ST2 may act as a sensing mechanism allowing local TR-T REG cells to rapidly reactivate and produce suppressive cytokines.
Icos
While not exclusive to TR-T REG cells, the inducible co-stimulator Icos plays a crucial role in both TR-T REG and emT REG cells in maintaining their identity and survival within non-lymphoid organs (21). In mice, a Foxp3-conditional knock-out of Icos (Foxp3 YFP-CRE Icos fl/fl ) does not generate autoimmunity, but rather prevents tissue-localised T REG cells from suppressing oxazolone-induced dermatitis (170), suggesting Icos is particularly required for T REG cells to control tissue injury. Specifically, Icos coordinates with mTORC1 signaling to support T REG proliferation and the expression of suppressive molecules (171), and is particularly critical for TR-T REG and emT REG cells to persist in the absence of IL-2 signaling by providing anti-apoptotic signals (15). Together, Icos and CD28 act as potent activators of the PI3K/Akt signaling pathway, which triggers the phosphorylation of the transcription factor Foxo1 (171, 172). In turn, this sequesters Foxo1 in the cytoplasm and leads to the downregulation of genes like Klf2 and Ccr7 (173). In the absence of IL-2, T REG cells become susceptible to apoptosis, highlighting the critical role of sustained Icos-IcosL signaling in their survival as they migrate to the tissue (15). On the other hand, abrogating the PI3K-activating capacity of Icos by removing a YMFM motif in its cytoplasmic tail increases VAT TR-T REG accumulation and function (174), suggesting that Icos may have tissue-specific roles in T REG cells. Thus, while there is abundant evidence that Icos promotes the activation and survival of TR-T REG cells, tissue-specific differences are likely at play and must be considered when investigating TR-T REG cell sub-populations.
The metabolic adaptation of TR-T REG cells
Genes involved in fatty acid β-oxidation (FAO) can be readily detected in antigen-experienced T REG cells isolated from non-lymphoid tissues, including visceral adipose tissue (VAT), the skin, the colon, and the lungs, suggesting TR-T REG cells default to FAO in non-inflamed tissues (19, 120). However, these transcriptional approaches have not formally demonstrated that TR-T REG cells require FAO to persist in all tissues. Most of the current evidence comes from VAT-isolated TR-T REG cells, which express the peroxisome proliferator-activated receptor gamma (PPARγ), a ligand-activated transcription factor. Functionally, PPARγ provides a complex signal to engage FAO in VAT T REG cells (175), providing them with a competitive advantage over T CONV cells to survive, accumulate, and function (176). This crucial metabolic strategy enables VAT T REG cells to catabolize long-chain fatty acids (LCFAs) from the environment, turning to FAO to sustain their demand for energy (177, 178). While this process is shared between T REG and T CONV cells, T REG cells utilise fatty acids differently, as they do not build endogenous fatty acids from acetyl-CoA, but rely on the acquisition of exogenous fatty acids to meet their metabolic needs (179). Concomitantly, efficient lipid storage by VAT TR-T REG cells is essential to protect them against lipotoxicity and to provide the metabolic precursors needed for energy generation. The proteins involved include scavenger proteins such as CD36 and enzymes involved in triglyceride production, such as DGAT1 and DGAT2. Skin- and VAT-isolated PPARγ + T REG cells readily express CD36, providing them with the ability to capture and secure LCFAs (175, 180). The DGATs are a family of enzymes involved in triglyceride production and lipid droplet (LD) formation that are preferentially expressed in activated T REG cells (181). Foxp3 itself is a strong repressor of Glut1 (182), the glucose transporter, and favors the expression of FAO genes (178). Yet, this mechanism acts in a feedback loop, with DGAT1 promoting Foxp3 expression by diminishing protein kinase C (PKC) activity downstream of the TCR (181, 183). Interestingly, by tracing the tissue distribution of splenic T REG cells with shared TCR sequences, Li et al. demonstrated that PPARγ-expressing eT REG cells localise to other non-lymphoid sites, including the skin and the liver (184), providing new translational evidence that FAO proteins are expressed by other TR-T REG cells. Nonetheless, while these observations highlight the importance of FAO for VAT TR-T REG cells in sustaining their bioenergetic demands, it remains to be determined whether this metabolic strategy is required to sustain other TR-T REG cells.
The inflammatory adaptation of TR-T REG
One of the most recent and exciting discoveries has been the observation that activated eT REG cells can further specialize to adopt T H 1, T H 2, T H 17, and even T FH -like features. Importantly, they can express master transcription factors that are part of a transcriptional program typically expressed by helper T cells, including T-bet (T H 1), RORγt (T H 17), Gata3 (T H 2), and BCL6 (T FH ). The differentiation, migration, and tissue accumulation of functionally-specialized T REG cells constitute a dynamic process that can occur in microbiota-rich barrier sites (10) or during tissue injury (185, 186). Indeed, contrary to the core genes necessary for the generation and maintenance of TR-T REG cells, the role of these "master" transcription factors is not associated with a residency program; rather, these TFs promote a set of specialized functions that allow T REG cells to suppress or orchestrate local immune responses (Figure 3). For example, single-cell analysis performed at distinct times during an Influenza A infection in mice portrays how Gata3 + T REG cells are progressively replaced by antigen-specific T-bet + CXCR3 + T REG cells in the course of disease, suggesting that, contrary to the permanent presence of TR-T REG cells, T H 1-specialized T REG cells are generated concurrently with the antiviral T H 1 response and follow the pattern of accumulation of these cells (185, 187).
Interestingly, some of these specialized T REG cells (RORγt + T REG ) are present at steady state in mucosal tissues such as the colon, blurring attempts at defining what constitutes the bona fide TR-T REG phenotype in these tissues. Indeed, key events leading to the generation of specialized T REG cells include the requirement for TCR signaling and aerobic glycolysis to facilitate clonal expansion and differentiation (188). Moreover, Irf4 (27) is a necessary stepping-stone for the differentiation of specialized T REG cells (9, 28, 29). A typical example of these specialized T REG cells is observed in the colon, where resident T REG cells displaying two distinct TCR repertoires can be segregated based on their transcriptional program. Indeed, both RORγt + T REG and Gata3 + T REG cells are readily detected in the colon; however, the absence of a local microflora specifically hinders the generation of RORγt + T REG cells (189, 190), since their TCR repertoire is largely biased towards bacterial antigens (72, 191, 192). Since specific signals are required for T REG cells to acquire these specialized programs, the pathways involved can be dissected.
The effects of polarizing signals on the fate of T REG cells
Some of the better-described signals that promote the generation of specialized T REG cells include cytokines that drive the phosphorylation and nuclear translocation of STAT and SMAD proteins (193). In turn, these signals promote the expression of genes that define T cell fate, including the acquisition of the master transcription factors T-bet, Gata3, or RORγt. What is particularly interesting, however, is that the pathways that lead T REG cells to adopt these TFs can also undermine their Foxp3-dependent transcriptional program, either through the loss of Foxp3 expression, the expression of pro-inflammatory genes, or the engagement of apoptosis. As such, at the time when activated T REG cells undergo important epigenetic and transcriptional changes, certain inflammatory signals can promote the loss of Foxp3 expression and their conversion into inflammatory "exT REG " cells. Several key transcription factors have been described to be involved in this inflammatory adaptation process of T REG cells.
T-bet + eT REG
T-bet is a T-box transcription factor expressed in a wide variety of immune cells, mostly recognized for its role in defining the transcriptional landscape of T H 1 cells (194). Using a unique murine model that enables the tracking of murine T-bet-expressing T REG cells (Foxp3 Thy1.1 Tbx21 tdTomato-T2A-CreERT2 R26 YFP-fl-stop-fl ), Levine and colleagues showed that the conditional deletion of T-bet in Foxp3 + T REG cells does not lead to autoimmunity in adult mice, although it does generate a mild increase in T H 1 activity (195), suggesting T-bet has little to no impact on the way T REG cells preserve tissue function at steady state. Notably, T-bet is a critical regulator of the expression of CXCR3 (196), a chemokine receptor that orchestrates eT REG migration to sites of T H 1-driven inflammation (196, 197). Highlighting the role of TCR engagement, the T-bet + eT REG cells that progressively accumulate in the lungs of mice during acute Influenza A infection recognize viral proteins (185, 198). Thus, as with T H 1 cell polarization, the generation of T-bet + eT REG cells occurs progressively during inflammation and is closely associated with the clonal expansion of antigen-specific CD4 + T H 1 cells.
The signals that promote the generation of T H 1 cells include IFNγ (STAT1) and IL-12 (STAT4). Interestingly, an IFNγ-STAT1 signal drives the initial expression of T-bet during TCR engagement, while a subsequent IL-12-STAT4 signal is required for definitive differentiation (199, 200). This initial T-bet expression can, in turn, promote the expression of the IL-12 receptor (IL-12Rβ2) (201-203). However, contrary to T H 1 cells, eT REG cells seem to depend exclusively on the presence of IFNγ for the acquisition of T-bet (196, 204). By activating murine CD4 + Foxp3 + cells in vitro, Koch and colleagues demonstrated that T REG cells acquire T-bet expression and its associated target, CXCR3, only if they possess the receptor IFNγR1 (205), suggesting that IFNγ-producing T H 1 cells are responsible for the polarization of T H 1-like eT REG cells.
The control of IL-12 signalling by T REG cells is critical, as excessive pSTAT4 can lead T REG cells to lose Foxp3 expression (206), notably by limiting the chromatin accessibility of STAT5 to the Foxp3 locus (207). Yet, STAT4 is a major regulator of Ifng in CD4 + T cells (208), and both human and murine T REG cells exposed to IL-12 produce low levels of IFNγ (187, 205, 206, 209-212), revealing that excessive IL-12 can still be perceived by T H 1-like eT REG cells. However, contrary to STAT1, STAT4 signaling is associated with less suppressive T REG cells and can even lead to the complete loss of Foxp3 expression (187, 205, 206, 209-211), suggesting T-bet + eT REG cells are in a constant struggle to avoid the loss of genes involved in their suppressive functions. In this regard, T-bet + T REG cells possess mechanisms to avoid overt STAT4 signaling. For example, IFNγ-induced T-bet + eT REG cells suppress IL-12Rβ2 surface expression, preventing excessive phosphorylation of STAT4 and further T H 1-like commitment (205). Moreover, label-free proteomics on circulating human T REG cell populations revealed that, compared to memory or naïve T REG cells, eT REG cells maintain low cytosolic levels of STAT4 (213).
There is growing evidence for a role of IL-18 in the function of tissue-resident T-bet + eT REG cells. While the origin of IL-18R1 + eT REG cells remains to be fully understood, T H 1-polarizing conditions, and particularly IL-12, allow T REG cells to adopt the expression of both T-bet and IL-18R1 (187), suggesting that, as for T CONV cells, eT REG cells require STAT4-dependent chromatin remodeling to express IL-18R1 (214, 215). In vitro, IL-18 promotes the expansion and suppressive capacity of IL-12-generated T-bet + T REG cells (187), suggesting this signal can counter the destabilising effects of IL-12. In vivo, T-bet + eT REG cells express IL-18R1 when they accumulate in the lungs during an Influenza A infection (187). Here, IL-18 enhances the production of amphiregulin in local T REG cells, facilitating tissue restoration after pulmonary Influenza A infection (216). In addition, a Foxp3-conditional knock-out of Il18r1 (Foxp3 ERT2-CRE Il18r1 fl/fl ) allowed us to demonstrate that IL-18 is specifically required for eT REG cells to suppress IL-17A responses in the lungs after an Influenza A infection (187). Similarly, T REG cells deficient in IL-18R1 fail to control the onset of a T cell-mediated colitis (217) as well as inflammation in an experimental model of ovalbumin-induced asthma (218), confirming IL-18 is an important contributor to eT REG function. However, these observations do not necessarily mean that IL-18R1 expression is restricted to T-bet + T REG cells, as we have observed RORγt expression among a subset of IL-18R1 + T REG cells (187), and IL-18R1 expression has been described in T H 17 cells (217). Collectively, these observations illustrate how the T H 1 adaptation of eT REG cells allows for the suppression of tissue inflammation.
Gata3 + eT REG
The transcription factor Gata3, an important component of the transcriptional program of TR-T REG cells, is best described for its role in driving T H 2 cell differentiation (219). In both human and murine CD4 + T cells, Gata3 promotes T H 2-associated genes, allowing for the expression of genes associated with their function, such as IL-4, IL-5, and IL-13 (151, 219). There are numerous accounts of tissue-homing T REG cells expressing high levels of Gata3 during acute T H 2-driven immunity, as observed during asthma (220) or helminth infections (221, 222).
The signals driving Gata3 expression in T REG cells are not fully understood. Two signals have been described as sufficient to induce Gata3 expression during T H 2 differentiation, namely an IL-2/STAT5-dependent and an IL-4/STAT6-dependent signal (223-225). In homeostatic conditions, IL-2 (STAT5) is sufficient to promote the expression of Gata3 during TCR engagement (22). However, in T H 2-driven responses, T REG cells require IL-4R to acquire Gata3 expression and their T H 2-like characteristics (226). This distinction between STAT5- and STAT6-dependent induction of Gata3 may pave the way towards understanding how T H 2-like eT REG cells differ from TR-T REG cells. For example, mice with a Foxp3-specific conditional knock-out of Il4ra (Foxp3 CRE Il4ra fl/fl ) fail to prevent exacerbated asthma-like symptoms when challenged with house dust mite (HDM) (226) and helminth-driven inflammation, despite the presence of T REG cells in situ (221).
While IL-4 can favor T REG cell-mediated functions, sustained IL-4 can also force T REG cells to lose Foxp3 expression and their suppressive capacity both in vitro (227) and in vivo (221, 222, 227). STAT6 can promote the activity of the histone deacetylase HDAC9, which decreases chromatin accessibility at the Foxp3 locus (228). To prevent this, eT REG cells require strategies to avoid excessive IL-4 signaling. First, by maintaining high levels of CD25 expression, eT REG cells remain sensitive to IL-2, whose STAT5 signal competes with STAT6 activity (229). Second, tissue-localised T REG cells prevent further commitment into the T H 2 lineage by producing the E3 ubiquitin ligase Itch (230, 231). Finally, murine in vitro-induced T REG cells exposed to IL-4 express higher levels of the JAK/STAT inhibitor SOCS2 to prevent further STAT6 phosphorylation and the expression of pro-inflammatory cytokines (232). Thus, while it remains to be fully confirmed in tissue-resident T REG cells, there is accumulating evidence that IL-4 is important for the commitment of Gata3 + eT REG cells, and responsible for their transcriptional destabilisation and conversion into T H 2-like exT REG cells.
Finally, IL-33, which contributes to the proliferation of TR-T REG cells (165), can also govern the function of Gata3 + eT REG cells during inflammation. In this regard, IL-33-responding activated T REG cells were shown to produce high amounts of IL-10 and TGF-β (233), playing a key role in maintaining intestinal homeostasis (24). Similarly, ST2 + T REG cells promote the suppression of anti-tumor immune responses (234-236). However, IL-33 can also drive the production of the T H 2-associated cytokines IL-5 and IL-13 in pulmonary eT REG cells (233, 237, 238) and interfere with their capacity to suppress T H 2 responses (238). Thus, the role of IL-33 in Gata3 + T REG cells is specific to the inflammatory context and may depend on whether it targets TR-T REG cells or eT REG cells accompanying T H 2 responses.
RORγt + eT REG
While complex and not entirely defined, the signaling events that lead T REG cells to adopt a T H 17-like phenotype include some of the same polarizing JAK-STAT and SMAD signals that are required for the generation of T H 17 cells. Indeed, the transcription-promoting functions of both Stat3 (239) and RORγt (240) are required to establish a T H 17 cell transcriptional program (241), and T REG cells have been shown to share part of this transcriptional program through the acquisition of these TFs (239). In the gut, RORγt + T REG cells play an essential part in maintaining local homeostasis, notably by suppressing T H 17-driven responses (242). Transcriptionally, RORγt + T REG cells from the mouse colon at steady state express higher levels of Il23r, Il1r1, Maf, Irf4, and Ikzf3 than their RORγt - counterparts (191), revealing that they possess a unique landscape encompassing some key T H 17-associated genes. Moreover, RORγt is required for IL-10 production by colonic T REG cells and for the prevention of T cell-mediated colitis (191). Similarly, RORγt is required for T REG cells to control T H 17-mediated autoimmune arthritis and EAE (192, 243), suggesting that RORγt expression allows emT REG cells to target and suppress T H 17-driven responses. However, the role of RORγt and its impact on the transcriptional landscape of emT REG cells remains ill-defined and is likely driven by the inflammatory microenvironments these cells are exposed to.
While many cytokines can promote the nuclear translocation of Stat3 in T H 17 cells, the simultaneous signals provided by TGF-β (SMAD2/3) and IL-6 (Stat3) are sufficient, in vitro, to induce RORγt expression in T REG cells (162, 192). Interestingly, a delicate balance is achieved between the signals provided by TGF-β and IL-6. For example, TGF-β and IL-6 synergistically promote the proteasome-dependent degradation pathway of Foxp3 (244), contributing to a partial loss of Foxp3 function. Interestingly, once colonic RORγt + T REG cells are generated, they display a remarkably stable phenotype, with maintained demethylation of T REG -specific genes like Foxp3, Ikzf2, Ctla4, Gitr and Ikzf4 (Eos) (191). In fact, these cells possess intrinsic mechanisms to avoid their full conversion towards T H 17 cells. As with IL-12 and T H 1 cells, subsequent signals provided by IL-23 (Stat3) can further destabilise the transcriptional program of RORγt + T REG cells and even engage an apoptotic cascade in these cells (245). Indeed, Il23r is amongst the genes upregulated by Stat3 and RORγt (246), making RORγt + eT REG cells particularly sensitive to IL-23 (245). In a recent report, Jacobse and colleagues demonstrated that IL-23R expression is restricted to RORγt + T REG cells under homeostatic conditions in the colon, and that IL-23R-deficient colonic T REG cells maintain a competitive advantage over WT T REG cells to survive in these conditions (245). Concomitantly, the authors demonstrated that T REG cells isolated from the lamina propria of patients with active IBD express high levels of Il23r and pro-apoptotic genes (126), suggesting an evolutionarily conserved mechanism orchestrates RORγt + eT REG survival and function.
In addition to IL-23, IL-1β was found to promote the differentiation of human CD4 + CD25 high CD127 low Foxp3 + T REG cells into IL-17-producing cells (247, 248), suggesting IL-1 may promote a pro-inflammatory phenotype in T REG cells. However, the role of IL-1 in RORγt + eT REG cells remains ill-defined. Using a T cell-mediated colitis model in mice, we demonstrated that a knock-out of IL-1R1 in T REG cells favors an accumulation of Gata3 + T REG cells over RORγt + T REG cells in the colon, as IL-1 directly promotes RORγt + T REG expansion (167). Despite this effect, a lack of IL-1 signaling in T REG cells results in more abundant accumulation in the colon compared to WT T REG cells (167), suggesting IL-1 is a negative signal for the maintenance of colonic T REG cells. Interestingly, there are specific situations where this effect is beneficial. For example, IL-1R1 -/- mice infected with Cryptococcus neoformans are particularly sensitive to the infection, as they cannot mount an effective T cell response (249). These mice lack RORγt + T REG cells and show increased ST2 + T REG cells in the lungs compared to WT mice (167), suggesting sustained immunosuppression. To counter this, activated T REG cells express high levels of the decoy receptor IL-1R2, which allows them to neutralize IL-1 signalling (250-252).
Conclusion
In this review, we aimed to detail some of the major elements that govern the trajectory of a precursor TREGP cell to a highly specialized TR-TREG cell. It is particularly interesting that the trajectory of a TREG cell is, in most regards, highly similar to that of the conventional T cell as it undergoes further polarization prior to reaching peripheral tissues. Importantly, the epigenetic malleability of TREG cells is central to their ability to perform outside of the thymus, as these transformations allow them to sense tissue-derived signals that, in turn, modulate their suppressive functions. However, while a great deal of information has accumulated in recent years, much remains to be understood about how these tissue- and inflammation-specific adaptations govern the function of TR-TREG cells. For example, the notion that TREG cells can adopt a specific differentiation path and revert to their previous state, labelled "plasticity" (253, 254), remains to be proven experimentally.
Finally, recent reviews have addressed how Foxp3 gene editing, IL-2 therapy, and the use of TREG cells as cellular therapies represent key strategies to engage human TREG cells (255). However, most of our current knowledge on TR-TREG cells has not been specifically exploited by TREG-targeting therapeutic approaches. There is, nonetheless, some evidence that these strategies may facilitate the expression of a tissue residency program. For example, muteins or low-dose therapies (256) aimed at promoting IL-2 signaling in TREG cells can promote the expression of genes associated with TR-TREG cell function, such as Il1rl1 (ST2), as well as migratory and other tissue-resident genes (257). Thus, it is of interest to understand how TREG-targeting strategies can influence both the developmental trajectory and the function of tissue-resident TREG cells. In addition, understanding the migratory cues that enable TR-TREG cells to recognize specific tissues could have far-reaching therapeutic benefits. Chimeric antigen receptor (CAR) TREG cells have been proposed as a new avenue to circumvent the constraints of low TREG cell numbers and the unknown TCR repertoire of TREG cells in autoimmune or graft-versus-host disease (GvHD) settings (258). However, this approach is still very novel and, in the absence of additional modifications, is expected to suffer from the same limitations as CAR-T cells (258, 259), including failure to adopt metabolic strategies for survival, to prevent exhaustion, and to maintain function in tissues. Thus, it is by establishing a solid understanding of the entire pathway leading TREG cells to adapt to non-lymphoid organs that we provide the basis for the development of better TREG cell-based therapies.
FIGURE 3 Specific inflammatory signals alter the trajectory of TREG cells in non-lymphoid sites by engaging specialized programs prior to and during their migration to inflamed tissues. During active inflammation, the presence of cytokines such as IFN-γ, IL-2, IL-6, and TGF-β can divert the differentiation of TREG cells to adopt helper-like phenotypes, allowing them to migrate to specific sites of inflammation alongside conventional T cells. Importantly, by acquiring these master transcription factors, effector TREG (eTREG) cells become responsive to signals provided by IL-12, IL-4, or IL-23. While these cytokines further promote the transcriptional program engaged by these specialized TREG cells, they can ultimately diminish their suppressive functions and allow them to contribute to inflammation as exTREG cells. Importantly, it remains to be determined whether the resulting population of emTREG cells in the tissue after inflammation acquires a residency program that leads it to form part of the TR-TREG cell population.
Metabolic engineering of phosphite metabolism in Synechococcus elongatus PCC 7942 as an effective measure to control biological contaminants in outdoor raceway ponds
Background The use of cyanobacteria and microalgae as cell factories to produce biofuels and added-value bioproducts has received great attention during the last two decades. Important investments have been made by public and private sectors to develop this field. However, it has been a challenge to develop a viable and cost-effective platform for cultivation of cyanobacteria and microalgae under outdoor conditions. Dealing with contamination caused by bacteria, weedy algae/cyanobacteria, and other organisms is a major constraint on establishing effective cultivation processes. Results Here, we describe the implementation in the cyanobacterium Synechococcus elongatus PCC 7942 of a phosphorus-selective nutrition system to control biological contamination during cultivation. The system is based on metabolic engineering of S. elongatus to metabolize phosphite, a phosphorus source that most organisms cannot metabolize, by expressing a bacterial phosphite oxidoreductase (PtxD). Engineered S. elongatus strains expressing PtxD grow on media supplemented with phosphite at a rate similar to that of the non-transformed control supplemented with phosphate. We show that when grown in glass flasks in media containing phosphite as the sole phosphorus source, the engineered strain was able to grow and outcompete biological contaminants even when the system was intentionally inoculated with natural competitors isolated from an irrigation canal. The PtxD/phosphite system was successfully used for outdoor cultivation of engineered S. elongatus in 100-L cylindrical reactors and 1000-L raceway ponds, under non-axenic conditions and without the need to sterilize containers and media. Finally, we also show that the PtxD/phosphite system can be used as a selectable marker for the selection of S. elongatus PCC 7942 transgenic strains, eliminating the need for antibiotic resistance genes. Conclusions Our results suggest that the PtxD/phosphite system is a stable and sufficiently robust strategy to control biological contaminants without the need for sterilization or other complex aseptic procedures. Our data show that the PtxD/phosphite system can be used as a selectable marker and allows production of the cyanobacterium S. elongatus PCC 7942 in non-axenic outdoor reactors at lower cost, which in principle should be applicable to other cyanobacteria and microalgae engineered to metabolize phosphite.
Background
Cyanobacteria are emerging as promising systems for biotechnological applications. They offer a number of advantages over other microorganisms, including rapid reproduction with conversion rates into biomass much higher than those of plants, as well as the possibility of photoautotrophic cultivation harvesting environmental CO2 and harnessing solar light energy [1]. Also, cyanobacteria do not compete for arable land with crop cultivation, and some species can be cultivated using wastewater [2]. A number of cyanobacterial species have been explored not only as laboratory models, but also for the development of platforms to produce ethanol, chemicals, high-value bioproducts, cosmetics, and nutraceuticals, and as biofertilizers in agriculture [3-5].
Although progress in developing genetic and genomic tools has been slow compared to that for bacteria or even plants, genetic manipulation of some cyanobacterial strains by classical genetic approaches, gene transfer, and genome editing-based techniques is now possible. Model cyanobacteria such as Synechococcus elongatus PCC 7942 and UTEX 2973, and Synechocystis sp. PCC 6803, have been engineered through different strategies to produce a number of chemicals such as isoprene, acetone, and ethanol, offering new avenues to potentiate production of target compounds [6-8]. For these applications to have a significant economic, environmental, and social impact, the main challenge is to design production schemes that warrant biomass production of the desired strain at a cost-effective level using inexpensive reactors with low operating costs. To date, the most convenient and cost-effective type of reactor for cyanobacteria and microalgae cultivation is the raceway pond operated under outdoor conditions. However, in outdoor systems, strains are challenged by environmental conditions and are highly susceptible to contamination by other microorganisms, including bacteria, yeasts, fungi, weed algae/cyanobacteria, and protozoans [9,10]. Biological contaminants must be quickly and effectively controlled, as these organisms can rapidly cause complete loss of production batches. Cyanobacteria are competitive organisms, and some fast-growing strains have been reported recently [9,10]. However, under laboratory conditions they display growth rates slower than those of many bacteria (i.e., the doubling times of S. elongatus UTEX 2973, S. elongatus PCC 7942, and Synechocystis PCC 6803 are 1.9, 4, and 7-10 h, respectively), representing a potential disadvantage when scaled up under outdoor conditions. A similar situation is faced by microalgal strains, which usually display even longer doubling times [11]. Thus, production costs become a major factor in selecting cyanobacteria versus bacteria as production systems.
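The practical consequence of these doubling-time differences is easy to quantify. As a rough sketch in Python, assuming ideal, unrestricted exponential growth and taking 8.5 h as the midpoint of the 7-10 h range cited for Synechocystis PCC 6803, one can compare how much each culture expands in a day:

```python
def fold_increase(doubling_time_h: float, hours: float) -> float:
    """Fold increase in cell number after `hours` of unrestricted
    exponential growth, given the doubling time in hours."""
    return 2 ** (hours / doubling_time_h)

# Doubling times cited in the text; 8.5 h is the assumed midpoint of
# the 7-10 h range given for Synechocystis PCC 6803.
for name, td in [("UTEX 2973", 1.9), ("PCC 7942", 4.0), ("PCC 6803", 8.5)]:
    print(f"{name}: {fold_increase(td, 24):,.0f}-fold in 24 h")
```

Under these idealized assumptions, a strain doubling every 1.9 h expands thousands of times per day, while one doubling every 8.5 h expands only about sevenfold, illustrating why slower strains are so vulnerable to fast-growing contaminants.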
Invasion of cyanobacterial cultures by contaminant organisms has long been recognized as a major constraint for large-scale cultivation, occurring not only in open cultivation systems, but also in closed and hybrid systems specially designed to decrease contamination risks [12,13]. To control contamination, chemical treatments such as herbicides, antibiotics, detergents, hypochlorite, and phenol are often used [14,15]. Moreover, in most cases sterilization of growth media and bioreactors must be implemented to maintain the desired monoculture. However, these practices substantially increase operating costs for closed or hybrid bioreactors, and open ponds cannot be sterilized or kept under sterile conditions. One strategy to deal with contamination in outdoor open ponds is the use of selective culture environments, such as high salt concentrations for halotolerant strains [12,16,17] or N-deprived media for N-fixing cyanobacteria [18]. Unfortunately, these practices are limited to a few cyanobacterial species and can also affect final product quality. An additional measure that may favor the establishment of the target strain in sequential open ponds of different sizes is the use of a high inoculum percentage, because opportunistic organisms, naturally present and more adapted to environmental conditions, may dominate the system more easily when the starter culture is small [19]. For production of bioproducts, deployment of the target cyanobacterium, native or engineered, should ideally be outdoors and with minimal external requirements.
Previously, we reported the metabolic engineering of Chlamydomonas reinhardtii to metabolize phosphite (Phi) as an effective strategy to control biological contamination. This was achieved by expressing a ptxD gene encoding a phosphite oxidoreductase (PtxD), which enables the engineered strains to metabolize Phi as the sole phosphorus (P) source and, therefore, to outcompete contaminant organisms unable to use Phi as a P source [20]. This system establishes a highly selective environment by using Phi as a growth control agent and P source at the same time.
More recently, Phi metabolism was also established as a biocontainment strategy that allows control of the proliferation of genetically modified S. elongatus in case of accidental release to the environment, as well as a selectable marker for microalgae, cyanobacteria, and several plant species [21-25]. Biotechnological applications of selective nutrition based on Phi and the ptxD gene for microalgae/cyanobacteria cultivation have been successful as proofs of concept using small-volume closed or semi-closed systems [20,21,23,26]. However, there are genuine concerns regarding the stability and robustness of the Phi metabolism trait for cultivation in larger-scale reactors under outdoor conditions. Here, we report the metabolic engineering of Synechococcus elongatus PCC 7942 to assimilate Phi by expressing only a codon-optimized sequence of ptxD, without the need for a Phi-specific transporter. We also report the successful control of contamination during the scaled-up cultivation of the engineered strain, up to a 1000-L raceway pond, under outdoor non-axenic conditions. The engineered strain became the dominant species in mixed cultures without any additional measure to avoid contamination. Our results show that the capacity to metabolize Phi provides a competitive advantage to the engineered strain that effectively prevents or severely limits invasion of open or closed culture systems by undesirable biological contaminants. We also show that the PtxD/Phi system can be used as a selectable marker in S. elongatus PCC 7942. This is the first report providing evidence of the stability and robustness of the PtxD/Phi technology for outdoor cultivation of the model cyanobacterium S. elongatus.
Synechococcus elongatus strains expressing ptxD are able to use phosphite as the sole phosphorus source
We hypothesized that a metabolic advantage allowing S. elongatus PCC 7942 (SeWT) to outcompete contaminant organisms could be provided by expressing the ptxD gene from P. stutzeri WM88 and using Phi as the sole P source. We first studied the effect of different concentrations of Phi (0.1, 0.2, 0.8, 1, 1.8, and 2 mM) on the growth of SeWT in liquid media, compared to growth in media supplemented with phosphate (Pi), the normal P source, or lacking a P source, as positive and negative controls, respectively. When SeWT cells were inoculated in control media devoid of a P source, no increase in cell density was observed (Additional file 1: Figure S1), while, as expected, in media supplemented with Pi the SeWT culture displayed rapid growth, reaching a cell density higher than 1.6 × 10^9 cells/mL after 6 days of cultivation (Additional file 1: Figure S1). By contrast, no increase in SeWT cell density was detected at any Phi concentration. These results showed that SeWT is unable to metabolize Phi.
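For readers reproducing these growth assays, the specific growth rate and doubling time implied by two cell-density measurements follow from standard exponential-growth arithmetic (mu = ln(N1/N0)/dt; td = ln 2/mu). The sketch below is illustrative only: the starting density of 1 × 10^7 cells/mL is an assumed value, not one reported here; only the final density and timeframe are taken from the text.

```python
import math

def specific_growth_rate(n0: float, n1: float, dt_h: float) -> float:
    """Specific growth rate mu (per hour) between two cell counts."""
    return math.log(n1 / n0) / dt_h

def doubling_time(mu: float) -> float:
    """Doubling time (hours) for a given specific growth rate."""
    return math.log(2) / mu

# Hypothetical example: inoculation at 1e7 cells/mL reaching the
# 1.6e9 cells/mL observed after 6 days (144 h) in the Pi control.
mu = specific_growth_rate(1e7, 1.6e9, 144)
print(f"mu = {mu:.3f} h^-1, doubling time = {doubling_time(mu):.1f} h")
```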
As a next step, we designed a DNA construct harboring a codon-optimized version of the ptxD gene, based on S. elongatus codon usage, under the control of the psbAI constitutive promoter [27] (see Methods section). This construct was inserted into a plasmid vector also containing a pSp::aadA2 gene cassette for resistance to spectinomycin and used for the genetic transformation of SeWT by homologous recombination into neutral site 1 (NS1). Hundreds of independent spectinomycin-resistant colonies were obtained in each transformation experiment. Six fast-growing colonies (SeptxD-1 to -6) were selected to test their capacity to use Phi as their sole P source. The six selected transformed strains were directly cultured in liquid media containing 1.8 mM Phi as the sole P source. After 6 days of cultivation, we observed that the six ptxD-transgenic strains were capable of growing using Phi as a sole P source, to an extent similar to that observed for SeWT in media supplemented with Pi (Fig. 1a). However, when we determined cell density (cells/mL), some quantitative growth differences were observed between the ptxD-transgenic strains (Fig. 1b). Strains SeptxD-2, -5, and -6 reached densities of over 3 × 10^9 cells/mL in media supplemented with Phi, similar to or slightly better than that observed for SeWT using Pi (2.9 × 10^9 cells/mL), whereas strains SeptxD-1, -3, and -4 reached slightly lower cell densities (~2.3 × 10^9 cells/mL) (Fig. 1b). The expression level of the ptxD gene was also determined by RT-qPCR analysis. The ptxD transcript was detected in all the engineered strains, at higher levels in strains SeptxD-2, -5, and -6 than in strains SeptxD-1, -3, and -4; no ptxD transcript was detected in the WT untransformed control (Fig. 1c). The higher level of ptxD transcript in strains 2, 5, and 6 corresponds with their better growth in Phi media compared with strains 1, 3, and 4. The activity of the PtxD enzyme in one of the transgenic strains, SeptxD-2, was also confirmed by an optimized fluorometric protocol (Fig. 1d). As expected, no PtxD activity was observed in the WT control, whereas the signal was clearly detected in the transgenic strain (Fig. 1d). The correct integration of the expression cassette in the six engineered strains initially tested was confirmed by PCR using primers specific for NS1 and the ptxD gene, and the fragments were verified by Sanger sequencing (Additional file 1: Figure S2).
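The paper reports RT-qPCR differences in ptxD transcript levels between strains but does not give raw Ct values or state the quantification model used. For illustration only, a common way to express such differences is the 2^-ddCt method, sketched below with invented Ct values; the reference gene and both strains' Ct values are assumptions, not data from this study.

```python
def fold_change(ct_target_s: float, ct_ref_s: float,
                ct_target_c: float, ct_ref_c: float) -> float:
    """Relative expression by the 2^-ddCt method: s = sample strain,
    c = calibrator strain; ref = housekeeping reference gene."""
    ddct = (ct_target_s - ct_ref_s) - (ct_target_c - ct_ref_c)
    return 2 ** (-ddct)

# Invented Ct values comparing a high expresser (e.g., SeptxD-2) to a
# lower one (e.g., SeptxD-1); the paper does not report raw Ct data.
print(round(fold_change(21.0, 18.0, 23.5, 18.2), 1))  # ~4.9-fold
```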
To determine the concentration of Phi that sustained optimal growth of the engineered strains, we tested the growth of SeptxD-2 in media supplemented with 0.1, 0.8, 1.5, 1.8, 2.5, 3, and 5 mM Phi as a sole P source. We observed that SeptxD-2 was able to grow at almost all tested Phi concentrations, except 0.1 mM, and that optimal growth was achieved at 1.8 mM Phi (Additional file 1: Figure S3). The poor growth of SeptxD-2 at 0.1 mM is probably due to an insufficient amount of P to sustain the growth of the transgenic line, whereas high Phi concentrations (2.5-5 mM) may exert an inhibitory effect. The slight growth of SeWT in media with Phi observed in some of these experiments was probably due to the use of an inoculum that came from media containing 0.2 mM Pi (Fig. 1b); in such media, cells accumulate Pi reserves that can sustain initial growth in media lacking a P source or in the presence of Phi [28].
SeptxD-2 transgenic strain is able to overcome competition from a model microalga and naturally occurring contaminants
The capacity of the strain of interest to outcompete other microalgae/cyanobacteria is crucial to obtaining the desired product under outdoor conditions. Cultivation media provide all the essential nutrients (e.g., P) that favor cell viability and facilitate reproduction of the strains, allowing maximum production of biomass under optimal conditions. Therefore, any microalgae/cyanobacteria naturally present in the environment will also compete with the desired strain for P and other resources, which is crucial at early stages of cultivation. To test whether Phi metabolism provides ptxD-transgenic strains a growth advantage against contaminant organisms, we performed competition experiments between SeptxD-2 and a fast-growing microalga. We selected Chlorella sorokiniana (CsWT) as the competitor microalga because it displays a growth rate similar to that of S. elongatus in BG-11 media and has a morphology clearly distinguishable from that of the cyanobacterium [29]. Since SeWT is rod-shaped and CsWT is larger and spherical, each cell type is easily identified by microscopy. In these experiments, we included monocultures of each strain under Pi and Phi treatment, as well as mixed cultures with three different inoculum proportions, 1:1, 1:4, and 4:1 (SeptxD-2:CsWT) (Fig. 2). When the strains were grown as monocultures in media with Pi, we observed similar growth rates for both species, reaching a density of about 1.2 × 10^9 cells/mL after 8 days of cultivation (Fig. 2a, Additional file 1: Figure S4a, b). However, when cultured in media with Phi, only SeptxD-2 proliferated rapidly, with about 20% higher growth than that obtained in media supplemented with Pi, whereas CsWT was unable to grow and its cell density did not increase with time (Fig. 2a, Additional file 1: Figure S4). As expected, when the transgenic strain was confronted with CsWT and cultivated in Phi media, SeptxD-2 rapidly outgrew the competitor microalga at all inoculation proportions (1:1, 1:4, and 4:1). We did not observe a significant increase in CsWT cell density during the 8 days of the competition experiment (Fig. 2a-d). Growth of SeptxD-2 increased substantially from the first day of cultivation, even when the initial inoculum of the CsWT competitor was fourfold higher than that of the ptxD-engineered strain (Fig. 2c). Under this condition, SeptxD-2 reached 1.4 × 10^9 cells/mL after 8 days of cultivation, substantially superior to the density displayed by CsWT under the same condition (Fig. 2c). Differences in growth were easily visible under a microscope. Only spherical CsWT or rod-shaped SeptxD-2 cells were observed when grown as monocultures using Pi (Fig. 2e), whereas in the competition experiments in Phi media SeptxD-2 cells predominated over the competitor (Fig. 2e).
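The outcome of such competition experiments can be anticipated with a minimal model: if on Phi media only the ptxD strain grows while the competitor's net growth rate is near zero, the transgenic strain dominates the mixture within days regardless of the starting ratio. The sketch below assumes independent exponential growth and an illustrative growth rate; the starting densities and rate are not fitted to the data in Fig. 2.

```python
import math

def mixed_culture(n_se: float, n_cs: float, mu_se: float,
                  mu_cs: float, hours: float) -> tuple:
    """Cell densities after `hours`, assuming independent exponential
    growth with no interaction beyond access to the P source."""
    return n_se * math.exp(mu_se * hours), n_cs * math.exp(mu_cs * hours)

# On Phi, only the ptxD strain grows (mu_cs ~ 0); mu_se is illustrative.
for ratio, (se0, cs0) in {"1:1": (5e6, 5e6), "1:4": (2e6, 8e6),
                          "4:1": (8e6, 2e6)}.items():
    se, cs = mixed_culture(se0, cs0, mu_se=0.035, mu_cs=0.0, hours=192)
    print(f"{ratio}: SeptxD-2 fraction after 8 d = {se / (se + cs):.4f}")
```

Even starting at a fourfold disadvantage (1:4), the growing strain exceeds 99% of the population after 8 days in this toy model, consistent with the qualitative pattern observed.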
To test whether the transgenic strain is also able to outcompete more natural competitors from the environment, we carried out experiments similar to those described above, but using competitors that we isolated from stationary irrigation canals located near our facilities. We collected a number of samples from two different water canals that feed agricultural fields in the Bajío region (Guanajuato, México; see Methods section). Algal blooms frequently occur in these water bodies due to nutrient run-off from crop fertilization. From these samples, we obtained consortia (denominated competitors, Comp) that grew well in the same media used for S. elongatus. To gain an idea of the nature of the wild organisms present in the isolates, the 16S rRNA of Comp 1 was sequenced. The results indicated the presence of members of the genera Nostoc and Allinostoc (Additional file 2: Table S1). Two of the consortia with predominantly spherical cells, Comp 1 and Comp 2, were used to challenge axenic SeptxD-2 using 1:1 or 1:4 (SeptxD-2:Comp) inoculum proportions. After 8 days of growth as monocultures, Comp 1 and SeptxD-2 displayed vigorous growth in media supplemented with Pi as the P source, reaching cell densities of 4.3 × 10^10 and 4.9 × 10^10 cells/mL, respectively (Fig. 3a-c). As previously observed, SeptxD-2 was also able to sustain rapid growth in media supplemented with Phi as a sole P source, reaching a cell density of 4.36 × 10^10 cells/mL, whereas no detectable growth was registered in media supplemented with Phi for the microorganisms present in the Comp 1 consortium (Fig. 3a-c). When SeptxD-2 was challenged by Comp 1 as a competitor in Phi media, SeptxD-2 sustained normal growth, achieving a density of 4.79 × 10^10 cells/mL, whereas Comp 1 cells did not grow (Fig. 3d, Additional file 1: Figure S4). In media supplemented with Pi, both SeptxD-2 and Comp 1 reached cell densities similar to those observed when cultivated as monocultures (Fig. 3e; Additional file 1: Figure S5). Interestingly, in the case of Comp 2, we observed less growth when it was cultivated as a monoculture in Pi media (Additional file 1: Figure S6c), which was enhanced when in competition with SeptxD-2 under the same condition (Additional file 1: Figure S6e). However, when in competition with SeptxD-2 in Phi media, growth of Comp 2 was effectively controlled by Phi (Additional file 1: Figure S6d). These data suggest that the PtxD/Phi system is effective at controlling contamination by organisms present in the environment that could act as natural contaminants and compromise the growth of the microalga/cyanobacterium of interest in raceway ponds. Therefore, the PtxD/Phi system seems suitable for outdoor cultivation under non-sterile conditions.

ptxD together with phosphite is effective to control contaminants in outdoor cultivation systems

Scaling up outdoor cultivation of cyanobacteria and microalgae is a prerequisite to implementing commercially viable processes to produce biomass or bioactive compounds. In order to validate the effectiveness and robustness of the PtxD/Phi system under outdoor open conditions, without sterilization of media or bioreactors, we implemented a cultivation process to scale up the SeptxD-2 strain to an outdoor 1000-L raceway pond (Additional file 1: Figure S7). The first steps of propagation of SeptxD-2 and SeWT, from Petri dishes to 1-L glass bottles, were carried out under sterile conditions in the laboratory.
Cultures of SeWT and SeptxD-2 were initiated by scraping a portion of the culture from solid media in a Petri dish, which was suspended in 5 mL of liquid media to inoculate a 50-mL flask containing 25 mL of media; 20 mL of this culture were used to inoculate 700 mL of media in 1-L glass bottles, which were then used to inoculate 7-L home-designed cylindrical bioreactors. Since the percentage of inoculum appears crucial to starting a rapidly growing culture, we first tested 3, 5, and 7% inocula of the transgenic strain. We observed that a 3 or 5% SeptxD-2 inoculum was insufficient to establish a successful culture, whereas a 7% inoculum led to a high final cell density (Additional file 1: Figure S8). Using 7% inocula of SeWT and SeptxD-2 cultivated in sterile Pi and non-sterile Phi media, respectively, we observed that the transgenic strain produced slightly more biomass (0.53 g/L) than the control (0.43 g/L) (Additional file 1: Figure S9). Cultures produced in 7-L cylindrical bioreactors were then used to inoculate home-designed 100-L cylindrical bioreactors. To determine the percentage of inoculum required to establish a successful 100-L culture, we again tested 3, 5, 7, and 10% inocula of SeWT and SeptxD-2, cultivated in non-sterile Pi and Phi media, respectively, in a 100-L cylindrical open reactor. It is important to note that at this step neither the reactors nor the media used to grow SeWT and SeptxD-2 were sterilized prior to inoculation. Eight days after inoculation, we observed that no amount of SeWT inoculum allowed the establishment of a successful culture in the 100-L reactor; judging by the slightly milky color of the media, apparently only non-photosynthetic organisms grew in Pi media (Additional file 1: Figure S10). In the case of SeptxD-2, the cyanobacterium did not proliferate using 3 and 5% inocula, and little bacterial proliferation was observed, as the media remained nearly transparent (Additional file 1: Figure S10). A successful SeptxD-2 culture in the 100-L reactor was established using 7 and 10% inocula, with slight differences in growth curves (Additional file 1: Figure S10), suggesting that a 7% inoculum can be used to decrease operating costs (Additional file 1: Figure S11).
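In volume terms, these inoculum percentages map directly onto the scale-up train: a 7% (v/v) inoculum for the 100-L reactor equals the working volume of one 7-L cylindrical bioreactor, and the 10% later used for the 1000-L pond corresponds to one 100-L reactor. A trivial helper makes the arithmetic explicit:

```python
def inoculum_volume(target_l: float, percent: float) -> float:
    """Inoculum volume (L) needed for a given working volume and % (v/v)."""
    return target_l * percent / 100.0

# 7% was the smallest inoculum that reliably established the 100-L
# cultures; 10% was used to start the 1000-L raceway pond.
print(inoculum_volume(100, 7))    # 7.0 -> one 7-L cylindrical reactor
print(inoculum_volume(1000, 10))  # 100.0 -> one 100-L reactor
```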
Fig. 3 Growth competition experiments between SeptxD-2 and a natural competitor using phosphite as the only phosphorus source. a The S. elongatus transgenic strain (SeptxD-2) and a natural competitor (Comp 1) were grown in monocultures and mixed cultures using BG-11 medium supplemented with 0.2 mM phosphate (Pi) or 1.8 mM phosphite (Phi) as the phosphorus source. Cultures were photographed 8 days after inoculation. Cell number (cells/mL) was determined for the monocultures of SeptxD-2 (b) and the natural competitor (Comp 1) (c), and for the mixed cultures under Phi (d) and Pi (e) treatments. Values are the mean of three replicates ± SD. Bars with asterisks are significantly different from the control (Student's t test, *P < 0.05, **P < 0.01, ***P < 0.0001)

To further scale up S. elongatus to an outdoor 1000-L raceway pond, cultures in 7-L and 100-L cylindrical bioreactors were sequentially prepared using 7% inocula and used to inoculate 1000-L raceway ponds outdoors, under non-sterile conditions and without the use of any other agent to control contamination. During the outdoor experiments, we monitored solar irradiance, maximum and minimum environmental temperature, culture pH and temperature, and cell number (cells/mL). During summer experiments in 2017, 2 days after inoculation of the SeWT strain in 100-L cylindrical bioreactors with Pi media, the culture began to decline and completely collapsed on day 4 (Fig. 4a, b), whereas SeptxD-2 grew normally, reaching a cell density of 2 × 10^10 cells/mL and a biomass production of 0.44 g/L after 8 days (Fig. 4a, b). When SeptxD-2 was inoculated in the 1000-L open raceway pond, growth of the transgenic strain was stable during the timeframe of the experiment and reached 1.5 × 10^10 cells/mL, allowing a biomass production of about 0.35 g/L (Fig. 5a-c). Temperature and pH of the media, as well as environmental temperature and solar irradiance, were relatively stable during the process, and only slight changes were detected over the timeframe of the experiments (Additional file 2: Table S2). Similar results were observed in experiments performed during different seasons of different years (fall 2016, summer 2019) (Additional file 1: Figure S11; Additional file 2: Tables S3 and S4).
Fig. 4 SeptxD-2 cultivation in outdoor cylindrical reactors using phosphite as phosphorus source under non-sterile conditions. a S. elongatus PCC 7942 (SeWT) and the SeptxD-2 transgenic strain were grown in BG-11 media prepared with industrial-grade reagents and supplemented with 1.8 mM phosphite (Phi) and 0.2 mM phosphate (Pi) as P source, respectively. b Cell number (cells/mL) and c biomass were determined every 2 days during the experiments. d Colony forming units (CFU/mL) derived from SeptxD-2 and SeWT cultures. Cultures were performed using 7% (v/v) inocula and non-sterile 100-L cylindrical reactors bubbled with air outdoors. Values are the mean ± SD of three replicates. Bars with asterisks are significantly different from the control (Student's t test, *P < 0.05, **P < 0.01, ***P < 0.0001). Photographs correspond to experiments carried out during summer 2017

To monitor the presence of contaminants in the cultures of SeWT and SeptxD-2 in outdoor 100-L cylindrical reactors, we took samples and determined colony forming units (CFU) of potential bacterial contaminants using Luria-Bertani (LB) rich medium, commonly used to grow bacteria. We observed that at 6 days after starting the experiment, CFU had increased in media supplemented with Pi, where SeWT was growing, reaching 5.7 × 10^4 CFU/mL (Fig. 4d, Additional file 1: Figure S12). By contrast, in the SeptxD-2 culture using media supplemented with Phi, contamination was several orders of magnitude lower (< 10^2 CFU/mL) than in media supplemented with Pi (Fig. 4d; Additional file 1: Figure S12). Our data suggest the PtxD/Phi system is stable under varying environmental conditions and robust enough to provide a competitive advantage to the strain of interest and to control external contamination in open reactors of at least 1000 L. Therefore, the system has the potential for large-scale production of S. elongatus and other species to produce lipids for biofuels or other bioactive compounds.
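CFU/mL values such as the 5.7 × 10^4 reported here come from standard dilution plating arithmetic. The dilution factor and plated volume in the sketch below are hypothetical, chosen only to reproduce that figure:

```python
def cfu_per_ml(colonies: int, dilution_factor: float, plated_ml: float) -> float:
    """Plate-count formula: CFU/mL = colonies x dilution factor / mL plated."""
    return colonies * dilution_factor / plated_ml

# Hypothetical plate reproducing the reported ~5.7e4 CFU/mL:
# 57 colonies on a 1:100 dilution with 0.1 mL spread per plate.
print(cfu_per_ml(57, 100, 0.1))  # 57000.0
```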
To comply with biosafety measures in Mexico, we monitored the potential dispersion of the transgenic strain into the environment using an arrangement of traps in which 1200-L plastic tanks were filled with 500 L of tap water supplemented with BG-11 media (see Methods section; Additional file 1: Figure S13). The tanks were placed north, south, and east of the cultivated open pond. In the east direction, three traps were placed 3, 6, and 28 m from the raceway facility, whereas in the north and south a single tank was placed 1.5 m from the cultivation site (Additional file 1: Figure S13). Samples were taken three times a week and analyzed by PCR and RT-qPCR to detect the presence of the ptxD gene. No positive signal was detected in any of the samples collected from the different traps (Additional file 1: Figure S13), suggesting that no dispersion of the engineered strain occurred during the timeframe of these experiments.

Fig. 5 SeptxD-2 cultivation in raceway ponds using phosphite as phosphorus source under non-sterile conditions. a The SeptxD-2 transgenic strain was grown in BG-11 media prepared with industrial-grade reagents and supplemented with 1.8 mM phosphite (Phi) as P source in 1000-L raceway ponds under non-sterile, outdoor conditions. b Cell number (cells/mL) and c biomass were determined every 2 days during the experiments. Values are the mean of three replicates ± SD (Student's t test, *P < 0.05, **P < 0.01, ***P < 0.0001). Photographs correspond to experiments carried out during summer 2019
PtxD and phosphite can be used as a selectable marker in S. elongatus PCC 7942
Previous work demonstrated that the PtxD/Phi system can be used as an effective alternative selectable marker in Synechococcus sp. PCC 7002 [22]. To test whether the system can also be implemented in S. elongatus PCC 7942, an aliquot of cells (8.7 × 10^8 cells/mL) was naturally transformed with the psyn_6_PtxDopt vector and spread directly onto agar plates with 1.8 mM Phi as the sole P source, a concentration shown to be very effective at suppressing growth of untransformed cells (Additional file 1: Figure S1). After 15 days, hundreds of green isolated colonies emerged, similar to spectinomycin selection (Fig. 6; Additional file 1: Figure S14). However, selection and isolation of Phi-resistant colonies was more effective when low amounts of cells (i.e., 1.7 × 10^8 cells/mL) were spread per plate, because when complete recombined aliquots were plated, according to the standard recommended protocol, a lawn of the cyanobacterium was observed and colony formation efficiency was poor (Fig. 6). The use of a lower Phi concentration (0.5 mM) failed to select transgenics (Fig. 6). Eleven colonies were randomly selected for PCR analysis to detect the ptxD gene and tested for growth using Phi. All of them were able to grow using 0.8 and 1.8 mM Phi and were found PCR-positive (Additional file 1: Figure S15). Thus, this system can be used to select colonies of S. elongatus PCC 7942 in which the desired heterologous genes have been integrated and are functional.

Fig. 6 The PtxD/Phi system can be used as a selectable marker in S. elongatus PCC 7942. a Number of isolated colonies obtained 15 days after genetic transformation with the psbAI::SeptxD construct. Three different amounts of cells (1.7, 6.1, and 8.7 × 10^8) were spread onto agar plates with BG-11 medium supplemented with 1.8 mM Phi or 100 μg/mL spectinomycin. b Photographs of the agar plates were taken 15 days after cells were spread. A close-up view of the colonies is presented in the upper-right corner of the pictures. Values are the mean of three independent experiments with multiple plates each ± SD. Bars with asterisks are significantly different from the spectinomycin selective condition (Student's t test, *P < 0.05, **P < 0.01, ***P < 0.0001; ns, no significant differences)
Discussion
Biological contamination has been recognized by the NAAB (National Alliance for Advanced Biofuels and Bioproducts) and the DOE ASP (Aquatic Species Program) as one of the most prevalent issues for large-scale cultivation of microalgae and cyanobacteria using different types of bioreactors [30,31]. Strategies such as computerized methods, use of chemicals, and high-throughput sequencing for crop protection are thus highly recommended to maintain monocultures. Adopting these measures is critical when open ponds are used, as this type of bioreactor typically experiences frequent contamination that produces culture crashes. Therefore, given the lack of effective, simple, and low-cost alternatives generally applicable for controlling biological contaminants, the usual recommendation is to search for more competitive local strains displaying the desired phenotypes rather than using elite and model strains. Although in some instances this is feasible, the potential lack of molecular tools and methods for improvement of wild species could hamper their exploitation as cell factories.
Here, we describe the efficacy of the phosphite oxidoreductase (PtxD)/phosphite (Phi) system to control biological contaminants at a scale of up to a 1000-L raceway pond using the model cyanobacterium Synechococcus elongatus. The PtxD/Phi system was an effective tool to control biological contamination during the timeframe of the experiments by providing S. elongatus with the capacity to metabolize Phi as a P source. In our experiments, we challenged the transgenic strain with the microalga Chlorella sorokiniana and two microbial consortia isolated from a nearby location. S. elongatus capable of metabolizing Phi was able to grow using Phi as its sole P source, outcompeting intentionally inoculated or naturally present contaminants under non-sterile outdoor conditions. Our research group originally developed the PtxD/Phi system for the control of weeds in agriculture [32,33] and later proved it to also be effective for the control of biological contaminants in the cultivation of microalgae [20]. More recently, other research groups reported implementation of the PtxD/Phi system in model cyanobacteria, such as Synechocystis sp. PCC 6803, S. elongatus PCC 7942, and Synechococcus sp. PCC 7002 [21,22,26]. Furthermore, the system was also refined to develop biocontainment strategies through more complex molecular designs [21-23]. Previous work with Synechocystis sp. PCC 6803 and S. elongatus PCC 7942 proposed using a Phi-specific transporter system, in addition to the ptxD gene, as an essential component for successful implementation of Phi metabolism. This is because the ptxD-expressing strains were unable to grow in media with low Phi concentrations as the P source, and the WT strains seem to be unable to take up Phi [21,26]. Here, we were able to successfully implement Phi metabolism in S. elongatus PCC 7942 by expressing only a codon-optimized version of the ptxD gene, without the need to express an additional Phi-specific transporter. We observed that Phi concentrations below 0.8 mM are insufficient to sustain normal growth of the transgenic strains and that the most effective Phi concentration to support P nutrition is 1.8 mM. Similar results were reported previously for ptxD-expressing Synechococcus sp. PCC 7002 strains [22]. Those engineered strains were able to use Phi without the need for a Phi transporter, but displayed restricted growth at 0.37 mM Phi, which improved at higher Phi concentrations and approached the normal growth of Pi cultures at 7.4 mM Phi [22]. Motomura et al. [21] reported the use of 0.2 mM Phi in their experiments, in which the WT strain displayed slight growth. Their engineered strains, harboring both the ptxD gene and Phi-transporter genes, also showed reduced growth using Phi compared to the WT using Pi, which was attributed to the type of transporter they expressed (HtxBCDE), which has an extremely low affinity for phosphite (Kd > 10 mM) [34].
The mechanisms of transport and metabolism of Phi have been studied in detail in P. stutzeri WM88, in which Phi is taken up via the PtxABC transporter and then oxidized into Pi by the PtxD oxidoreductase [34,35]. Some reports suggest that certain marine cyanobacteria are able to take up and metabolize Phi as a P source and that Phi metabolism-related genes may be widely present in marine environments [26,36,37]. To our knowledge, this has been experimentally validated only for Prochlorococcus MIT930 and Trichodesmium erythraeum IMS101, which are capable of using Phi as a P source and possess a functional ptxABCD operon [26,37]. However, no Phi transporters have been experimentally identified to date in many other microalgae or cyanobacteria, including S. elongatus PCC 7942 and Synechococcus sp. PCC 7002, and efforts to identify them using available genomic information have also been unsuccessful [21,22,38]. Although the physicochemical properties of Phi and Pi are distinct, it has been proposed that, due to their structural similarities, Phi enters plant and microalgal cells through the same proteins that transport Pi into the cell [36,39,40]. The reason a higher concentration of Phi than of Pi is required to provide optimal P nutrition to a cyanobacterium expressing PtxD could be that the Pi transporters have a lower affinity for Phi than for Pi. In plants, two types of Pi transporters exist, low-affinity and high-affinity transporters, the latter operating when external concentrations of Pi are below 100 μM and the former when concentrations of Pi are above 300 μM [41,42]. Since ptxD-transgenic plants sustain normal growth when supplied with 100 μM Phi, it is likely that both high- and low-affinity plant Pi transporters are capable of efficiently transporting Phi. Therefore, it is also possible that only low-affinity, not high-affinity, Pi transporters are able to transport Phi into cyanobacterial cells. In this context, the fact that ptxD-expressing Synechococcus strains do not require a Phi-specific transporter to take up Phi and display normal growth only at high Phi concentrations is a potential advantage for outdoor cultivation over a system dependent on a Phi transporter. Further research is needed to elucidate Phi transport in cyanobacteria.
To date, the potential of the Phi-based system to control contamination had only been tested using small volumes of growth media in Petri dishes, laboratory glassware, and small closed bioreactors [20-22, 26, 43].
Here we described the scaled-up cultivation of a PtxD-engineered S. elongatus strain under outdoor conditions, using non-sterile 100-L cylindrical bioreactors and 1000-L open raceway ponds without any measure to control contaminants. The ptxD-expressing culture performed normally outdoors, whereas the WT culture collapsed due to contamination 4 days after inoculation. SeptxD-2 displayed normal tolerance to varying environmental temperature, solar irradiance, and pH during cultivation in a 1000-L outdoor raceway pond, suggesting that metabolism of Phi has no negative effect on the fitness of engineered S. elongatus. Moreover, we observed clear control of contamination, as minimal presence of bacteria, rotifers, grazers, fungi, or ciliates was observed.
Highly variable biomass concentrations have been reported previously for different microalgal strains using different outdoor cultivation systems (polybags, stock ponds, and raceways). For instance, using stock ponds, the biomass concentration of nine different strains ranged from 0.162 to 0.914 g/L [44], but, apparently due to photobleaching, only two of these strains, Scenedesmus dimorphus (UTEX 1237) and Nannochloropsis salina (NCMA 1776), were able to display consistent growth in raceway ponds [44]. We obtained S. elongatus biomass concentrations of 0.44 and 0.35 g/L in 100-L cylindrical bioreactors and in 1000-L open ponds, respectively. Therefore, the biomass production we achieved using the PtxD/Phi system is an intermediate value in this range, but it was achieved in a raceway pond rather than in a stock pond. Moreover, authors often report culture crashes due to contamination [44-49], which was not the case when using the PtxD/Phi system. Sterilization of growth media using a microfilter cartridge and of tubing by washing with 95% ethanol for 5 min are commonly applied measures to prevent contamination during outdoor cultivation in polybags, stock ponds, and raceways [44]. Our results suggest that one of the major advantages of using Phi-metabolizing strains is the possibility of avoiding all these preventive measures, as the system does not require sterilizing growth media, reactors, tools, or piping, thereby reducing one of the major operating costs for large closed and open bioreactors [20,21,23]. Therefore, the system could potentially be implemented for microalgal and cyanobacterial species other than those capable of growing under extreme environmental conditions. Although studies using higher-volume raceway ponds are necessary, the PtxD/Phi system has the potential to open new opportunities to exploit strains with interesting characteristics that are highly susceptible to contamination.
The amount and quality of the seed culture are two important factors in moving from the laboratory to large-scale cultivation of microalgae/cyanobacteria, as they determine the success or failure of the process [19]. During scale-up, the use of variable amounts of seed inoculum has been reported for high-volume bioreactors; however, in general, using seed inocula of about 10 to 25% of the final culture volume with a cell density above 1 × 10^7 cells/mL favors successful cultivation and avoids an unwanted long lag phase, thus reducing the risk of contamination [15,44]. Our results showed that using the PtxD/Phi system we were able to guarantee the quality of the seed culture throughout the different steps of the process under non-axenic conditions and to reduce the required inoculum to 7% for establishing a successful 100-L reactor culture or to 10% for a 1000-L raceway pond.
A comparative economic assessment of the Phi-based system implemented in S. elongatus against other control measures (i.e., antibiotics and sterilization by filtration), using raceway ponds for cultivation, suggests that the PtxD/Phi system would decrease operating costs. Using Phi as the P source and contamination control agent, operating costs would be 15.47 and 37% lower (Additional file 2: Tables S5 and S6) than using antibiotics [9,50] or sterilization by filtration, respectively [51-53]. The cost of cultivation in open ponds is substantially lower than that of other types of bioreactors, suggesting the PtxD/Phi system holds great promise for helping to overcome the contamination issue and opening the possibility of using this type of bioreactor to cultivate many other microalgal/cyanobacterial species to produce biofuels and other valuable products.
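These percentage savings reduce to simple relative-cost arithmetic. The cost figures in the snippet below are invented placeholders (the real values are in Additional file 2: Tables S5 and S6), normalized and chosen only to reproduce the stated 15.47% and 37% reductions:

```python
def percent_savings(cost_alternative: float, cost_phi: float) -> float:
    """Operating-cost reduction (%) of the Phi system vs an alternative."""
    return 100.0 * (cost_alternative - cost_phi) / cost_alternative

# Placeholder costs normalized to 100 for the alternative treatment.
print(percent_savings(100.0, 84.53))  # 15.47 (vs antibiotics)
print(percent_savings(100.0, 63.00))  # 37.0  (vs filter sterilization)
```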
In addition to phenotype stability under outdoor conditions, the ecological risk of potential accidental environmental release of genetically engineered microalgae/cyanobacteria has emerged as a major concern during large-scale cultivation [54,55]. Evaluation in open pond production systems represents a crucial step in understanding the potential ecological risk of genetically engineered cyanobacteria/microalgae. These are essential data for developing a regulatory framework for the responsible and sustainable use of genetically engineered cyanobacteria, which are key to meeting increasing food, fuel, and value-added product demand.
In the experiments performed here, potential dispersion of the genetically engineered strain was also assessed by implementing a set of traps surrounding the raceway ponds, as reported previously [55]. The transgene could not be detected in any of the tanks used as traps during the timeframe of the experiments, suggesting no dispersion of the transgenic strain under our experimental conditions. It is important to keep in mind that, whereas the PtxD/Phi system confers a competitive advantage on the engineered strain over other organisms, this trait is only effective when Phi is present in amounts sufficient to support its growth. Although some studies have revealed a primary role of Phi in the prebiotic synthesis of phosphorylated biomolecules on early Earth, Phi has rarely been detected in the environment [56]. Nowadays, most of the Phi present in nature might be of anthropogenic origin, since it can be a byproduct of industrial processes such as metal electroplating [57] and of agricultural practices, because Phi salts are used as fungicides and growth stimulators [58]. It has been possible to detect Phi in rivers, lakes, swamps, and geothermal pools as a result of agricultural run-off and industrial wastewater [59-62]. In these studies, detected Phi concentrations ranged from 0.1 to 1.3 μM, which would not provide an advantage to the engineered strain over other native microalgae/cyanobacteria. Therefore, if an accidental escape occurs, and considering that high concentrations of Phi (0.8 to 1.8 mM) are required to sustain normal growth of the transgenic strain, the Phi metabolism trait would not represent a competitive advantage in natural conditions, and the transgenic strain would display requirements similar to those of its WT counterpart in a natural community. Nevertheless, it is important to consider biocontainment strategies, such as those already developed with the PtxD/Phi system, to avoid any risk due to accidental release to the environment and horizontal gene transfer [21,23]. However, as these biocontainment strategies have only been assessed in closed systems and at very small scale, their validation in raceway ponds under outdoor conditions is also necessary. Our study represents the first assessment of the effectiveness and robustness of the PtxD/Phi system to control contamination during outdoor cultivation of engineered S. elongatus, which can be implemented for the cultivation of other microalgae/cyanobacteria.
Conclusions
Our data show that the PtxD/Phi system has great potential for effective control of biological contaminants in open raceway ponds for cost-effective cultivation of the cyanobacterium S. elongatus, which in principle should be applicable to other cyanobacteria and microalgae engineered to metabolize Phi. The expression of PtxD does not have any detectable negative effect on the fitness of the engineered strains grown in outdoor raceway ponds and is effective as a selectable marker system. Finally, since Phi is not found in nature at concentrations high enough to support the growth of engineered strains, the risk of gene dispersal or increased fitness under natural conditions is low.
Methods

Isolation of microalgal/cyanobacterial consortia

Microalgal/cyanobacterial consortia Comp 1 and Comp 2 were isolated from pond water (coordinates 20°43′15.6″N 101°19′51.1″W) close to the pilot plant. During the isolation process using BG-11 [63], the collected samples were immediately subjected to blooming by incubation in a growth chamber at 34 ± 1 °C and an irradiance of 100 μmol photons m^-2 s^-1 of continuous fluorescent white light, and then to serial dilutions. Samples were then spread onto BG-11 agar plates and incubated for 2 weeks. Afterwards, isolated single colonies were picked and maintained on BG-11 agar plates.
Plasmid construction and genetic transformation
The ptxD-encoding gene from P. stutzeri WM88 (AF061070, http://www.ncbi.nlm.nih.gov/nuccore/AF061070) was codon-optimized to the codon usage of S. elongatus using the OptimumGene™ Codon Optimization Tool (GenScript, Piscataway, NJ) and placed under the control of the psbAI constitutive promoter of the psyn_6 vector (Life Technologies Corporation, Carlsbad, CA) [27]. psyn_6 harbors a spectinomycin resistance cassette for the selection of E. coli and S. elongatus and carries NS1 (neutral site 1) homologous recombination sites for integration of the transforming DNA into the S. elongatus genome. NS1a and NS1b sites are also present in the S. elongatus genome to guide double homologous recombination of the DNA contained between the neutral sites in the vector [66]. The ptxD gene was synthesized with the required restriction sites and cloned into the NdeI and NsiI sites to create the plasmid psyn_6_PtxDopt. The resulting plasmid was then subjected to restriction digestion analysis using EcoRI and BglII and to sequencing analysis to confirm the correct psbAI::ptxD gene construct.
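Codon optimization itself was done with a proprietary tool (OptimumGene), but the core idea can be sketched naively: translate the protein and recode each residue with the host's most frequent codon. The toy usage table below is a placeholder, not the actual S. elongatus codon-usage table, and real tools also weigh GC content, secondary structure, and sequence repeats:

```python
# Naive "most-frequent codon" recoding; PREFERRED maps each amino acid
# (one-letter code, '*' = stop) to a single placeholder codon and covers
# only the residues in the toy sequence below.
PREFERRED = {"M": "ATG", "A": "GCT", "S": "TCC", "T": "ACC", "*": "TAA"}

def naive_codon_optimize(protein: str) -> str:
    """Recode a protein using one preferred codon per amino acid."""
    return "".join(PREFERRED[aa] for aa in protein)

print(naive_codon_optimize("MAST*"))  # ATGGCTTCCACCTAA
```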
The wild-type strain S. elongatus PCC 7942 used in this study was transformed via natural transformation for genomic integration of exogenous genes, according to the user's guide of the GeneArt® Synechococcus Protein Expression Vector (Life Technologies Corporation). Briefly, 15 mL of culture in middle logarithmic phase at 8.7 × 10^8 cells/mL were collected by centrifugation and resuspended in 300 μL BG-11 without any source of P in the medium. Plasmid DNA (200 ng) was added to the cell suspension and incubated in the dark for 16 h at 34 °C and 110 rpm. Transformants were selected on BG-11 agar plates supplemented with 100 μg/mL spectinomycin as a selective agent. Engineered strains were then tested for their capacity to grow on Phi medium as the sole P source using multiwell culture plates and/or 50-mL glass flasks.
Selected transgenic strains were always maintained in Phi-containing medium for future experimentation.
PCR colony analysis
Colonies grown on BG-11 agar plates under the conditions mentioned above were picked, resuspended in 5-10 μL DMSO, and heated at 95 °C for 5 min. One microlitre of the resulting suspension was used as template for PCR. SeptxD primers Fw (5′-CTC TGC TGG TCA ATC CGT GT-3′) and Rv (5′-GCC TTG GGC AGG CGA TTA-3′) were used to amplify a 304 base pair (bp) fragment of the ptxD gene using Taq DNA Polymerase Recombinant (Life Technologies). The thermocycler was programmed as follows: 94 °C for 3 min; then 34 cycles of denaturation (94 °C for 30 s), annealing (64 °C for 30 s), and extension (72 °C for 1 min); and a final extension at 72 °C for 7 min. The PCR products were examined on a 1% agarose gel using SYBR-Safe DNA gel stain (Life Technologies).
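The expected 304-bp product size follows from where the two primers bind on the cassette. A minimal in-silico check (exact string matching only; no mismatches or degenerate bases handled) can verify primer orientation and amplicon length. The toy template below is synthetic; applying the same function to the full psyn_6_PtxDopt sequence should, under these assumptions, return 304:

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string (ACGT alphabet)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplicon_size(template: str, fw: str, rv: str) -> int:
    """Length of the product bounded by a forward primer and the
    reverse complement of a reverse primer (exact matches only)."""
    start = template.index(fw)
    end = template.index(revcomp(rv)) + len(rv)
    return end - start

# Toy template: forward site + 10-bp spacer + reverse-primer site.
toy = "CTCTGCTGGTCAATCCGTGT" + "A" * 10 + "TAATCGCCTGCCCAAGGC"
print(amplicon_size(toy, "CTCTGCTGGTCAATCCGTGT", "GCCTTGGGCAGGCGATTA"))  # 48
```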
Insertion of the expression cassette at NS1
Strains SeptxD-1 to -6 grown on BG-11 agar plates supplemented with Phi were picked, resuspended in 5-10 μL DMSO, and heated at 95 °C for 5 min. One microlitre of the suspension was used as template for PCR. Three different primer combinations were used to verify correct insertion: (1) NS1 primers Fw (5′-CTA CCG AAG TCG CTC GTA G-3′) and Rv (5′-CTA TGG TTC GGG ATC ACT G-3′); (2) NS1_Fw with SeptxD_Rv; and (3) NS1_Rv with SeptxD_Fw, which yielded fragments of 3288, 2739, and 804 bp, respectively. The thermocycler was programmed as follows: 98 °C for 3 min; then 34 cycles of denaturation (98 °C for 30 s), annealing (60 °C for 30 s), and extension (72 °C for 1 min); and a final extension at 72 °C for 7 min, with Phusion® High-Fidelity DNA Polymerase (New England Biolabs). After examination of the PCR products on a 1% agarose gel using SYBR-Safe DNA gel stain (Life Technologies), the expected fragments were excised from the gel, purified using the Zymoclean Gel DNA Recovery Kit (Zymo Research), and subjected to Sanger DNA sequencing [65] by Macrogen Inc., Seoul, South Korea. Sequences were analyzed with Geneious 2019.0.4 (https://www.geneious.com) and trimmed to eliminate low-quality bases.
PtxD activity determination
The enzymatic activity of PtxD was measured using total protein extracts of S. elongatus cells. 0.2 g of freshly collected wet biomass, obtained by centrifugation and removal of the supernatant with a pipette, were weighed, washed twice with 1 mL of 50% acetone to extract the chlorophyll, and then centrifuged at 4000 rpm for 5 min at 4 °C. The pellet was resuspended in 1 mL of lysis buffer (50 mM MOPS pH 7.25, 150 mM NaCl, 5% glycerol, 5 mM β-mercaptoethanol, 1 mM PMSF) and sonicated with six short pulse bursts of 30 s, separated by 30-s cooling intervals on ice, at an amplitude of 40%. To remove cell debris, the samples were centrifuged at 20,000 rpm for 30 min at 4 °C. Total protein concentration of the supernatant was determined by the Bradford method (Bradford, 1976) using Quick Start™ Bradford dye (BioRad), following the supplier's specifications. PtxD activity was determined by measuring the fluorescence emitted by NADH. The reaction mixture was prepared at final concentrations of 50 mM MOPS (pH 7.25), 0.5 mM NAD+, and 1 mM phosphite, with 50 μg total protein (protein extract), in a final volume of 250 μL. The fluorescence intensity was measured after 1 h of incubation at 30 °C in a fluorescence reader (Fluoroskan Ascent™ Microplate Fluorometer) at excitation and emission wavelengths of 340 and 460 nm, respectively. A linear increase of the fluorescence using protein extract of the transgenic strain was verified under these conditions (Additional file 1: Figure S16).
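To convert a raw fluorescence gain into a specific activity, an NADH standard curve measured on the same instrument is needed. The sketch below shows only the arithmetic; all numbers (calibration slope, fluorescence gain) are hypothetical, as the assay here reports relative fluorescence:

```python
def ptxd_specific_activity(delta_fluorescence: float,
                           slope_fu_per_nmol: float,
                           minutes: float,
                           mg_protein: float) -> float:
    """Specific activity (nmol NADH / min / mg protein) from the gain in
    NADH fluorescence, using the slope of an NADH standard curve."""
    nmol_nadh = delta_fluorescence / slope_fu_per_nmol
    return nmol_nadh / minutes / mg_protein

# Hypothetical values: 1200 fluorescence units gained in 60 min by
# 0.05 mg protein, with a calibration of 40 FU per nmol NADH.
print(ptxd_specific_activity(1200, 40, 60, 0.05))  # 10.0 nmol/min/mg
```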
Growth of S. elongatus PCC 7942 and transgenic strains on phosphite media
For experiments studying the capacity of S. elongatus PCC 7942 (SeWT) and the transgenic strains to use Phi, the P source of conventional BG-11 media, dibasic potassium phosphate (K2HPO4, 0.2 mM), was replaced with monobasic potassium phosphite (KH2PO3, Wanjie International, CAS No. 13977-65-6) at different concentrations, as noted for each experiment, using standard BG-11 and BG-11 devoid of P as control media. SeWT was cultivated in 50-mL glass flasks with 30 mL media, inoculated at 1% (v/v), and incubated at 34 °C, 110 rpm, and 100 μmol photons m^-2 s^-1 of continuous fluorescent white light. The SeWT inoculum was produced using Pi-containing media (0.2 mM). Growth was estimated by measuring cell density using a Neubauer chamber. All experiments were conducted in triplicate over a period of 8 days after inoculation.
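Cell densities from a Neubauer chamber follow from its fixed geometry: one large 1 mm × 1 mm square under a 0.1-mm-deep coverslip holds 0.1 µL, so cells/mL = mean count per large square × dilution × 10^4. A one-line helper, with an example dilution chosen purely for illustration:

```python
def neubauer_cells_per_ml(mean_count: float, dilution: float = 1.0) -> float:
    """Improved Neubauer chamber: one large square holds 0.1 uL
    (1 mm x 1 mm x 0.1 mm), so cells/mL = mean count x dilution x 1e4."""
    return mean_count * dilution * 1e4

# e.g., a mean of 160 cells per large square at a 1:1000 dilution
print(f"{neubauer_cells_per_ml(160, 1000):.1e}")  # 1.6e+09 cells/mL
```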
PtxD/phosphite system as a selectable marker
To determine the tolerance of SeWT to spectinomycin and Phi, 80 µL of a P-starved culture at 8.7 × 10⁸ cells/mL in mid-logarithmic phase was inoculated into BG-11 media with 0, 5, 10, 20, 30, 50, and 100 µg/mL spectinomycin or 0.1, 0.2, 0.5, 1.8, and 2 mM Phi in multiwell plates or on agar plates. For the selection of transgenics, three different amounts of cells (1.7, 6.1, and 8.7 × 10⁸), previously recombined as described above with the expression cassette psbAI::ptxDopt, were spread directly onto agar plates with 100 μg/mL spectinomycin or 0.5 and 1.8 mM Phi. Isolated colonies were counted manually after 15 days.
Growth competition assays
Chlorella sorokiniana UTEX 1230 (CsWT) and the two microalgal/cyanobacterial consortia (Comp 1 and Comp 2) were used as competitors for the in vitro assays. For the competition experiment with CsWT, inoculum proportions of 1:1, 1:4, and 4:1 (C. sorokiniana:SeptxD-2) were used, whereas for experiments with Comp 1 and Comp 2 a 1:1 proportion (SeptxD-2:Comp) was used. The competition cultures were grown in BG-11 media supplemented with 1.8 mM Phi or 0.2 mM Pi as the phosphorus (P) source. Monocultures of both the microalga and the cyanobacterium were used as controls. Experiments were carried out in 50-mL glass flasks with 30 mL of media, under the culture conditions described above. Each treatment was performed in triplicate. Growth was estimated by cell counting with a Neubauer chamber. For experiments under non-sterile conditions, sterilization of materials and growth media was avoided to allow microorganisms to invade the cultures. Cultures were observed and photographed using a Zeiss Axio Lab.A1 microscope.
Scale-up of liquid cultures for inocula preparation
For inocula preparation, 25-mL cultures of the strains were started by scraping a portion of the culture from a plate and resuspending it in 5 mL of media before adding it to a 50-mL flask. The 20-mL cultures were then passed directly to 700-mL cultures in 1-L glass bottles or Erlenmeyer flasks, and then to 7 L of media in 10-L home-designed cylindrical bioreactors. The 20-mL cultures were grown in an orbital shaker at 110 rpm, 34 ± 1 °C, and 100 μmol photons m⁻² s⁻¹ of continuous fluorescent white light, whereas the 700-mL cultures were bubbled with air at a flow rate of 2 L/min. Autoclaved BG-11 media were used for cultivation in Petri dishes and in 50-mL and 1-L containers, whereas non-sterile media were used for the rest of the process. pH was adjusted to 7 in all freshly prepared media. For experiments under non-sterile conditions, sterilization of materials and growth media was avoided to allow microorganisms to invade the cultures.
Cultivation in 10-L cylindrical bioreactors
After growth, the 700-mL cultures were used to inoculate 7 L of media in 10-L home-designed cylindrical bioreactors and incubated at 100 μmol photons m⁻² s⁻¹ of continuous fluorescent white light, using charcoal-filtered, dechlorinated municipal tap water. The cylindrical bioreactors, 45 cm in height, 20 cm in internal diameter, and 7 L in working volume, were constructed from 12-mm-thick PLASTICRYL acrylic sheet purchased from Brunssen de Occidente (item #0171-0010-012), Guadalajara, Jalisco, Mexico. Air was supplied to each culture and maintained at a flow rate of approximately 10 L/min by manually adjusting the tubing connections to a small pump. We did not detect any observable growth variations attributable to air flow rate in any culture.
Cell counts and pH were monitored every day during the timeframe of the experiments. As we detected no significant pH variations throughout the experiments, pH was not subsequently adjusted. Samples were withdrawn through the vent holes in the cap using glass pipettes. During continuous cultivation, only 6.5 mL was removed and the bioreactors were refilled with BG-11 media. To avoid cross-contamination between the transgenic strains, tubing and accessories were washed with 1.5 mg/L calcium hypochlorite and flushed with water.
Cultivation in 100-L cylindrical bioreactors
Outdoor experiments were performed at the StelaGenomics México facility located in Irapuato, Guanajuato, Mexico (20°42′56.2″N 101°20′16.4″W). Cylindrical bioreactors of 1.45 m height, 35 cm internal diameter, and 100-L working volume (110 L total volume) were constructed from 12-mm-thick acrylic sheet purchased from Brunssen de Occidente, Guadalajara, Jalisco. Each cylindrical bioreactor was installed in a robust metal structure and suspended about 1.95 m above soil level. Air was supplied to each column and maintained at a flow rate of approximately 20 L/min by manually adjusting a blower; an internal removable stainless-steel diffuser device was installed in each column. Separate PVC lines were installed in each cylindrical bioreactor to allow culture transfer to 1000-L raceway ponds. CFUs of contaminating bacteria were estimated by plating serial dilutions (1:10, 1:100, and 1:1000) of the sample onto Petri dishes with LB (Luria-Bertani) medium. The plates were then incubated at 37 °C for 48 h to favor bacterial growth. As controls, the SeptxD-2 strain was also plated onto LB and BG-11 agar plates and incubated under the same conditions. CFUs were counted manually and the data analyzed.
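The CFU estimate follows the usual plate-count arithmetic: colonies counted on a countable plate are scaled back by the dilution factor and the plated volume. A minimal sketch is below; the plated volume and colony count are hypothetical, since the text does not state them.

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    """Estimate CFU/mL of the original sample from one countable plate.

    colonies:         colonies counted on the plate
    dilution_factor:  total dilution of the plated sample (e.g. 1000 for 1:1000)
    plated_volume_ml: volume spread on the plate (assumed 0.1 mL here)
    """
    return colonies * dilution_factor / plated_volume_ml

# Hypothetical count: 42 colonies on the 1:1000 plate, 0.1 mL plated
print(cfu_per_ml(42, 1000))  # 420,000 CFU/mL in the original sample
```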
Cultivation in 1000-L raceway ponds
Following growth in the cylindrical bioreactors, the culture was used to inoculate 1000-L raceway ponds. For the raceway ponds, we constructed an oval carbon-steel structure (1.5 m wide, 3.5 m long, 60 cm deep), which was covered with an HDPE (high-density polyethylene) geomembrane. Raceways were installed 25 cm above soil level and operated at a depth of 25 cm. A stainless-steel propeller device was installed to recirculate the culture; it operated at a depth of about 10 cm and rotated at 15 rpm.
Optical density (OD₇₅₀), pH, medium temperature, and solar irradiance were recorded every day at 9 am and 6 pm during the timeframe of the experiments. As we detected no significant pH variations throughout the experiments, pH was not subsequently adjusted. Biomass concentration was determined after 7 days of cultivation.
Biosafety regulation
Based on the provisions of Mexico's Biosafety Law of Genetically Modified Organisms, on August 15, 2014, StelaGenomics submitted a notice for the Confined Use of Genetically Modified Organisms (application no. 09/J7-0081/08/14) to the Secretariat of Environment of Mexico (SEMARNAT), whereby the company stated that the activities with genetically modified organisms would be the culture of microalgae, cyanobacteria, and other microorganisms in cylindrical reactors and open raceway ponds located in the facilities of StelaGenomics, applying the appropriate biosafety measures. On March 22, 2016, SEMARNAT delivered a favorable opinion, through document SGPA/DGIRA/DG/1639, in which the aforementioned activities were approved, provided the biosafety measures stated in the notice are applied. One of the S. elongatus strains generated under this application, SeptxD-2, was selected for outdoor cultivation. Some of the biosafety measures implemented are: access to the facility is restricted by a perimeter fence and there is personnel and video surveillance 24 h/day; bioreactors and raceway ponds are installed on a cement platform covered with an industrial polymer (or with a high-density polyethylene impermeable geomembrane) to retain and control any potential leak and to avoid leaks to the soil; the core facility is surrounded by a 30-cm perimeter barrier to prevent the escape of biological material to the environment in case of an accidental spill caused by flooding or excessive rain; this platform has PVC connections to discharge potential spilled liquids into a 10,000-L plastic container (Rotoplast) installed belowground, where the liquids are chlorinated and treated with a UV lamp before being recycled into the process; 7-L cylindrical reactors are covered with acrylic caps; raceway ponds are covered with anti-bird netting and anti-aphid mesh, preventing access by birds and insects that could spread microalgae/cyanobacteria in the surrounding area; and, after harvesting to determine biomass production, waste water is chlorinated, UV treated, and recirculated into the system.
Dispersal experiment
To examine the potential dispersal of the transgenic strain from the raceway ponds to the environment, we installed traps surrounding the pilot plant. Traps consisted of 1200-L plastic containers placed to the north, south, and east of the source cultivation pond, filled with 500 L of water and supplemented with BG-11 medium (Additional file 1: Figure S13). To the east, three traps were placed at 3, 6, and 28 m from the raceway facility. To the north and south, one tank each was placed at 1.5 m from the raceway. Samples of 50 mL were collected from the containment tanks three times per week and preserved at −20 °C for PCR and RT-qPCR analysis. One millilitre of each sample was then taken for analysis. After centrifugation at 4200 rpm for 20 min, the pellet was resuspended in 0.5 mL DMSO and heated at 95 °C for 5 min. One or two microlitres of the suspension were used as a template for the PCR and RT-qPCR analyses using the SeptxD primers as described above.
Statistical analysis
Statistical analysis was performed using R 3.5.2. Data collected from the different experiments were subjected to a paired Student's t-test with Bonferroni correction or a one-sample Student's t-test. P values < 0.05 were considered significant (*P < 0.05, **P < 0.01, ***P < 0.0001). One-way ANOVA and Tukey's multiple comparison test were applied to analyze gene expression.
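As an illustration of the testing scheme, the sketch below runs a paired t-test and applies a Bonferroni adjustment; it uses Python's scipy rather than the R workflow described above, and the growth values and number of comparisons are hypothetical.

```python
import numpy as np
from scipy import stats

def paired_t_bonferroni(before, after, n_comparisons):
    """Paired Student's t-test with a Bonferroni-adjusted p-value."""
    t, p = stats.ttest_rel(before, after)
    return t, min(1.0, p * n_comparisons)  # Bonferroni: multiply p by m

# Hypothetical cell densities (cells/mL) for one of three comparisons
before = np.array([1.1e7, 1.3e7, 1.2e7])
after  = np.array([2.0e7, 2.2e7, 1.9e7])
print(paired_t_bonferroni(before, after, n_comparisons=3))
```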
|
v3-fos-license
|
2018-11-11T01:39:44.448Z
|
2018-10-26T00:00:00.000
|
207372828
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0206288&type=printable",
"pdf_hash": "08042c753e39e691a214efba04d5d5d32f9e43e3",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43266",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "08042c753e39e691a214efba04d5d5d32f9e43e3",
"year": 2018
}
|
pes2o/s2orc
|
Efficacy of acupuncture for lifestyle risk factors for stroke: A systematic review
Background Modifications to lifestyle risk factors for stroke may help prevent stroke events. This systematic review aimed to identify and summarise the evidence on acupuncture interventions for people with lifestyle risk factors for stroke, including alcohol-dependence, smoking-dependence, hypertension, and obesity. Methods MEDLINE, CINAHL/EBSCO, SCOPUS, and the Cochrane Database were searched from January 1996 to December 2016. Only randomised controlled trials (RCTs) with empirical research findings were included. PRISMA guidelines were followed and risk of bias was assessed via the Cochrane Collaboration risk of bias assessment tool. The systematic review reported in this paper has been registered on PROSPERO (#CRD42017060490). Results A total of 59 RCTs (5,650 participants) examining the use of acupuncture in treating lifestyle risk factors for stroke met the inclusion criteria. The seven RCTs focusing on alcohol-dependence showed substantial heterogeneity regarding intervention details. Meta-analysis found no evidence of a post-intervention or long-term effect on blood pressure control for acupuncture compared to sham intervention. Relative to sham acupuncture, individuals receiving auricular acupressure for smoking-dependence reported lower numbers of cigarettes consumed per day (two RCTs, mean difference (MD) = -2.75 cigarettes/day; 95% confidence interval (CI) = -5.33, -0.17; p = 0.04). Compared to sham acupuncture, those receiving acupuncture for obesity reported lower waist circumference (five RCTs, MD = -2.79 cm; 95% CI: -4.13, -1.46; p<0.001). Overall, only a few trials were considered at low risk of bias for smoking-dependence and obesity, and as such none of the significant effects in favour of acupuncture interventions were robust against potential selection, performance, and detection bias. Conclusions This review found no convincing evidence for effects of acupuncture interventions in improving lifestyle risk factors for stroke.
Introduction
Stroke is a major health issue with a significant burden upon quality of life and disability [1]. The control of stroke risk factors plays a vital role in reducing the risk of new or subsequent strokes of all types [2]. Three types of risk factors have been identified for stroke: non-modifiable risk factors, medical risk factors, and lifestyle risk factors [2,3]. Lifestyle risk factors for stroke (hypertension, high cholesterol, smoking-dependence, alcohol-dependence, obesity, and poor diet/physical inactivity) account for approximately 80% of the global risk of stroke [3]. Therefore, lifestyle risk factors are an ideal target for stroke prevention in comparison with other risk factors [4]. A growing stroke burden throughout the world suggests contemporary stroke prevention strategies for modifiable lifestyle risk factors may be insufficient and new effective approaches are needed [5]. However, the evidence for the modification of lifestyle risk factors recommended by clinical guidelines for stroke management is not satisfactory [5,6].
Acupuncture is a traditional Chinese therapeutic intervention characterised by the insertion of fine metallic needles through the skin at specific sites (acupoints), with the body and ears being the most common locations of acupoints [7]. Needles may be stimulated manually or by applying an electric current [8]. There are various types of acupuncture treatments, such as needle acupuncture, electroacupuncture, acupressure, laser therapy, and transcutaneous electric acupoint stimulation (TEAS) [9]. Acupuncture has long been used for chronic diseases including musculoskeletal pain and hypertension [7]. The biological effects of acupuncture treatments, such as local inflammatory responses, analgesic effects, and increases in opioid peptides, play an important role in the therapeutic effects of such therapy [10]. Nevertheless, the challenges inherent in designing and implementing rigorous acupuncture research may limit the understanding of the effectiveness of acupuncture, such as those relating to acupuncturists' use of distinct syndrome classifications identified among people with the same condition and use of different skills when selecting and manipulating acupoints [11].
Using acupuncture to manage each lifestyle risk factor for stroke has attracted substantial and growing research interest over many decades. Previous reviews reported promising results of acupuncture use in controlling hypertension-associated symptoms [12], attaining weight loss [13], and reducing nicotine withdrawal symptoms [9]. In addition, the WHO has indicated an effect of acupuncture, in particular auricular acupuncture, on alcohol-dependence [14]. Nonetheless, a comprehensive systematic review assessing the effect of all forms of acupuncture for all identified lifestyle risk factors for stroke has not been conducted. As such, the aim of this paper is to identify and summarise the contemporary evidence on acupuncture interventions for lifestyle risk factors for stroke.
Methods
The systematic review reported in this paper has been registered with PROSPERO (International prospective register of systematic reviews, #CRD42017060490).
Search strategy
In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guideline, a systematic search of the literature was conducted using the MEDLINE, CINAHL/EBSCO, Scopus, and Cochrane Database of Systematic Reviews databases for studies published from January 1996 to December 2016. The lifestyle risk factors for stroke included in this systematic review are high blood pressure (hypertension and prehypertension), high cholesterol, obesity (overweight/obesity), smoking-dependence, alcohol-dependence, and physical inactivity. The literature search employed keyword and MeSH searches for terms relevant to 'acupuncture' and each lifestyle risk factor for stroke. Search terms used for each database are available in Table 1. Relevant randomised controlled trials (RCTs) listed as references of published systematic review papers on selected lifestyle risk factors for stroke were also searched via Google Scholar by title, in order to include all relevant RCTs in this field.
Types of interventions.
There was no limitation on the form of (traditional) acupuncture or on the frequency and duration of the intervention. However, contemporary acupuncture techniques such as trigger-point therapy and dry needling were not eligible for inclusion in this review.
Types of outcome measures. Only anthropometric parameters and the widely used indicators of each lifestyle risk factor for stroke were included. The primary outcomes were a change in systolic blood pressure (SBP) and/or diastolic blood pressure (DBP) for hypertension-focused RCTs; triglycerides and LDL/HDL cholesterol for hyperlipidemia/dyslipidemia-focused RCTs; body weight (BW), body mass index (BMI), and waist circumference (WC) for obesity-focused RCTs; alcohol craving, completion rate of treatment, and withdrawal symptoms for RCTs focusing on alcohol-dependence; withdrawal symptoms, daily cigarette consumption, and abstinence rate for RCTs focusing on smoking-dependence; and physical activity minutes/day and cardiorespiratory fitness for physical inactivity-focused RCTs.
Data extraction
Titles and abstracts of all citations identified in the search were imported to Endnote (Version X8) and duplicates removed. These citations were independently reviewed for eligibility by two authors (WP and RL) and the full texts of ambiguous articles were retrieved if consensus was not reached. Any disagreements were assessed by a third author. We contacted authors regarding raw data of their RCTs where necessary for meta-analysis. Where we failed to obtain such raw data, the RCT had to be excluded from the meta-analysis. According to the RCT descriptions in the included articles, raw data were extracted for the post-intervention effect and/or the follow-up (long-term) effect.
Data were extracted into a pre-determined table (Table 2) and checked for coverage and accuracy by two authors independently. Table 2 includes detailed information on sample size, inclusion criteria, participants' characteristics, intervention groups, add-on strategy, results of outcome measures, and side-effects. Statistically significant within-group and/or between-group effects of acupuncture interventions for each lifestyle risk factor for stroke were recorded if reported.
Data syntheses
Cochrane RevMan version 5.3 software was employed to conduct meta-analyses of the outcome measures, and heterogeneity was determined using the I² statistic [15]. The meta-analysis included all studies where acupuncture was employed with or without co-interventions, provided that such co-intervention was given to all groups. However, meta-analyses were conducted only if at least two RCTs were available exploring a specific outcome of a risk factor. Acupuncture approaches included in the meta-analysis were needle acupuncture (body, aural region, electroacupuncture), laser acupuncture, and acupressure. Analyses were performed separately by type of experimental intervention (acupuncture, acupressure, laser acupuncture, or the combination of acupuncture and acupressure) according to the RCT design. A random-effects model (Mantel-Haenszel for dichotomous/categorical variables and inverse variance for continuous variables) was used to calculate mean differences (MD), standardized mean differences (SMD), or risk ratios (RR), and 95% confidence intervals (CI) were reported. Sensitivity analyses were used to test the robustness of statistically significant results for RCTs with low risk versus high risk of bias for the domains of selection bias and performance/detection bias. Effect sizes of acupuncture compared to other interventions are shown in Table 3.
[Table 3 footnote: a, add-on strategy of all the intervention groups.]
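For continuous outcomes pooled with inverse-variance weights, a DerSimonian-Laird-type random-effects model underlies the RevMan output described above. The sketch below is a minimal re-implementation for orientation, not the authors' code; the five mean differences and standard errors are hypothetical.

```python
import numpy as np

def random_effects_md(md, se):
    """DerSimonian-Laird random-effects pooling of mean differences.

    md: per-study mean differences; se: their standard errors.
    Returns the pooled MD, its 95% CI, and the I^2 statistic (%).
    """
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                          # inverse-variance weights
    md_fixed = np.sum(w * md) / np.sum(w)
    Q = np.sum(w * (md - md_fixed)**2)       # Cochran's Q
    df = len(md) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    md_re = np.sum(w_re * md) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    return md_re, (md_re - 1.96 * se_re, md_re + 1.96 * se_re), i2

# Hypothetical waist-circumference trials (cm)
print(random_effects_md(md=[-2.5, -3.1, -2.2, -3.4, -2.8],
                        se=[1.2, 1.5, 1.0, 1.8, 1.4]))
```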
Quality assessment
Two authors (DS and WP) independently assessed the risk of bias of all included studies using the Cochrane Risk of Bias Tool for selection bias (random sequence generation and allocation concealment), performance bias (blinding of participants and personnel), detection bias (blinding of outcome assessment), attrition bias (incomplete outcome data), reporting bias (selective outcome reporting), and other bias (Table 4). Disagreements were assessed by a third author. It is worth noting that, for methodological reasons and given the uniqueness of acupuncture treatments, it is not feasible to blind the acupuncturist in acupuncture RCTs. Therefore, we adopted the domain of performance bias and focused only on adequate participant blinding.
Results
The key database searches identified 2,502 records, with another six records from the Google Scholar search, of which 299 duplicates were removed. After screening, the full texts of 305 papers were reviewed, of which a total of 62 full-text articles (reporting on 59 RCTs) were considered eligible and included in this systematic review. The PRISMA flowchart of the literature search and article selection is shown in Fig 1.
The 59 RCTs (5,650 participants) examined the use of acupuncture interventions in treating lifestyle risk factors for stroke: 7 RCTs for alcohol-dependence (845 participants), 15 for smoking-dependence (1,960 participants), 12 for hypertension (927 participants), and 25 for obesity (1,918 participants). No publication reported a trial examining the efficacy of acupuncture for high cholesterol or physical inactivity as a primary outcome.
Alcohol-dependence
Seven RCTs [16][17][18][19][20][21][22] focused on acupuncture treatments for alcohol-dependence, using outcomes of alcohol craving (four RCTs), alcohol withdrawal symptoms (four RCTs), and drinking days (one RCT). Table 2 shows details of these RCTs' characteristics and safety-related information. Most of the included studies defined alcohol-dependence according to the 3rd (revised) or 4th version of the Diagnostic and Statistical Manual of Mental Disorders (DSM) or the 10th version of the International Statistical Classification of Diseases and Related Health Problems (ICD) [16][17][18][19][20][21]. The sample size of RCTs focusing on alcohol-dependence ranged from 20 to 503 participants, with only two studies recruiting more than 100 participants.
Psychiatrists/nurses [17,20], acupuncturists [18,22], and oriental medical doctors [16] were reported as administering the acupuncture interventions. The modes of acupuncture delivered within the interventions included specific and nonspecific/symptom-based auricular acupuncture (five studies), body acupuncture (one study), and combined auricular and body acupuncture (one study). Acupuncture treatment sessions ranged from 30 to 45 minutes. Only one RCT employed a needle stimulation technique for the acupuncture treatment of alcohol-dependence [17].
Non-significant differences between acupuncture and control groups were reported for alcohol craving in three RCTs [16,17,20], alcohol withdrawal symptoms in two RCTs [17,18], and drinking days in one RCT [20]. Statistically significant within-intervention-group effects were reported for alcohol craving with specific auricular electroacupuncture [21] and for alcohol withdrawal symptoms with combined use of auricular and body acupuncture [19], while statistically significant between-group effects were reported for alcohol withdrawal symptoms with symptom-based auricular acupuncture (vs specific auricular acupuncture) [22]. Risk of bias assessment indicated that three RCTs did not report information on random sequence generation, four RCTs failed to apply blinding of participants and personnel, one did not report adequate blinding of outcome assessors, and three failed to report complete outcome data (Table 4). Due to the great heterogeneity regarding intervention details and outcomes applied in the RCTs focusing on alcohol-dependence, no meta-analysis could be conducted.
Hypertension

Statistically significant within-intervention-group and between-group effects were both reported in five RCTs for (a) SBP as well as DBP levels with body acupuncture (vs non-specific acupuncture) [48], combined body acupuncture and exercise (vs sham acupuncture plus exercise) [44], combined laser body acupuncture with/without music treatment (vs starch tablets) [47], and body acupressure (vs sham acupuncture) [50], and (b) nighttime DBP level with body acupuncture (vs sham acupuncture) [40]. In addition, studies reported statistically significant within-intervention-group effects for (a) SBP as well as DBP levels with laser acupuncture [41], (b) SBP level with body electroacupuncture [42], and (c) DBP level with combined body and auricular acupuncture [49], and a statistically significant between-group effect for SBP level with body electroacupuncture (vs sham acupuncture) [43].
Meta-analyses showed no evidence of either a post-intervention or a long-term effect of acupuncture interventions on SBP control (two RCTs on acupuncture, MD = -0.54 mmHg; 95% CI: -10.69, 9.60; p = 0.92) or DBP control (two RCTs on acupuncture, MD = -1.38 mmHg; 95% CI: -4.06, 1.31; p = 0.32) compared to sham acupuncture (Table 3). Risk of bias assessment indicated that only six hypertension-focused RCTs blinded participants and personnel appropriately, and seven RCTs did not report information on blinding of outcome assessment (Table 4).
Obesity

Relative to sham acupuncture, meta-analyses found only that those receiving acupuncture interventions for obesity reported lower waist circumference (five RCTs, MD = -2.79 cm; 95% CI: -4.13, -1.46; p<0.001; heterogeneity: I² = 0%; Chi² = 1.61; p = 0.81). However, after excluding RCTs with other than low risk of selection and performance/detection bias, none of the effects remained statistically significant. In comparison with no-treatment interventions, meta-analyses did not show evidence of a post-intervention effect of acupuncture interventions on BW (two RCTs on acupuncture, MD = -1.12 kg; 95% CI: -5.51, 3.27; p = 0.62; two RCTs on auricular acupressure, MD = -2.87 kg; 95% CI: -6.47, 0.74; p = 0.12). Meta-analyses also did not show evidence of a post-intervention effect of auricular acupressure interventions on BMI (two RCTs, MD = -0.41 kg/m²; 95% CI: -1.56, 0.73; p = 0.48) compared to no treatment (Table 3). Risk of bias was unclear in numerous obesity-focused RCTs due to a lack of detail in the publications. Specifically, nine RCTs did not report random sequence generation and allocation concealment information. Twelve RCTs failed to report complete outcome data. Fifteen RCTs did not blind participants and personnel, and 20 RCTs did not provide information on blinding of outcome assessment (Table 4).
Discussion
This article reports the first systematic review of the effect of acupuncture interventions on lifestyle risk factors for stroke. A number of acupuncture techniques have been used for the management of these lifestyle risk factors and have yielded limited improvements in outcomes. No meta-analysis could be conducted on RCTs focusing on alcohol-dependence, and no evidence of an effect of acupuncture treatments on high blood pressure was shown by meta-analysis. Meta-analysis showed that individuals receiving auricular acupressure reported better outcomes for daily cigarette consumption than those receiving sham acupressure. Furthermore, acupuncture users reported better outcomes in reducing waist circumference compared to sham acupuncture. No serious side effects occurred when acupuncture was used for these four lifestyle risk factors; however, approximately half of the RCTs focusing on hypertension and obesity did not report safety information for acupuncture users. On balance, acupuncture appears to be a relatively safe treatment for the management of lifestyle risk factors for stroke.

Some evidence of the benefits of acupuncture and/or auricular acupressure was revealed in our review for two lifestyle risk factors for stroke, smoking-dependence and obesity. However, a total of eight and 14 types of acupuncture-related interventions have been examined in RCTs focusing on smoking-dependence and obesity, respectively. The findings reported here highlight the gaps in the evidence on clinical acupuncture use in the specific field of lifestyle risk factors for stroke and more generally. Consistent with findings of prior systematic reviews [9,78], acupuncture involves a range of techniques. Both acupuncture-associated clinical trials and observational studies are required to resolve methodological issues such as the use of acupuncture only, acupressure only, or the combination of acupuncture and acupressure, and the further choice among needle acupuncture, electroacupuncture, and laser acupuncture. Therefore, future high-quality research is warranted to confirm our preliminary findings and provide robust effect estimates of acupuncture interventions for lifestyle risk factors for stroke.
In our review, approximately half of the RCTs focusing on smoking-dependence and obesity employed auricular acupressure alone or in combination with other acupuncture intervention(s). Acupressure is considered more practical (patients can apply it themselves) and lower in cost compared to other acupuncture treatments [79]. However, no consistent and convincing evidence has been found in this review on whether acupressure is effective for the management of the overall lifestyle risk factors for stroke. As a result, there is insufficient evidence to conclude that the use of acupressure can improve the lifestyle risk factors for stroke, and more studies are required.
Sham acupuncture is the most frequently employed comparator for acupuncture treatments in general [80] and among people with lifestyle risk factors for stroke, as shown in our review. Although the meta-analyses presented here reported statistically significant benefits of real acupuncture interventions over sham interventions for the management of smoking-dependence and obesity, none of the effects in the included RCTs was robust against potential selection, performance, and detection bias. In addition to the identified design challenge of choosing a control group, given that sham acupuncture may also trigger physiological effects [81], future acupuncture-associated RCTs should avoid high risk of bias from lack of allocation concealment and missing outcome data, report sufficient information on blinding of outcome ascertainment, and, where necessary, choose an appropriate comparable control intervention for clinical acupuncture research.

Some limitations of our systematic review are worth noting. The acupuncture interventions varied greatly across the RCTs for each lifestyle risk factor for stroke included in this review, in terms of participant inclusion criteria, acupuncture forms, acupoint selection, manipulation methods, and frequency/duration of the treatments. Also, this systematic review was restricted to RCTs published in English-language peer-reviewed journals. Furthermore, a proportion of the included studies were not registered before they were published; we therefore cannot rule out the possibility of reporting or publication bias. The findings of this systematic review regarding the effect of acupuncture for lifestyle risk factors for stroke should thus be interpreted with caution. However, compared to previous Cochrane and systematic reviews [9,12,13,82], and based on the risk of bias evaluation (Table 4), the methodological quality of RCTs on acupuncture treatments identified in our review has improved over recent years, including with regard to the application of random sequence generation, the reporting of acupuncture treatments, and the use of long-term follow-ups.
Conclusion
This review shows no convincing evidence regarding the effect of acupuncture, acupressure, laser acupuncture, or their combined use for lifestyle risk factors for stroke. However, the translation of the findings of this systematic review may contribute to the evidence base for potential clinical practice guideline recommendations for stroke prevention.
|
v3-fos-license
|
2023-10-21T15:05:13.877Z
|
2023-10-19T00:00:00.000
|
264371876
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.1259202/pdf?isPublishedV2=False",
"pdf_hash": "2600966fa0f5b6efd6fcc135eb5eae3e2e9d5cdb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43267",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "9f8226e8eca2b5f80b2234b2fb6dbb617052a600",
"year": 2023
}
|
pes2o/s2orc
|
Evaluation of the effect of the COVID-19 pandemic on the all-cause, cause-specific mortality, YLL, and life expectancy in the first 2 years in an Iranian population—an ecological study
Background The COVID-19 pandemic resulted in excess mortality and changed the trends of causes of death worldwide. In this study, we investigate all-cause and cause-specific deaths during the COVID-19 pandemic (2020–2022) compared to baseline (2018–2020), considering age groups, gender, place of residence, and place of death in South Khorasan, east of Iran. Methods The present ecological study was conducted using South Khorasan Province death certificate data during 2018–2022. The numbers of deaths and the all-cause and cause-specific mortality rates (per 100,000 people) were calculated and compared by age group, place of residence, place of death, and gender before (2018–2020) and during the COVID-19 pandemic (2020–2022). We also calculated total and cause-specific years of life lost (YLL) to death and gender-specific life expectancy at birth. Results A total of 7,766 deaths occurred from March 21, 2018, to March 20, 2020 (pre-pandemic) and 9,984 deaths from March 21, 2020, to March 20, 2022 (pandemic). The mean age at death increased by about 2 years during the COVID-19 pandemic. The mortality rate increased significantly in the age groups 20 years and older. The most excess deaths were recorded in men, those aged more than 60 years, deaths at home, and the rural population. Mortality due to COVID-19 accounted for nearly 17% of deaths. The highest increases in mortality rate were observed for endocrine and cardiovascular diseases. Mortality rates due to diseases of the genitourinary system and certain conditions originating in the perinatal period decreased during the COVID-19 pandemic. The major causes of death during the pandemic were cardiovascular diseases, COVID-19, cancer, chronic respiratory diseases, accidents, and endocrine diseases in both sexes and in rural and urban areas. Years of life lost (YLL) increased by nearly 15.0%, mostly due to COVID-19, and life expectancy at birth steadily declined from 2018 to 2022 for both genders (from 78.4 to 75.0 years). Conclusion In this study, we found that all-cause mortality increased by 25.5% during the COVID-19 pandemic, especially in men, older adults, rural residents, and those who died at home (outside the hospital). Considering that the most common causes of death during the COVID-19 pandemic were also non-communicable diseases, it is necessary to pay attention to non-communicable diseases even during the pandemic of a serious infectious disease like COVID-19. Years of life lost also increased during the COVID-19 pandemic, making it necessary to pay attention to all age groups, and especially to the causes of death in young people; in most developing countries, the first cause of death in these groups is accidents.
Background
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged in Wuhan in December 2019 and rapidly spread across the world (1). The World Health Organization (WHO) declared the COVID-19 pandemic in March 2020 (2). In Iran, on February 19, 2020, two patients were confirmed as SARS-CoV-2 positive in Qom, and the disease rapidly spread all over the country (3).
Wide-spread societal responses to the COVID-19 pandemic dramatically changed human interactions and movements as well as the access to and delivery of healthcare, resulting in disruptions in healthcare services that affected the supply and demand of non-communicable disease (NCD) care, which created the potential for other significant changes to mortality patterns that are not directly attributable to COVID-19 (1,4). It has been well reported that patients with NCDs (such as cardiovascular diseases, hypertension, diabetes mellitus, congestive heart failure, chronic kidney disease, and cancer), males, and older adults have an increased risk of death due to COVID-19 (5,6).
More than 5 million deaths have been reported worldwide due to COVID-19, and more than 144,000 deaths have been reported in Iran from January 2020 to January 2023. Most studies on COVID-19 mortality have mainly focused on deaths attributable to COVID-19 and the overall excess deaths during the pandemic compared with those in previous years (7,8). However, the analysis of causes of death (CoDs) is necessary to understand all effects of the COVID-19 pandemic on mortality (9).
Studies have reported significant increases in mortality due to cardiovascular diseases, diabetes, Alzheimer's disease, and dementia, and a decline in mortality due to road traffic accidents and suicide. COVID-19 has also negatively affected the outcome of low-energy trauma in older adults; for example, 30-day mortality increased in patients admitted with hip fractures during the pandemic in the UK (1, 9–14).
Studies in Minnesota and São Paulo demonstrated the most excess mortality among older, male, and non-rural residents in 2020 compared with 2018–2019, i.e., after versus before the onset of the COVID-19 pandemic (1,11).
In Iran, most studies in this regard investigated mortality attributable to COVID-19 or the all-cause mortality rate and did not examine cause-specific mortality rates. In a time-series study from March 21, 2013, to March 19, 2020, all-cause mortality was investigated, revealing excess mortality during fall and winter, which might be attributed to COVID-19 and the influenza epidemic (15).
In this study, all-cause and cause-specific mortality rates, life expectancy, and YLL were compared before and during the COVID-19 pandemic, overall and in gender (male/female), place of residence (urban/rural), place of death (home/hospital), and age subgroups in South Khorasan Province, east of Iran, to reveal the effect of the COVID-19 pandemic on the all-cause and cause-specific mortality rates, since the COVID-19 pandemic is a unique event that provides valuable experience for future pandemics.
Study design and data sources
The studied population of this ecological study consisted of all individuals in South Khorasan Province during 2018-2022. Population data were obtained from the Planning and Budget Organization of South Khorasan Province, which provided detailed population estimates for the province by gender, place of residence, place of death, and 5-year age group for the years 2018 to 2022; we included all individuals registered in South Khorasan from March 21, 2018, to March 20, 2022. Information on the deceased, based on death certificate data for four years, was divided into two primary periods: 2018-2019 (pre-pandemic) and 2020-2022 (pandemic). Decedent demographics taken from the death certificate, including age, sex, place of death, place of residence, and underlying cause of death (based on the ICD-10 code), were obtained from the Death Registration System of the South Khorasan Health Department. The all-cause and cause-specific mortality rates per 100,000 population, by age, gender, and ICD-10 chapter, were calculated before and during the COVID-19 pandemic. All-cause and cause-specific mortality rates were compared between the two primary periods of pre-pandemic (2018-2019) and pandemic (2020-2022). The all-cause mortality rate was also compared in gender (male/female), place of residence (urban/rural), place of death (home/hospital), and age (0-19, 20-29, 30-59, 60 and over) subgroups (11), and relative risks between subgroups were calculated. To calculate the denominator of the mortality rate in the two periods before and during COVID-19, the mid-period population estimate was used.
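As orientation, the rate and relative-risk calculations described above reduce to a few lines. The sketch below uses the paper's overall death counts, but the mid-period denominator of about 790,000 people is an assumption, and the Wald interval on log(RR) is one standard choice rather than necessarily the authors' exact method.

```python
import math

def mortality_rate(deaths, population, per=100_000):
    """Crude mortality rate per `per` population."""
    return deaths / population * per

def relative_risk(d1, n1, d0, n0, z=1.96):
    """RR of period 1 vs period 0 (reference) with a Wald 95% CI on log(RR)."""
    rr = (d1 / n1) / (d0 / n0)
    se = math.sqrt(1/d1 - 1/n1 + 1/d0 - 1/n0)  # SE of log(RR)
    return rr, (math.exp(math.log(rr) - z*se), math.exp(math.log(rr) + z*se))

# Overall deaths from the paper; population denominator is an assumption
print(mortality_rate(9984, 790_000))                # pandemic-period rate
print(relative_risk(9984, 790_000, 7766, 790_000))  # RR vs pre-pandemic
```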
Statistical analysis
Total and cause-specific years of life lost (YLL) to death were calculated before and during the COVID-19 pandemic by multiplying the number of deaths in each age and gender subgroup by the residual life expectancy in Iran provided by the World Health Organization (16). Overall and gender-specific life expectancy at birth was calculated using Analyzing Mortality and Cause of Death, version 3 (ANACoD3), an online tool developed by the World Health Organization (WHO) for the analysis of causes of death that provides several indicators, including life expectancy at birth (17). For each cause of death, demographic-specific death rates were compared between 2018-2019 and 2020-2022. p-values below 0.05 were considered statistically significant, and 95% confidence intervals (95% CI) were calculated. All statistical analyses were performed using Microsoft Excel and IBM SPSS (version 26).
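The YLL definition above is simply a weighted sum. A minimal sketch follows; the subgroup counts and residual life expectancies are hypothetical, and the per-100,000 normalization assumes a population of about 790,000 observed over a 2-year period.

```python
def years_of_life_lost(deaths_by_group, residual_le):
    """YLL = sum over subgroups of (deaths x residual life expectancy).

    deaths_by_group: {(age_group, sex): deaths}
    residual_le:     {(age_group, sex): residual life expectancy, years}
    """
    return sum(d * residual_le[g] for g, d in deaths_by_group.items())

# Hypothetical subgroup deaths and WHO residual life expectancies
deaths = {("60+", "male"): 120, ("60+", "female"): 95, ("30-59", "male"): 40}
rle    = {("60+", "male"): 14.2, ("60+", "female"): 17.0, ("30-59", "male"): 33.5}
yll = years_of_life_lost(deaths, rle)
print(yll)                              # total YLL, years
print(yll / 790_000 * 100_000 / 2)      # YLL per 100,000 population per year
```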
Results
A total of 17,750 deaths were recorded in the study timeframe in South Khorasan Province, of which 7,766 occurred in the pre-pandemic period and 9,984 during the pandemic (Table 1).
Figure 1 shows the most common causes of death before and during the COVID-19 pandemic. Before COVID-19, the most common causes of death were cardiovascular diseases, respiratory diseases, neoplasms, and accidents. During the COVID-19 pandemic, the most common causes of death were cardiovascular diseases, COVID-19, neoplasms, respiratory diseases, and accidents.
Figure 2 shows the most common causes of death before and during the COVID-19 pandemic by sex. In both genders, cardiovascular diseases were the most common cause of death both before and during the pandemic, and during the pandemic COVID-19 was the second cause of death in both sexes. Deaths due to cardiovascular diseases, neoplasms, and endocrine diseases increased in both sexes during the COVID-19 pandemic. Deaths due to respiratory diseases decreased in both sexes during the pandemic, while deaths due to accidents increased in men and decreased in women.
Figure 3 shows the most common causes of death before and during the COVID-19 pandemic based on place of residence. In both urban and rural areas, cardiovascular diseases were the most common cause of death before and during the pandemic, and during the pandemic COVID-19 was the second leading cause of death in both locations. Mortality due to cardiovascular diseases (especially in rural areas), neoplasms, and endocrine diseases increased in both rural and urban areas during the COVID-19 pandemic. Deaths due to respiratory diseases decreased in both places during the pandemic, but deaths due to accidents increased in rural areas and decreased in urban areas.
Figure 4 shows the causes of death according to place of death before and during the COVID-19 pandemic. It is noteworthy that the COVID-19 mortality rate was higher in hospital than at home. On the other hand, deaths due to cardiovascular diseases were significantly higher at home during the COVID-19 pandemic.
Figure 5 shows the most common causes of death in different age groups before and during the COVID-19 pandemic. Notably, cardiovascular disease was the first cause of death in both periods in people over 45 years old, and accidents were the most common cause of death in both periods in those aged 5 to 44 years.
In Table 2, the causes of death and their changes during this period are shown. During the COVID-19 pandemic, deaths due to endocrine diseases, cardiovascular diseases, and unknown causes increased statistically significantly, while deaths due to genitourinary diseases and conditions of the perinatal period decreased. Other causes did not increase or decrease significantly. Years of life lost (YLL) increased by nearly 15.0%, from 10,339.0 to 11,885.2 years/100,000 population/year, during the COVID-19 pandemic compared with before the pandemic. The increased YLL was mostly due to COVID-19 (1,535.8 years/100,000 population/year), while minimal changes were observed for other causes of death. YLL due to cardiovascular diseases and accidents increased slightly, while a considerable increase was observed for neoplasms (nearly 15.3%, from 1,309.1 to 1,510.0) and remarkable reductions were seen in certain conditions originating in the perinatal period (nearly 17.5%, from 1,130.3 to 932.4) and congenital malformations (nearly 16.7%, from 800.9 to 667.1).

[Figure 1: South Khorasan Province, the top causes of death (mortality rate per 100,000 population) pre-pandemic and pandemic.]
[Figure 2: South Khorasan Province, the top causes of death (mortality rate per 100,000 population) based on gender (female/male), pre-pandemic and pandemic.]
The 4-year trend of life expectancy at birth in South Khorasan Province shows a steady decline for both genders, with the most remarkable decline observed in the first year of the COVID-19 pandemic (from 77.1 to 75.6 years). The declining slope of life expectancy was relatively blunted in the second year of the pandemic (from 75.6 to 75.0 years) (Figure 6).
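Life expectancy at birth, as produced by tools such as ANACoD3, is conventionally derived from an abridged period life table built from age-specific death rates. The sketch below is a textbook-style illustration under the usual assumption that deaths occur, on average, halfway through each closed age interval; the rates shown are hypothetical, not this study's data.

```python
def life_expectancy_at_birth(widths, mx, radix=100_000.0):
    """Abridged period life table; returns e0 in years.

    widths: widths of the age intervals (the last interval is open-ended)
    mx:     age-specific death rates for each interval
    """
    l, L_total = radix, 0.0
    for i, m in enumerate(mx):
        if i == len(mx) - 1:               # open-ended last interval
            L_total += l / m               # person-years = survivors / rate
            break
        n = widths[i]
        q = n * m / (1 + 0.5 * n * m)      # probability of dying in interval
        d = l * q                          # deaths in the interval
        L_total += n * (l - d) + 0.5 * n * d
        l -= d
    return L_total / radix

# Hypothetical rates for intervals 0-1, 1-5, 5-15, ..., 75+ (open)
print(life_expectancy_at_birth(
    widths=[1, 4, 10, 15, 15, 15, 15, None],
    mx=[0.012, 0.0009, 0.0004, 0.0009, 0.002, 0.006, 0.02, 0.09]))  # ~75 y
```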
[Figure 3: South Khorasan Province, the top causes of death (mortality rate per 100,000 population) based on residence status (rural/urban), pre-pandemic and pandemic.]
[Figure 4: South Khorasan Province, the top causes of death (mortality rate per 100,000 population) based on place of death (home/hospital), pre-pandemic and pandemic.]
Discussion
According to our searches, this is the first study in Iran examining the causes of mortality before and during the COVID-19 pandemic in a region of the country. Death certificate data in South Khorasan Province showed 25.5% excess deaths in 2020-2022 compared with 2018-2020 (28% for men and 22% for women), which is consistent with results reported from Latvia (16%), São Paulo (29.6% for men and 20.0% for women), and Minnesota (15.2% for men and 12.8% for women) (1,4,11,18). The urban population had 23% excess mortality and the rural population 29% excess during the pandemic; in the Minnesota study, mortality also showed an excess (11% for rural and 15% for urban residents) (1). Mortality at home increased by about 33%, whereas mortality in hospitals increased by more than 25%; in England, mortality during COVID-19 also showed an excess (39% at home and 21% in hospital) (19). Men and urban populations were probably at greater risk for COVID-19 than women and rural populations. During COVID-19, many people did not go to the hospital for fear of contracting COVID-19, so the mortality rate increased at home, even while care and follow-up of patients in the hospital increased. The mortality rate was reduced in the 0-19 age group compared with before the COVID-19 pandemic (approximately −7%). Deaths due to genitourinary diseases (−20%), respiratory diseases (−8%), and certain conditions originating in the perinatal period (−17%) decreased during the COVID-19 pandemic compared with before it. Moreover, no significant changes were observed for other causes of death, including nervous diseases. In terms of gender, more excess deaths due to COVID-19, cardiovascular disease, and neoplasms were observed among men; on the other hand, more excess deaths due to endocrine, nutritional, and metabolic diseases were reported among women. In terms of place of residence, more excess deaths due to neoplasms and endocrine diseases were reported among urban people during the pandemic, while more deaths due to cardiovascular disease were observed among rural people.

Before COVID-19, cardiovascular diseases were the most important causes of death; during the pandemic there was a significant increase (10.6%) and they remained first in South Khorasan Province. In Latvia, mortality due to this cause also showed an excess (4%), while death due to these diseases was significantly reduced in Norway (−19.7%) and São Paulo (−7%) and not significantly changed in Minnesota and South Korea (1,4,11,12,20). The highest increase in mortality at home was related to cardiovascular disease, which reflects the non-referral of cardiovascular patients to the hospital. The increased rate of mortality due to cardiovascular disease in South Khorasan and Latvia may be attributed to the challenges of providing healthcare services to patients with cardiovascular diseases and the significant decrease in primary percutaneous coronary interventions (1,11,12,20). Excess deaths due to neoplasms in South Khorasan during the COVID-19 pandemic are partly due to population aging and the health transition (23), as well as reduced screening and delayed cancer treatments due to the limited healthcare resources available for cancer treatment during COVID-19 (26). Moreover, the nature of cancer and the immunocompromised state of patients undergoing chemotherapy may have reduced their referral for healthcare services during the pandemic and increased the rate of COVID-19 complications (27, 28).

[Figure 5: South Khorasan Province, the top causes of death (mortality rate per 100,000 population) based on age group, pre-pandemic and pandemic.]
During the COVID-19 pandemic, mortality due to endocrine, nutritional, and metabolic diseases increased by 33.3% in South Khorasan Province. Mortality related to diabetes mellitus in Norway was also higher than expected (49.9%), and in Minnesota mortality due to diabetes showed an excess of approximately 8%; however, mortality due to diabetes mellitus was not significantly changed in São Paulo and South Korea (1,11,12,20). An increasing trend of mortality due to non-communicable diseases, especially diabetes mellitus, had already been reported in epidemiological studies in Iran before the COVID-19 pandemic (29,30). On the other hand, Nouhjah et al.'s study showed that the self-care behaviors of Iranian diabetic patients using insulin pens significantly declined during the COVID-19 pandemic (31). Besides, Mirahmadizadeh et al.'s study in Iran showed a significant decrease in visits of diabetic patients to physicians and health workers during the COVID-19 pandemic (32), which might have affected mortality due to diabetes. SARS-CoV-2 enters cells via angiotensin-converting enzyme 2 (ACE2) and transmembrane serine protease 2 (TMPRSS2), both of which are expressed in many endocrine glands; this is why the virus can cause thyroid dysfunction such as thyroiditis, insufficient pancreatic insulin secretion with hyperglycemia or ketoacidosis, adrenal infarction, and disruption of sex hormones in both sexes (33).

Mortality due to accidents showed an excess of nearly 4%, which was not statistically significant. Consistent with the findings of the present study, mortality due to accidents increased non-significantly in Norway and Minnesota and decreased by nearly 2% in South Korea (1,12,20). Both before and during COVID-19, accidents were the most important cause of death in the 5-44 age group, which shows that COVID-19 did not significantly affect this age group; to reduce deaths among young people, the number of accidents should still be reduced. A study in Shiraz, Iran showed reduced hospital admissions due to traffic accidents during the COVID-19 pandemic; however, it reported increased mortality due to road traffic accidents, probably reflecting reduced traffic safety leading to more lethal accidents (34). Similarly, Shaik and Ahmed investigated the effect of COVID-19 on road traffic crashes and reported that COVID-19 reduced traffic flow and increased risky driving behaviors, leading to fewer, though more serious, traffic accidents (35). Contrarily, earlier studies, such as that of Wegman and Katrakazas, showed that vehicle kilometers were reduced by 10% and mortality by 12.9% in the 24 investigated countries early in the COVID-19 pandemic (36). Mortality due to symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified increased significantly, by about 24%. This cause of death includes deaths due to unknown causes; the excess may be related to the excess of deaths at home, which often lack an exact cause, so that a verbal autopsy is needed for an accurate diagnosis, whereas during COVID-19 there was not enough time and there was a lack of familiarity among general practitioners with verbal autopsy methods.

Mortality due to diseases of the respiratory system decreased by about 8% in South Khorasan Province. In South Korea, mortality related to respiratory diseases also decreased, by about 12.8% (37); this may be related to the promotion of personal hygiene and mask-wearing from the beginning of the pandemic, which are established major factors in the decrease in respiratory infections. However, in a similar study in Pavia Province, Italy, mortality due to the respiratory system increased by about 30% among women and 40% among men, and in another study in Rome, it increased among men only (38,39). Mortality due to genitourinary diseases in South Khorasan Province decreased by about 20% during the COVID-19 pandemic, which was statistically significant. In similar studies, mortality due to genitourinary diseases increased by about 1% in South Korea and by about 23% in England (19,37).
On the other hand, mortality due to genitourinary diseases did not change remarkably among women, while it was reduced by nearly 20% among men in Pavia Province (39). In Costa Rica, mortality due to this cause also decreased (40). A reduced incidence of sexually transmitted diseases and reduced use of nephrotoxic drugs during the pandemic probably decreased mortality due to genitourinary diseases. Mortality due to certain infectious and parasitic diseases, nervous diseases, and diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism decreased during COVID-19, though the change was not statistically significant. On the other hand, in Pavia Province in Italy, mortality due to nervous diseases decreased by nearly 8% among men and increased by about 14% among women, and mortality due to dementia and Alzheimer's disease increased in Pavia by nearly 24% among men and by nearly 10% among women (39). Moreover, deaths due to dementia and Alzheimer's disease increased in England; mortality due to these diseases mostly occurred at home, while it decreased in the hospital (19). These diseases most commonly affect older adults, who are at high risk of COVID-19, and such deaths might have been recorded as deaths due to COVID-19. In another study, reduced access to health care as well as the emotional distress and fear of COVID-19 infection were considered to contribute to the increased mortality due to these diseases (41).

During the peaks of COVID-19, death caused by this virus was at the top of the causes of death while deaths caused by other respiratory and infectious diseases decreased; outside these peaks, deaths due to chronic non-communicable diseases such as cardiovascular disease, stroke, and diabetes increased. This may be related to improved personal hygiene and health promotion in society, and to reduced visits to the medical centers that control non-communicable diseases and perform routine check-ups of these patients.

The four-year trend of life expectancy at birth in South Khorasan Province showed a remarkable decline after the COVID-19 pandemic; comparing 2020-2022 with 2018-2019, life expectancy was reduced by 2.1 years (from 77.1 to 75.0 years), and males were affected more than females. Other studies showed that life expectancy was also reduced during COVID-19 in Mexico (ranging between 0.5 and 4 years in different cities in 2020), Russia (2 years in 2022), the United States (1.67 years in 2020), and Iran overall (1.4 years in 2020) (42-45). The decrease in life expectancy is probably due to the direct impact of COVID-19 and its indirect impact through the increase in deaths from other causes. The male population makes up a larger part of YLL compared to women.

Incorrect recording of the cause of death on the death certificate, owing to the excess in the total number of deaths, the excess of deaths that occurred at home, the overcrowding of hospitals and other medical centers, the mysterious and unknown effects of SARS-CoV-2 on the physiology of the body, absent time intervals, absent certifier signatures, incorrect underlying causes of death, and competing causes of death, increased during the COVID-19 pandemic; this was the most important limitation of this study. Due to the fear of contracting COVID-19 during the pandemic, detailed examinations to identify the cause of death of the deceased may not have been performed thoroughly (49).
Conclusion
In this study, it was observed that the overall mortality rate increased significantly during the COVID-19 pandemic. This increase was observed in both sexes, in rural and urban residents, and in all age groups above 20 years. Among the causes of death, the most common were cardiovascular diseases, cancer, chronic respiratory diseases, accidents, and endocrine diseases. This shows that non-communicable diseases remain predominant and obliges us to plan to reduce them; even during the pandemic of a serious infectious disease such as COVID-19, we should not ignore these diseases. Deaths due to cardiovascular diseases at home increased significantly, indicating that we should have centers providing healthy and safe services for chronic patients during pandemics of serious infectious diseases, so that people are not deprived of services because of the fear of infection. In developing countries, accidents are still the main cause of death of young people, so government officials should amend traffic laws, provide safe vehicles, and make the roads safer. Years of life lost (YLL) increased by nearly 15.0%, mostly directly due to COVID-19, with minimal changes in YLL due to other causes. Excess mortality may be related to reduced access to healthcare services, delayed therapeutic measures, and altered healthcare behaviors. The factors leading to excess deaths should be carefully identified and addressed to prevent further excess deaths in future pandemics.
*Reference group for RR calculation is the pre-pandemic category. **Statistically significant.
TABLE 2
South Khorasan Province: cause-specific mortality rates per 100,000 population before and after the COVID-19 pandemic.
*Indicates that the p-value is significant.
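As the footnotes above note, rates are compared with the pre-pandemic reference group. A minimal sketch of how such a rate ratio (RR) and its 95% confidence interval are typically computed from death counts and population denominators follows; all numbers below are hypothetical, not the study's data.

```python
# Minimal sketch of the rate-ratio (RR) computation behind such tables:
# RR = pandemic mortality rate / pre-pandemic rate, with a Poisson-based
# 95% CI on log(RR). Counts and populations below are hypothetical.
import math

def rate_ratio(deaths_pre, pop_pre, deaths_pand, pop_pand):
    rate_pre = deaths_pre / pop_pre
    rate_pand = deaths_pand / pop_pand
    rr = rate_pand / rate_pre
    se = math.sqrt(1 / deaths_pre + 1 / deaths_pand)  # SE of log(RR)
    lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
    return rr, (lo, hi)

rr, ci = rate_ratio(deaths_pre=420, pop_pre=390_000,
                    deaths_pand=510, pop_pand=395_000)
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```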
Contrarily, earlier studies, such as that of Wegman and Katrakazas, showed a 10% reduction in vehicle kilometers and a 12.9% reduction in mortality in the 24 investigated countries early in the COVID-19 pandemic (36). Mortality due to symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified increased significantly, by about 24%. This cause of death includes deaths due to unknown causes, so this excess may be related to excess deaths at home, which often lack an exact cause and require a verbal autopsy for accurate diagnosis; during COVID-19, however, there was not enough time, and general practitioners were unfamiliar with verbal autopsy methods.
|
v3-fos-license
|
2021-08-27T17:04:04.595Z
|
2021-01-01T00:00:00.000
|
237958088
|
{
"extfieldsofstudy": [
"Sociology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/71/e3sconf_wfsdi2021_05006.pdf",
"pdf_hash": "5d2952722674cc83355087b355ff70322f56d093",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43268",
"s2fieldsofstudy": [
"Education"
],
"sha1": "9112b0c8b43b3ded4dc792e0ecf629db765bf7ae",
"year": 2021
}
|
pes2o/s2orc
|
Integrating sustainability issues into English language courses at university
Education for Sustainable Development is a relatively new methodology that promotes the principles of sustainable living and is aimed at integrating sustainability issues into the curriculum at all cycles of formal education. This paper discusses how sustainable development concepts can be taught through English language courses at university. The range of topics covering sustainable development issues within the framework of its key dimensions (economy, society, environment and culture) is selected in line with the guiding principles of Education for Sustainable Development. The approaches to designing a course in English for undergraduate non-linguistic students through the prism of sustainable development principles are highlighted. It is proposed to develop a syllabus with a focus on the areas embedded in the concept of Education for Sustainable Development, including gender equality and human rights, sustainable lifestyles, promotion of a culture of peace and non-violence, global citizenship, and appreciation of cultural diversity. The paper summarizes the findings of research conducted at universities in Russia and Turkey to develop a strategy for designing English language courses that incorporate sustainability issues.
Introduction
One of the ways of promoting sustainability principles is to educate people on the importance of sustainable living for their future. The terms Education for Sustainability (EfS), Education for Sustainable Development (ESD) and Sustainability Education (SE) basically refer to the same concept and are quite often used interchangeably. To avoid confusion, we will use the term 'ESD', which is understood as a new educational paradigm encompassing four dimensions (economy, society, environment and culture) that are interrelated and cannot be treated separately. Living in a society, everyone should strive to improve the quality of their life as individuals and as members of the community at the same time. The concepts of sustainability are based on the idea that everyone should get involved with initiatives that will contribute to sustainable living in their personal lives, within their community and also on a global scale, now and in the future.
The need to raise people's awareness of environmental problems and to address the most urgent societal issues is of paramount importance, which is why ESD is one of the core elements of Sustainable Development Goal 4, which focuses on quality education [1]. One of the characteristic features of the ESD methodology is its interdisciplinary nature, which means that all aspects of sustainability, including social, economic, environmental and cultural factors, can be integrated into the curriculum.
The relevance of the course contents to the SDGs in the university learning context has been discussed in a number of studies [2][3][4][5]. For example, in [2,6], the authors focus on general approaches to implementing sustainability in higher education. Some aspects of designing a specialized course syllabus on sustainable development in English language teaching were covered in [3,5]. A holistic framework for research on sustainable education was proposed in [4]. Since quite a lot of countries have already embarked on a mission of implementing the ESD principles in their educational policies and curricula, it is expedient to investigate the problem from the perspective of designing English language courses that incorporate sustainability issues. In this paper, the following questions will be addressed: -What sustainability-related topics can be added to the English language courses delivered at universities? -What learning objectives related to ESD can be integrated with the existing university curricula? -What educational tools can be used to achieve the desirable learning outcomes?
Selecting the appropriate content for English language courses is a serious task, as most university students want to focus on their industry-related language rather than just study English. When introducing sustainability-related topics, course designers have to bear in mind that it is necessary to balance the students' needs and the relevant content. Another important consideration is the learning objectives, which have to be adjusted to make sure that they are formulated in the right manner and will help students achieve the learning outcomes. Course designers also have to take into account that the choice of educational tools can make a big difference to the course implementation. Project-based learning, collaborative learning and task-based learning are just a few of the instruments that can be used in the English language classroom at university.
Methods and materials
The research was organized as a study based on a questionnaire, the analysis of relevant ESD materials and toolkits, and the study of other universities' experience.
The first step involved conducting the needs analysis survey. To achieve this, a questionnaire was developed to find out the students' expectations of the course and the possibility of integrating sustainability-related topics into it.
The importance of the needs analysis is manifested in the following definition of the course objectives, which are understood as 'the goals of a course in English, as indicated by the needs analysis, and expressed in terms of what the learner should be able to do' [7, p.221]. This means that needs analysis is a prerequisite for a successful formulation of learning outcomes. It is the most important stage in any course design, as it helps to identify what skills and knowledge students need. By analyzing students' needs, the course designer can prioritize the goals they want to achieve and make sure the course will help fulfill this task. Apart from this, students can reveal their weaknesses and see the gap between what they already know and what they need to learn.
The second step was aimed at formulating the learning objectives in accordance with the students' feedback on the questionnaires. The goal was to specify what specific sustainability knowledge students need to acquire. It was important to make sure that, on completion of the course, students are equipped with the skills and knowledge they need to deal with the challenges of sustainable development both locally and globally. For the accurate formulation of the learning objectives, Education for Sustainable Development Goals: Learning Objectives was used as a guide; it is part of the Global Action Program (GAP) on ESD. This program resulted from the UN Decade of ESD, which lasted for 10 years, from 2005 to 2014. The guidelines can be used in different learning contexts and contain methods for the implementation of the program, learning activities and suggested topics [1].
The third step comprised selecting the activities and educational tools that can be used in the classroom to achieve the learning outcomes. One of the outcomes of the English language course at university is the acquisition of transferable skills and soft skills. These terms largely overlap and cover the skills and abilities that can be used in different fields. Soft skills are included in transferable skills and can be defined as 'personal attributes that enhance an individual's interactions and his/her job performance' [9][10][11]. Traditionally, university education has been centered around the development of hard skills, which are seen as a primary goal of university training. Yet communication skills, teamwork, the ability to handle stress and manage time efficiently, leadership, and problem-solving skills are also of great importance, as employers seek candidates who 'not only are competent in their field of specialization but also possess adequate soft skills' [11].
The research rests on the assumption that teaching sustainability issues at university across the curriculum can considerably improve the development of students' soft skills. Project-based learning seems to be a good teaching method, as it is student-centered, involves collaboration and creativity, and improves critical thinking and communication skills. Students work together or individually on a challenging and authentic question; they do research and present their findings in the form of a presentation. Project-based learning has been defined as 'a learning process in which students are engaged in working on authentic projects and the development of products' [12, p.2].
Project-based learning can be supplemented by collaborative learning, which promotes working together in order to achieve the learning goal. In this approach, work is organized in pairs or small groups, and it can be done both in the classroom and outside it. The obvious advantage of such an approach is that it involves the active participation of all students and engagement with meaningful content and with each other. By doing this, students build up their knowledge of key concepts (hard skills) and learn how to interact with each other (soft skills).
Another effective educational tool is Task-Based Learning (TBL), a method of teaching language through the completion of a certain task. The lesson structure comprises a series of activities aimed at solving a task, which means that the focus is shifted from drilling grammar and vocabulary to a more authentic use of language. A typical TBL lesson consists of three cycles. The first is the pre-task stage, which involves introducing the task and activating the topic-related vocabulary. The second cycle is the completion of a communicative task, in which students are encouraged to use words and phrases they are familiar with. The third cycle is language work, where the language structures that emerged during the second cycle are analyzed in more detail. If the need arises, teachers are encouraged to provide more practice activities to fit into the lesson.
The teaching methods mentioned above seem to be well-suited to teaching sustainability-related topics in English language classes at university. All of them have the following common features (see Table 1): • They are learner-centered; the teacher plays the role of a facilitator whose job is to make sure that learning takes place in an appropriate learning environment and to provide guidelines and language input. It is not the teacher but the learner who is at the center of the lesson.
• Language gradually emerges through the various stages of the lesson. All three methods are based on a holistic approach to teaching language, focusing on function rather than form.
• Language is used for communication. Students get involved in the lesson as they need to convey a message to the other members of the group in order to find a solution to a problem or to solve a certain task.
Results and discussion
The needs analysis survey was conducted in September 2020 at two universities: Tambov State Technical University (Tambov, Russia) and Istanbul Medipol University (Istanbul, Turkey). A sample of 600 respondents was selected for the survey, comprising 345 students from the Russian university and 255 students from the Turkish university. The survey was set up in Google Forms and consisted of several questions. The aim was to identify the learners' profile at both universities, to select topics related to sustainable development for their further integration into the English language courses, and to develop strategies for the course design, including its learning objectives and tools for practical implementation. Below is a part of the questionnaire that students had to fill in (Table 2). Table 2. Learner profile questionnaire (sample).
Personal details (optional): gender, age, number of years learning English, specialization.
The survey showed that the respondents were enrolled in the following programs: Architecture, Finance and Economics, and Law. The distribution of the respondents is given in Fig. 1. The survey was conducted by the method of random sampling; students were selected from three departments at both universities.
Selecting sustainability-related topics
The second section of the students' questionnaire contained two questions: the first concerned the choice of topics for study, and the second, the expected learning outcomes. We consider each question and analyze the answers separately.
10. Responsible consumption and production.
11. Climate change and its impacts.
13. Access to justice for all.
14. Global networking for sustainable development.
The students were asked to select the topics that might be of interest to them (see Table 3). The list of topics was compiled using the UNESCO Education for Sustainable Development guide [1]. The results of the survey are summarized in two diagrams (see Figs. 2 and 3). Interestingly, the most popular topics among the Russian students were Economic growth, full employment and decent work (40%) and Access to justice for all (35%). The next three were Quality education and lifelong learning opportunities for all (15%), Poverty in all its forms (6%), and Responsible consumption and production (4%). The most relevant topics for the Turkish students were Access to justice for all (35%) and Quality education and lifelong learning opportunities for all (28%). The next three were Responsible consumption and production (17%), Economic growth, full employment and decent work (12%), and Climate change and its impacts (8%).
As you can see from the diagrams, four of the topics chosen by the students of both universities were the same, with the topic Access to justice for all being of interest to the same number of students from both countries.
Learning objectives for English courses with a focus on sustainable development
The survey findings were used to define the learning objectives for each of the topics chosen by the students of both universities.
All learning objectives are described from three different perspectives: cognitive, socio-emotional and behavioral. The cognitive perspective relates to the skills and abilities necessary for acquiring the content of each topic. The socio-emotional perspective focuses on the development of interpersonal skills as well as self-reflection skills. The behavioral perspective deals with the ability of learners to take actions that may help to solve societal problems [1]. The learning objectives are summarized in Table 5.
Ending poverty in all its forms (cognitive / socio-emotional / behavioral)
The learner understands: -the concept of poverty and has their own views on its causes and consequences; -how poverty relates to human rights.
The learner is able to: -research the facts about poverty and propose their own solution to the problem; -show empathy for poor people; -realize their own role in the world of inequality.
The learner is able to: -plan, evaluate and implement activities that contribute to poverty reduction; -take part in the decision of local governments in the issues of poverty eradication; -propose solutions to address problems related to poverty.
Quality education and lifelong learning opportunities for all (cognitive / socio-emotional / behavioral)
The learner understands: -the importance of education and the need for lifelong learning opportunities; -the importance of equal opportunities in obtaining quality education; -that education can help create a more sustainable and peaceful world.
The learner is able to: -realize the importance of quality education for all; -identify their own needs for personal development and quality education; -recognize the need to acquire new skills to improve the quality of their own life.
The learner is able to: -advocate for gender education equality; -promote the empowerment of young people; -engage in lifelong learning, without missing any opportunity and apply the knowledge gained in life.
Economic growth, full employment and decent work (cognitive / socio-emotional / behavioral)
The learner understands: -the concepts of sustainable economic growth, full employment and decent work; -the relationship between employment and economic growth.
The learner is able to: -discuss various economic issues; -define their own economic rights, values and needs; -develop a plan for their own economic life.
The learner is able to: -find out facts about sustainable economic models and have their own definition of decent work; -develop and evaluate ideas for entrepreneurship; -plan and implement entrepreneurial projects.
Responsible consumption and production (cognitive / socio-emotional / behavioral)
The learner understands: -how the economic development of a country is influenced by the lifestyle of an individual; -how responsibilities and rights are distributed between the subjects of consumption and production.
The learner is able to: -talk about production and consumption; -determine their own desires and needs, have an idea of their consumer behavior; -be responsible for the consequences of their consumer behavior.
The learner is able to: -plan, implement and evaluate their activities as a consumer; -promote sustainable production patterns; -justify their social and cultural orientation in production and/or consumption.
Climate change and its impacts (cognitive / socio-emotional / behavioral)
The learner understands: -such a natural phenomenon as the greenhouse effect; -which human activities contribute most to climate change; -the main ecological consequences of climate change locally and globally.
The learner is able to: -explain the impact of climate change on the environment; -encourage others to protect the climate; -be aware of their own role in changing the global climate from a local or global point of view.
The learner is able to: -evaluate their activities in improving the climate from a personal and professional point of view; -promote climate-protecting public policies; -support climate-friendly economic activities.
Access to justice for all (cognitive / socio-emotional / behavioral)
The learner understands: -the concepts of justice and law-abidingness; -their local and national legislative systems, and can compare them with those of other countries; -the role of human rights in the international system of relationships.
The learner is able to: -discuss the main legislative issues; -express solidarity and help those who have been subjected to unfair justice; -have their own idea of access to justice for individuals of various economic, social, political and gender groups.
The learner is able to: -have their own point of view on issues of fair justice; -help those groups that are experiencing a conflict situation related to justice; -speak out against injustice by taking part in settlement processes and decision-making.
Activities for English classes to achieve the learning objectives
The aim of any course is to develop a set of competencies that will be useful for the learners when solving both professional and everyday problems. By integrating sustainability issues into the English language course teachers can help their students to acquire a number of competencies, such as anticipatory competency, collaboration competency, systems thinking competency, strategic competency, critical thinking competency, and self-awareness competency.
Anticipatory competency is the ability to develop a certain vision of the future and to evaluate the consequences of one's actions. This competency can be developed in the English classroom when students reflect on a problem, estimate possible risks and make informed decisions, for example as part of task-based learning. Collaboration competency is the ability to work together, to support others and show empathy, and to solve problems and resolve conflicts. It can be developed by offering students a number of activities that involve interaction and communication in order to achieve common goals.
Critical thinking competency is the ability to analyze and reflect on one's own actions and the actions of other people, to understand logical connections and to engage in independent thinking. The formation of students' speech activity presupposes the ability to analyze, compare facts and make value judgments in written and oral form.
Systems thinking competency is the ability to deal with complex systems, to understand how these systems are connected with each other and to see the big picture of the problem.
Strategic competency is the ability to work together on innovative solutions and take innovative actions.
Self-awareness competency is the ability to assess one's role in the community and to motivate oneself to take further action.
For each of the topics selected for the English course, it is expedient to have a list of questions that can be discussed in class. For example, for Economic growth, full employment and decent work, the following topics can be suggested: -What is economic growth, and why is it so important? -The reality of people's material living conditions around the world -What is full employment and what is unemployment? -What makes decent work? -What is economic ethics?
One of the activities that can be done in the university learning context within the framework of the above-mentioned sustainable development goal is an enquiry-based project: 'What can my career contribute to sustainable development?' This type of project can be organized for a group of students in order to make their learning more meaningful. It involves a serious investigation of the problem, with a lot of collaboration, resulting in the creation of knowledge. In order to complete the study, students have to make use of their self-management, communication, problem-solving, presentation and project-management skills. Besides, students have to use technology to accomplish the task: they do research using digital tools and resources in a purposeful manner. As part of the project, students have to formulate the research question, collect the necessary data and select the tools for accomplishing their research. For the topic 'Ending poverty in all its forms', the following problems can be incorporated into the class activities: -Poverty and richness corrupt people's souls -Poverty prevents people's happiness -Poverty causes an increase in crime -In a world of wealth, poverty has become a necessity -Sweatshops, child labor and modern slavery. A sample activity for an English class at university might be a lesson arranged as a game where students are divided into teams and do a quiz on sweatshops, child labor and modern slavery. The teacher prepares handouts with questions that are either facts or myths about the topic under discussion. Each question has a point value. The students from each team select a question, discuss it and then present their answer to the class. The questions might include: • Sweatshops are legal in your country.
• Victims of human trafficking are always illegal immigrants.
• There are more people in slavery now than at any other time in human history.
• Many big businesses use child labor.
• Human trafficking is the 2nd largest criminal industry in the world.
• The cost of slave labor has decreased over time.
• Most slavery victims are women and children.
• Trafficking and slavery victims are always poor and uneducated.
• Human trafficking only occurs in illegal, underground industries.
• Most sweatshops are in Europe.
Students have to decide whether the statement is a myth or a fact and provide good reasoning to support their position. This activity involves a great deal of student engagement and collaboration. Students have to use their critical thinking skills in order to make good judgments.
For the topic Quality education and lifelong learning opportunities for all, it is suggested to tackle the following problems: -Do universities offer high-quality education? -Is lifelong learning necessary? -Benefits of lifelong learning -Equality of educational opportunities. An activity that can be used in the classroom is planning a project on promoting ESD at university. Students work in groups and select the problem for their campaign. These can range from inclusive education, access to high-quality education in deprived countries and gender inequality issues to the importance of lifelong learning and skills for the 21st century. They make a plan and design the stages of their campaign. Then they give a presentation of their ESD campaign, and the other students assess their work using the assessment criteria grid. For Climate change and its impacts, it is suggested to include the following topics: -The impact of sea level rise on countries (e.g. small island states) -The greenhouse effect -Global warming -Ways of protecting the climate -Ethics and climate change -Assessing the risk of and preventing natural disasters.
An activity for the English class is a mini-debate on the topic 'Climate Change: Causes and Consequences'. Students work in groups of three, where each is assigned one of three roles: a moderator, who listens to both sides of the debate, asks questions, evaluates the arguments presented and gives preference to the side that proved more convincing; a representative of those who believe that climate change is a serious global problem; and a representative of those who do not consider climate change a serious global problem. Students' arguments should be formulated in terms of environment, society and economy (the Three Pillars of Sustainability).
Conclusion
The study contributed to the research into ESD and proposed some ways of dealing with the problem of incorporating sustainability issues into English language courses. It is important to bear in mind that reorienting a curriculum toward ESD requires a combination of a holistic approach and an interdisciplinary approach. At the same time, it is necessary to align students' learning objectives with the content of the course.
The research conducted at the two universities focused on exploring students' needs and expectations and the possibility of adding sustainability-related topics to the English course syllabus. The findings showed that the students are interested in studying such topics as Economic growth, full employment and decent work; Access to justice for all; Quality education and lifelong learning opportunities for all; Ending poverty in all its forms; Responsible consumption and production; and Climate change and its impacts. The learning outcomes were formulated for the selected topics in the cognitive, socio-emotional and behavioral domains. Sample activities for English classes at university were suggested and described.
|
v3-fos-license
|
2018-04-03T05:18:07.137Z
|
2014-04-30T00:00:00.000
|
18387910
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/crihep/2014/795261.pdf",
"pdf_hash": "9d44c6f4442aa83b3aff300d1dd6be5dee7242f6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43269",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ff53efac76bfddc08243bfc291030be091872622",
"year": 2014
}
|
pes2o/s2orc
|
Low-Dose Tolvaptan for the Treatment of Dilutional Hyponatremia in Cirrhosis: A Case Report and Literature Review
Dilutional hyponatremia is common in decompensated cirrhosis and can be successfully treated with tolvaptan, a vasopressin V2-receptor antagonist. Data are lacking regarding the effects of tolvaptan in cirrhotic patients with a Child-Pugh score of >10 and a serum sodium concentration of <120 mmol/L. We report the case of a man in his forties with a 20-year history of chronic hepatitis B presenting with yellow urine and skin. Laboratory tests demonstrated a prolonged prothrombin time, markedly elevated total bilirubin, severe hyponatremia, and a Child-Pugh score of >10. The patient was diagnosed with dilutional hyponatremia and was initially treated with tolvaptan at the recommended dose. The serum sodium concentration recovered, but the patient felt obviously thirsty. As the dosage of tolvaptan was decreased from 15 mg to 5 mg, the patient still maintained an ideal serum sodium concentration. This case emphasizes that cirrhotic patients with higher Child-Pugh scores and a serum sodium concentration of <120 mmol/L can be treated with a lower dose of tolvaptan.
Introduction
Dilutional hyponatremia is defined as a serum sodium concentration of <135 mmol/L. This condition is common in decompensated cirrhosis, with an incidence as high as 49% [1]. Recent studies have shown that hyponatremia is not only a sign of disease severity but also a direct factor that can aggravate the disease. In addition, acute hyponatremia is considered an independent predictor of mortality in patients with cirrhosis [2,3]. However, traditional therapies such as fluid restriction and supplementation with high-dose sodium show little effect. A prospective randomized study showed that only 0−26% of patients treated with water restriction achieved an increase in serum sodium concentration of >5 mmol/L [3]. Patients also feel thirsty and have difficulty completing water-restriction therapy. The efficacy of high-dose sodium replacement is limited, and the replacement can aggravate ascites and edema as well. Therefore, sodium administration is not recommended for the treatment of dilutional hyponatremia in cirrhotic patients [4]. Recently, it was reported that tolvaptan, a vasopressin V2-receptor antagonist, has a significant effect in the therapy of hyponatremia in cirrhotic patients. Nevertheless, few data are available regarding the effects of tolvaptan in cirrhotic patients with a Child-Pugh score of >10 and a serum sodium concentration of <120 mmol/L. Herein, we present the case of a decompensated cirrhotic patient with a serum sodium concentration of 117 mmol/L who received low-dose tolvaptan.
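Because trial eligibility and the case description hinge on the Child-Pugh score, a minimal sketch of the conventional scoring may help readers; the cutoffs follow the standard Child-Pugh criteria, and the example inputs are hypothetical rather than this patient's full dataset.

```python
# Minimal Child-Pugh scoring sketch (standard criteria; example values are
# hypothetical). Each of five parameters scores 1-3 points; totals of
# 5-6 = Class A, 7-9 = Class B, 10-15 = Class C.

def band(value, low, high):
    """Return 1, 2, or 3 depending on which band the value falls into."""
    return 1 if value < low else 2 if value <= high else 3

def child_pugh(bilirubin_umol_l, albumin_g_l, inr, ascites, encephalopathy):
    # ascites / encephalopathy: 0 = none, 1 = mild / grade I-II,
    # 2 = moderate-severe / grade III-IV
    score = (
        band(bilirubin_umol_l, 34, 50)   # <34 / 34-50 / >50 umol/L
        + band(-albumin_g_l, -35, -28)   # >35 / 28-35 / <28 g/L (sign flipped)
        + band(inr, 1.7, 2.3)            # <1.7 / 1.7-2.3 / >2.3
        + ascites + 1
        + encephalopathy + 1
    )
    cls = "A" if score <= 6 else "B" if score <= 9 else "C"
    return score, cls

# Hypothetical values loosely consistent with the case described:
print(child_pugh(bilirubin_umol_l=300, albumin_g_l=27, inr=2.5,
                 ascites=2, encephalopathy=1))  # -> (14, 'C')
```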
Case Presentation
A 47-year-old man with a 20-year history of chronic hepatitis B was hospitalized with the complaint of yellow urine and skin for 5 months. The patient had been treated with lamivudine in combination with adefovir for 1 year and had discontinued treatment on his own 10 months earlier. Five months earlier, he had developed the symptoms of yellow urine and skin, was diagnosed with decompensated hepatitis B cirrhosis, and was treated with entecavir. During entecavir treatment, he had a total bilirubin concentration of 200−350 μmol/L, a prothrombin time of 20−31 s, and a Child-Pugh score of >10. The bilirubin level remained persistently high despite repeated (up to four) sessions of artificial liver support therapy. Meanwhile, during the course of treatment, the patient showed hepatic encephalopathy, a large amount of ascites, and hyponatremia, with numerous serum sodium concentrations as low as 115 mmol/L. He received large doses of diuretics, reduced salt intake (less than 2 grams per day), water restriction to less than 500 mL per day, and one paracentesis, but the symptoms were not relieved.
The patient was treated with tolvaptan at an initial dose of 15 mg qd. After therapy, he had a urine volume of 6100 mL and became obviously thirsty, although the serum sodium concentration showed significant recovery. The tolvaptan dosage was decreased to 7.5 mg qd, but he remained thirsty.
Because of the large urine volume and the obvious thirst, the dosage of diuretics was decreased once but soon resumed because of increasing ascites. Finally, the dosage of tolvaptan was maintained at 5 mg qd; the patient's serum sodium concentration was consistently 128 mmol/L, and the daily urine volume was between 3000 and 4000 mL (see Table 1). During the course of therapy, the patient's condition markedly improved, as indicated by the continuously decreasing level of total bilirubin. After 1 month of treatment with tolvaptan, the drug was stopped and the serum sodium concentration was maintained at 130 mmol/L. Co-administered drugs during tolvaptan therapy were entecavir tablets, ademetionine injection, zolpidem tartrate tablets, lactulose, magnesium isoglycyrrhizinate injection, albumin injection, piperacillin-tazobactam injection, and ornithine aspartate injection. The patient was discharged from the hospital with a total bilirubin concentration as low as 94 μmol/L and a serum sodium concentration of 135 mmol/L. From that time, the patient stopped diuretic therapy and used only entecavir. At the 3-month follow-up, the total bilirubin concentration was only 38 μmol/L and the serum sodium concentration was 137 mmol/L; B-ultrasound showed a small amount of ascites. At the 1-year follow-up, the patient's liver function was normal and B-ultrasound showed no ascites at all.
Discussion
We presented a middle-aged man with a 20-year history of chronic hepatitis B who presented with yellow urine and skin.
Laboratory tests demonstrated a prolonged prothrombin time, markedly elevated total bilirubin, severe hyponatremia, and a Child-Pugh score of >10. The patient was diagnosed with dilutional hyponatremia and was successfully treated with low-dose tolvaptan. Dilutional hyponatremia is the most common complication of cirrhosis. The Study of Ascending Levels of Tolvaptan in Hyponatremia 1 and 2 (SALT-1 and SALT-2) trials demonstrated that when tolvaptan was used at dosages between 15 and 60 mg/d for 30 days, serum sodium concentrations could be restored in most patients. In addition, the most common adverse event was thirst, and rapid correction of the serum sodium concentration did not occur [5]. However, it should be noted that only 63 patients with cirrhosis were enrolled in these two trials, and those with a Child-Pugh score of >10 or a serum sodium concentration of <120 mmol/L were excluded. Thus, more clinical experience is required to investigate the effects of tolvaptan in cirrhotic patients with Child-Pugh scores of >10.
Recently, several studies have evaluated the effects of tolvaptan at lower doses for the treatment of cirrhosis. In a double-blind, parallel-group, multicenter phase III clinical trial in Japan [6], which aimed to verify the efficacy of low-dose tolvaptan in patients with liver cirrhosis-associated ascites and an insufficient response to conventional diuretic treatment and to investigate its pharmacokinetic and pharmacodynamic profiles, a total of 40 patients with cirrhosis were included, with an average initial serum sodium concentration of >120 mmol/L; 20 patients belonged to Child-Pugh Class C. The results showed that tolvaptan at a dose of 7.5 mg/d could increase urine output and decrease ascitic volume, and serum sodium concentrations increased significantly on the first day. In our case, low-dose tolvaptan was successfully used to treat a patient with an initial serum sodium concentration as low as 117 mmol/L and a Child-Pugh score of >10. Tolvaptan showed good safety, as the patient's blood pressure remained in the normal range during therapy.
There are several possible reasons for the successful treatment of hyponatremia in cirrhosis with low-dose tolvaptan. Firstly, it is reported that 99% of tolvaptan molecules bind to plasma proteins after entry into the bloodstream. In patients with cirrhosis, the serum albumin concentration is low due to decreased protein synthesis, which could result in a reduced protein-tolvaptan binding rate and an increase in the free tolvaptan plasma concentration. Moreover, albumin levels vary in healthy individuals by 10% [7], which could lead to altered drug efficacy. Secondly, tolvaptan is primarily metabolized by CYP3A4, and the activity of CYP3A4 enzymes is changed in cirrhotic patients [8]. Thirdly, portal-systemic shunting in patients with advanced cirrhosis could reduce the first-pass effect of drugs and lead to a significant increase in absorption. The above effects might account for the improved efficacy of low-dose tolvaptan in patients with decompensated liver function. To date, there is no simple endogenous marker to predict hepatic function with respect to the elimination capacity of specific drugs and to guide dose adjustment in patients with liver injury. The semiquantitative Child-Pugh score is frequently used to assess the severity of liver function impairment. However, the Child-Pugh score offers only rough guidance for dosage adjustment, and more sensitive markers need to be developed to guide drug dosage adjustment in patients with hepatic dysfunction. Finally, we cannot ignore that the serum sodium concentration of this patient (about 130 mmol/L) did not reach the normal level during tolvaptan therapy. Since hyponatremia develops slowly and cirrhotic patients show good tolerance of hyponatremia, 130 mmol/L might be enough for a patient with chronic dilutional hyponatremia.
The improvement of hyponatremia, such as a reduced occurrence of hepatic encephalopathy [9,10] and improved quality of life [11] and prognosis of cirrhosis [12], may also lead to clinical benefits in patients. In our study, the correction of hyponatremia was accompanied by gradual improvement of liver function and the Child-Pugh score. However, further studies are needed to investigate the clinical benefits of tolvaptan therapy after the correction of dilutional hyponatremia. In contrast, a recent meta-analysis indicated that treatment of dilutional hyponatremia with vaptans did not result in a better prognosis. Twelve randomized controlled trials with a total of 2,266 patients were included in that analysis, and the main outcome measures were mortality, spontaneous peritonitis, hepatic encephalopathy, and upper gastrointestinal hemorrhage. The results showed that vaptans could significantly increase serum sodium levels and lead to weight reduction, whereas there was no clear difference between the vaptan and placebo groups regarding prognosis [13]. During therapy, we also cannot ignore another phenomenon: although the urine volume was large, we could not decrease the dosage of diuretics or stop them during tolvaptan therapy. This may be attributable to the different mechanisms of action of the drugs.
The results of the present study suggest that, for cirrhotic patients with higher Child-Pugh scores (Class C) and serum sodium concentrations of <120 mmol/L, low-dose tolvaptan is effective for gradually increasing serum sodium concentrations, maintaining electrolyte balance, and possibly improving liver function. Further studies are needed to determine the optimal method of tolvaptan dose adjustment.
|
v3-fos-license
|
2023-04-20T15:17:11.280Z
|
2023-01-01T00:00:00.000
|
258222883
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "CLOSED",
"oa_url": "https://doi.org/10.1051/e3sconf/202338102026",
"pdf_hash": "03c6ca73cb5326e3c1000b832da662466abaf7ba",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43273",
"s2fieldsofstudy": [
"Computer Science",
"Economics"
],
"sha1": "0e65df77cbada20fffc19f092d44fe21e23b7b03",
"year": 2023
}
|
pes2o/s2orc
|
The black market of personal data: financial, legal and social aspects
This article touches on the topic of the black market of personal data. The concept of the 'black market of personal data' is considered, along with its financial, legal and social aspects. The relevance of this topic lies in the fact that offers on the black market have not decreased; on the contrary, their number has visibly increased. This article also describes the basic rules for the secure storage of personal data.
Introduction
There is a law on personal data in Russia. In order to collect, process and store data about employees, newsletter subscribers and site visitors, you almost always need to obtain their consent and store the data in Russia.
Any data, whether personal or not related to a person's identity at all, has value for business if it meets three characteristics at once: relevance, reliability, and completeness.
Data with all these properties are very rare on the black market. But even if a large array of full-fledged data suddenly appeared on the darknet (for example, a completely fresh database from some reputable personal data operator), no one could ever guarantee its quality. The operator, of course, will be interested in eliminating the consequences of the leak as quickly as possible and will never contact a buyer from the darknet. And it is completely illogical to trust the seller of stolen data.
From January to September 2020, 96.5 million records of personal data and payment information leaked in Russia (according to InfoWatch). At the same time, the share of leaks related to fraudulent actions exceeds 10%; worldwide, this figure is three times lower (Figure 1).
The main part
The black market of personal data is constantly growing, and the main reason for this is insiders: employees of companies who sell, or give away for free, the confidential data of their customers, according to a study by the IT company Krok (a copy of which Forbes has seen). In 2020, the total damage from personal data leaks exceeded 3 billion rubles, according to Krok's materials.
The sale of personal data on the darknet takes place through special forums or marketplaces. The transaction is carried out directly or through a guarantor, a person who verifies that the information provided by the seller corresponds to the buyer's request [1][2][3][4][5][6][7].
On the darknet, you can find various kinds of personal information, from a simple email address to a complete data package on a specific person.
The stolen information is used to make fake passports and other fake identity cards: the stolen data is superimposed on photos taken in an editor. The sale of the Fullz database ($100 for data on one person) contributes to the development of SIM-card fraud, in which an attacker uses stolen data to convince a mobile operator that he is a real customer who has lost his phone and wants a new SIM card. After activating the card, the fraudster gains control of the victim's mobile number and uses it to change passwords and gain access to the bank account.
In addition, the stolen data can be used with the help of free software to try to log in to sites that require only a login and password [5].
Personal data refers to any personal information about a person that could help identify him or locate his property or place of residence. The Russian market for the sale of personal data in the shadow sector of the Internet, the so-called darknet (access to its sites is available only using Tor or similar software), is not well researched. Much more extensive statistics are collected in the US market, where information is consolidated with the help of specialized agencies. The first goal of buying such data is to use it as a potential customer base. But 'cold' calls from unknown numbers, especially with an irrelevant commercial offer, can only damage business. An actual example from the world of auto insurance: many car owners have faced calls from insurance agents who offered 'the best conditions' for a car that was sold long ago. Naturally, such contacts with a potential client can only cause irritation (and sometimes a complaint to Roskomnadzor, but more on that below). The use of any data whose origin you cannot explain to the client damages the reputation of the company [1][2][3][4][5][6][7][8][9][10].
The second purpose of the purchase is that the company wants to know more about the customer. It needs information about whether he travels abroad, whether he has a car, and so on. But buying data on the black market cannot answer such questions. If indirect signs of the client's well-being and lifestyle are important to the company, then the relevance of the information matters: did he lose his job three months ago, can he travel abroad right now, or, even better, has he already bought a plane ticket (for example, to sell him insurance)? This requires not dubious sources but access to genuinely up-to-date, and better yet reference, databases.
The third goal is the fight against fraud: checking customer information (for example, passport data). Only reference sources really work here. The company needs reliability guarantees, and no one gives them on the darknet.
As a general rule, the processing of personal data and the transfer of this obligation to another person are allowed only with the consent of the data subject. At the same time, consent to data processing must be specific, informed and conscious, and it can be withdrawn at any time. Without it, operators and other persons who have gained access to personal information are not entitled to disclose or distribute the data, unless otherwise provided by federal law [11][12][13][14][15].
There are cases when documents containing personal data, including copies of contracts, passports and questionnaires, were thrown into the trash. The reasons for such behavior may be the negligence of employees, the lack of an established culture of confidentiality and of employers' control over compliance with it, and sometimes the lack of proper conditions for storing documentation. An equally significant factor is that there is currently no total control over compliance with the requirements by the regulator, Roskomnadzor.
Sometimes information becomes available to outsiders due to carelessness: when data is sent by e-mail via unsecured communication channels (without the use of encryption tools) or via messengers. There are cases of 'innocent' data dissemination through selfies. Information is also disclosed by copying it to flash drives or by removing incompletely destroyed documents from the organization's building.
Sometimes personal information is disclosed by the operator's employees intentionally, for selfish reasons. As a result, credit card data, passports and customer profiles may fall into the hands of fraudsters.
The problem of criminal data 'breakouts' (paid lookups of citizens' data) is relevant today for the whole of Russia, although most often it is employees of telecom operators, bank managers and civil servants in the regions who engage in them, Andrey Arsentiev, head of analytics and special projects at InfoWatch Group, told Forbes. This is due both to a lower level of security of information resources and to low salaries, Arsentiev argues. His words are confirmed by data from the GAS 'Justice' system: according to it, 43 cases were considered in 2020 under Article 138, Part 2 of the Criminal Code, violation of the secrecy of correspondence, telephone conversations, postal, telegraphic or other communications of citizens. There are only five such cases so far this year [1,8,13].
In May 2020, M.V. Burlak, a specialist at a Vimpelcom office in Kazan, received a fine of 120,000 rubles for looking up and sending a friend the call records of the friend's ex-wife, court materials show. Burlak did it for free, and also involved a colleague, whose name is not disclosed, in the lookup. Burlak forwarded the information about the calls to the customer via the WhatsApp messenger. When the victim realized that her ex-husband was aware of the details of her mobile phone communications, she wrote a complaint to Vimpelcom. Later, the operator's security service and the FSB contacted her, according to the case materials. Court cases against the victim's ex-husband and another Vimpelcom employee involved in the case are still underway [13][14][15][16][17].
Another similar case occurred in Yakutia, where in June 2020 a salesperson at the office of one of the mobile operators, V.O. Patrangel, was punished with 200 hours of community service for leaking a subscriber's calls over one month. In July 2019, an Internet user wrote to Patrangel asking him to send a list of outgoing and incoming calls for a certain number for June of the same year. Patrangel entered the SSVO call-detail viewer using his service username and password, entered the phone number and obtained the information about the calls. After that, he copied the data to Excel, uploaded it to his mobile phone and sent it to the customer. How much money Patrangel received for this 'work' is not indicated in the court documents; the FSB also investigated the case. The identity of the customer could not be established.
"Most employees don't think about the fact that their messengers are controlled."For violation of the rules of personal data processing, a whole range of types of administrative responsibility has been established.At the same time, if several violations are revealed during the audit, they will be held accountable for each of them, including separately for each "episode".Also, both the organization and the guilty individualan employee of the organization -can be held responsible for the violation at the same time (Part 3 of Article 2.1 of the Administrative Code of the Russian Federation) [1][2][3][4][5][6][7][8][9][10][11][12][13][14].
The main type of administrative punishment for violation of the legislation on personal data is a fine, the amount of which depends on the specific violation. The maximum possible is 75,000 rubles. It is provided for an organization that processes personal data without a citizen's written consent, when such consent is required, or whose consent form lacks the necessary information (Part 2 of Article 13.11 of the Administrative Code of the Russian Federation).
If a leak of non-essential personal data is found, an individual faces a fine of up to 50,000 rubles, and a legal entity a fine of up to 6 million rubles, says Alexey Gavrishev, managing partner of the AVG Legal law firm.
The penalty for violating the secrecy of telephone conversations and correspondence can be up to four years in prison, Gavrishev noted. In addition, the information that passes through a telecom operator is most often a trade secret, and its disclosure is punishable by imprisonment for up to five years. Below are the types of administrative responsibility relevant to the situation of data purchased on the black market (Figure 2). For violation of the rules of processing personal data: • processing in cases not provided for by law (Part 1 of Article 13.11 of the Administrative Code): a fine for organizations of up to 50,000 RUB; • processing for purposes incompatible with the purposes of collection (Part 1 of Article 13.11 of the Administrative Code): a fine for organizations of up to 50,000 RUB; • processing without consent when it is required, or with consent but with incomplete information (Part 2 of Article 13.11 of the Administrative Code): a fine for an organization of up to 75,000 RUB.
Failure to comply with personal data protection requirements: • failure to publish the necessary documents on your policy regarding the processing of personal data and on the data protection requirements you implement, or to otherwise provide unrestricted access to them (Part 3 of Article 13.11 of the Administrative Code): a fine for the organization of up to 30,000 RUB; • failure to ensure the safety of data during manual processing, if this entails unlawful or accidental access, destruction, modification, blocking, copying, provision, dissemination or other misconduct (Part 6 of Article 13.11 of the Administrative Code): a fine for the organization of up to 50,000 RUB. Separately, we will point out the prospect of criminal liability. Although there is no special rule in the Criminal Code of the Russian Federation on liability for violating the legislation on personal data, the actions of a person who violated the rules for working with personal data may constitute another crime, in particular: • the illegal gathering and dissemination of information about the private life of an individual constituting his personal and family secrets, without his consent (Part 1 of Article 137 of the Criminal Code); • illegal access to computer information resulting in its destruction, blocking, modification or copying (Part 1 of Article 272 of the Criminal Code).
Of course, only an individual can be brought to criminal responsibility (Article 19 of the Criminal Code of the Russian Federation). However, bringing a guilty individual to criminal responsibility does not exempt the organization from administrative responsibility (Part 3 of Article 2.1 of the Administrative Code of the Russian Federation) [8][9][10][11][12][13][14][15][16].
The sale or transfer of clients' personal data is primarily a reputational loss for companies, says Konstantin Ankilov, CEO of TMT Consulting. But operators also spend money to solve this problem: they have to maintain an enlarged security staff dealing with leaks, and to spend money on legal costs when such cases reach court, the expert says. To reduce the number of leaks, the system of employee access to subscriber data must be rebuilt, according to Telecom Daily CEO Denis Kuskov. According to him, solutions should be implemented so that no one can obtain information about customer calls without, for example, presenting special codes showing that obtaining the data is justified. If this is not done, the mobile data 'breakout' market will exist for many more years, Kuskov laments. Now, to prevent leaks, companies are implementing systems that use machine learning to identify the most likely leak channels and the employees who could potentially become insiders, says Alexander Chernykhov, a leading information security expert at Krok. Digital marking systems for documents and interfaces are being developed to investigate leaks, which also helps in the search for insiders, he noted [7][8][9][10][11][12][13][14][15][16][17].
A representative of Sberbank told Forbes that in 2020 the company did not find a single case of personal data leakage by bank employees. This was made possible thanks to a 'new architecture of processes to counter leaks,' Sberbank noted, without disclosing the details of its work on this problem. MTS has architectural differentiation of access to information, and a full range of technical protections against the dissemination of personal data has also been developed: software and technical means of access control and constant monitoring, the operator's representative said. He added that the company also tells employees that outsiders must not be allowed access to protected information, and that criminal liability threatens those who permit it.
The black market for data has really scaled up, partly due to the transition of some sellers and buyers to messengers, and this trend will continue due to the general economic crisis, Dmitry Budorin believes: "Very soon, people who have lost their jobs will go into small-scale hacking. The forums that sold leaked databases have already become known to a wide range of people. The more enterprising have begun to organize sales channels through messengers in order to earn not on the hacks themselves but on the resale of information." It is not entirely correct to talk about a 'boom' of leaks, Knysh believes. In his opinion, leaked databases have been talked about more often recently primarily because there is more information itself.
Today we provide our data to various services and structures, whether for a loan or a Netflix subscription, so it is quite difficult to talk about full protection of our privacy.
"We have already passed privacy, we have missed itwe are all in the digital world.Pandora's box is already open," Sergey Solonin, head of the Qiwi payment service, said last year.
Nevertheless, some steps can be taken to secure your data:
* Do not link your social media accounts and email to the main phone number that you use as a contact in various services. Get a separate SIM card for these purposes, whose number is known only to you, and do not insert this SIM card into your main phone.
* Use separate email accounts for work and for personal purposes. Delete emails containing confidential information (passwords, passport details, phone numbers).
* Use two-factor authentication, but not via SMS; use services like Google Authenticator.
* Set long passwords that mix character types (numbers, capital and small letters). Do not use the same password for multiple accounts.
* Periodically check whether your email or passwords have turned up in known breaches, using services like Have I Been Pwned or HackenAI (a minimal code sketch follows this list).
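For that last step, here is a minimal Python sketch (standard library only) using Have I Been Pwned's public Pwned Passwords range API. Two caveats: this free endpoint checks passwords rather than email addresses (breach lookups for an email require an HIBP API key), and it works by k-anonymity, so only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches,
    via the k-anonymity range API: only the first 5 hex characters
    of the SHA-1 hash are sent to the server."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "leak-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line looks like "HASH_SUFFIX:COUNT"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A notoriously common password; expect a very large count.
    print(pwned_count("password123"))
```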
Conclusion
Thus, the following conclusions can be drawn. Firstly, not only have offers on the black market not decreased, on the contrary, their number has visibly increased. Perhaps the number of resellers of the same data has grown, but there is definitely no shortage of offers.
Secondly, prices for almost everything have risen. The rise in prices for the so-called bank "breakout" is especially noticeable. It can be assumed that banks are actively (though some of them so far unsuccessfully) trying to combat this phenomenon, which drives prices up.
Thirdly, judging by the number of offers, the low prices and the range of "services", the security of user data at some mobile operators is in very poor shape. It is in this segment that the choice of sellers and data is widest (from all kinds of statements to constant tracking of a subscriber's geolocation). Only government agencies can "compete" with these operators: prices there are not high either, and the choice is rich.
In addition, prices are affected by the current level of risk for data providers, who are pursued by law enforcement officers and by the security services of banks and telecom operators. If, for example, a special operation against "breakouts" or mass data leaks has recently been carried out, prices rise.
Fig. 1. Information leakage related to fraudulent activities, January to September 2020.
Fig. 2. Administrative penalties for violation of personal data processing rules.
- fulfillment of obligations when interacting with Roskomnadzor; sanctions follow if the operator:
* does not provide the information requested in accordance with Part 3 of Article 23 of the Law on Personal Data (Article 19.7 of the Administrative Code of the Russian Federation);
* does not comply on time with a lawful order of Roskomnadzor to eliminate violations (Part 1 of Article 19.5 of the Administrative Code of the Russian Federation);
* hinders or evades an inspection (Part 1 of Article 19.4.1 of the Administrative Code of the Russian Federation);
* does not comply with Roskomnadzor's requirement to clarify, block or destroy personal data that are incomplete, inaccurate, outdated, illegally obtained or unnecessary for the processing purposes (Part 5 of Article 13.11 of the Administrative Code of the Russian Federation) [4-9] (Figure 3).
Fig. 3. Criminal penalties for violation of personal data processing rules.
But the American data helps to sketch a picture of what is happening on the Russian market, taking into account its significantly smaller scale.
|
v3-fos-license
|
2012-12-18T22:00:40.000Z
|
2012-12-18T00:00:00.000
|
119294976
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2013/02/epjconf_up2012_02007.pdf",
"pdf_hash": "23dc89f6eab273ce82b2a5688a337815c0e0c6d6",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43276",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "23dc89f6eab273ce82b2a5688a337815c0e0c6d6",
"year": 2012
}
|
pes2o/s2orc
|
Electron acceleration in vacuum by ultrashort and tightly focused radially polarized laser pulses
Exact closed-form solutions to Maxwell's equations are used to investigate electron acceleration driven by radially polarized laser beams in the nonparaxial and ultrashort pulse regime. Besides allowing for higher energy gains, such beams could generate synchronized counterpropagating electron bunches.
Introduction
The advent of ultra-intense laser facilities has led to exciting possibilities in the development of a new generation of compact laser-driven electron accelerators. Among the proposed laser acceleration schemes, the use of ultra-intense radially polarized laser beams in vacuum (termed direct acceleration) is very promising, as it takes advantage of the strong longitudinal electric field at beam center to accelerate electrons along the optical axis [1]. Numerical simulations have shown that collimated attosecond electron pulses could be produced by this acceleration scheme [2,3].
Recent studies on direct acceleration have shown that reducing the pulse duration and beam waist size generally increases the maximum energy gain available [4,5]. However, these analyses were carried out under the paraxial and slowly varying envelope approximations. These approximations lose their validity as the beam waist size becomes comparable to the laser wavelength and the pulse duration approaches the single-cycle limit, conditions that are now often encountered in experiments. We propose a simple method to investigate direct acceleration in the nonparaxial and ultrashort pulse regime, and show that it offers the possibility of higher energy gains. We also highlight a peculiar feature of the acceleration dynamics under nonparaxial focusing conditions, namely the coexistence of forward and backward acceleration. This could offer a solution to the production of the synchronized electron pulses required in some pump-probe experiments.
Exact solution for a nonparaxial and ultrashort TM01 pulsed beam
Ultrashort and tightly focused pulsed beams must be modeled as exact solutions to Maxwell's equations. A simple and complete strategy to obtain exact closed-form solutions for the electromagnetic fields of such beams was recently presented by April [6]. For a TM01 pulse, which corresponds to the lowest-order radially polarized laser beam, the field components are described by Eqs. (1)-(3) [6,7]. Here E0 is an amplitude parameter, and the temporal dependence is the inverse Fourier transform of the Poisson-like frequency spectrum of the pulse, in which ω0 = ck0 is the frequency of maximum amplitude and φ0 is a constant phase [8]. The parameter a, called the confocal parameter, is monotonically related to the beam waist size and characterizes the beam's degree of paraxiality: k0a ∼ 1 for tight focusing conditions, while k0a ≫ 1 for paraxial beams. The pulse duration T, which may be defined as twice the root-mean-square width of |Ez|², increases monotonically with s. In the limits k0a ≫ 1 and s ≫ 1, Eqs. (1)-(3) reduce to the familiar paraxial TM01 Gaussian pulse [9].
The TM 01 pulsed beam described above may be produced by focusing a collimated radially polarized input beam with a high aperture parabolic mirror. Its field distribution consists of two counterpropagating pulse components, as shown in Fig. 1 [10].
On-axis acceleration in the nonparaxial and ultrashort pulse regime
Direct acceleration is simulated by integrating the conventional Lorentz force equation for an electron initially at rest at position z0 on the optical axis and outside the laser pulse. Since Er and Hφ vanish at r = 0, the particle is accelerated by Ez along the optical axis. Figure 2 illustrates the variation of the maximum energy gain available ΔWmax (after optimizing for z0 and φ0) with the laser peak power Ppeak for different combinations of k0a and s. Figure 2a, in which ΔWmax is expressed as a fraction of the theoretical energy gain limit ΔWlim [9], shows that for constant values of s, the threshold power above which significant acceleration occurs is greatly reduced as k0a decreases, i.e., as the focus is made tighter. According to Fig. 2b, MeV energy gains may be reached under tight focusing conditions with laser peak powers as low as 15 gigawatts. In contrast, a peak power about 10³ times greater is required to reach the same energy with paraxial pulses. At high peak power, Fig. 2a shows that shorter pulses yield a more efficient acceleration, with the ratio ΔWmax/ΔWlim reaching 80% for single-cycle (s = 1) pulses. Additional details about these results can be found in [7]. In the highly nonparaxial regime (k0a ∼ 1), a closer look at the dynamics in the (z0, φ0) parameter space reveals the existence of two different types of acceleration (see Fig. 3a). In the first type, the electron is accelerated in the positive z direction (forward acceleration), and may reach a high energy gain if its motion is synchronized with a negative half-cycle of the forward-propagating component of the beam. In the second type, the electron is accelerated in the negative z direction (backward acceleration), and may similarly experience subcycle acceleration from the backward-propagating component of the beam. The maximum energy gain available from forward and backward acceleration is illustrated in Figs. 3b-c. Significant backward acceleration is only observed under tight focusing conditions (k0a < 10), since the amplitude of the backward-propagating component of the laser beam decreases rapidly as k0a increases.
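The integration itself is straightforward to reproduce. The sketch below (Python with SciPy) integrates the relativistic on-axis equation of motion dpz/dt = qEz(z, t); to keep it self-contained it uses a hypothetical Gaussian-envelope model for Ez rather than the exact TM01 fields of [6,7], and all field parameters are invented for illustration, so the numbers it produces are not those of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Physical constants (SI units)
c = 2.99792458e8         # speed of light [m/s]
m_e = 9.1093837015e-31   # electron mass [kg]
q_e = -1.602176634e-19   # electron charge [C]
MeV = 1.602176634e-13    # 1 MeV in joules

# Hypothetical on-axis longitudinal field: Gaussian envelope times carrier.
# This is a stand-in model, NOT the exact TM01 solution of April [6].
E0 = 1.0e12              # peak field [V/m] (assumed)
lam = 0.8e-6             # wavelength [m] (assumed)
omega0 = 2 * np.pi * c / lam
tau = 5.0e-15            # envelope duration parameter [s]
phi0 = 0.0               # carrier phase; optimized in the actual study

def Ez(z, t):
    u = t - z / c        # forward-propagating phase variable
    return E0 * np.exp(-(u / tau) ** 2) * np.cos(omega0 * u + phi0)

def rhs(t, y):
    """y = (z, p_z); relativistic motion along the optical axis."""
    z, pz = y
    gamma = np.sqrt(1.0 + (pz / (m_e * c)) ** 2)
    return [pz / (gamma * m_e), q_e * Ez(z, t)]

# Electron initially at rest at z0, outside the pulse
z0 = 0.0
sol = solve_ivp(rhs, (-20 * tau, 200 * tau), [z0, 0.0],
                max_step=tau / 50, rtol=1e-9, atol=1e-12)

pz_end = sol.y[1, -1]
gamma_end = np.sqrt(1.0 + (pz_end / (m_e * c)) ** 2)
print(f"energy gain: {(gamma_end - 1) * m_e * c**2 / MeV:.3f} MeV")
```

In the actual calculation one would substitute the exact field expressions and scan z0 and φ0 to obtain ΔWmax, as described above.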
|
v3-fos-license
|
2022-07-09T06:17:24.997Z
|
2022-07-07T00:00:00.000
|
250358458
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "26ca2c6e6bd300e22764fade5c23bab9c824dd65",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43277",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "8286d4aaafccf17735050b844b48eaa904542d46",
"year": 2022
}
|
pes2o/s2orc
|
Phyllanthus amarus shoot cultures as a source of biologically active lignans: the influence of selected plant growth regulators
This is the first comprehensive study of the influence of plant growth regulators (PGRs) on the development of shoots and the accumulation of the biologically active lignans phyllanthin and hypophyllanthin in the shoot culture of P. amarus Schum. & Thonn. (Euphorbiaceae) obtained by direct organogenesis. The following PGRs were included in the experiments: the cytokinins kinetin (Kin), 6-benzylaminopurine (BAP), 2-isopentenyladenine (2iP) and 1-phenyl-3-(1,2,3-thiadiazol-5-yl)urea (thidiazuron, TDZ), and the auxin indole-3-butyric acid (IBA), each used at various concentrations. Depending on the PGRs and their concentrations, differences in culture response and lignan accumulation were observed. The highest content of the investigated compounds was found in the shoot culture grown on Murashige and Skoog's (MS) medium supplemented with Kin 0.25 mg/L. The sum of phyllanthin and hypophyllanthin was ~ 10 mg/g of dry weight (DW), which was similar to or even higher than that in plant material obtained from natural conditions. The results of the research provide new data on the selection of the optimal growth medium for the production of plant material with a significant level of phyllanthin and hypophyllanthin biosynthesis. The obtained data may also be valuable in designing systems for large-scale cultivation of P. amarus shoots with high productivity of hepatoprotective lignans.
Materials and methods
All the methods were performed in accordance with relevant guidelines and regulations.
Culture media and growth conditions. All aspects of preparation of culture media and growth conditions were described previously 14,20,40 .
Analysis of lignans. Preparation of extracts.
Dried and pulverized plant material (0.5 g) was extracted with methanol in an ultrasonic bath (3 × 50 mL, 3 × 30 min) at 50 °C. The methanol extract was evaporated under reduced pressure to a dry residue, which was then re-dissolved in methanol (10 mL).
Analyses were performed on a Chromolith Performance RP-18E column (100 × 4.6 mm) (Merck, Darmstadt, Germany) at 25 °C. Mobile phase A was acetonitrile and mobile phase B was water. The following linear gradient elution was used: 0-20 min, from 40 to 50% mobile phase A. The sample injection volume was 10 μL and the flow rate was 1 mL/min. Chromatograms were recorded at λ = 280 nm. The HPLC method was validated in terms of selectivity, linearity, precision, repeatability, intra- and inter-day precision, LOD, LOQ, and recovery according to the method described earlier 42 . Phyllanthin and hypophyllanthin standard stock solutions (1 mg/mL) were prepared in methanol. For quantitative analysis, the stock solutions were diluted to five working solutions with concentrations from 250 to 5 µg/mL (250, 125, 75, 25, and 5 µg/mL). Standard solutions of phyllanthin and hypophyllanthin at a concentration of 125 µg/mL were used to establish repeatability and intra- and inter-day precision.
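To make the calibration and validation step concrete, here is a minimal Python sketch of fitting a five-point calibration line and deriving ICH-style LOD and LOQ values. The peak areas are invented for illustration (the paper does not report them), and the exact statistical treatment in the validated method 42 may differ.

```python
import numpy as np

# Working-solution concentrations (µg/mL), as in the text
conc = np.array([5.0, 25.0, 75.0, 125.0, 250.0])
# Hypothetical peak areas (arbitrary units) -- illustrative values only
area = np.array([12.1, 60.4, 181.0, 302.5, 604.9])

# Linearity: least-squares fit, area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
residual_sd = (area - pred).std(ddof=2)   # residual standard deviation
r2 = 1 - ((area - pred) ** 2).sum() / ((area - area.mean()) ** 2).sum()

# ICH-style limits of detection and quantification
lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope

# Quantify an unknown sample from its peak area
unknown_area = 150.0
unknown_conc = (unknown_area - intercept) / slope
print(f"R^2={r2:.4f}  LOD={lod:.2f} µg/mL  LOQ={loq:.2f} µg/mL  "
      f"sample={unknown_conc:.1f} µg/mL")
```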
The identification of phyllanthin and hypophyllanthin was carried out by comparing their retention times, UV spectra and molecular-ion m/z values with those obtained for the standard compounds.
Analysis of securinega-type alkaloids, flavan-3-ol derivatives and β-sitosterol. Analyses of alkaloids, flavan-3-ol derivatives and β-sitosterol were performed according to previously established methods 42-44 . Total tannins evaluation. Evaluation was performed according to the method described in the European Pharmacopoeia. Statistical analysis. Statistical analysis was performed as described earlier 14 .
The results of the quantitative analysis are the mean of 3 trials ± SD. The results of the PGR influence on the development of P. amarus shoot culture are the means of ≥ 50 trials ± SD.
Results
Phytochemical analysis. The following shoot cultures of leafflower species were included in the screening for lignans: P. amarus, P. multiflorus, P. glaucus, P. juglandifolius and P. grandifolius. P. amarus was the only species that accumulated the analyzed compounds.
The separation of the lignan complex from P. amarus shoots was carried out using the HPLC-DAD-ESI-MS method, and the following compounds were identified: phyllanthin, hypophyllanthin, niranthin and nirtetralin. Two lignans were identified by comparison with standard compounds, namely phyllanthin (tR 12.2 min) and hypophyllanthin (tR 12.8 min) (Fig. 1, Table 1). Peaks eluting at tR 14.9 min and tR 16.0 min were assigned to nirtetralin and niranthin based on data from their ESI-MS spectra compared with literature data 18,29 (Fig. 1, Table 1).
The influence of PGRs on the shoot culture of P. amarus. The following PGRs were used to study the effects of single cytokinins on shoot culture development of P. amarus: 2iP, BAP, and Kin (0.25-2.0 mg/L) and TDZ (0.05-0.5 mg/L). Shoots grown in MS0 medium were used as control. They reached an average length of 8 cm and showed no proliferation or callus induction. 100% of the explants rooted spontaneously (Table 4, Fig. 2).
The cytokinins BAP, Kin, and 2iP affected shoot proliferation. The highest number of shoots/explant (1.98-2.86) was obtained with 0.5-2.0 mg/L Kin. The other cytokinins stimulated 1.26 to 1.76 shoots/explant (Table 4). There was a statistically significant reduction in shoot length compared with the control. The strongest effect was observed for TDZ, which at concentrations from 0.05 to 0.20 mg/L strongly (statistically significantly) inhibited the development of the P. amarus shoot culture, yielding shoots less than 2 cm in length; at the highest concentration, death of explants was observed (Table 4, Fig. 3).
Regardless of the type of cytokinin, simultaneous root development and callus formation were observed at the explant cut site. As the concentrations of 2iP, Kin and BAP increased, root length and root number decreased. BAP (0.25-2.0 mg/L) showed a stronger inhibitory effect on rhizogenesis (root length 1.25 to 0.18 cm) compared to 2iP (3.44-1.97 cm) and Kin (3.32-2.78 cm) (Table 4). Using TDZ (0.05-0.5 mg/L), complete inhibition of rhizogenesis was observed (Table 4). The use of 2iP (0.25-2.0 mg/L) in combination with BAP or Kin (1 mg/L) resulted in 1-2 shoots/explant. The inhibitory effect on rhizogenesis and callus formation was stronger for 2iP in combination with BAP (Table 5). On media supplemented with 2iP (0.25-2.0 mg/L) in combination with TDZ, 2-3 shoots were obtained, but they were very short (≤ 0.51 cm) (Table 5) and also strongly thickened, making separation and passage difficult. The shoots were stunted, produced abundant callus, and had highly altered morphology compared to the control.
In studies of auxin effects, IBA (0.25-2.0 mg/L) was tested as a single PGR and in combination with 2iP. IBA (0.25-2.0 mg/L) had no effect on shoot proliferation. At concentrations from 0.25 to 0.5 mg/L it did not affect shoot length, while at concentrations of 1.0-2.0 mg/L, or in combination with 2iP (1.0 mg/L), shorter shoots were obtained (a statistically significant difference compared with the control) (Table 6).

Table 4. Effects of single cytokinins on the growth and proliferation of P. amarus shoot culture. The results are the arithmetic means of ≥ 50 trials ± SD. The values in each column marked with different letters (a, b, c…) indicate statistically significant differences (p < 0.05; Tukey's RIR test).

As IBA concentration increased, root length and the number of roots/explant decreased. A similar explant response was observed using IBA (1.0 mg/L) in combination with 2iP, but the statistical differences between the values obtained for the above parameters were weakly marked (Table 6). On media supplemented with IBA (0.5-2.0 mg/L), alone and in combination with 2iP, the rooting response ranged from 98 to 100% (Table 6).
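As an aside for readers reproducing this kind of comparison: the letter groupings in the tables come from pairwise post hoc testing (Tukey's RIR test, p < 0.05). A minimal sketch of an analogous Tukey HSD comparison in Python with statsmodels, on invented shoot-length data (the paper's raw measurements are not given), looks like this:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented shoot-length measurements (cm) for three treatments,
# 50 explants each -- illustrative only, not the paper's data.
rng = np.random.default_rng(0)
lengths = np.concatenate([
    rng.normal(8.0, 1.0, 50),   # MS0 control
    rng.normal(6.5, 1.0, 50),   # Kin 0.5 mg/L
    rng.normal(1.8, 0.5, 50),   # TDZ 0.1 mg/L
])
groups = np.repeat(["MS0", "Kin", "TDZ"], 50)

# Pairwise comparison of treatment means at alpha = 0.05;
# treatments whose intervals do not overlap would receive
# different letters in a table like Table 4.
print(pairwise_tukeyhsd(endog=lengths, groups=groups, alpha=0.05))
```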
The influence of PGRs on the biosynthesis of lignans in the shoot culture of P. amarus. The study showed a statistically significant effect of PGRs on the accumulation of phyllanthin and hypophyllanthin, compared to the control, which contained 2.87 mg/g DW and 2.24 mg/g DW, respectively (Fig. 4, Supplementary Tables 1, 2, 3).
The highest increase was shown for Kin (0.25-2.0 mg/L); depending on its concentration it was 5.16-6.64 mg/g DW for phyllanthin and 2.97-3.89 mg/g DW for hypophyllanthin (2.3-fold and 1.7-fold increases compared to the control) (Fig. 4). On medium supplemented with 2iP (0.25-2.0 mg/L), the concentrations of phyllanthin and hypophyllanthin in the biomasses were 3.33-4.55 mg/g DW and 2.17-3.50 mg/g DW, respectively (Fig. 4). A decrease in lignan concentration with increasing cytokinin concentration was observed (Supplementary Table 1). The phyllanthin content in biomasses grown on media with 0.25-0.5 mg/L BAP was similar to the control sample (~ 2.83 mg/g DW) (Fig. 4, Supplementary Table 1). Higher BAP concentrations (1.0-2.0 mg/L) resulted in a statistically significant decrease in phyllanthin accumulation (2.21-2.33 mg/g DW). Hypophyllanthin content was statistically lower than in the control (Fig. 4, Supplementary Table 1).
The addition of TDZ (0.05-0.5 mg/L) to the medium caused a statistically significant, more than tenfold decrease in phyllanthin and hypophyllanthin content compared to the control, depending on the concentration of TDZ in the medium (Fig. 4, Supplementary Table 1).
There were variable effects of medium supplementation with 2iP (0.25-2.0 mg/L) in combination with other PGRs on the level of lignan biosynthesis. 2iP in combination with Kin (1.0 mg/L) reduced the accumulation of phyllanthin and hypophyllanthin compared with supplementation with a single PGR (Fig. 4, Supplementary Tables 1, 2). There were no significant differences in lignan content in biomasses obtained on media supplemented with BAP alone or together with 2iP. However, application of 2iP (0.25-2.0 mg/L) with TDZ increased the concentration of lignans compared to supplementation with TDZ alone (Fig. 4, Supplementary Tables 1, 2). Supplementation with 0.25-0.5 mg/L IBA resulted in a statistically significant increase in phyllanthin concentration (4.34-4.58 mg/g DW), which decreased at IBA 1.0-2.0 mg/L to that of the control. The concentration of hypophyllanthin (2.72-3.88 mg/g DW) was higher compared to the control over the full range of IBA concentrations. In biomasses grown on media supplemented with combinations of IBA (1.0 mg/L) and 2iP (0.25-2.0 mg/L), the content of phyllanthin and hypophyllanthin was lower compared to supplementation with single PGRs (Fig. 4, Supplementary Tables 1, 3).
Discussion
Due to the growing demand for plant raw material from P. amarus and its biologically active compounds, it seems necessary to develop alternative methods of propagating the species. The cultivation of P. amarus in natural conditions requires appropriate humidity and soil composition. The level of lignan biosynthesis in P. amarus is also influenced by many environmental factors, such as average rainfall and temperature, duration of snow cover, soil type, radiation intensity and the length of the growing season, related to geographical altitude 25,26 . In vitro conditions partially eliminate these limitations and variable factors and can provide high-quality plant material that will serve as a source of plantlets for later growth ex vitro, or will allow biomass with a high content of particular compounds to be obtained.
There are relatively many studies on the multiplication of P. amarus in vitro, but individual studies use different concentrations of particular PGRs, different explants and different culture conditions; hence the fragmentary data obtained by various authors are difficult to compare 30,[34][35][36]38,39,46 . Individual authors have observed very different effects of cytokinins on shoot/explant number and % explant response, even when the same concentrations were used. The proliferation rate for BAP and Kin used separately at a concentration of 0.5 mg/L ranged from 1 to 18 new shoots/explant and 3.5-15 shoots/explant, respectively 34,35,38,39 , which may be due to the different origin of the explants. Ghanti et al. observed that the largest number of P. amarus shoots was regenerated on medium supplemented with BAP 0.5 mg/L from shoot tip explants (18.3), compared to internodal (12.6) and nodal (6.7) explants 39 . However, it should be emphasized that even using the same PGRs at the same concentration and the same type of explants, different results can be obtained [37][38][39] . Interpretation of individual results obtained in assessing the effect of cytokinins in combination with auxin is also difficult 34,46 . Hence, in order to make a proper comparison of the influence of individual PGRs on the development of P. amarus shoot cultures, it is necessary to observe their effects in experiments conducted simultaneously. On the basis of the results obtained in our research, cytokinins can be classified according to the influence they exert on the morphology of P. amarus. 2iP had little effect on shoot morphology; especially at lower concentrations (0.25-1.0 mg/L), shoots were very similar to the control (MS0) (Figs. 2, 3). A similar effect was observed with kinetin supplementation, which gave slightly shorter shoots compared to 2iP. BAP, with increasing concentration, stimulated stunting and deformation of shoots, while TDZ clearly inhibited the growth of the P. amarus shoot culture (Table 4, Figs. 2, 3).
Among the cytokinins, the effect of 2iP on leafflower shoot cultures is relatively poorly understood. P. urinaria and P. caroliniensis cultivated on medium with 2iP at a concentration of 5 µM gave about 17 shoots/explant and 4-5 shoots/explant, respectively 47,48 . On the other hand, studies of P. stipulatus did not show any effect of 2iP on propagation 49 . The influence of 2iP on P. amarus in vitro cultures has not been analyzed so far. Our research showed that 2iP, used alone or in combination with other PGRs (TDZ, BAP, Kin, IBA), has little effect on the rate of proliferation (Table 4). However, the other cytokinins used in the experiment, including BAP and Kin, which are mentioned as the most effective growth regulators in the propagation of this species, also had a rather weak influence on P. amarus shoot proliferation (Table 4), which did not confirm some of the literature data 34,37 . Among the tested cytokinins, 2iP slightly inhibited rhizogenesis and only slightly induced callus formation at the base of P. amarus shoots (Table 4). A reduction in leaf fall was also observed at the end of the culture cycle. Leaf fall of varying intensity was observed for all PGRs used in these experiments.
In studies carried out earlier for P. glaucus, 2iP likewise did not promote shoot proliferation and caused a reduction of shoot length; however, unlike in P. amarus, complete inhibition of rhizogenesis was observed under the influence of this PGR. In terms of accumulation of secondary metabolites, 2iP had a negative influence on the concentration of the indolizidine alkaloids securinine and allosecurinine present in the P. glaucus shoot culture 14 . The results obtained for P. amarus and the previously published data on P. glaucus shoot cultures 14 showed significant interspecies diversification in response to the action of individual PGRs, indicating the need for individual selection of conditions for each leafflower species, depending on the planned purpose of the experiments (biomass multiplication or accumulation of secondary metabolites).
Lignans are a structurally diverse group of plant secondary metabolites that are widespread in the kingdom of higher plants. These compounds have dimeric structures formed by a β,β′-linkage between two phenylpropane units with different degrees of oxidation in the side chain and different substitution on the aromatic moieties 50 . They possess many valuable types of pharmacological activity, making them an important source of novel drug candidates and/or lead structural scaffolds in medicinal chemistry. Biologically active lignans are common in, e.g., Linum, Schisandra, Sesamum or Podophyllum species [50][51][52] .
One of the richest dietary sources of lignans is flax seed. Flax seeds contain, inter alia, secoisolariciresinol diglucoside and its aglycone secoisolariciresinol, which are metabolized to the mammalian lignans enterodiol and enterolactone by enzymes of the intestinal microflora. These compounds are functionally similar to estrogens and contribute to a number of human health benefits: they reduce the risk of breast and prostate cancer and improve hyperglycemia 51,53 .
The importance of lignans in the biological activity of Phyllanthus species has been the subject of numerous studies, which mainly concern their antiviral activity, their protective effect on liver cells and their activity in diseases of the urinary system [54][55][56][57][58] . Among the most abundant lignans in P. amarus, phyllanthin and hypophyllanthin are distinguished (Supplementary Fig. 1), as well as niranthin, nirtetralin, and phyltetralin 17,18 . Phyllanthin dominates in the lignan complex. In plant material obtained from natural conditions its content is variable and usually ranges from 3 to 7 mg/g DW 18,25,59,60 . The richest source of lignans are the leaves of P. amarus, and it has been shown that for plants growing at sites higher above sea level, phyllanthin concentration can reach more than 11 mg/g DW. However, due to the small amounts in stems, the average concentration in the above-ground parts was significantly lower (2-3 mg/g DW) 25 . The content of hypophyllanthin in P. amarus is usually lower than that of phyllanthin, ranging from 1.8 to 3.2 mg/g DW 18,19,59,60 .
The lignans identified in the studied culture of P. amarus shoots belong to two different structural types and show different fragmentation patterns. Consistent with the literature data 18,61,62 , the ions listed in Table 1 were present at m/z 387 and 401 and at m/z 355 and 369, respectively.
The available data on the effect of PGRs on the content of lignans in P. amarus cultures in vitro mainly concern callus tissue or regenerated microshoots [28][29][30][31] . The studies conducted so far indicate that the callus of the Phyllanthus species is a poor source of lignans (lignan content in the range of µg/g DW) [28][29][30] .
The studies performed by Nitnaware et al. 30 showed that the highest concentration of lignans was found in callus cultures of P. amarus cultivated on MS medium supplemented with BAP, lower with TDZ, and the lowest with Kin, ranging from 4.6 to 42 µg/g DW. Lignan concentration was inversely proportional to cytokinin concentration 30 . The inverse relationship between lignan content and auxin (IAA, NAA, 2,4-D) concentration was less marked. The highest content of phyllanthin and hypophyllanthin (0.84 µg/g DW and 0.38 µg/g DW, respectively) was determined in biomass grown on MS medium supplemented with NAA 2.15 µM, and the lowest on MS with the addition of 2,4-D (~ 0.10 µg/g DW and 0.04 µg/g DW, respectively). Simultaneous supplementation with auxins and cytokinins also resulted in a very low concentration of lignans, similar to supplementation with cytokinins alone 30 25,59,60 . These results confirmed the thesis that for some secondary metabolites, morphological differentiation is necessary to obtain a higher yield 30,32 . So far, no comprehensive research has been carried out on the effect of single plant growth regulators on the level of lignan accumulation in Phyllanthus shoot cultures obtained by direct organogenesis, which is a desirable method of shoot multiplication that guarantees genetic stability and prevents or reduces the occurrence of somaclonal variation. In the presented studies, it was observed that for individual growth regulators, growth inhibition of the cultures was accompanied by a decrease in the concentration of lignans in the biomass. TDZ at concentrations of 0.05-0.1 mg/L caused a statistically significant, more than tenfold decrease in the level of phyllanthin and hypophyllanthin compared to the control and clearly inhibited culture growth, while at concentrations of 0.2-0.5 mg/L death of explants was observed. Similar results were observed in the study of the effect of PGRs on the biosynthesis of securinega-type alkaloids in P. glaucus shoot culture: TDZ inhibited shoot growth and decreased the content of alkaloids 14 .
The effects of growth regulators on cultured plant cells include their growth, metabolism and the process of differentiation. It is generally believed that PGRs do not react with intermediates of the biosynthetic pathways but appear to alter cytoplasmic conditions for product formation. Elevated levels of cytokinins in the medium affect cell differentiation, and the production of metabolites related to such differentiation is expressed or enhanced in culture. The effect of PGRs on the level of secondary metabolite biosynthesis (including lignans) is highly variable and difficult to predict. Even within the same species, different results are obtained when cultivating different types of biomass (e.g. shoot culture or callus culture) 63 .
The presented research showed that low Kin concentrations (0.25-0.5 mg/L) can be used to obtain a P. amarus shoot culture by direct organogenesis with a high content of the analyzed lignans (above 10 mg/g DW, twice as high as in the control sample) (Fig. 4, Supplementary Table 1). Moreover, the study showed that 2iP and IBA (0.25 mg/L), used separately, also have potential as PGRs that significantly increase the level of lignan accumulation compared to the control (total content of phyllanthin and hypophyllanthin ~ 8 mg/g DW) (Fig. 4, Supplementary Tables 1, 3). These concentrations are comparable to plant material originating from natural conditions 25,59,60 .
Conclusion
This is the first comprehensive study on the influence of PGRs on the development of shoots and the accumulation of the biologically active lignans phyllanthin and hypophyllanthin in the shoot culture of P. amarus obtained by direct organogenesis. The obtained data compare the effects of 5 selected plant growth regulators, the cytokinins Kin, BAP, 2iP and TDZ and the auxin IBA, used at different concentrations.
The studies showed that the accumulation of lignans depended on the type of PGRs and their concentration in the culture medium. On the basis of the obtained results, the cytokinins used can be divided, according to the influence they exert on the morphology of P. amarus, into those that have a positive effect (Kin, 2iP), those that slightly limit growth (BAP) and those that significantly limit the growth of the culture (TDZ). Growth inhibition was observed to be accompanied by a decrease in lignan biosynthesis, and a more than tenfold decrease in phyllanthin and hypophyllanthin was observed with TDZ supplementation compared to the control. The highest content of the tested compounds was found in shoot cultures grown on MS medium supplemented with Kin, 2iP or IBA (0.25 mg/L). The content of lignans as the sum of phyllanthin and hypophyllanthin was at the level of ~ 8-10 mg/g DW, which is similar to or even higher than the content in plant material collected from natural conditions. Given the demand for the raw plant material, the limited possibilities of obtaining it, and the low content of these compounds in biomass obtained so far under in vitro conditions, this is a significant achievement of the research carried out.
The research results provide new data facilitating the selection of the optimal culture medium for the production of plant material with a significant level of phyllanthin and hypophyllanthin biosynthesis. The obtained data may also be a starting point for the design of bioreactor systems for large-scale cultivation of P. amarus shoots with high productivity of hepatoprotective lignans, e.g. using different elicitors.
Data availability
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
|
v3-fos-license
|
2018-04-03T00:50:20.781Z
|
2016-12-29T00:00:00.000
|
7509474
|
{
"extfieldsofstudy": [
"Political Science",
"Medicine",
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0168529&type=printable",
"pdf_hash": "0001b4b6ba0b66ea5ec8238e07372ee8bf453cf2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43278",
"s2fieldsofstudy": [
"Economics",
"Environmental Science"
],
"sha1": "0001b4b6ba0b66ea5ec8238e07372ee8bf453cf2",
"year": 2016
}
|
pes2o/s2orc
|
Impact of High Seas Closure on Food Security in Low Income Fish Dependent Countries
We investigate how high seas closure will affect the availability of commonly consumed food fish in 46 fish reliant, and/or low income countries. Domestic consumption of straddling fish species (fish that would be affected by high seas closure) occurred in 54% of the assessed countries. The majority (70%) of countries were projected to experience net catch gains following high seas closure. However, countries with projected catch gains that also consumed the straddling fish species domestically made up only 37% of the assessed countries. In contrast, far fewer countries (25%) were projected to incur net losses from high seas closure, and of these, straddling species were used domestically in less than half (45%) of the countries. Our findings suggest that, given the current consumption patterns of straddling species, high seas closure may only directly benefit the supply of domestically consumed food fish in a small number of fish reliant and/or low income countries. In particular, it may not have a substantial impact on improving domestic fish supply in countries with the greatest need for improved access to affordable fish, as only one third of this group used straddling fish species domestically. Also, food security in countries with projected net catch gains but where straddling fish species are not consumed domestically may still benefit indirectly via economic activities arising from the increased availability of non-domestically consumed straddling fish species following high seas closure. Consequently, this study suggests that high seas closure can potentially improve marine resource sustainability as well as contribute to human well-being in some of the poorest and most fish dependent countries worldwide. However, caution is required because high seas closure may also negatively affect fish availability in countries that are already impoverished and fish insecure.
Introduction
Food security, as defined at the 1996 World Food Summit, exists when "all people, at all times, have physical and economic access to sufficient, safe and nutritious food that meets their dietary needs and food preferences for an active and healthy life". Feeding the world's expected population of 9 billion people by 2050 is a pressing global issue [1]. Despite progress in reducing hunger over the past decade, around 795 million people remained undernourished in 2015, with the situation being more pronounced in Southern Asia and Sub-Saharan Africa [2,3]. Against this backdrop, several authors have recently stressed the need to consider the role fish can play in securing food and nutritional security for the world's growing population [1,4].
Currently, fisheries and aquaculture provide up to 3 billion people with almost 20% of their average per capita animal protein intake [5]. Due to the affordability of fish relative to other protein sources, it is especially crucial for the food and nutritional security of coastal communities in poor, low development countries. In fact, almost three-quarters of countries where fish is an important source of protein (defined here as contributing to more than one-third of total animal protein supply) are low-income, food deficient countries [6]. Moreover, fish is also part of the staple diet for people in some developed countries. Yet, the food security for some of the world's poorest populations is threatened by the current degraded state of global fisheries, and the situation is expected to be amplified by the impacts of future climate and socio-economic change [7].
The sustainability of high seas fisheries is of concern because of increasing fishing pressure, inadequate management, and the tendency for deep sea fishes to have long-lived life histories, which makes them vulnerable to overfishing [8,9]. Some high seas species, especially commercially important tunas and billfishes, forage both in the high seas and in the Exclusive Economic Zones (EEZs) of coastal nations. Overexploitation of high seas fish stocks can therefore affect the availability of fish in countries' EEZs. Recent proposals to close the high seas to fishing have indicated that this may be beneficial for the rebuilding of fish biomass, increase the quantity and improve the distributional equality of global fisheries catch, and increase the resilience of fish stocks to climate change [10][11][12]. For instance, Sumaila et al. [11] found that biomass spillover from closing the high seas would benefit the domestic fisheries of 120 maritime countries under a scenario in which post-closure catches increased by 42%. At the same time, it would result in net losses for 65 countries, particularly those which specialise in fishing the high seas, such as Japan, China, and Spain.
Although prior research has identified winners and losers from closing the high seas, how this closure will impact food security for the poorest and most fish dependent countries is not clear. As such, this paper aims to answer the research question: How will high seas closure affect the availability of domestically consumed fish in fish reliant, low income countries? Our approach is to first identify which countries will be positively and negatively affected by high seas closure. Then, we assess whether the effect of high seas closure will impact upon locally consumed food fish. While there are four dimensions to food security-food availability, economic and physical access to food, stability over time, and food utilization [13], we focus on the availability aspect of food security in this study.
Methods
Projected changes in catch of straddling fish taxa due to high seas closure

We take the projected changes in catch from the sensitivity analysis of Sumaila et al. [11], and choose to focus on the mid-range scenarios of 20% and 42% projected increase in straddling taxa catch (we leave out 18% due to its proximity to 20%).
The results from Sumaila et al. [11] did not identify the fish taxa or groups associated with predicted changes in catch at the country level. High seas closure is expected to positively affect the biomass of straddling fish stocks. Therefore, we assume that positive changes in catch resulting from high seas closure relate to the catch of straddling stocks. A list of straddling fish taxa caught by coastal countries globally was provided by [14], which is used as the basis for this analysis.
Fish Dependency
The majority of countries which are highly dependent on fish for protein are low-income, food-deficient countries [6]; therefore, this study focuses on two groups of countries: 1) countries that are highly fish dependent; and 2) low-income, least developed countries (LDCs). Data for determining the fish dependency of countries was obtained from [6] and the Food and Agriculture Organization of the United Nations (FAO) Statistics Division (FAOSTAT, http://faostat3.fao.org), which provides data on the quantity of animal protein supply (g/capita/day) from different sources, including fish, seafood, and meat. Following [6], we calculated fish dependency as the percentage of fish and seafood out of total animal protein supply. Countries with fish dependency of more than 30% were identified as high fish dependent countries (HFDCs). The United Nations categorises 43 nations, including those that are land-locked, as "least developed". In this study we include only the 32 maritime countries for which predicted catch and landed values from high seas closure were available from the study by [11].
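A minimal sketch of the fish-dependency calculation in Python with pandas; the protein-supply numbers below are invented stand-ins for the FAOSTAT values (g/capita/day), and the 30% threshold follows the text.

```python
import pandas as pd

# Hypothetical FAOSTAT-style animal protein supply (g/capita/day);
# real values come from http://faostat3.fao.org
df = pd.DataFrame({
    "country": ["Sierra Leone", "Kiribati", "Eritrea"],
    "fish_seafood_protein": [18.2, 25.0, 0.1],
    "total_animal_protein": [24.0, 40.0, 10.5],
})

# Fish dependency = fish and seafood share of total animal protein supply
df["fish_dependency_pct"] = (
    100 * df["fish_seafood_protein"] / df["total_animal_protein"]
)

# Countries above the 30% threshold are flagged as highly fish dependent
df["HFDC"] = df["fish_dependency_pct"] > 30
print(df[["country", "fish_dependency_pct", "HFDC"]])
```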
Benefits from high seas closure
We defined two types of benefits arising from high seas closure. First, countries could benefit directly through an increase in the supply of fish to the local population. This would occur if projected catch gains following high seas closure consisted of the same type of fish that are commonly consumed by local populations. Second, indirect food security benefits could still arise if the projected catch gains involved species that are not consumed domestically but are used for trade or other purposes; these economic activities would, in principle, contribute to the revenues of citizens and national governments, which would allow for improved economic opportunities for local populations, thereby providing them with income necessary for purchasing food. Importantly, we assume that fisheries within each EEZ are managed well, thereby enabling the benefits from high seas closure to be realised.
Direct benefit-Domestic consumption of straddling fish taxa. We assessed whether projected changes in fish catch following a high seas closure involved the same type of fish that are consumed by the local population, or whether changes consisted of fish species that are primarily exported or targeted by foreign fishing vessels fishing within the country's EEZ. To determine this, catches of straddling fish taxa from each of the assessed countries were extracted from the Sea Around Us catch database (www.seaaroundus.org) for 2006, which was the most recent year for which data was available at the time of the analysis by [14]. We then reviewed the literature, both primary and grey, to identify the main uses of straddling fish taxa in each country. High seas closure was determined to have a direct impact on domestic food supply, and hence food security, if the straddling species was a fish that was commonly consumed by the local population. Likewise, the direct impact of high seas closure was assumed to be minimal if the straddling species was predominantly used for export, or caught by foreign fishing fleets.
By fish supply, we refer to the country's annual fisheries catch. We acknowledge that the nature of each country's fish marketing chain will affect the final amount of fish made available to the local population; however, it is beyond the scope of this paper to account for differing market systems. As such, we assumed that, for each country, catches of species that are commonly consumed as food fish by the local population will mainly be used domestically. Note that this does not preclude the same fish species also being used for export or other purposes.
An exception was made for the case of tunas in the Pacific Island Countries and Territories (PICTs). Nearshore pelagics make up between 20-30% of the total coastal fishery catches of the 6 PICTs analysed in this study, although tuna dominates the nearshore pelagic catch only in Kiribati [15]. These coastal fisheries take only a tiny fraction of the regional catch of skipjack and yellowfin tuna, the vast majority of which are targeted by industrial fisheries fishing offshore [16,17], and which do not contribute to the domestic fish supply of PICTs [18]. Further, fish and invertebrates from reefs, mangroves, and other nearshore habitats dominate the catch targeted for subsistence [16,19]. Therefore, although consumed domestically, we treat the catches of tunas and other large pelagics in PICTs as industrial fisheries targeted for export, and not for domestic consumption.
Indirect benefit-Economic value of projected catch. Countries where straddling taxa are not consumed domestically could still potentially obtain food security benefits indirectly through increases in economic activity and household incomes arising from projected increases in fisheries catch, thereby improving people's ability to purchase food. To capture this effect, we used the economic and income multipliers estimated by [20]. These multipliers reflect the impact a change in fisheries output will have on fisheries-related economic activities and on the household income of fishery workers, and were estimated for all maritime countries globally. Projected percentage changes in landed value relative to the status quo were taken from [11] for each of the two high seas catch scenarios. We estimated the economic and household income effects associated with projected increases in landed value as follows:

Income effect = LV% × income multiplier;
Economic effect = LV% × economic multiplier;

where LV% is the projected change in landed value under each catch gain scenario [11], and the income and economic multipliers were taken from [20]. We used the calculated income and economic effects as an indicator of the indirect food security benefits arising from high seas closure for countries which did not benefit directly in terms of an increase in domestically consumed fish.
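The two indicators are simple products, as sketched below; the multiplier values shown are invented placeholders (the actual country-level multipliers come from [20], and LV% from [11]).

```python
def indirect_effects(lv_change_pct: float,
                     income_multiplier: float,
                     economic_multiplier: float) -> dict:
    """Indirect food-security indicators from a projected change in
    landed value (LV%, in percent relative to the status quo)."""
    return {
        "income_effect": lv_change_pct * income_multiplier,
        "economic_effect": lv_change_pct * economic_multiplier,
    }

# Example: +20% projected landed value, with illustrative multipliers
print(indirect_effects(20.0, income_multiplier=1.5, economic_multiplier=2.8))
```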
Mitigating losses from high seas closure
Mitigating direct loss in fish catch-Alternative fish (i.e. non-straddling taxa) and non-fish food sources. A concern for countries with projected catch losses arising from high seas closure is whether alternative fish and non-fish food sources are available in the event of a decreased supply of straddling fish species. Alternative fish sources include inland, freshwater, or reef fisheries, or aquaculture. Agriculture could also compensate for the shortfall in fish supply, notwithstanding the difference in nutrients obtained. The availability of food safety programmes, such as those operated by food aid agencies or national governments, could also mitigate the fish supply shortfall. Further, high levels of adaptive capacity, which encompasses human capital, governance effectiveness, and social capital, may indicate a better ability to carry out planned adaptation to future shocks and changes [7], such as changes in food supply.
To investigate the potential for countries to mitigate the impact of decreased straddling fish taxa supply, we reviewed the literature to document the presence of mitigating factors and indicators in 21 countries projected to experience net catch losses under the various scenarios. We undertook a qualitative comparison, assuming that a higher presence of the 7 factors listed below represented a better opportunity for the country to cope with the impact of decreased straddling fish taxa supply:
1. Aquaculture;
2. Inland fisheries;
3. Reef and coastal fisheries;
4. Food safety net programmes-this indicator measures the presence of public initiatives provided by non-governmental organisations (NGOs), government or other multilateral agencies to protect the poor from shocks to food supply, for the year 2015. It is a qualitative score provided by the Global Food Security Index (foodsecurityindex.eiu.com): 0 = minimal programmes run only by NGOs or multilateral agencies; 1 = moderate presence of programmes run mainly by NGOs or multilateral agencies; 2 = moderate prevalence and depth of programmes run by the government, multilateral agencies, or NGOs; 3 = national coverage with very broad but not deep coverage of programmes run mainly by government with some reliance on NGO or multilateral agency support; 4 = presence of national government-run programmes, with minimal support required from NGOs or multilaterals;
5. National level of adaptive capacity, obtained from [7];
6. Percentage of agricultural land that is equipped for irrigation, for the year 2011-this is an indicator of a country's exposure to food supply shock [13], and was obtained from FAOSTAT (http://faostat3.fao.org);
7. Livelihood diversification by fishers-we documented whether, in general, fishers in the respective countries also engaged in other food-producing activities, such as farming or livestock rearing. A diversified livelihood acts as a buffer which enables households to grow or buy food in the event of external environmental or socio-economic shocks.
The presence of alternative food sources may not be able to supplement or make up for decreased straddling fish supply if those food systems are themselves under pressure to fulfil national food security demands. To account for this, we used the Global Food Security Index (GFSI) to gauge a country's general food security status. The GFSI (www.foodsecurityindex.eiu.com) score for each country incorporates three dimensions of food security: affordability, availability, and quality, and ranges from 0 (low) to 100 (high food security).
Indirect mitigating factors-Income equality and governance effectiveness. In addition to obtaining alternative sources of food, coastal communities in countries projected to experience losses in catch may still be able to secure sufficient food if there is a conducive economic environment which enables them to improve their incomes for buying or accessing food (i.e., there is a trickle-down effect from national governments to local communities), or if national governments provide the appropriate support and investment for enhancing food security [13]. To account for this, we looked at two national level indicators: 1. Gini coefficient: this is an indicator of income equality within a country (0 = perfect equality, 1 = perfect inequality). Income inequality decreases the ability of poor households to stay healthy and to move out of poverty because it hampers their ability to accumulate human and physical capital [21]. As such, we expect that opportunities for coastal communities in a country with a low Gini coefficient may be relatively better in terms of receiving economic and/or food security support and services from national governments, compared to a country where the Gini coefficient is high. Global Gini coefficient data, represented by a Gini index, was obtained from the World Bank World Development Indicators [22].
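Although the study takes Gini values directly from the World Bank rather than computing them, for clarity the coefficient can be derived from an income vector with the standard rank formula, as in this sketch:

```python
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient (0 = perfect equality, 1 = perfect inequality),
    via the rank formula G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n for
    incomes x_i sorted in ascending order."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return float(2.0 * (ranks * x).sum() / (n * x.sum()) - (n + 1.0) / n)

print(gini([1, 1, 1, 1]))    # 0.0  -- perfectly equal
print(gini([0, 0, 0, 100]))  # 0.75 -- highly unequal
```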
2. Governance: Good governance is key to food security [5]. The Global Governance Index developed by the World Bank provides a score for six different aspects of governance: voice and accountability, political stability, government effectiveness, regulatory quality, rule of law, and control of corruption [23]. Data for these indicators were obtained from the World Bank (http://info.worldbank.org/governance/wgi) for the year 2014. Of these, we chose the 3 governance aspects most relevant to the ability of local communities to obtain the necessary support to improve their food security situation. These included: a. Government effectiveness-indicates the quality of public services, the quality of the civil service and its independence from political interference, the quality of policy formulation and implementation, and the credibility of governments' commitments to such policies [23].
Fish Dependency
The contribution of fish to total animal protein supply for the 46 countries included in the present analysis is summarised in [6]. The top 10 fish dependent countries are located in South and Southeast Asia or West Africa, or are island nations in the Western Pacific and Indian Oceans. Of the 32 least developed countries (LDCs), 18 were also considered to be high fish dependent countries, and are hereafter referred to as high fish dependent LDCs (HFDLDCs). Among LDCs, Sierra Leone had the highest fish dependency (76%), while others, mainly in Africa, had minimal fish dependency. Despite the low fish consumption rates in some of these countries, national governments are trying to promote fish as an alternative protein source in countries such as Eritrea, reiterating the importance of fish for future food security.
Direct food security benefit from high seas closure-Domestic use of straddling fish taxa

Breakdown by country groups. Twenty-six (56%) of the assessed countries used straddling species for domestic consumption (Fig 1). Seventy-one percent of HFDCs made use of straddling species locally, compared to 39% of highly fish dependent LDCs (HFDLDCs) and 64% of LDCs. This indicates that high seas closure may have the largest effect on domestic fish supply in HFDCs.
Projected net catch gains and losses. Seventy percent of assessed countries were projected to experience net catch gains under both scenarios, with average increases ranging from 13-31% relative to the status quo. Of these countries, slightly over half (56%) used straddling taxa locally. Another 24% of assessed countries had projected losses under both scenarios, with average decreases ranging from -53% to -47% relative to the status quo. Forty-five percent of these countries used straddling taxa locally. Therefore, high seas closure was projected to have a relatively more positive than negative impact on low income fish dependent countries, because the number of countries projected to gain from high seas closure was larger than the number projected to lose (Table 1). However, it is noted that the magnitude of projected losses in catch exceeded the projected gains. In addition, more than half the countries projected to experience losses were classified as least developed countries, making the projected negative impact particularly damaging to the already impoverished state of these countries. Across both scenarios, 55-60% of those countries projected to gain would potentially see a benefit in terms of local fish availability (Table 2). When considered among all 46 assessed countries, about 40% (39-46%) would potentially benefit from increased local fish availability. Among the countries projected to lose from high seas closure, the proportion that relied on straddling taxa domestically ranged from 45% to 57%. As a percentage of all assessed countries, 11% to 17% of countries, depending on scenario, would potentially see local fish availability decline due to high seas closure. An increase of at least 18% in catch of straddling taxa following a high seas closure was expected to result in net gains in global catch relative to the status quo [11]. However, when considering all fish dependent and low income countries as a group, we find that on average these countries would collectively experience net gains in catch (relative to the status quo) only under the scenario of 42% catch gain following high seas closure (Fig 2). On average, countries where straddling taxa were not consumed locally were projected to experience a net loss of -6% and a gain of 11% in catch relative to the status quo under the 20% and 42% scenarios, respectively. This was fairly similar to countries which used straddling taxa locally, for which projected changes in catch were -3% and 10% relative to the status quo under the 20% and 42% scenarios, respectively (S1 Table).

Notes to Table 1: 3 No data in FAOSTAT; fish plays a very minor role in the national diet (citation removed), so Eritrea was assigned a fish protein percentage equal to the lowest of all assessed countries (<1%, for Sudan). * Straddling taxa (i.e., tunas) are not treated as being used for domestic consumption because a much larger quantity of tunas is taken by industrial fleets relative to local coastal fisheries; the industrial catch does not contribute to local fish supply in the PICTs.
Under the 42% catch increase scenario, projected changes in catch relative to the status quo were not high. Among countries which used straddling taxa domestically, average net catch gains of 14% were projected for LDCs, while similarly modest gains of 8% and 7% were projected for HFDCs and highly fish dependent LDCs, respectively. Among countries that did not consume straddling taxa domestically, HFDLDCs and LDCs were projected to experience average net gains of 18% and 14%, respectively, whereas HFDCs were projected to experience an average net loss of 11%. While countries that do not consume straddling taxa domestically may not directly benefit from high seas closure, the projected changes in catch may still indirectly affect food security via the economic impact on local communities through the increase in secondary and tertiary activities and services, e.g., processing [20].
Least Developed Countries (LDCs). Nearly two-thirds (64%) of LDCs made use of straddling taxa locally. The majority (79%) of LDCs were projected to experience net gains across both the 20% and 42% catch gain scenarios, with average increases of 15% and 32% relative to the status quo, respectively. Of these countries, 80% used straddling taxa domestically. Samoa, Tanzania, and Yemen were the three LDCs with the highest projected losses across both scenarios, with average losses of -54% and -49% relative to the status quo under the 20% and 42% scenarios, respectively. Although projected losses are very high for Samoa (around -90% relative to the status quo under both scenarios), high seas closure would likely not substantially affect the supply of domestically consumed fish because the main straddling species caught is tuna, which is primarily caught offshore by longliners and exported [27]. In contrast, high seas closure may affect the supply of locally consumed food fish in Tanzania and Yemen (S2 Table), where straddling taxa such as Indian mackerel, Spanish mackerel, and tunas are commonly preferred species [28,29].
High Fish Dependent LDCs. Among the 18 HFDLDCs, 7 (39%) used straddling taxa domestically. Eleven countries were projected to experience net catch gains across both scenarios, with average increases of 17% and 36% under the 20% and 42% catch gain scenarios, respectively. Five of these countries, mainly located in Africa, consumed straddling fish taxa locally (Bangladesh, Congo Democratic Republic, Gambia, Guinea, and Equatorial Guinea). This is a positive sign in terms of fish protein security, given that per capita fish availability has been decreasing in much of sub-Saharan Africa [30]. The domestically consumed fish species in these countries were mainly small, low-value species such as herring, sardinella, and Hilsa shad (Bangladesh) that are affordable for poor rural coastal communities. For this group of countries, high seas closure could likely increase the availability of important food fish for local populations. This is particularly important for supporting the nutritional requirements of poor populations, as small fish have a high content of nutrients, e.g., omega-3 fatty acids, vitamin A, iron, zinc, and calcium, which can potentially reduce micronutrient and essential fatty acid deficiencies among the undernourished [30].
Four HFDLDCs (Comoros, Togo, Kiribati, and Vanuatu) were projected to incur catch losses across both the 20% and 42% scenarios, with average decreases of -43% and -39% relative to the status quo, respectively. Straddling taxa are not consumed locally in Togo and Vanuatu; consequently, high seas closure may affect the availability of domestically consumed fish only in Comoros and Kiribati. While sardinella, which made up about 20% of Comoros' total straddling taxa catch, is consumed domestically, skipjack tuna, which made up 57% of the straddling taxa catch, is mainly caught by foreign fishing fleets and not landed in Comoros [31].
Moreover, even though tunas are caught by local artisanal fishers [32], many coastal fishing communities in Comoros prefer the taste of reef fishes and believe them to be superior to pelagic species in nutritional value [33]. These factors may therefore dampen the projected impact of high seas closure on local Comoros fish supply.
Two negatively affected HFDLDCs (Kiribati and Vanuatu) are Pacific island states where the main straddling species are tunas that are primarily caught by foreign fleets or for export. Although nearshore pelagics make up around 21% of total coastal catches in Kiribati and Vanuatu [15], small-scale fishing for tuna occurs only in Kiribati, not Vanuatu [18]. The total quantity of tunas caught by the foreign-fleet-dominated industrial fisheries in Kiribati and Vanuatu far exceeds the amount taken by local nearshore fisheries [17,34]. Thus, closing the high seas may have a proportionately larger effect on tunas caught by foreign fleets within the EEZs of these two PICTs than on the amount caught for domestic consumption. Further, demersal reef fish, the bulk of which are caught for subsistence, make up the majority of coastal fisheries catches in Pacific island states [15]. As such, high seas closure may have a minimal direct effect on local food security in highly fish dependent LDCs where catches are projected to fare the worst. However, there may be indirect impacts on food security because access fees paid by foreign fishing vessels to fish within the EEZs of these countries contribute substantially to national revenues. For instance, fishing access fees totalling USD 47.4 million made up approximately half of Kiribati's total government revenue in 2012 [35], and about 25% of its gross domestic product [17]. Further, increasing the use of offshore tuna stocks to supply local markets in Pacific islands has been identified as a means of adapting to potential climate change impacts on coral reef fisheries [16,24]. Therefore, the projected loss in tuna catches still poses an indirect food security concern for the PICTs.
For the remaining highly fish dependent LDCs with projected catch gains, the main straddling species also consisted of tuna and other large pelagics that were primarily exported or caught by foreign vessels (Table 3). In Cambodia, which is among the countries with the highest projected catch gains and fish dependency, high seas closure may nevertheless have little noticeable effect on local fish supply, as the majority of fish consumed in the country comes from inland fisheries [36]. Thus, the overall projected increases in catch of straddling taxa from high seas closure may not substantially increase local fish supply for highly fish dependent LDCs, which are likely the countries with the most urgent need for an increased supply of fish as an affordable protein source.
High Fish Dependent Countries (HFDCs). The majority (71%) of HFDCs made use of straddling taxa domestically. These countries were mainly located in Asia and Africa, with the remainder located in the Indian and Pacific Oceans (Table 3). With the exception of Japan and Korea, the HFDCs are developing countries which generally have large populations of rural and poor fishing communities who rely on fish as their major source of food and livelihood. In particular, countries in Southeast Asia, especially the Philippines and Indonesia, and countries of western Africa have the highest nutritional dependence on fish and marine ecosystems [91]. As such, the catch of straddling taxa is paramount to supporting economic and social well-being in coastal areas of these countries.
Eight HFDCs (Cameroon, Cote d'Ivoire, Indonesia, Maldives, Nigeria, Philippines, Thailand, and Vietnam) were projected to experience net gains in catch across both scenarios, with average projected catch increases of 9% and 26% relative to the status quo under the 20% and 42% scenarios, respectively (S3 Table). Among this group, straddling taxa were consumed domestically in 6 countries, with small pelagics such as sardinellas and scads, together with skipjack tuna, being the most commonly consumed straddling species (Table 3).
Another 4 HFDCs (Fiji, Seychelles, Sri Lanka, and Korea) were projected to experience catch losses across both the 20% and 42% catch gain scenarios, with average losses of -53% and -45% relative to the status quo, respectively (S3 Table). Of these countries, straddling taxa were consumed locally in Sri Lanka, Fiji, and Korea. Fiji and Sri Lanka were the HFDCs with the highest projected catch losses, averaging 51% and 43% across both scenarios, respectively. In Fiji, albacore and yellowfin tunas are the main straddling species, and albacore is also commonly consumed either fresh or canned among the local population [48]. While high seas closure may affect the availability of this fish source, the impact on overall fish availability may be minimal because Fijian coastal communities rely heavily on reef fisheries and gleaning for subsistence and artisanal purposes [92]. Similarly, both types of straddling species in Sri Lanka, skipjack tuna and trevally, are consumed locally, accounting for 10% and 5.5% of monthly household fish consumption, respectively. Therefore, high seas closure may decrease the supply of fish to local communities, although not by a large extent.
Catch was projected to decrease by around 34% for Korea, where local consumption of seafood, including tuna and squid, is high. The negative impact of high seas closure may be offset to a certain degree in Korea due to its large distant water fleet, as catches from Korea's distant water fleet are generally consumed in Korea [57]. However, this depends on how the fishing grounds of Korea's distant water fleet will be affected by high seas closure. On the whole, projected decreases may not have a heavy negative impact on this group of countries.
The most negatively affected HFDC with the highest projected losses but limited domestic straddling taxa dependence was Seychelles, where the dominant straddling taxon, skipjack tuna, is primarily caught by foreign fishing fleets and processed for export. Coastal communities in the Seychelles generally fish on coastal reefs for demersal fish, invertebrates, and nearshore pelagics for subsistence and to supply local markets [93]. Thus, high seas closure may not have a large effect on local fish supply, although the projected decrease in tuna catches may have reverberating economic effects on local communities, since the Indian Ocean Tuna canning factory is the country's largest single employer [74].
Indirect food security benefits of high seas closure
Fourteen countries with projected catch gains did not consume straddling taxa domestically, but could potentially improve their food security indirectly through the projected increase in revenues, incomes and profits generated by straddling taxa. Half of these countries were located in Africa, with the remainder being Asian, Pacific island, or Caribbean countries (Table 4). Projected landed value gains for these countries ranged from 2.4% to 24% relative to the status quo under the 20% catch gain scenario, and from 10% to 51% under the 42% catch gain scenario (Table 4). More than half (57%) of these countries were highly fish dependent LDCs (HFDLDCs), and another 29% were LDCs. On the other hand, 42% of the countries with projected losses did not consume straddling taxa domestically. Most of these countries were Pacific island states, and would stand to suffer indirect food security losses through the loss in trade of straddling taxa or reductions in fishing access fees. The economic and income multipliers indicate the impact an increase in fisheries output will have on fisheries-related economic activities and the household income of fishery workers [20]. Income multipliers for all countries ranged from 0.05 to 0.84, while economic multipliers ranged from 0.28 to 3.34. This means that, depending on the country, a one-dollar increase in fisheries sector output (measured by landed value) could potentially generate 5 to 84 cents in household income, and 28 cents to $3.34 in economic output. Nigeria had the lowest income effect among the countries considered here, while Vietnam had the highest (Table 4). This suggests that an increase in fisheries landed value in Vietnam could potentially result in higher increases in household incomes relative to Nigeria, thereby providing Vietnamese fishery households with a better opportunity for improving their food security. Similarly, Vietnam also had the highest economic effect, while Sierra Leone had the lowest.
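To make the multiplier arithmetic concrete, the following minimal R sketch applies the endpoint multiplier values reported above to a one-dollar increase in landed value (an illustrative calculation, not the study's estimation procedure):

# Income and economic multiplier effects of a USD 1 increase in fisheries
# output (landed value); multiplier values are the range endpoints above.
landed_value_gain   <- 1.00
income_multiplier   <- c(lowest = 0.05, highest = 0.84)  # Nigeria lowest, Vietnam highest
economic_multiplier <- c(lowest = 0.28, highest = 3.34)  # Sierra Leone lowest, Vietnam highest

landed_value_gain * income_multiplier    # 0.05 to 0.84 USD in household income
landed_value_gain * economic_multiplier  # 0.28 to 3.34 USD in economic output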
Mitigating food security losses from high seas closure
Alternative sources of fish and non-fish food. High seas closure may adversely affect domestic fish supply in the 21 countries projected to experience net losses in catch under the 2 scenarios. On the positive side, it appears that all these countries had at least one other type of fishery that could potentially supplement the decreased catch of straddling fish taxa (Table 5). Inland and reef fisheries play an important role in providing subsistence catches for rural communities in Africa and the Pacific islands. Freshwater fish is also a crucial source of affordable protein for lower income groups in developing Asian countries [94]. The prevalence of inland and reef fisheries in the 21 countries indicates the importance of maintaining the sustainability of these fisheries resources and habitats in conjunction with marine coastal fisheries management, given that inland and reef fisheries are also overexploited where they occur [95][96][97]. Japan and Korea, the two developed countries projected to experience catch losses, are likely the most capable of coping with decreased fish supply because their high wealth and trading power allow them to turn to international markets to obtain food. Further, although they have limited inland fisheries, both countries have high food security scores (above 70) and well developed aquaculture industries that are an important contributor to national production and food security [98,99]. With capture fisheries having levelled off globally, aquaculture is widely seen as the option to fill the future demand for fish [1,100], despite the debate over the environmental sustainability of certain aquaculture systems. However, while aquaculture presently plays a crucial role in providing an affordable source of protein for impoverished populations in developing countries of Asia and Africa [100], its expansion in low income food deficient countries may be limited by energy and technology demands [101]. It has also been argued that the nutritional quality of diets would drop if global fish supply becomes dominated by aquaculture [102].

Notes to Table 5: Limited. ^ Important contributor to national fisheries production and/or for food security. + Emphasised for development to satisfy fish demand. * No data/data deficient from cited source; ranking is provided based on that of surrounding countries/country group. ** No data from [7]; ranking is based on [126] for Thailand and [137] for Seychelles.
Reef fisheries were assumed to take place in all countries where coral reefs occur [131] (Table 5). Places where inland and reef fisheries are most depended upon tend to be in tropical, developing countries where rapid population growth is occurring. In fact, the future availability of reef fish per capita in many Pacific island nations is expected to decrease due to population growth, and will be exacerbated by climate effects [15]. Consequently, we emphasise that the presence of alternative fish and/or food sources does not imply that there will be no problem when the projected decrease in straddling fish supply occurs. Population growth and other global change drivers may impede the alternative food sources from making up for the projected shortfall in straddling fish species supply.
Most of the 21 countries with projected losses (Table 5) are highly fish dependent and/or least developed countries, where the alternative food sources considered here are already being used to attain food security. In particular, African countries and island nations either had the lowest food security scores (less than 40) or were considered to be vulnerable to food insecurity [132][133][134][135][136][137] (Table 5), indicating that their food resources are already under stress, and it may not be possible to increase production in these food sectors. Non-fish alternatives already face substantial challenges: global crop production has to increase much more from current levels in order to meet the increased demand from population growth by 2050, a challenge exacerbated by climate effects on rainfall and temperature [138]. In light of these considerations, it is important that the effect of global drivers on the potential for alternative food systems to make a substantial contribution to fish protein supply be taken into account in the context of high seas management.
Having a diversified livelihood portfolio is a way of increasing households' resilience to shocks [139], and a means of reducing hunger and malnutrition for the rural poor [13]. Fishers in all negatively affected countries participated in diversified livelihoods by simultaneously engaging in fishing and farming, although farming opportunities for fishers were limited in the Comoros, Japan, and Korea. Nonetheless, the overall presence of diversified livelihoods in the affected countries is a positive sign that fishers may still be able to obtain food, albeit of different nutritional quality, in the event of decreased fish supply from high seas closure. Bushmeat is another alternative food source in times of low fish supply [140], but fish is still comparably cheaper than bushmeat, and thus preferred by the poor [41,141]. It is noted that the literature on fishers who participate in diversified livelihoods mainly refers to small-scale fishers; thus, the impact of decreased fish supply may be different for industrial fishers.
At the national level, the proportion of agricultural area that is equipped for irrigation can be used as an indicator for a country's exposure to food supply shock [13]. In general, the availability of irrigated agricultural land in the African countries and Pacific islands considered here is very low. The prevalence of low irrigation reflects inadequate food production, and the projected decrease in fish supply may potentially exacerbate demands put on these countries' already poor agricultural capacity. In particular, the loss of arable agricultural land in Pacific islands to housing and tourism development has already sparked concern [142]. The frequency of droughts and tropical cyclones in parts of Africa and the Pacific further impair food production [13,142]. In a global context, the irrigated area per person has been decreasing by 1% per year since 2000, and sources of irrigation water are scarce [143]. Both these factors may contribute to decreased opportunities for securing alternate sources of food in countries projected to experience net losses in domestically consumed fish due to high seas closure.
The countries with the lowest adaptive capacity levels were also concentrated in Africa, where poor infrastructure, low human capital in many coastal areas, and political instability are among the factors which channel through food systems and hinder people from obtaining a stable supply of food [144]. This highlights that improving food security also encompasses overcoming the social, economic, and institutional constraints on coping with external stressors [145] such as climate change (e.g., drought), conflicts, and disease. In terms of the factors investigated here, it appears that Japan and Korea have the best potential for mitigating the effects of decreased domestic fish supply due to high seas closure, whereas Comoros and Yemen have the least opportunity for doing so.
Inequality and governance. Inequitable distribution of resources and poor governance institutions can create barriers to sustainable food systems and societal well-being, thereby ultimately affecting food security. In Comoros, the lack of alternative food sources is exacerbated by high income inequality and poor governance effectiveness, while in Yemen opportunities for food security may be hampered by high levels of corruption and poor political stability and governance effectiveness relative to all other assessed countries (Table 6). In contrast, the economic and governance conditions in Japan put it in a much better position for achieving food security, as it has the highest levels of income equality and sound governance among the countries. Compared across countries, income inequality may pose the biggest barrier in Seychelles. Thailand, Cote d'Ivoire, and the Philippines had the highest political instability, which can potentially restrict the availability of and access to food (see the example of Cote d'Ivoire in [146]). Poor governance effectiveness may also hamper food security measures in 3 other African countries: Liberia, Sierra Leone, and Togo. This amplifies the already poor prospects for alternative non-fish sources of food in these countries, given that they also have limited agriculture and food safety net programmes (Table 5). In contrast, while Pacific island nations also have limited agricultural potential, they generally have more favourable governance conditions and income equality relative to African countries. We acknowledge that this qualitative assessment deals primarily with the availability of fish, and does not fully consider the other three dimensions of food security (accessibility, affordability, and utilization). Future research can therefore incorporate other vital factors that determine who will ultimately benefit from improved food security, e.g., people's access to livelihoods in fish value chains and the affordability of fish [1].
Our results are built on the projected impacts of high seas closure on individual countries' catches, which may be affected by the underlying model assumptions from [11]. Briefly, these assumptions were: 1) the catch data used were representative of true fisheries catches (i.e., insignificant misreporting of straddling taxa); and 2) increased catches of straddling taxa were applied evenly across all EEZs, without accounting for geographic and interspecific variation arising from the accuracy of reported data and the potential spillover of biomass from closed high seas areas. Both of these assumptions could affect the magnitude of projected changes in catch. For instance, IUU (illegal, unreported, and unregulated) fishing not captured in catch statistics could result in lower than expected gains of straddling fish taxa for certain countries. Importantly, we stress that the projected benefits arising from high seas closure can only be realised if fisheries within each EEZ are themselves managed well. The outcomes presented here are also subject to climate effects on the spatial and biological behaviour of straddling fish taxa, which were not accounted for in the underlying model. However, recent research suggests that closing the high seas to fishing or managing its fisheries cooperatively could increase catches in EEZs by around 10% by 2050 under 2 climate change scenarios [12].
Summary and Concluding Remarks
The purpose of this study was to investigate the effect high seas closure would have on the availability of commonly consumed food fish in fish reliant, low income countries. We find that just above half (54%) of the assessed countries used straddling fish taxa locally, and hence would potentially be affected by high seas closure. At the same time, countries which did not consume straddling fish taxa domestically could also be indirectly affected, for instance through the loss of fishing access fees. Overall, it appears that high seas closure affected more countries positively than negatively in terms of improving catches of straddling fish taxa; however, the magnitude of projected losses for negatively affected countries exceeded the projected gains. Moreover, only slightly more than a third (37%) of the countries where straddling fish taxa were consumed domestically were projected to experience an increase in fish supply under both scenarios. It should be noted that since future consumption levels of straddling stocks are likely to change in these countries, this conclusion could change in the near future.
The majority (64%) of both highly fish dependent countries (HFDCs) and least developed countries (LDCs) made use of straddling taxa domestically. Slightly above half (57%) of the HFDCs were projected to gain in terms of increased fish supply, while almost 30% would be negatively affected across both scenarios. Among LDCs, 20% of the countries were projected to be negatively affected across both scenarios. Least developed countries that are highly dependent on fish (HFDLDCs) are arguably the countries with the greatest need for improved access to affordable fish for food and nutrition security. However, among all country groups, HFDLDCs had the lowest proportion (33%) that used straddling fish taxa domestically. Across both scenarios, slightly above a quarter (28%) of HFDLDCs would likely benefit from high seas closure in terms of increased fish availability, and only one of the countries was projected to be negatively affected.
Thus, in general, high seas closure may not have a substantial impact on improving fish supply in countries where it is most needed. At the same time, local food security for HFDLDCs that were projected to fare the worst (i.e., losses in both catch scenarios) may not be heavily impacted because the affected straddling taxa are tuna, which are either used for export or are caught by foreign fishing fleets. Nevertheless, high seas closure may affect food security indirectly through economic effects stemming from loss in exports and foreign fishing access fees. In summary, while high seas closure may benefit local fish supply in less than half the assessed countries overall, it is important to bear in mind that countries projected to experience catch gains but where straddling taxa are not used domestically can still attain food security benefits indirectly through economic and household income effects arising from an increase in fisheries output.
Protecting the high seas is a conservation issue that concerns the global community. Although prior studies have shown that high seas protection is likely to provide ecological benefits, this study is, as far as we are aware, the first to investigate the food security impact of high seas closure on the world's poorest and most fish dependent countries. Our results indicate that while closure is unlikely to improve domestic fish supply substantially in these countries, its negative impact upon their food security also appears to be minimal. Furthermore, fish catch increases arising from high seas closure can indirectly contribute to improved food security via other economic activities in countries where straddling fish taxa are not consumed domestically. At the same time, this also implies that indirect negative impacts may be experienced in those countries which do not consume straddling fish taxa domestically. In particular, a decrease in tuna catches may cost certain Pacific island states not only substantial amounts of fishing access fees, but also a source of future food security in the face of climate change.
Although it is beyond the scope of this study to consider the political and technological requirements of high seas protection, our results suggest that high seas closure can potentially help address biodiversity loss and food insecurity, which were identified by the Millennium Ecosystem Assessment as two of the biggest challenges facing humanity. However, we also caution that high seas closure can negatively impact food security in some countries, and that this impact will be particularly amplified in those that are already highly fish dependent and low income. In weighing these outcomes, our study provides a starting point for further evaluation of the costs and benefits of high seas protection, an international action that is urgently needed in the face of global ocean degradation.
|
v3-fos-license
|
2019-03-13T13:53:52.593Z
|
2019-03-12T00:00:00.000
|
75136441
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-019-40956-1.pdf",
"pdf_hash": "169cac815a599b6d87cfa2864dbd39beaf146000",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43279",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "169cac815a599b6d87cfa2864dbd39beaf146000",
"year": 2019
}
|
pes2o/s2orc
|
Dental integration and modularity in pinnipeds
Morphological integration and modularity are important for understanding phenotypic evolution because they constrain variation subjected to selection and enable independent evolution of functional and developmental units. We report dental integration and modularity in representative otariid (Eumetopias jubatus, Callorhinus ursinus) and phocid (Phoca largha, Histriophoca fasciata) species of Pinnipedia. This is the first study of integration and modularity in a secondarily simplified dentition with simple occlusion. Integration was stronger in both otariid species than in either phocid species and related positively to dental occlusion and negatively to both modularity and tooth-size variability across all the species. The canines and third upper incisor were most strongly integrated, comprising a module that likely serves as occlusal guides for the postcanines. There was no or weak modularity among tooth classes. The reported integration is stronger than or similar to that in mammals with complex dentition and refined occlusion. We hypothesise that this strong integration is driven by dental occlusion, and that it is enabled by reduction of modularity that constrains overall integration in complex dentitions. We propose that modularity was reduced in pinnipeds during the transition to aquatic life in association with the origin of pierce-feeding and loss of mastication caused by underwater feeding.
Organisms are organised into multiple identifiable parts on multiple levels. These parts are distinct from each other because of structure, function or developmental origins. The fact that parts of an organism are distinguishable reflects their individuality and a degree of independence from each other. Nevertheless, these different parts must be coordinated in their size and shape and integrated throughout the entire organism to make up a functional whole. Tension between the relative independence and the coordination of organismal parts is expressed in concepts of morphological integration 1 and modularity 2,3 . Both concepts are closely related and concern the degree of covariation or correlation between different parts of an organism or other biological entity. Integration deals with the overall pattern of intercorrelation, and modularity involves the partitioning of integration into quasi-independent partitions. Integration exists if parts vary jointly, in a coordinated fashion, throughout a biological entity. Modularity exists if integration is concentrated within certain parts that are tightly integrated internally but is weaker between those parts. Parts that are integrated within themselves and relatively independent of other such internally integrated parts are called modules [4][5][6] . Integration and modularity are seen at various levels of biological organisation, from genes to colonies, not only in a morphological context but also in other contexts (e.g. molecular 7 , metabolic 8 , ecological 9 ), and are viewed as a general property of many different webs of interactions beyond biology 4 .
Morphological integration and modularity have received increased attention among modern evolutionary biologists because the integrated and modular organisation of biological entities has important implications for understanding phenotypic evolution. Integration constrains the variability of individual traits, and modularity enables modules to vary and evolve independently of each other whilst still maintaining the integrity of the functional or developmental unit 4,10,11 . An integrated and modular organisation has therefore potential to affect evolutionary paths in multiple ways that include circumventing the effects of genetic pleiotropy and developmental canalisation as well as facilitating and channelling evolutionary transformations of functional and developmental units 5,12,13 .
Studies of mammalian evolution often rely on information from the dentition. Teeth are highly informative of a mammal's taxonomic identity, phylogenetic relationships and ecological adaptation; and still constitute the most common and best-preserved mammal remains in the fossil record, adding a historical perspective to the study 14,15 . The dentition as a whole appears to be a module of the dermal exoskeleton 16 . Potential different modules within the mammalian dentition include tooth generations (milk vs permanent teeth) and tooth classes (incisors vs canines vs premolars vs molars) and can also include other groups of teeth (e.g. carnivore carnassials vs other premolars and molars) 16 . At lower levels of dental organisation, individual teeth 17 or tooth cusps 16 can be separate modules.
Many studies of integration and/or modularity have been conducted on complex mammalian dentitions where tooth classes are distinguishable, and teeth differ in form depending on their location in the dental arcade. These studies chiefly involved dentitions of primates 1,17-24 , carnivores [25][26][27][28][29][30][31][32][33][34][35][36] , rodents 22,[37][38][39][40][41][42] and lagomorphs 43,44 . Much less attention has been directed to simple or simplified dentitions where tooth classes are absent or not distinguishable, and teeth are similar to each other regardless of their location in the dental arcade. Notably, there has been, to our knowledge, only one study of integration and no study of modularity on a secondarily simplified dentition. This study 45 investigated morphological integration among mandibular premolars and molars of harp seals (Pagophilus groenlandicus).
Pinnipeds (earless seals, Phocidae; sea lions and fur seals, Otariidae; walruses, Odobenidae) are a clade of secondarily aquatic carnivores that evolved from terrestrial ancestors with complex dentition [46][47][48][49] . Unlike their ancestors, pinnipeds forage under water where they capture, handle and swallow their prey. Prey are swallowed whole or, if too large, first torn (usually extraorally) into swallowable chunks. Pinnipeds do not masticate food but instead employ their dentition in most cases solely to catch and hold prey using a foraging style called pierce feeding 50,51 . As a likely consequence, ancestral differentiation between premolars and molars has been lost in pinnipeds. Both tooth classes are similar in size and shape (both within and between the arcades) and therefore often collectively called postcanines. Pinniped postcanines are simple or relatively simple in form, effectively two-dimensional because of the lack of a lingual cusp, and lack the refined occlusion characteristic of morphologically complex and differentiated premolars and molars in most non-pinniped (fissiped) carnivores and most mammals in general 15,52 .
The demands of functional occlusion and the process of natural selection constrain phenotypic variation and impose morphological integration in complex dentitions 53 . The simplified pinniped dentition with simple occlusion is expected to be more variable and less integrated because of relaxed functional and selective constraint. In accordance with this expectation, large intraspecific variations in tooth number have been reported from multiple pinniped species [54][55][56][57][58][59][60][61][62][63][64] . Furthermore, large variations in tooth size have been recorded, as expected, in ribbon seals (Histriophoca fasciata) 65 and ringed seals (Pusa hispida) 45,65 but, unexpectedly, not in spotted seals (Phoca largha), northern fur seals (Callorhinus ursinus) or Steller sea lions (Eumetopias jubatus), in all of which variations in tooth size were found to be smaller and similar to those seen in fissipeds with complex dentition and exact dental occlusion 65 . Moreover, size correlations among mandibular postcanines of Pagophilus groenlandicus were reported as similar to or stronger than those in fissipeds and other mammals with precisely occluding teeth 45 , suggesting an unexpectedly strong dental integration in this pinniped species. Limited size variability and strong integration are surprising in the pinniped dentition and merit further study.
In a previous paper 65, we presented results on dental size variability in two otariid (Eumetopias jubatus, Callorhinus ursinus) and two phocid (Phoca largha, Histriophoca fasciata) species. Here, we report results on dental integration and modularity in the same species. All of these species are pierce feeders 50,66 that feed mainly on fish (Phoca largha), fish and squid (Eumetopias jubatus, Callorhinus ursinus) or fish and benthic invertebrates (Histriophoca fasciata) 67. Whilst these species are broadly representative of both their families and pinnipeds as a whole, which contributes to the generality of our findings, the general similarity of their diets and foraging styles does not lead us to expect large differences in dental integration and modularity among them. We first measured teeth of the four species using serially homologous measurements, next calculated correlation matrices based on the collected measurement data, and then analysed correlation data in these matrices to assess the strength and structure of integration and modularity in the dentition of each species. We investigated integration at three hierarchical levels: whole dentition, among teeth and within teeth. The level of among-tooth integration included testing two classic hypotheses related to integration, the rule of neighbourhood 68,69 and the rule of proximal parts 70. The former states that adjacent parts of an organ are more strongly intercorrelated with respect to size than more distant parts; the latter states that proximal parts of an organ are more strongly correlated with respect to size than distal parts. We also comparatively evaluated the degree of dental occlusion among the four species to examine how integration and modularity relate to occlusion, and referred to our earlier assessment of tooth-size variability in these species 65 to test the hypothesis that integration is negatively related to variability.
Material and Methods
Measurement data collection. Length (L; maximum linear mesiodistal distance) and width (W; maximum linear vestibulolingual distance perpendicular to the length) were measured on permanent tooth crowns in skeletonised specimens of Eumetopias jubatus (31 males, 30 females), Callorhinus ursinus (43 males, 59 females), Phoca largha (80 males, 60 females, 52 of undetermined gender) and Histriophoca fasciata (62 males, 86 females, 39 of undetermined gender). These specimens derived from wild animals on and around the Japanese Islands according to institutional collection records (Supplementary Tables S1-S4). All measurements were taken with digital calipers to the nearest 0.01 mm on one body side (left or right, depending on the state of preservation) of each specimen. Specimens with an incomplete dentition or a supernumerary tooth on both sides of the upper or lower arcade were not measured. The dental formulae of these species were I¹⁻³/I₂,₃ C¹/C₁ P¹⁻⁴/P₁₋₄ M¹/M₁ for Eumetopias jubatus, Phoca largha and Histriophoca fasciata and I¹⁻³/I₂,₃ C¹/C₁ P¹⁻⁴/P₁₋₄ M¹,²/M₁ for Callorhinus ursinus, where I, C, P and M denote permanent incisors, canines, premolars and molars in either half of the upper and lower arcades, respectively, and superscript and subscript numbers indicate positions of upper and lower teeth, respectively (Fig. 1). Because of a difference in the number of upper molars, a total of 34 measurements were applied to Eumetopias jubatus, Phoca largha and Histriophoca fasciata and a total of 36 to Callorhinus ursinus.

Correlation matrix calculation. Correlations were calculated using Pearson's product-moment correlation coefficient (r). Measurement data were first pairwise correlated for males and females separately. Because no significant differences were observed between r values for males and females of each species (P < 0.05, Student's t-tests with Holm-Bonferroni correction), specimens of both genders and those of undetermined gender were combined, and all pairwise correlations were recalculated. The r values resulting from these calculations were assembled into matrices, one for each species. These and all other statistical analyses were performed in R version 3.2.4 Revised 71.
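A minimal R sketch of this step is given below; the object name 'teeth' and the random placeholder data are assumptions for illustration, not the study's data or code:

# Pairwise Pearson correlations over all crown measurements. 'teeth' stands
# in for the measurement table: one row per specimen, one column per
# measurement (34 or 36 columns, depending on species).
set.seed(1)
teeth <- as.data.frame(matrix(rnorm(100 * 34), ncol = 34))

r_matrix <- cor(teeth, method = "pearson")  # 34 x 34 matrix of r values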
Integration assessment. Integration was assessed using r entries in the correlation matrix as well as other indices directly or indirectly based on these entries and designed for a particular level of integration. High r values were interpreted as indicating strong integration; lower r values were interpreted as indicating weaker integration.
Whole-dentition integration. The relative standard deviation of the correlation-matrix eigenvalues, SD_rel(λ) 72, and the average of the absolute pairwise r values, I_r 73, were used to estimate the strength of overall integration. These indices were calculated with equations (1) and (2), respectively:

$$\mathrm{SD}_{\mathrm{rel}}(\lambda) = \sqrt{\frac{\sum_{i=1}^{p} (\lambda_i - 1)^2}{p(p - 1)}} \qquad (1)$$

where λ_i denotes an eigenvalue of the correlation matrix, and p denotes the number of intercorrelated measurements;

$$I_r = \frac{1}{k} \sum_{i=1}^{k} |r_i| \qquad (2)$$

where |r_i| denotes an absolute off-diagonal r value in the correlation matrix, and k denotes the number of these values. Both indices are independent of the sample size or the number of intercorrelated measurements and vary between zero (no integration) and one (perfect integration), with the I_r index tending to yield lower values than those of the SD_rel(λ) index 72,74.

Among-tooth integration. Correlation matrix r values were used to test the rules of neighbourhood and proximal parts and to assess the strength of integration between teeth. The relative strength and the structure of integration among teeth were analysed with hierarchical unweighted pair-group average (UPGMA) clustering, using the average of the absolute pairwise r values between measurements of two different teeth (r_M) subtracted from one as a dissimilarity measure. The r_M metric was calculated, using a pair of upper and lower canines as an example, as the sum of the r values between LC¹ and LC₁, between LC¹ and WC₁, between WC¹ and LC₁, and between WC¹ and WC₁, divided by four. Clustered teeth were interpreted as more strongly integrated than non-clustered ones.
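Both indices are straightforward to compute from a correlation matrix such as the 'r_matrix' object sketched above; the following R fragment is an illustrative implementation of equations (1) and (2), not the authors' code:

# Equation (1): relative SD of the correlation-matrix eigenvalues. The mean
# eigenvalue of a p x p correlation matrix is 1, and the maximum possible
# eigenvalue variance is p - 1, hence the p * (p - 1) normalisation.
p      <- ncol(r_matrix)
lambda <- eigen(r_matrix, symmetric = TRUE, only.values = TRUE)$values
SD_rel <- sqrt(sum((lambda - 1)^2) / (p * (p - 1)))

# Equation (2): mean absolute off-diagonal correlation.
I_r <- mean(abs(r_matrix[lower.tri(r_matrix)]))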
Within-tooth integration. The strength of integration within teeth was estimated using the absolute r value between measurements of the same tooth. Species-specific patterns of within-tooth integration were identified by plotting these r values along the arcade.
Modularity assessment. The potential modular structure of the dentition was analysed by hierarchical UPGMA clustering of teeth using a dissimilarity measure of 1 − r_M. Potential modules were expected to be identified by clusters. We additionally assumed that tooth classes could be modules, as expected for a mammal's dentition 75,76. All hypothesised modules (whether identified or assumed) were next tested using the covariance ratio (CR) 77 and Escoufier's 78 RV coefficient 79. Statistical significance of these coefficients was assessed using 9999 iterations of the permutation procedure as described in ref. 77 (CR) and ref. 79 (RV). Both coefficients were also used to estimate the strength of modularity. The RV coefficient ranges from zero (perfect modularity) to one (no modularity) 79. The CR coefficient ranges from zero to positive values: CR values between zero and one imply a modular structure, with low values corresponding to relatively more modularity and higher values corresponding to relatively less modularity; CR values higher than one imply no modularity 77. The CR coefficient is unaffected by the sample size or the number of intercorrelated measurements 77, whereas the RV coefficient has been shown to be sensitive to both 77,80,81. Despite this bias, we used the RV coefficient because it has commonly been applied to quantify morphological modularity, and to check whether both coefficients converge on similar results. Assessment of modularity was supplemented by observations of the shape of adjacent teeth within and between hypothesised modules, assuming that teeth are similar in form within a module and different between modules.

Occlusion evaluation. The relative degree of dental occlusion among species was qualitatively evaluated using four criteria: the number of teeth lacking occlusal contact with opposing teeth, the number of wear facets on the crowns, the size of these facets relative to the size of the crown, and the size of spaces between adjacent teeth of the same arcade. These criteria were interpreted such that fewer non-contacting teeth, more and larger wear facets and smaller interdental spaces indicated relatively more occlusion, whereas more non-contacting teeth, fewer and smaller wear facets and larger interdental spaces indicated relatively less occlusion.
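The clustering and modularity steps can likewise be sketched in R. The tooth-to-column mapping and the two-module split below are assumed examples; the CR coefficient, which additionally excludes the diagonals of the within-module covariance blocks, is computed analogously (see ref. 77 for the exact definition):

# UPGMA clustering of teeth on 1 - r_M, where r_M is the mean absolute r
# between the L and W columns of two teeth (two columns per tooth assumed).
tooth_cols <- split(seq_len(ncol(r_matrix)),
                    rep(seq_len(ncol(r_matrix) / 2), each = 2))
n  <- length(tooth_cols)
rM <- matrix(1, n, n)
for (i in seq_len(n))
  for (j in seq_len(n))
    rM[i, j] <- mean(abs(r_matrix[tooth_cols[[i]], tooth_cols[[j]]]))
tree <- hclust(as.dist(1 - rM), method = "average")  # "average" = UPGMA

# Escoufier's RV for a hypothesised two-module partition of the measurements;
# 'mod' is an arbitrary example split, not one of the study's hypotheses.
RV <- function(S, mod) {
  S12 <- S[mod, !mod, drop = FALSE]
  sum(S12^2) / sqrt(sum(S[mod, mod]^2) * sum(S[!mod, !mod]^2))
}
S   <- cov(teeth)
mod <- seq_len(ncol(S)) <= 6   # e.g. the first three teeth vs the rest

# Permutation test: reshuffle module labels; low RV values indicate modularity.
obs     <- RV(S, mod)
null    <- replicate(9999, RV(S, sample(mod)))
p_value <- mean(c(null, obs) <= obs)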
Results
Integration.

Whole-dentition integration. Pairwise r values among measurements were in most cases higher in both otariid species than in either phocid species, with Eumetopias jubatus generally showing the highest values and Histriophoca fasciata the lowest (Figs 2-5). Consistent with this observation, as expected, were the values of the integration indices SD_rel(λ) and I_r, which were, respectively, 0.776 and 0.767 for Eumetopias jubatus, 0.660 and 0.643 for Callorhinus ursinus, 0.549 and 0.535 for Phoca largha, and 0.510 and 0.500 for Histriophoca fasciata. These results indicated the strongest overall integration in Eumetopias jubatus, followed in descending order by those in Callorhinus ursinus, Phoca largha and Histriophoca fasciata.
Among-tooth integration. Measurements of teeth that occluded with each other tended to be more strongly intercorrelated than those of upper vs lower teeth that did not occlude in each of the four species evaluated (Figs 2c, 3c, 4c and 5c; P < 0.018, Mann-Whitney U-tests), indicating stronger integration between occluding teeth compared to that between non-occluding ones. Furthermore, as predicted by the rule of neighbourhood, measurements of adjacent teeth of an arcade tended to be more strongly intercorrelated than those of more distant teeth of that arcade in each of the four species (Figs 2a,b, 3a,b, 4a,b and 5a,b; P < 0.033, Mann-Whitney U-tests), which indicated a tendency for stronger integration between adjacent teeth of the same arcade compared to that between non-adjacent ones. However, contrary to the rule of proximal parts, measurements of more mesial teeth of both arcades tended not to be more strongly intercorrelated than those of more distal teeth of both arcades in each of the four species (Figs 2-5; P = 0.13-0.71, tests for the significance of correlation between the r coefficient and the position of the tooth pair using Student's t-distribution), which indicated that integration did not tend to be stronger between more mesial teeth compared to that between more distal teeth.
Measurements of C¹ and C₁ were more strongly intercorrelated than those of any other teeth in all four species evaluated, and especially in both otariid species (Figs 2-6), which indicated the strongest integration between the canines. Canine measurements were most strongly correlated with those of I³ in all of the four species, and especially in both otariid species (Figs 2-6), indicating strong integration among C¹, C₁ and I³. Measurements of postcanines that corresponded in position to the carnassials in fissipeds (P⁴ and M₁) were relatively weakly intercorrelated in all the four species (Figs 2c, 3c, 4c, 5c and 6), indicating a relatively weak integration between these teeth. The most distal upper postcanines of both otariid species (M¹ of Eumetopias jubatus and M² of Callorhinus ursinus) were positioned separately from all other teeth in the respective dendrograms resulting from cluster analysis (Fig. 6a,b), and their measurements tended to be most weakly correlated with those of other teeth (Figs 2a,c and 3a,c; P < 0.0001, Mann-Whitney U-tests), indicating the weakest integration with other teeth of the dentition. In contrast, the most distal upper postcanine of either phocid species (M¹) was not positioned separately from all other teeth in the respective dendrograms resulting from cluster analysis (Fig. 6c,d), and its measurements were relatively strongly correlated with those of other teeth (Figs 4a,c and 5a,c; Mann-Whitney U-tests did not reject the null hypothesis that M¹ measurements were not most weakly correlated with those of other teeth, with P = 0.89 for Phoca largha and P = 0.93 for Histriophoca fasciata), indicating a relatively strong integration of M¹ with other teeth of the dentition.
Within-tooth integration. A comparison of r values between measurements of the same tooth along the upper and lower arcades of each evaluated species revealed patterns of within-tooth integration. These patterns were more similar between both otariid species than between both phocid species and differed between the otariid and phocid species (Fig. 7). The canines were the most strongly internally integrated teeth of their arcades in all of the four species evaluated, except for the Histriophoca fasciata lower arcade, where P₁ was more strongly integrated internally than C₁ (Fig. 7). The internal integration of C¹ was stronger than that of C₁ in all of the four species, and both were very strong in both otariid species and weaker in both phocid species (Fig. 7). The P⁴ and M₁ of all the four species, as well as M¹ of Eumetopias jubatus and M² of Callorhinus ursinus, were relatively weakly integrated internally, whereas M¹ in both phocid species was relatively strongly integrated internally (Fig. 7).
Modularity.
Cluster analyses identified a potential module composed of C¹, C₁ and I³ in all four species evaluated but did not reveal a distinct modular structure in the whole dentition of any of these species (Fig. 6). In turn, analyses of the CR and RV coefficients (both coefficients mostly provided congruent results) supported a modular nature of the canine-I³ complex in both phocid species and, to a lesser extent, in Callorhinus ursinus, but not in Eumetopias jubatus (Table 1). In addition, contrary to the cluster analyses, results of the CR and RV analyses generally implied a modular structure with tooth classes as modules in both phocid species and, to a lesser extent, in both otariid species, although all CR and most RV values were high (closer to one than to zero), which indicated that the modular structure was weak (Table 1). All CR and most RV values for comparisons of the molars with either the premolars only or the premolars combined with the canines and the incisors were higher for both phocid species than for either otariid species, indicating the lesser distinctiveness of the molars from the rest of the dentition in these phocid species (Table 1). The CR and RV values for other comparisons between groups of teeth were in most cases lowest in Histriophoca fasciata, followed in ascending order by those in Phoca largha, Callorhinus ursinus and Eumetopias jubatus (Table 1). This order of species was exactly opposite to that according to increasing SD_rel(λ) and I_r values for whole-dentition integration, indicating a negative relationship between the degrees of modularity and integration.
These results were congruent with and extended by observations that I³ closely resembled C¹ in form in all four species evaluated, and that teeth were serially similar except for relative discontinuities between C¹ and P¹ in all of the species, between C₁ and P₁ in both phocid species, and between P⁴ and M¹ in both otariid species (Fig. 1). These observations indicated that the molars are more distinctive from the premolars in the upper arcade than in the lower one in both otariid species.

Occlusion. A comparison of the degree of dental occlusion showed that overall occlusion was more extensive in both otariid species than in either phocid species, and that it was least pronounced in Histriophoca fasciata, in which spaces between adjacent postcanines of the same arcade were largest relative to postcanine size, and the opposing upper and lower postcanines often did not come into occlusal contact with each other (Fig. 1). Wear facets on postcanine crowns were larger relative to the size of the crown and occurred more often in Eumetopias jubatus than in Callorhinus ursinus, indicating a more extensive occlusion in the former species. These observations indicated the highest degree of dental occlusion in Eumetopias jubatus, followed by those in Callorhinus ursinus, Phoca largha and Histriophoca fasciata, in this descending order, thus matching the order of these species according to weakening whole-dentition integration and increasing modularity.
Regarding the most distal upper postcanines, M¹ of Eumetopias jubatus and M² of Callorhinus ursinus lacked occlusal contact with teeth of the lower arcade (Fig. 1a,b), M¹ of Phoca largha occluded with M₁ (Fig. 1c), and M¹ of Histriophoca fasciata was variable: it occluded with M₁ in some specimens but was deprived of any contact in others (Fig. 1d).
Discussion
This study found that dental integration was positively related to dental occlusion across four representative pinniped species, and that integration was stronger between occluding teeth than between non-occluding ones in each of these species. A comparison with our previous findings on tooth-size variation in the same species 65 shows that dental integration and occlusion are roughly negatively related to dental size variability, with the most integrated and occluding dentition being the least variable (Eumetopias jubatus) and the least integrated and occluding dentition the most variable (Histriophoca fasciata). This concurs with the expectation that the degree of integration is related positively to the degree of occlusion and negatively to the degree of variability, providing a functional rationale for many differences in dental integration and dental size variability among the four species. This also indicates that functional requirements of occlusion significantly contribute to integration in the pinniped dentition despite the fact that both the postcanines and occlusion are considerably simplified in this dentition compared to those in the complex dentition of most other mammals. This conclusion is further supported by our observations from the canines, I³, P⁴, M₁ and the most distal upper postcanines.
The primary role of the canines in mammals is to serve as occlusal guides for the postcanines 82, a function that is a plausible candidate to account for the strong integration observed between and within the canines in the four pinniped species. The strong integration among the canines and I³ and the likely modular nature of the canine-I³ complex found in this study suggest that I³ may also be involved in this function in all of the four species. A positive relationship between the degrees of canine-I³ integration and dental occlusion (both were highest in Eumetopias jubatus and decreased, in descending order, in Callorhinus ursinus, Phoca largha and Histriophoca fasciata) supports this functional interpretation. The strong internal integration of P₁ relative to that of C₁ observed in Histriophoca fasciata and Phoca largha suggests that P₁ might be an additional element of this functional complex in these phocid species, but the outcomes of cluster analysis contradicted this hypothesis by showing that the measurements of P₁ were most strongly correlated with those of P¹, and that the P¹-P₁ cluster was far from the canine-I³ cluster in both phocid species.
Another potential influence on integration of the canines derives from the fact that males of many pinniped species use their canines in combat over territory and females. However, whilst this behaviour holds true for Eumetopias jubatus and Callorhinus ursinus, which mate on land, it does not hold for Phoca largha and Histriophoca fasciata, which mate in the water, where there is no need for the male to defend territory or compete for females by trying to dominate other males 83 . Moreover, we observed no significant differences between the canine r values of males and females in any of the four species, which suggests that male-to-male combat does not substantially affect canine integration. Interestingly, the canines were markedly sexually dimorphic in both otariid species and larger relative to the other teeth than in both phocid species 65 , apparently because of the difference in mating systems 84 .
Unlike fissiped carnassials, which are rather strongly integrated relative to other teeth of the dentition 25,27-31 , their positional counterparts in the four pinniped species (P 4 and M 1 ) were relatively weakly integrated both with each other and within themselves, which is expected from a functional standpoint because these teeth lost their carnassial function early in pinniped evolution 51,85 . Furthermore, the most distal upper postcanines of both otariid species exhibited the weakest integration with other teeth of the dentition and a relatively weak internal integration as well as a considerable size variation 65 . This contrasts with the most distal upper postcanines of both phocid species, which exhibited a relatively strong integration with other teeth of the dentition and a strong internal integration as well as a size variation comparable to that of other teeth of the dentition 65 . This is also expected from a functional standpoint because the most distal upper postcanines of both otariid species lacked occlusal contact with teeth of the lower arcade, whereas the most distal upper postcanines of both phocid species invariably or variably occluded with a tooth of the lower arcade. The situation in these otariids is comparable to that in fissipeds, where the most distal teeth that show no or little occlusion are less integrated and more variable than other teeth [25][26][27]29,31,86,87 .
Our study also revealed evidence showing that developmental factors play an important role in shaping integration in the pinniped dentition. Specifically, a modular structure with tooth classes as modules, albeit weak, was identified. Moreover, our results generally concurred with previous findings regarding the validity of the rules of neighbourhood and proximal parts in the case of both a whole dentition and a series of teeth representing more than one tooth class 19,[25][26][27]29,31,34,45 , indicating that not only complex mammalian dentitions but also secondarily simplified pinniped dentitions generally hold to the rule of neighbourhood but not to the rule of proximal parts. Adherence to a modular structure among tooth classes and to the rule of neighbourhood is expected in a mammal's dentition from a developmental point of view, given that developmental histories can be common within but different between tooth classes (e.g. premolars vs molars, the former having two generations and the latter only one), that teeth are considered developmentally interrelated metameric members of a serially homologous meristic series 75,[88][89][90] , and that adjacent tooth buds or teeth physically contact each other along the dental lamina or arcade during ontogeny and can also otherwise influence each other (e.g. the first molar to develop can determine the size of the successive ones 91,92 ).
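To make the rule of neighbourhood concrete, the sketch below compares mean absolute correlations between adjacent and non-adjacent teeth along a tooth row. The correlation matrix is invented for illustration, and treating only immediately consecutive teeth as "adjacent" is an assumption, not the paper's exact operationalisation.

```python
import numpy as np

# Hypothetical correlation matrix for 5 teeth ordered along the tooth row,
# built so that correlation decays with positional distance (rule of neighbourhood)
n = 5
pos = np.arange(n)
R = 0.8 ** np.abs(pos[:, None] - pos[None, :])   # r = 0.8^|i-j|

adjacent, distant = [], []
for i in range(n):
    for j in range(i + 1, n):
        (adjacent if j - i == 1 else distant).append(abs(R[i, j]))

print(f"mean |r|, adjacent teeth:     {np.mean(adjacent):.3f}")
print(f"mean |r|, non-adjacent teeth: {np.mean(distant):.3f}")
# Under the rule of neighbourhood the first value should exceed the second
```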
A comparison of our results from the four pinniped species (I r = 0.500-0.767) with values of this index calculated from previously reported dental correlation matrices for mammal species with complex dentitions 20,21,23,25,[27][28][29]31,34,35,37,43,44 (I r = 0.291-0.683) shows that dental integration in these pinniped species with simple dental occlusion is stronger than or similar to that in mammal species with refined occlusion. This is surprising when viewed from a solely functional perspective. We propose that both functional factors related to dental occlusion and developmental factors related to modularity have contributed to the strong integration in the pinnipeds in this study. Specifically, modularity was found in this study to be weak and negatively related to integration. Neither is surprising given the reduced heterodonty in the pinnipeds examined and the fact that modules require no or weak intermodular integration to exist, which constrains the overall integration of a structure composed of modules. We hypothesise that high levels of modularity in complex mammalian dentitions 17,[22][23][24]36,40,42 effectively constrain overall integration to moderate levels, whilst the lower levels of modularity revealed in the simplified pinniped dentitions in this study enable higher levels of overall integration. We further hypothesise that the potential high levels of integration enabled by reduced modularity have effectively been achieved in these pinnipeds in response to selective pressure driven by the functional requirements of dental occlusion, which, albeit weak in these pinnipeds, positively influence dental integration.

It has been suggested that evolutionarily conserved developmental programmes for the mammalian dentition underlie integration in the pinniped dentition 45 . Whilst the weak tooth-class modules identified in our study are apparently the remnant of a conserved ancestral mammalian pattern, we propose that the decisive developmental programme is an evolutionary novelty that arose in pinnipeds during the transition from terrestrial to aquatic life, in association with the origin of pierce feeding and the loss of mastication driven by the functional requirements of underwater feeding. The simplification of tooth form and the increased mutual similarity of teeth representing different classes are apparently associated with reduced dental modularity, and together with increased tooth spacing associated with decreased postcanine size 51,66 , they are likely manifestations of adaptation to underwater feeding. Developmental processes that lie behind these changes in early pinnipeds likely converge to some extent with those hypothesised for cetaceans 93 .
The greater disparity in patterns of within-tooth integration between phocid species than between otariid species found in our study suggests a greater diversification of integration patterns in Phocidae than in Otariidae. A comparison of our results from four representative pierce feeding species with correlation data from mandibular postcanines of another pierce feeding species, Pagophilus groenlandicus 45 (I r = 0.587), suggests that high levels of dental integration are common among pierce feeders, and we expect other pinnipeds (both suction feeders and filter feeders 50 ) to show similarly high levels provided that there is a functional factor that drives integration in their dentition. If there is no functional factor, we expect a rather weak integration. Our findings indicate that this factor is dental occlusion in pierce feeders. Exploration of suction and filter feeding pinnipeds is needed to determine whether their dental integration is weak or strong and, in the latter case, to identify the functional factor that drives the integration.
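For readers who wish to reproduce this kind of comparison, a minimal sketch of an overall integration index is given below. It assumes, as is common in morphological-integration studies (the exact definition used here should be checked against the Methods), that I r is the mean of the absolute pairwise correlations among tooth measurements; the data and variable names are illustrative only.

```python
import numpy as np

def integration_index(measurements: np.ndarray) -> float:
    """Mean absolute pairwise correlation among traits.

    measurements: (n_specimens, n_traits) array of tooth measurements.
    Assumes I_r = mean |r| over unique trait pairs, one common
    formulation of overall morphological integration.
    """
    r = np.corrcoef(measurements, rowvar=False)   # trait-by-trait correlations
    off_diag = r[np.triu_indices_from(r, k=1)]    # unique trait pairs
    return float(np.abs(off_diag).mean())

# Hypothetical example: 40 specimens, 6 tooth measurements
rng = np.random.default_rng(0)
base = rng.normal(size=(40, 1))                   # shared size factor
data = base + 0.5 * rng.normal(size=(40, 6))      # correlated traits
print(round(integration_index(data), 3))
```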
Data Availability
Measurement data analysed in this study are available in Supplementary Tables S1-S4.
|
v3-fos-license
|
2017-08-27T13:16:08.988Z
|
2021-07-20T00:00:00.000
|
43415419
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jds-online.org/journal/JDS/article/1118/file/pdf",
"pdf_hash": "a1ea6c1869f0273f58c1b4ce06e79e38a8abfbc6",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43280",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"sha1": "a1ea6c1869f0273f58c1b4ce06e79e38a8abfbc6",
"year": 2021
}
|
pes2o/s2orc
|
Interpretation of Epidemiological Data Using Multiple Correspondence Analysis and Log-linear Models
In this work we present a combined approach to contingency table analysis using correspondence analysis and log-linear models. Several investigators have recognized relations between the aforementioned methodologies in the past. By combining them we may obtain a better understanding of the structure of the data and a more favorable interpretation of the results. As an application we applied both methodologies to an epidemiological database (CARDIO2000) regarding coronary heart disease risk factors.
Introduction
Simple and multiple correspondence analysis has quite a long history as a method for the analysis of categorical data. It started in the middle 1930s and since then correspondence analysis has been reinvented several times. The term correspondence analysis originates from France, probably due to Benzecri and his colleagues (1973). However, correspondence analysis is not very popular outside France because of two main reasons: (a) the language problem, and (b) it is often introduced without any reference to other methods of statistical treatment of categorical data, which have proven their usefulness and flexibility (Van der Heijden, 1989).
A major difference between correspondence analysis and most other techniques for categorical data analysis lies in the use of models. For example, in log-linear analysis a distribution is assumed under which the data are collected, then a model for the data is hypothesized and estimations are made under the assumption that the model is true. Thus, it is possible to make inferences about the population on the basis of the sample data (Greenacre, 1984). In correspondence analysis it is claimed that no underlying distribution has to be assumed and no model has to be hypothesized, but a decomposition of the data is obtained in order to study their "structure". However, conclusions about the data may not be generalized at the population level, as suggested by Greenacre (1984). Several investigators in the past have attempted to bridge the gap between correspondence analysis and model-based approaches, and to understand under what conditions correspondence analysis results are similar to those of the log-multiplicative models (Goodman, 1986).
It is well known that in epidemiological studies the number of investigated variables is usually large. Consequently, the investigation of the significance of the produced k-order interaction terms may delay the computational procedure and could mislead the interpretation of the results.
Aim of the Study
In this work we aimed to analyze epidemiological data using a combination of multiple correspondence analysis and log-linear models. In particular, by the application of multiple correspondence analysis we aimed to reduce the number of tested interaction terms in the final log-linear model. This combination could shorten the computational procedures and lead us to a better understanding of the results from the final log-linear model.
Methods
In the following paragraphs, a general introduction to correspondence analysis as a tool of data analysis will be presented.
Simple and multiple correspondence analysis
Correspondence analysis is a descriptive, exploratory technique designed to analyze simple two-way and multi-way contingency tables containing some measure of correspondence between the rows and columns. These methods were originally developed primarily in France by Jean-Paul Benzecri in the early 1960s and 1970s (Benzecri, 1973), but have only more recently gained increasing popularity in English-speaking countries. The results provide information similar in nature to that produced by factor analysis techniques, and they allow one to explore the structure of the categorical variables included in the table. In a typical correspondence analysis, a cross-tabulation table of frequencies is first standardized, so that the relative frequencies across all cells sum to one. One way to state the goal of a typical analysis is to represent the entries in the table of relative frequencies in terms of the distances between individual rows and/or columns in a low-dimensional space.
Assuming the column values in each row of the table to be coordinates in an m-dimensional space (where m is the number of columns), we could compute the Euclidean distances between the row points in that m-dimensional space. The distances between the points in the m-dimensional space summarize all information about the similarities between the rows. Afterwards, we hypothesize that we could find a lower-dimensional space in which to position the row points in a manner that retains all, or almost all, of the information about the differences between the rows. We could then present all information about the similarities between the rows (i.e., risk factors in epidemiological data) in a simple one-, two-, or m-dimensional graph. While this may not appear to be particularly useful for small tables, we can easily imagine how the presentation and interpretation of very large tables (e.g., differential preference for 10 consumer items among 100 groups of respondents in a consumer survey) could greatly benefit from the simplification that can be achieved via correspondence analysis (e.g., represent the 10 consumer items in a two-dimensional space).
Terminology
Assuming a two-way table, computationally, in simple correspondence analysis we first compute the relative frequencies for the frequency table, so that the sum of all table entries is equal to one (each element is divided by the total). This table now shows how one unit of mass is distributed across the cells. In the terminology of correspondence analysis, the row and column totals of the matrix of relative frequencies are called the row mass and column mass, respectively. The term inertia in correspondence analysis is used by analogy with the definition in applied mathematics of "moment of inertia", which stands for the integral of mass times the squared distance to the centroid. Inertia is defined as the total Pearson chi-square for the two-way table divided by the total sum. If the rows and columns in a table are completely independent of each other, the entries in the table (distribution of mass) can be reproduced from the row and column totals alone, or row and column profiles in the terminology of correspondence analysis. According to the well-known formula for computing the chi-square statistic for two-way tables, the expected frequencies in a table where the columns and rows are independent of each other are equal to the respective column total times the row total, divided by the grand total. Any deviations from the expected values (expected under the hypothesis of complete independence of the row and column variables) will contribute to the overall chi-square statistic. Thus, another way of looking at correspondence analysis is to consider it a method for decomposing the overall chi-square statistic (or inertia = chi-square/N) by identifying a small number of dimensions in which the deviations from the expected values can be represented. This is similar to the goal of factor analysis, where the total variance is decomposed so as to arrive at a lower-dimensional representation of the variables that allows one to reconstruct most of the variance/covariance matrix of the variables.
Since the sums of the frequencies across the columns must be equal to the row totals, and the sums across the rows equal to the column totals, there are in a sense only (number of columns − 1) independent entries in each row, and (number of rows − 1) independent entries in each column of the table (once we know what these entries are, one can fill in the rest based on knowledge of the column and row marginal totals). Thus, the maximum number of eigenvalues that can be extracted from a two-way table is equal to the minimum of the number of columns minus one and the number of rows minus one. If we choose to interpret the maximum number of dimensions that can be extracted, then we can reproduce exactly all the information contained in the table. It is customary to summarize the row and column coordinates in a single plot. However, it is important to remember that in such plots one can only interpret the distances between row points, and the distances between column points, but not the distances between row points and column points.
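As an illustration of the decomposition described above, here is a minimal sketch assuming only NumPy: it computes the inertia of a two-way table and extracts the principal dimensions from the singular value decomposition of the standardized residuals. The table counts are made up for the example.

```python
import numpy as np

# Hypothetical 3x4 contingency table (made-up counts)
N = np.array([[30.0, 12.0,  8.0, 10.0],
              [15.0, 25.0, 14.0,  6.0],
              [ 5.0, 10.0, 22.0, 18.0]])

P = N / N.sum()                       # correspondence matrix (unit total mass)
r = P.sum(axis=1)                     # row masses
c = P.sum(axis=0)                     # column masses
E = np.outer(r, c)                    # expected mass under independence

# Standardized residuals; their squared singular values are the
# principal inertias, and total inertia = chi-square / grand total
S = (P - E) / np.sqrt(E)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

total_inertia = (S ** 2).sum()
max_dims = min(N.shape[0] - 1, N.shape[1] - 1)   # maximum number of dimensions
print("total inertia:", round(total_inertia, 4))
print("principal inertias:", np.round(sv[:max_dims] ** 2, 4))

# Principal row coordinates (the points plotted on a correspondence map)
row_coords = (U[:, :max_dims] * sv[:max_dims]) / np.sqrt(r)[:, None]
print(row_coords)
```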
Multiple correspondence analysis
Multiple correspondence analysis (MCA) may be considered an extension of simple correspondence analysis, presented above, to more than two variables. In other words, MCA is a simple correspondence analysis carried out on an indicator (or design) matrix with cases as rows and categories of variables as columns. Actually, in an MCA one usually analyzes the inner product of such a matrix, called the Burt table. The Burt table is the result of the inner product of a design or indicator matrix, and the multiple correspondence analysis results are identical to the results one would obtain for the column points from a simple correspondence analysis of the indicator or design matrix.
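The construction of the indicator matrix and the Burt table can be sketched in a few lines. This is a minimal illustration with made-up binary risk-factor data, not the CARDIO2000 coding.

```python
import pandas as pd

# Hypothetical cases x categorical variables (e.g., risk factors)
df = pd.DataFrame({
    "smoking":      ["yes", "no", "yes", "no", "yes"],
    "hypertension": ["no",  "no", "yes", "yes", "yes"],
})

# Indicator (design) matrix Z: one 0/1 column per category
Z = pd.get_dummies(df).astype(float)

# Burt table: inner product of the indicator matrix with itself;
# it stacks all two-way cross-tabulations of the variables
burt = Z.T @ Z
print(burt)
# MCA = simple CA of Z (column points) or, equivalently, of the Burt table
```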
Finally, it should be noted that correspondence analysis is an exploratory technique. Actually, the method was developed based on a philosophical orientation that emphasizes the development of models that fit the data, rather than the rejection of hypotheses based on lack of fit (Benzecri's "second principle" states that "the model must fit the data, not vice versa"; Greenacre, 1984). Therefore, there are no statistical significance tests that are customarily applied to the results of a correspondence analysis; the primary purpose of the technique is to produce a simplified (low-dimensional) representation of the information in a large frequency table (or tables with similar measures of correspondence).
Log-linear analysis
As is well known, log-linear analysis is a method for studying structural relationships between variables in a contingency table. In the two-way case the unrestricted log-linear model has the form

log π ij = u + u 1(i) + u 2(j) + u 12(ij) ,

where π ij denotes the probability for cell (i, j) and the {u} parameters have to be constrained to identify the model. However, the interpretation of individual {u} parameters is sometimes difficult, especially if their number is very large, which may be the case when the number of categories is large and when there are higher-order interactions that cannot be neglected (Van Der Heijden, 1989). This gap in the analysis we aimed to cover by the application of multiple correspondence analysis.
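A minimal sketch of fitting such a model is shown below, assuming the statsmodels package: a saturated two-way log-linear model is fitted as a Poisson regression on cell counts. The data frame and variable names are illustrative, not the CARDIO2000 variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 2x2 table of cell counts, in long format
cells = pd.DataFrame({
    "group":   ["case", "case", "control", "control"],
    "exposed": ["yes",  "no",   "yes",     "no"],
    "count":   [120,     80,     60,        140],
})

# Saturated log-linear model: main effects plus the interaction u_12(ij)
model = smf.glm("count ~ C(group) * C(exposed)",
                data=cells,
                family=sm.families.Poisson()).fit()
print(model.summary())

# In a 2x2 table the interaction coefficient is the log odds ratio;
# exponentiating it gives the odds ratio
print(model.params)
```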
Relationships between correspondence analysis and log-linear models
It is well known that one way to overcome the problem of interpreting a large number of log-linear parameters is to restrict the interaction parameters in some form or another, i.e., to have a product form (interaction term). Andersen (1980) had already done this in the row-column model in the early 1980s. Thus, when the number of categories is large, the number of parameters to be interpreted can be substantially reduced by the use of correspondence analysis, which is closely related to row-column models (Andersen, 1980); in correspondence analysis the interaction is decomposed approximately in a log-multiplicative way, and the graphical correspondence analysis shows approximations of log-multiplicative parameters. All statistical calculations were performed in the STATISTICA 1999 software.
Study's population
The CARDIO2000 project is a multicentre case-control study that investigates the association of several demographic, nutritional, lifestyle and medical risk factors with the risk of developing non-fatal acute coronary syndromes (Panagiotakos, 2001). From January 2000 to August 2001, 848 of the individuals who had entered hospital for a first event of coronary heart disease were randomly selected by the study's coordinating group. After the selection of the cardiac patients, 1078 cardiovascular disease free subjects (controls in epidemiological terminology) were randomly selected and matched to the patients by age (±3 years), sex, and region. The number of participants was decided through power analysis, in order to evaluate differences in the coronary relative risk greater than 7% (statistical power > 0.80, significance level < 0.05). In order to reduce the unbalanced distribution of several measured or unmeasured confounders, both patients and controls were randomly selected. A sequence of random numbers (1, 0) was applied to the hospitals' admission listings. Thus, the coronary patients assigned the number 1 were included in the study and interviewed (i.e., approximately one half of the cardiac patients who visited each cardiology clinic). The same procedure was applied for the controls, after taking into account the matching criteria.
Data analysis
Based on Table 1 we created the Burt table (see Appendix). The application of multiple correspondence analysis showed that the total inertia explained is equal to 1.500 (percent of inertia: 12% is due to the first axis and 11% to the second axis). A visualization of the results is presented in Figure 1. As we can see, the profiles of cardiac patients (group 1) and controls (group 0) are quite different, as expected. In particular, the presence of hypercholesterolemia (hchol 1), hypertension (htn 1), diabetes mellitus (dm 1), depression (depre 1), smoking (smoki 1), male sex (sex 1), low education (educ 1), and physical inactivity (exerc 0) seems to characterize the patient group (group 1), since the distances in the factorial design are smaller than for the other variables. On the other hand, subjects in the disease-free group (group 0) are characterized by the absence of hypercholesterolemia (hchol 0), hypertension (htn 0), diabetes mellitus (dm 0), and depression (depre 0), as well as the presence of middle to higher education (educ 2, educ 3). According to the contributions of the investigated parameters to the principal axes, we can see (see Appendix) that the first dimension includes, beyond the study group, the classical cardiovascular risk factors (i.e., smoking habit, hypertension, hypercholesterolemia, diabetes mellitus) as well as an emerging risk factor (i.e., presence of depression), while the second dimension includes physical activity and educational level, which seem to be secondary risk factors for the development of the disease in the investigated group. Moreover, the parametric association model used in this work is the multinomial logit. The analysis showed that this model fits the data well, since the chi-square for the likelihood ratio was found equal to 197.34 (d.f. = 183) and the significance is well above 5% (Type-I error = 0.220). In Table 2 we present selected results from the applied log-linear analysis.
As we can see, hypercholesterolemia triples the risk (odds ratio = e^(log-odds)) of developing coronary heart disease (log-odds = 1.2, 95% confidence interval 0.98-1.42), hypertension doubles the risk of developing the disease (log-odds = 0.76, 95% confidence interval 0.52-0.99), while physical activity prevents the development of coronary disease by reducing the relative risk by 22% (log-odds = −0.33, 95% confidence interval (−0.56, −0.10)). However, the introduced model explains only 16% of the total dispersion (source of dispersion due to model / total = 181.47/1138.49). The previous results were also confirmed by the application of multiple correspondence analysis mentioned above (Figure 1).
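To make the conversion explicit, the sketch below exponentiates the reported log-odds and their confidence limits onto the odds-ratio scale (plain Python; the values are copied from the text above).

```python
import math

# (log-odds, lower CI, upper CI) as reported above
estimates = {
    "hypercholesterolemia": (1.20, 0.98, 1.42),
    "hypertension":         (0.76, 0.52, 0.99),
    "physical activity":    (-0.33, -0.56, -0.10),
}

for factor, (b, lo, hi) in estimates.items():
    # odds ratio = e^(log-odds); CI limits transform the same way
    print(f"{factor}: OR = {math.exp(b):.2f} "
          f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```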
Discussion
In this work we presented a combined analysis of categorical data, using multiple correspondence analysis and log-linear models.
It is widely accepted that by the application of multiple correspondence analysis we can visualize the associations between the investigated (exposure) parameters and the disease. Therefore, applying correspondence analysis we can reduce the interaction parameters that are necessary for the classical log-linear models. Beyond the better understanding of the structure of the data, the computational time may be significantly reduced. Moreover, the graphical interpretation of the data, which shows approximations of log-multiplicative parameters, could be a useful tool in exploratory epidemiological research, especially in the investigation, and potentially the reduction, of the level of the associations between the investigated parameters (interactions). Finally, interpreting the results from a public health perspective, epidemiologists could find inherent associations between the investigated variables and, consequently, design their policies in a more efficacious way. For example, in our data we can see that:
• depression is closely related to the presence of hypercholesterolemia and the development of the disease.
These associations, and others that can be found by viewing the data, should be taken into account, as interaction terms, for the fitting of log-linear models. This will enhance the analytical procedure and the interpretation of the data.
Although it is suggested that association models (i.e., log-linear) and correspondence analysis are highly related (Benzecri, 1973; Van Der Heijden, 1989; Blasius, 1994; Greenacre, 1994), the weakness of correspondence-analysis inference at the population level limits the findings to the observed data.
Table 1. Risk factors' distribution of the patients and controls, by gender.
Table 2. Selected results from the log-linear analysis; analysis of dispersion.
|
v3-fos-license
|
2024-05-22T15:04:04.677Z
|
2024-05-18T00:00:00.000
|
269934711
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2079-6412/14/5/639/pdf?version=1716011516",
"pdf_hash": "ce4feaba7657d947a9ca36f7516185db76df3d8c",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43284",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "54d104fdd2cd7e70d59cf3db91843fe9825008f6",
"year": 2024
}
|
pes2o/s2orc
|
Competitive Mechanism of Alloying Elements on the Physical Properties of Al 10 Ti 15 Ni x1 Cr x2 Co x3 Alloys through Single-Element and Multi-Element Analysis Methods
Altering the content of an alloying element in alloy materials will inevitably affect the content of other elements, yet this effect is frequently disregarded, leading to subsequent neglect of the common influence on the physical properties of alloys. Therefore, the correlation between alloying elements and physical properties has not been adequately addressed in existing studies. In response to this problem, the present study focuses on the Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys and investigates the competitive interplay among the Ni, Cr, and Co elements in the formation of physical properties through a single-element (SE) analysis and a multi-element (ME) analysis based on first principles calculations and partial least squares (PLS) regression. The values of C 11 and C 44 generally increase with the incorporation of Ni or Cr content in light of the SE analysis, which is contrary to the inclination of the ME analysis in predicting the impact of the Ni and Cr elements, and the Ni element demonstrates a pronounced negative competitive ability. The overall competitive relationship among the three alloying elements suggests that increasing the content of Ni and Cr does not contribute to enhancing the elastic constants of the alloys, and the phenomenon is also observed in the analysis of the elastic moduli. The reason is that the SE analysis fails to account for the aforementioned common influence of multiple alloying elements on the physical properties of alloys. Therefore, the integration of SE analysis and ME analysis is more advantageous in elucidating the hidden competitive mechanism among multiple alloying elements, offering a more robust theoretical framework for the design of alloy materials.
Introduction
Alloy materials, encompassing a diverse array of alloy components, often exhibit exceptional physical properties, e.g., high strength [1], good ductility [2], outstanding thermal stability [3], superior corrosion and wear resistance [4,5], and excellent electrical and thermal conductivity [6,7]. The modulation of alloying element content can effectively control these characteristics, as alterations in alloy composition trigger a cascade of intrinsic property transformations within the alloy materials. For instance, Al, Ta and Nb elements induce phase transformation [8][9][10], Si and B elements refine grain structure [11,12], C, Mo and Ti elements promote nanotwin generation [13][14][15], Cu, Cr, and S elements alleviate or accelerate component segregation [16][17][18], and Zn and W elements enhance solid solution strengthening [19,20]. It can be seen that the internal structural characteristics of alloy materials can be directly or indirectly influenced by adjusting the content of alloying elements, thereby impacting the physical properties of alloy materials. Therefore, relevant exploration is warranted to investigate the potential regular effects of varying alloying element content on the physical properties of alloy materials.
To this end, researchers have consistently pursued the exploration of alloy materials with excellent physical properties by effectively controlling the content of alloying elements, and then uncovering the influence trends. Zhang et al. [21] incorporated the Ag element into the Al-33Zn-2Cu alloy and observed a linear increase in both yield strength and tensile strength with increasing Ag content. Ye et al. [22] observed that the hardness of CuCoFeNiTi x high-entropy alloy (HEA) gradually increased with the increase in Ti content, while the ductility gradually decreased. Nguyen et al. [23] found that the phase structure of Al x FeMnNiCrCu 0.5 HEA exhibits multiple transformations with increasing Al content, leading to parabolic fluctuations in the tensile properties of the alloy. Luo et al. [24] calculated that the Young's modulus, bulk modulus and hardness of Fe-Mn-Al alloy showed an overall decreasing trend with the increase in Mn content. Liu et al. [25] simulated that increasing the Mn content in CrFeCoNiMn x (0 ≤ x ≤ 3) HEAs improves the fracture energy required for their crystal cell structure. Meanwhile, researchers have controlled the content of several alloying elements simultaneously and conducted corresponding investigations. The content of Mn, C, and Al elements in Fe-Mn-Al-C low-density steels was simultaneously increased by Wang et al. [26]. The results demonstrated that the yield strength declines with the increase in Mn content, and rises with higher Al and C contents. The study conducted by Li et al. [27] demonstrates that the influences of Cu and W on CoNiCuMoW HEAs are opposite, with an observed enhancement in thermodynamic stability and dislocation energy factor resulting from increased W content. Fan et al. [28] independently studied the effects of Al and Cu contents on the mechanical properties of (FeCrNiCo)Al x Cu y HEAs, observing a significant increase in hardness and yield strength with higher Al content, while noting a substantial reduction in fracture strength with increased Cu element presence. Obviously, the variation in alloy element content in alloy materials has a regular impact on physical properties. Through regularity analysis, it is possible to identify optimal ratios of alloy elements that yield superior physical properties.
However, based on the aforementioned research, it can be observed that whether through manipulation of a single alloying element or multiple alloying elements, the final analysis solely focuses on the impact of altering the content of a single alloy element on the physical properties of alloy materials. The potential effect of altering the content of a single alloy element on other alloy elements, and their collective influence on the physical properties of alloy materials, is disregarded. Therefore, the regulatory strategies of the aforementioned alloying elements have not been adequately investigated. Addressing this issue, the present study employs a combination of first principles calculations and partial least squares (PLS) regression to simultaneously regulate the content of Ni, Cr, and Co elements in Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys, and the physical properties (lattice constant, elastic constants, elastic moduli, Vickers hardness, and yield strength) are calculated and discussed. Subsequently, the differences between single-element (SE) analysis and multi-element (ME) analysis are explored for the same alloying element, revealing the competition mechanism between alloying elements and providing more reliable theoretical guidance for further experimental preparation.
Materials and Computational Methods
The exact muffin-tin orbitals (EMTO) and coherent potential approximation (CPA) methods based on density functional theory were employed to implement the first principles calculations of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys [29][30][31]. The full charge density technique is chosen to calculate the total energy [32]. The Kohn-Sham equations [33,34] are used to solve the single-electron equations of the optimized overlapping muffin-tin potential spheres. To represent the exchange-correlation function, the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) is utilized [35]. The paramagnetic state is characterized using the disordered local moment model [36], while convergence accuracy in the Brillouin zone is ensured by setting 25 × 25 × 25 inequivalent k-points for the integration calculations. Meanwhile, the EMTO basis set optimizes the convergence of the s, p, d, and f orbitals [37], and the electrostatic correction of the single-site CPA method is implemented using the screened impurity model, employing a screening parameter of 0.7 [38]. As a measure to ensure the accuracy of the calculated results, we solve for the Green's function at 16 complex energy points located on the Fermi surface [39]. The predicted energy-volume data are fitted using a Morse-type function, from which the equation of state is derived [40]. Consequently, the results enable us to determine the equilibrium volume and lattice constant of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys.
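To illustrate the equation-of-state step, here is a minimal sketch with SciPy assuming one common Morse-type parameterization, E(w) = a + b·e^(−λw) + c·e^(−2λw) in a Wigner-Seitz-like radius w; the energy data are synthetic, and the authors' exact functional form may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def morse_eos(w, a, b, lam, c):
    # One common Morse-type form: E(w) = a + b*exp(-lam*w) + c*exp(-2*lam*w)
    return a + b * np.exp(-lam * w) + c * np.exp(-2.0 * lam * w)

# Synthetic energy-radius data generated from known parameters (illustration only)
true = (-0.70, -2.0, 1.5, 66.7)
w = np.linspace(2.5, 3.1, 7)                      # radii (arbitrary units)
rng = np.random.default_rng(1)
E = morse_eos(w, *true) + rng.normal(0, 1e-5, w.size)

popt, _ = curve_fit(morse_eos, w, E, p0=(-0.7, -1.8, 1.4, 60.0))

# Equilibrium radius: minimum of the fitted curve, exp(-lam*w_eq) = -b/(2c)
a, b, lam, c = popt
w_eq = -np.log(-b / (2.0 * c)) / lam
print(f"equilibrium radius ~ {w_eq:.3f}")         # close to 2.80 for the true parameters
```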
Subsequently, the PLS regression [41] is utilized to investigate the disparity in the impact of one alloying element between the SE and ME analyses according to the results of the first principles calculations. The PLS regression is a sophisticated statistical method that integrates various analytical techniques such as multiple linear regression analysis [42], principal component analysis [43], canonical correlation analysis [44], and others. It adeptly tackles challenges related to multicollinearity, high-dimensional variables, and limited sample sizes [45]. For this study, the contents of the Al and Ti elements in Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys are fixed at 10 at% and 15 at%, respectively, and the Ni, Cr, and Co contents are control variables in the range of 0-75 at%. Clearly, the modification of one element among Ni, Cr, and Co inevitably results in an alteration in the content of the other elements. Therefore, a main control (MC) element is established with a 15 at% content increment for each calculation step, while the other two elements serve as slave control (SC) variables with equivalent content values. For instance, for the red dotted line with circular markers shown in Figure 1, Cr is the MC element, with its content C Cr gradually increasing from 0 at% to 75 at% in increments of 15 at%, while Ni and Co are the SC elements and the corresponding content values are determined by C Cr as C Ni = C Co = (75 at% − C Cr )/2. The content variations in the Ni, Cr, and Co elements for the different MC elements are listed in Table 1. Obviously, the analyzed samples are small and characterized by multiple independent variables that exhibit correlation within each sample. Therefore, the numerical analysis problem addressed in this paper is well suited for employing the PLS method to establish regression models, enabling an insightful examination of the influence of various alloying elements on the intrinsic properties of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys.
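As a sketch of this workflow, the snippet below fits a one-component PLS regression to hypothetical composition-property data with scikit-learn and reads off standardized coefficients. The content scheme mimics the MC/SC design described above, but the compositions and property values are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical design: rows are alloys, columns are Ni, Cr, Co contents (at%)
# following the MC/SC scheme (one element stepped, the other two kept equal)
X = np.array([[0, 37.5, 37.5], [15, 30, 30], [30, 22.5, 22.5],
              [45, 15, 15],    [60, 7.5, 7.5], [75, 0, 0],
              [37.5, 0, 37.5], [30, 15, 30],  [22.5, 30, 22.5]])
y = np.array([3.61, 3.60, 3.59, 3.58, 3.57, 3.56, 3.60, 3.60, 3.59])  # invented a0

# Standardize so the regression coefficients are comparable across elements
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

pls = PLSRegression(n_components=1)   # one component, as chosen by cross-validation
pls.fit(Xs, ys)

# Standardized coefficients: sign gives the influence direction,
# magnitude gives the relative competitiveness of each element
for name, b in zip(["Ni", "Cr", "Co"], pls.coef_.ravel()):
    print(f"{name}: {b:+.3f}")
```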
Results and Discussions
To compare the disparities between the SE and ME analyses for the same alloying element, the lattice constant a 0 of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys is initially determined by the EMTO-CPA method. Subsequently, the relationships between the lattice constant a 0 and the content of the Ni, Cr, and Co elements are individually obtained, as shown in Figure 1. The three curves in the figure each correspond to one MC element, namely Ni, Cr, or Co, while the corresponding SC elements are not displayed. Clearly, the a 0 value shows an almost linear increase with the rise in Cr content, and gradually decreases with the augmentation of Ni or Co content. The results demonstrate a positive correlation between the lattice constant a 0 and the Cr content, while exhibiting a negative correlation with the Ni and Co contents. The observed trend can be attributed to the relatively larger atomic radius of the Cr element in comparison to the relatively smaller atomic radii of the Ni and Co elements. At the same time, it can be seen that the value of a 0 presents a more pronounced decline with increasing Co content compared to the increase observed with Ni content, indicating that the presence of the Co element has a stronger negative influence on the lattice constant of the alloys. In general, the Cr element exhibits a pronounced positive influence on the lattice constant of the alloys, whereas the Ni and Co elements exert an opposing effect, with Co demonstrating a more substantial negative impact.
Obviously, the analysis of each curve is exclusively focused on the impact of varying the content of one MC element, disregarding any corresponding changes in the SC elements and thus failing to account for their combined effect on the lattice constant of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys. Meanwhile, it is evident from Table 1 that the contents of the SC elements vary significantly in accordance with the content of the MC element. Therefore, it is imperative to concurrently consider the combined influence of variation in the content of the three alloying elements Ni, Cr, and Co on the lattice constant of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys. Herein, the PLS regression is employed to explore deeper layers of information to fulfill this objective [41]. The optimal number of principal components is initially determined as 1 through cross-validation analysis based on the data presented in Figure 1 and Table 1. Subsequently, regression analysis is conducted, and the corresponding findings are summarized in Table 2. In the regression analysis, the contents of the Ni, Cr, and Co elements are set as the independent variables, and the value of the lattice constant a 0 is regarded as the dependent variable. The standardized regression coefficient serves as a metric for assessing the relative influence of an independent variable on the dependent variable; the larger the absolute value of the coefficient, the more significant its impact. The projected importance index quantifies the explanatory ability of an independent variable for the dependent variable, with a higher value indicating a stronger ability to explain. The R 2 value, in addition, serves as an indicator of the goodness of fit of the PLS regression model, with a higher value indicating a stronger fit. In light of the analysis results, the standardized regression coefficients of the Ni, Cr and Co elements are −0.15, 0.633 and −0.484, respectively. Hence, the standardized regression relationship between a 0 and C Ni , C Cr , C Co can be formulated as a 0 = −0.15 C Ni + 0.633 C Cr − 0.484 C Co . The result indicates that the Cr element exerts the most significant positive influence on the formation of the lattice constant a 0 in the Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys, whereas the Ni and Co elements exhibit an opposing effect, with the Co element displaying a larger magnitude of negative impact. Meanwhile, Figure 2 provides a visual representation illustrating the varying degrees of impact for better understanding. Herein, the absolute value of the regression coefficient quantifies the degree of influence exerted by each alloying element on the lattice constant, aligning consistently with the atomic radii of these three alloying elements, and the positive and negative signs well reflect the influence direction of each element. Meanwhile, the corresponding values of the projected importance indexes are 0.32, 1.353 and 1.033, respectively, showing that the Cr and Co elements contribute significantly to the construction of the regression expression, whereas the influence of the Ni element is comparatively minor. However, on the whole, the regression expression exhibits a robust fit for the independent variables C Ni , C Cr , C Co and the dependent variable a 0 , as indicated by an impressive R 2 value of 98.6%. Consequently, it follows that the influence trend of the same alloying element on the lattice constant shows no significant difference between the SE and ME analyses.
increase in Ni, Cr, or Co content.The influence of Co is the most pronounced among them, while the influences of Cr and Ni decrease sequentially, with a particular flattening out observed in higher Ni content, as shown in Figure 3a,c.However, the fluctuation of elastic constant 12 C exhibits a higher degree of complexity, as illus- trated in Figure 3b.When the content of MC element is low, there is a rapid decline in the value of At the same time, the competitive relationships between Ni, Cr, and Co elements are further elucidated through the implementation of ME analysis based on the PLS regression, as listed in Table 3.According to the calculated results, the standardized regression relationships between the dependent variables C 11 , C 12 , C 44 and the independent variables C Ni , C Cr , C Co could be formulated as follows in which, the standardized regression coefficients corresponding to the content of Ni, Cr, and Co elements are utilized as the coefficient values preceding the independent variables, and the coefficients reflect the influence trend of the independent variables on the dependent variable.To intuitively observe the different influence trends, a corresponding histogram is illustrated in Figure 4. Overall, based on the regression analysis results, it can be inferred that the presence of the Co element exerts a significant positive influence on the elastic constants C 11 , C 12 and C 44 , and the result is consistent with the SE analysis in Figure 3.Meanwhile, the Ni and Cr elements demonstrate a negative effect on the elastic constants in general; the adverse impact of the Ni element is particularly significant, and that of the Cr element is small.Therefore, a noteworthy phenomenon can be observed when comparing the outcomes depicted in Figures 3 and 4. 
It can be seen that the augmentation of Ni and Cr content positively impacts the values of C 11 and C 44 in the SE analysis, while the ME analysis reveals a negative promoting effect of these two alloying elements. The analysis of the curves in Figure 3 shows that the positive promoting effect of the Ni and Cr elements is not superior to that of the Co element, and the influences of the Cr and Ni elements exhibit a gradual decrease in succession. Consequently, the subsequent regression analysis reveals the latent competitive relationship among the various alloying elements concerning the elastic constants, as shown in Figure 4. In Table 3, the projected importance indexes represent the explanatory power of the independent variables for their corresponding dependent variables, as previously mentioned. The corresponding R 2 values are 72.5%, 75.3% and 76.1%, respectively, indicating that the fitting degree of the regression equations is relatively good; the intricate numerical relationships between the content of alloying elements and the elastic constants in Figure 3 lead to these results, and ignoring the role of the Al and Ti elements may also have some influence. This issue needs further investigation. However, the outcomes of the competition among multiple alloying elements remain valuable as a point of reference.

Elastic moduli, including the bulk modulus B, shear modulus G, and Young's modulus E, are further determined to facilitate a comparative analysis of the impact of alloying elements in the SE and ME analyses. The relationships between the content of the MC elements and the elastic moduli are illustrated in Figure 5. For the bulk modulus B, the corresponding curves display a complex trend of variation as the content of one MC element increases, as shown in Figure 5a. The B value exhibits an initial decrease followed by an increase as the Ni or Cr content increases, while the overall trend rises with the increase in Co content. Therefore, it can be predicted that the promoting effect of the three alloying elements on the bulk modulus B is positive for the Co element, negative for the Ni element, and inconclusive for the Cr element. Meanwhile, the variation in the curves exhibits similar regularities in Figure 5b,c, and the values of G and E gradually rise with the increase in one MC content. The influence trend of the three alloying elements is similar at low content, while the impact of the Co element becomes predominant at high concentrations. Therefore, the SE analysis demonstrates that augmenting the Co content among the three alloying elements is more conducive to improving the elastic moduli of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys.
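For context, the elastic moduli of a cubic crystal follow from C 11 , C 12 and C 44 via the standard Voigt-Reuss-Hill averages. The short sketch below shows this relation with made-up elastic constants; the paper's exact averaging scheme is not stated, so VRH is an assumption here.

```python
# Voigt-Reuss-Hill moduli for a cubic crystal from C11, C12, C44 (GPa).
# Input values are invented for illustration.
def vrh_moduli(c11: float, c12: float, c44: float):
    B = (c11 + 2 * c12) / 3                       # bulk modulus (Voigt = Reuss for cubic)
    Gv = (c11 - c12 + 3 * c44) / 5                # Voigt shear modulus
    Gr = 5 * (c11 - c12) * c44 / (4 * c44 + 3 * (c11 - c12))  # Reuss shear modulus
    G = (Gv + Gr) / 2                             # Hill average
    E = 9 * B * G / (3 * B + G)                   # Young's modulus
    return B, G, E

B, G, E = vrh_moduli(c11=220.0, c12=140.0, c44=110.0)
print(f"B = {B:.1f} GPa, G = {G:.1f} GPa, E = {E:.1f} GPa")
```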
In order to further elucidate the competitive relationship among the three alloying elements regarding the elastic moduli of the alloys, a ME analysis based on the PLS regression is conducted using the data presented in Table 1 and Figure 5, and the corresponding results are summarized in Table 4. As mentioned above, standardized regression relationships between the dependent variables B, G, E and the independent variables C Ni , C Cr , C Co are formulated, where the coefficients associated with C Ni , C Cr , and C Co in the equations correspond to the standardized regression coefficients presented in the table. The absolute values of the coefficients reflect the intensity of competition among the three alloying elements, while the positive and negative signs signify the direction of their respective influences.
At the same time, the bar chart in Figure 6 clearly presents the regression coefficients, facilitating a better understanding of the competitive relationship among the alloying elements. Evidently, the positive coefficients for Cr and Co indicate that increasing their content is advantageous in enhancing the elastic moduli of the alloys, with a more pronounced effect observed for higher Co content due to its larger coefficient value. This result is consistent with the influence trend of the Co element predicted by the SE analysis, and it resolves the uncertain impact of the Cr element in Figure 5. For the Ni element, it can be seen that the coefficient values are all negative, showing that the addition of Ni content is not conducive to improving the elastic moduli of the alloys. The coefficient of −0.427 aligns well with the predicted trend of the Ni element for the bulk modulus B in Figure 5a, and the coefficients of −0.405 and −0.398 indicate a hidden competitive relationship in Figure 5b,c: while the influences of the Ni element on G and E are upward in the SE analysis, they should be downward when considering the combined effect of the three alloying elements in the ME analysis. This outcome is attributed to altering the Ni content in Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys while disregarding its impact on the Cr and Co contents. The heights of the histogram bars directly reflect the competitive relationship among the three alloying elements. It is evident that Co and Ni exhibit strong competitive advantages, albeit in opposite directions, while Cr demonstrates relatively weaker competitiveness, suggesting that increasing the Co content or reducing the Ni content can significantly enhance the elastic moduli of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys. Moreover, the R 2 values reflect the fitting degree of the corresponding regression expressions as 79.4%, 70.6%, and 71.5%, respectively. The degree of fit is not optimal, as it is determined by the intricate interplay of data relationships. However, the results still hold some theoretical reference value in terms of revealing the competitive relationship between various alloying elements and promoting the mechanical properties of alloy materials.
Conclusions
In summary, the competitive mechanisms between the Ni, Cr, and Co elements on the physical properties of Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys are investigated through SE and ME analyses based on first principles calculations and PLS regression. The key findings are outlined as follows:

The increase in Ni or Co content in the Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys leads to a reduction in a 0 , whereas the opposite effect is observed with the addition of the Cr element in light of the SE analysis. Meanwhile, the ME analysis reveals that the Cr element exhibits the most significant competitive advantage among the three alloying elements, with a positive promotion direction, while the Ni and Co elements demonstrate a negative effect, which aligns with the SE analysis.

The SE analysis suggests that augmenting the contents of the Ni, Cr, and Co elements can effectively enhance the values of C 11 and C 44 , while the impact on the elastic constant C 12 remains inconclusive. Further analysis using ME revealed the competitive relationship among the three alloying elements in the formation of the elastic constants, with negative effects observed for the Ni and Cr elements and a positive effect for the Co element. Moreover, both the Ni and Co elements exhibited strong competitive strength, but their competitive directions are opposite.

The SE analysis reveals a positive promotional effect of the Ni, Cr, and Co elements on the elastic moduli G and E. However, a negative influence of the Ni element on the elastic moduli B, G and E is observed through the ME analysis. Additionally, a gradual decline in the level of competitiveness is observed among the elements Co, Ni, and Cr.

Among them, the SE analysis and ME analysis yield some different conclusions. The reason is that the SE analysis solely takes into account the impact of variations in the content of a single alloying element, while disregarding the influence on the content of other alloying elements and subsequently neglecting their collective effect on the physical properties of alloys. Therefore, merely employing the first principles calculations is inadequate for comprehensively investigating the influence of alloying element content on the physical properties of alloy materials; multivariate numerical analysis is more helpful for revealing the hidden interaction mechanisms. The present study provides a novel research concept to elucidate the competitive relationship among alloying elements, thereby offering more reliable theoretical guidance for the development of new alloy materials.
12 Figure 1 .
Figure 1.The relationships between the lattice constant 0 a and the content of different MC ele-
Figure 1 .
Figure 1.The relationships between the lattice constant a 0 and the content of different MC elements in Al 10 Ti 15 Ni x1 Cr x2 Co x3 alloys.
Figure 2 ., 12 C and 44 C 11 C and 44 C 11 C and 44 C 12 C 12 C
Figure 2. Histogram of standardized regression coefficients for Ni, Cr and Co elements with respect to 0 a .
Figure 2 .
Figure 2. Histogram of standardized regression coefficients for Ni, Cr and Co elements with respect to a 0 .To further investigate potential disparities, additional significant physical properties, namely the elastic constants C 11 , C 12 and C 44 , are calculated, and the relationships between the elastic constants and the content of MC elements, Ni, Cr, Co, are depicted in Figure3.Evidently, the influence of increasing the content of a certain MC element on the C 11 and C 44 exhibits similarity.The values of C 11 and C 44 exhibited an overall upward trend with the increase in Ni, Cr, or Co content.The influence of Co is the most pronounced among them, while the influences of Cr and Ni decrease sequentially, with a particular flattening out observed in higher Ni content, as shown in Figure3a,c.However, the fluctuation of elastic constant C 12 exhibits a higher degree of complexity, as illustrated in Figure3b.When the content of MC element is low, there is a rapid decline in the value of C 12 with an increase in Ni or Cr content, while the influence of Co content remains relatively constant.
12 C 11 C and 44 C 12 C
with an increase in Ni or Cr content, while the influence of Co content re- mains relatively constant.Meanwhile, the influence of the Co element is most pronounced when the content of MC element is high, whereas the impacts of Cr and Ni elements are relatively insignificant.To summarize, in the analysis of a single element, the influences of Ni, Cr, and Co on are successively amplified, while their effects on do not exhibit a prominent regularity.
Figure 3. The relationships between the elastic constants and the content of different MC elements in Al10Ti15Nix1Crx2Cox3 alloys: (a) C11, (b) C12, and (c) C44.
Figure 4. Histogram of standardized regression coefficients for Ni, Cr and Co elements with respect to the elastic constants.
Figure 5. The relationships between the elastic moduli and the content of different MC elements in Al10Ti15Nix1Crx2Cox3 alloys: (a) B, (b) G, and (c) E.
As shown in Figure 6, while the influence of the Ni element on G and E is upward in the SE analysis, it is downward when the combined effect of the three alloying elements is considered in the ME analysis; this outcome is attributed to altering the Ni content in the Al10Ti15Nix1Crx2Cox3 alloys while disregarding its impact on the Cr and Co contents. The height of the histogram bars directly reflects the competitive relationship among the three alloying elements. It is evident that Co and Ni exhibit strong competitive advantages, albeit in opposite directions, while Cr demonstrates relatively weaker competitiveness, suggesting that increasing the Co content or reducing the Ni content can significantly enhance the elastic moduli of Al10Ti15Nix1Crx2Cox3 alloys. Moreover, the R2
Figure 6. Histogram of standardized regression coefficients for Ni, Cr and Co elements with respect to the elastic moduli.
In summary, the competitive mechanisms of the Ni, Cr, and Co elements with respect to the physical properties of Al10Ti15Nix1Crx2Cox3 alloys are investigated through SE and ME analyses based on first-principles calculations and PLS regression. The key findings are outlined as follows: According to the SE analysis, an increase in Ni or Co content in the Al10Ti15Nix1Crx2Cox3 alloys leads to a reduction in a0, whereas the opposite effect is observed with the addition of the Cr element. Meanwhile, the ME analysis reveals that the Cr element exhibits the most significant competitive advantage among the three alloying elements, with a positive promotion direction, while the Ni and Co elements demonstrate a negative effect, which aligns with the SE analysis.
Table 1. The variations in Ni, Cr, and Co contents under different MC elements.
Table 2. The PLS regression results between the content of Ni, Cr, and Co elements and the lattice constant a0.
Table 3. The PLS regression results between the content of Ni, Cr, and Co elements and the elastic constants.
Table 4. The PLS regression results between the content of Ni, Cr, and Co elements and the elastic moduli.
Improving the Performance of Patch Antenna by Applying Bandwidth Enhancement Techniques for 5G Applications
In this study, various Rectangular Microstrip Antenna (RMA) designs operating at 28 GHz for 5G communication systems are presented. All designs are generated and analyzed using a 3D electromagnetic simulation program, ANSYS HFSS (High-Frequency Structure Simulator). Single and array type RMA designs are constructed using a non-contact inset-fed feeding technique. Subsequently, the bandwidth of the RMAs is increased by slotting the ground surface and adding a parasitic element to the antenna structure. As a result of these analyses, the bandwidth of the single RMA increases from 2.09 GHz to 3.45 GHz. Moreover, for the 1 × 2 and 1 × 4 array RMAs, very wide bandwidths of 7.53 GHz and 4.53 GHz, respectively, are obtained by applying the bandwidth enhancement techniques. The success of the study is demonstrated by comparing the outputs of the designs with similar experimental and simulation studies published in the literature.
INTRODUCTION
Due to the widespread use of new technology devices in recent years, the growing demand for multimedia applications and wireless data places a significant burden on existing cellular networks. After the 4G mobile network, which has been available worldwide since 2009, the 5th Generation (5G) mobile communication technology is expected to bring a revolutionary development in terms of network coverage, data rate, latency, network reliability and energy efficiency [1]. With the wide-scale deployment of 5G, the mobile network will be required to provide 1000 times higher capacity and 10-100 times faster data transmission rates than the current mobile technology. This is mainly because 5G is expected to provide a reliable communication network and stable connection not just for phones and computers, but also for various types of IoT devices such as self-driving vehicles, robots, cameras or smart home gadgets [1]. Since traditional 4G/LTE networks do not provide bandwidths of gigabits for 5G applications, several new frequency bands between 20 and 70 GHz, also known as millimeter wave bands, were identified in the World Radio Communication Conference 2019 (WRC-19) report [2]. However, operational frequencies around the Ka band, and more specifically 28/38 GHz, are prominent due to their low atmospheric attenuation [3].
Antenna design for 5G devices is crucial to perform communication at the specified millimeter wave frequencies with higher gain, enhanced bandwidth and lower radiation losses [4]. In this sense, microstrip patch antennas emerge as a strong candidate because of their numerous attractive features such as small size, low profile, ease of production and high reliability. In addition, microstrip antennas can tolerate path loss in terms of gain and efficiency at the higher frequencies of 5G technology. However, despite these advantages, one major problem is their narrow bandwidth [4].
To overcome this disadvantage, bandwidth enhancement techniques such as adding a parasitic element, slotting shapes on the patch surface, defected ground structure (DGS), increasing the substrate thickness or using a coupling type of feeding are commonly employed in designs.
The literature shows that various microstrip antennas for 5G have been studied by researchers recently, and in some of them bandwidth enhancement techniques have been applied. For instance, Seyyedehelnaz Ershadi et al. designed a rectangular microstrip antenna with a 4-layer substrate. The use of multiple substrates increased the bandwidth up to 21% at the 28 GHz resonance frequency [5]. In 2017, Saeed Ur Rahman et al. presented a single and a 1×2 array microstrip patch antenna with the quarter wave transformation method. The 1×2 array antenna resonated at 26.5 GHz and 28.8 GHz and provided 28% impedance bandwidth [6]. In the study published by Nanae Yoon and Chulhun Seo (2017), a microstrip patch antenna communicating at 28 GHz was designed. They investigated the effects of single and double U-shaped slits on bandwidth and gain. The study showed that opening slits on the single and 2×2 array rectangular patches increased the bandwidth [7]. In 2018, Kyoseung Keum and Jaehoon Choi simulated a single and a 4×4 rectangular microstrip array antenna with a double U-shaped slot on the patch. The bandwidths were 3.77 GHz and 4.71 GHz for the single and 4×4 array antennas, respectively [8]. Wahaj Abbas Awan et al. designed a microstrip patch antenna operating at 28 GHz for 5G technology in 2019. It was observed that the bandwidth was increased from 1.33 GHz to 1.38 GHz and the return loss was decreased from -46.97 dB to -56.95 dB by using a defected ground structure (DGS) [9]. In 2020, Sharaf et al. proposed a compact dual-frequency (38/60 GHz) microstrip patch antenna for dual-band 5G mobile applications. In the design, two electromagnetically coupled patches were used, and experimental results showed that the achieved impedance bandwidths are about 2 GHz and 3.2 GHz in the 38 GHz and 60 GHz bands, respectively [10]. In the scientific report published by Marasco et al. in 2022, a novel, miniaturized evolved patch antenna design was introduced for flexible and bendable 5G IoT devices, and its radiation properties were enhanced by using a Split Ring Resonator (SRR) in the sub-6 GHz frequency band [11]. In 2022, Ezzulddin et al. fabricated and analyzed rectangular, circular and triangular microstrip patch antennas operating at 28 GHz for 5G applications. Measurements showed that the achieved bandwidths of the rectangular, circular and triangular microstrip patches were 0.904, 0.848 and 0.744 GHz, with gains of 6.44, 6.03 and 5.26 dB, respectively [12].
In the published studies, some of which are summarized above, bandwidth enhancement techniques have been used, different feeding methods and substrate materials have been tested, and single-element or array-shaped microstrip antenna designs have been analyzed. However, the problem is that important parameters such as return loss or antenna gain decrease significantly while the bandwidth is increased.
In this study, using the ANSYS HFSS simulation program, various Rectangular Microstrip Antennas (RMA) operating at 28 GHz for 5G technology are designed and analyzed. To increase the antenna bandwidth, a parasitic patch element is added to the antenna structure and the DGS technique is applied to the ground surface. When the outcomes of these designs are compared with the results of experimental and simulation studies in the literature, it is concluded that the study is quite successful.
THEORETICAL BACKGROUND
Microstrip antennas are widely used due to their advantages such as light weight, small volume and low cost. In its most basic form, an RMA geometry consists of the ground plane, a dielectric layer and a radiating patch, as shown in Fig. 1. The most commonly used patch geometry is the rectangular patch. The width of the radiating patch, W p, its length, L p, the dielectric constant of the substrate (dielectric layer), ε r, and its thickness, h, are shown in Fig. 1. Since thicker materials with low dielectric constant provide better radiation efficiency and wider bandwidth, a low-loss dielectric substrate, Rogers RT/duroid 5880 with dielectric constant ε r = 2.2 and loss tangent tanδ = 0.0009, is chosen.
As a first step, the patch width W p and length L p are calculated by using the operating frequency f r of the antenna and the dielectric constant of the substrate, as given in Eq. (1) [13]:

W p = (c / 2f r) · sqrt(2 / (ε r + 1))  (1)

where c = 1/sqrt(μ 0 ε 0) ≈ 3 × 10^8 m/s, μ 0 = 4π × 10^-7 H/m and ε 0 = 8.854 × 10^-12 F/m are the speed of light, magnetic permeability and dielectric constant in free space, respectively. The microstrip antenna has an inhomogeneous structure due to the patch on the top surface, the ground plane on the bottom surface and the dielectric layer between them. This structure changes the electrical behavior of the line, and so the effective dielectric constant ε reff is given as

ε reff = (ε r + 1)/2 + ((ε r − 1)/2) · [1 + 12h/W p]^(−1/2)  (2)

Due to the fringing field effect, the electrical dimension of the patch is greater than its physical dimension. The increment in length ΔL and the electrical length of the patch L eff are calculated by using Eqs. (3) and (4), respectively [13,14]:

ΔL = 0.412h · (ε reff + 0.3)(W p/h + 0.264) / [(ε reff − 0.258)(W p/h + 0.8)]  (3)

L eff = c / (2f r · sqrt(ε reff))  (4)
Therefore, the actual length of the patch, L p, is

L p = L eff − 2ΔL  (5)

Single-element and array type RMAs are designed and analyzed in this study. Since non-contact inset feeding is preferred as the feeding technique, energy flow is provided indirectly by the contactless 50 Ω feed line. The theoretical calculation of the antenna and microstrip line impedances is explained in detail in Refs. 13 and 14. After the general theoretical calculations, the best gain, bandwidth and return loss values are obtained by impedance matching with the help of the software.
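As a quick cross-check of Eqs. (1)-(5), the short sketch below evaluates the standard design equations for the parameter values stated in this study (f r = 28 GHz, ε r = 2.2, h = 0.508 mm). It is an illustrative starting-point calculation only; the final dimensions in Tab. 1 come from impedance-matching optimization in HFSS, so the printed values will not match the tables exactly.

```python
import math

def patch_dimensions(f_r: float, eps_r: float, h: float):
    """Initial patch width and length from Eqs. (1)-(5); SI units in, metres out."""
    c = 299_792_458.0                                            # speed of light, m/s
    W = (c / (2.0 * f_r)) * math.sqrt(2.0 / (eps_r + 1.0))       # Eq. (1)
    eps_eff = (eps_r + 1.0) / 2.0 + (eps_r - 1.0) / 2.0 \
              * (1.0 + 12.0 * h / W) ** -0.5                     # Eq. (2)
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)
                      / ((eps_eff - 0.258) * (W / h + 0.8)))     # Eq. (3)
    L_eff = c / (2.0 * f_r * math.sqrt(eps_eff))                 # Eq. (4)
    return W, L_eff - 2.0 * dL                                   # Eq. (5)

# Design values used in this study: 28 GHz on Rogers RT/duroid 5880, h = 0.508 mm
W_p, L_p = patch_dimensions(28e9, 2.2, 0.508e-3)
print(f"W_p = {W_p * 1e3:.2f} mm, L_p = {L_p * 1e3:.2f} mm")  # ~4.2 mm x ~3.3 mm
```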
28 GHZ SINGLE RMA DESIGN
RT/duroid 5880 material with dielectric constant ε r = 2.2 and loss tangent tanδ = 0.0009 is used as the substrate for the 28 GHz rectangular microstrip antenna due to its low loss and low dielectric constant. The substrate thickness is chosen as 0.508 mm. Fig. 2 shows the geometry of the RMA with non-contact inset feed. As seen from the figure, W p and L p are the patch width and length; F i is the embedding distance of the feed line; g is the gap between the feed line and the patch; h is the substrate thickness; t is the patch thickness; w 50 and L 50 are the feed line width and length; W g and L g are the ground surface width and length, respectively. Also, the width and length of the substrate and the ground surface are taken as equal. All dimensions of the designed RMA are given in Tab. 1.
The return loss graph of the RMA is shown in Fig. 3. The RMA, radiating at a frequency of 28.075 GHz, has an impedance bandwidth of 1.91 GHz in the 27.05-28.96 GHz frequency range. As a result, the designed RMA has a return loss of -31.93 dB at 28.07 GHz, a bandwidth of 1.91 GHz and a gain of 8.20 dB (Fig. 4). As the next step, bandwidth enhancement techniques such as the defected ground structure and the stacked patch technique are applied to enhance the antenna bandwidth.
Application of Defected Ground Structure (DGS) and Stacked Patch Techniques for Single RMA
In microwave circuits, DGS (Defected Ground Structure) is applied by etching slots on the ground surface. Defects on the ground plane may be in the form of a single cell or a periodic/aperiodic configuration of slots, depending on the application. The well-known advantages of DGS are reducing the component size, improving bandwidth, suppressing mutual coupling or cross-polarization effects, and adjusting the antenna impedance for matching and maximum power transfer.
In this part of the study, DGS is applied to the ground surface of the single RMA with non-contact inset feed by opening a slot in the form of a ring aligned with the center of the patch. The bottom view of the single RMA is shown in Fig. 5(a). The radius of the ring slot is R and its width is w R. All dimensions of the designed RMA are given in Tab. 2. More complex shape imperfections on the ground surface further change the path of the surface currents. To see the effect of the changed current distribution on the ground surface on antenna performance parameters such as bandwidth or gain, ring and C-shaped slots are used together, as shown in Fig. 5(b). The related dimensions of the RMA for this design are given in Tab. 3. Another method to increase the bandwidth of microstrip antennas is to use a stacked patch. In this method, more than one dielectric material is used and a stacked patch is added to the antenna structure. With this design, the microwave radiation is spread and a wider bandwidth can be obtained. To see the effect of the stacked patch technique, two dielectric layers with the same thickness (h1 = h2) and the same dielectric constant (ε r1 = ε r2 = 2.2) are used in the structure. The main patch and the contactless feed line are placed between the two dielectric layers, and the stacked patch is on top of the second dielectric layer, as seen in Fig. 5(c). In addition, to combine the design with DGS, a ring-shaped slot is cut on the ground surface. Dimensions of the design can be found in Tab. 4, where the width and length of the stacked patch are represented by W pp and L pp, respectively. Simulation results of the designed RMAs given in Fig. 4, 5(a), 5(b) and 5(c) are compared and presented in Tab. 5. From the table, it is seen that the deformation of the ground surface of the RMA increases the bandwidth but also decreases the antenna gain significantly. However, by using the stacked patch technique and DGS together, clearly the widest bandwidth is obtained and a significant decrease in gain is prevented. As a next step of the study, bandwidth enhancement techniques are applied to the 1 × 2 and 1 × 4 array antenna designs, as explained in the following sections.
28 GHZ 1×2 ARRAY RMA DESIGN
The maximum gain of the single microstrip patch antenna is around 8 dB. As is well known from antenna design studies, one way to increase antenna directivity and gain is to create an array structure. So, to provide higher gain, an array structure consisting of two RMAs fed by contactless microstrip lines is designed and analyzed. In Fig. 6, the geometry of the array design is shown. The dimensions of the array elements are identical, and the width and length of the substrate and ground surface are equal. The distance (l1) between the center points of the array elements affects the radiation pattern and changes the bandwidth and gain due to mutual coupling. Design analyses show that the widest bandwidth occurs when l1 is chosen around 0.9λ. In addition, another issue in the design is to ensure impedance compatibility between the feed lines and the patches. The impedance of the feeding line is 50 ohms, and energy flow to the patches is obtained by the coupling effect of two equal 100-ohm transmission lines. The embedded distance of the transmission lines into the patch was decided with the help of the simulation program to achieve impedance matching. Dimension parameters for the designed 1×2 array RMA are given in Tab. 6. The analysis results show that the return loss at the 28.08 GHz resonance frequency is around -44.62 dB and the bandwidth is 3.50 GHz. The gain pattern indicates that the 1×2 array RMA has a maximum realized gain of 11.6 dB, which is higher than that of the single-element RMA, as expected. However, since this study aims to increase the antenna bandwidth as well as the gain, DGS and stacked patch techniques are applied to the 1×2 array RMA design, as explained in the next section.
Application of Defected Ground Structure (DGS) and Stacked Patch Techniques for 1×2 Array RMA
It has been observed that the widest bandwidth is obtained when the distance between the centers of the patch elements is selected as 0.9λ for the designed 1×2 array RMA with non-contact inset feed. In addition, ring and rod-shaped slots are cut on the ground surface and parasitic patch elements are added to the antenna structure, since the aim is to increase the bandwidth of the 1×2 array RMA. As can be seen from Fig. 7(a), which is the bottom view of the 1×2 array RMA, the ground surface is defected by ring and rod-shaped slots located symmetrically around the centers of the two patch elements. In addition, Fig. 7(b) shows the simulated 1×2 array RMA with stacked patch elements. In this design, two dielectric layers with the same thickness (h1 = h2) and the same dielectric constant (ε r1 = ε r2 = 2.2) are used.
Opening ring/rod-shaped slots on the ground surface causes a shift in the resonance frequency, and to keep the resonance frequency at 28 GHz, the patch length has been reduced. While the patch length for each element is 2.92 mm in the array design without DGS, it is 2.67 mm in this design with DGS. Dimensions of the 1 × 2 array RMA represented by Fig. 7(a) and 7(b) are given in Tab. 7, where the width and length of the stacked patches are represented by W pp and L pp, respectively.
The analysis results show that by the application of stacked patches and DGS, a very wideband antenna design is achieved. As seen from Fig. 8, the return loss of the design is -38.06 dB at 28 GHz and the impedance bandwidth is 7.53 GHz, between 24.14 GHz and 31.66 GHz. The gain of the 1 × 2 array RMA is 10.03 dB (Fig. 9). This indicates that a design providing the desired antenna efficiency has been achieved, although there is a slight decrease in gain compared to previous designs.
28 GHZ 1×4 ARRAY RMA DESIGN
In the design of the 1×2 RMA array, the antenna gain varied between 10 and 11.5 dB. As a last step, a 1×4 RMA array with contactless inset feed is simulated and analyzed to increase the antenna gain. Moreover, by applying bandwidth enhancement techniques, a wideband antenna at 28 GHz is targeted for 5G applications. Fig. 10 shows the geometry of the 1×4 array RMA design. The distances between the centers of the patch elements, given by l1, l2 and l3, are defined with the help of the software. The 50 ohm feed line is divided into two equivalent 100 ohm microstrip lines. Quarter wave transformation is performed by using a 70 ohm microstrip line for impedance matching between the two 100 ohm lines. The width and length of the 70 ohm line are represented by w 70 and L 70, respectively. All dimensions of the 1×4 array RMA illustrated in Fig. 10 are given in Tab. 8. The analysis results show that the return loss of the 1 × 4 array RMA is -43.56 dB at 27.92 GHz and -28.56 dB at 30.17 GHz. This indicates that the antenna resonates at two frequencies. In addition, the bandwidth is found to be 4.18 GHz and the maximum antenna gain is around 14 dB at 28 GHz.
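The feed-network arithmetic behind this layout can be sketched as follows. A quarter-wave transformer matching impedances Z_in and Z_out needs a characteristic impedance of sqrt(Z_in × Z_out), which for 50 and 100 ohms gives the ≈70 ohm line used here. The effective dielectric constant assumed for the line length below is an illustrative placeholder, not a value taken from this study.

```python
import math

def quarter_wave_impedance(z_in: float, z_out: float) -> float:
    """Characteristic impedance of a quarter-wave matching section."""
    return math.sqrt(z_in * z_out)

# Matching between the 50-ohm feed and a 100-ohm branch line calls for a
# ~70-ohm quarter-wave section, consistent with the w70/L70 line in Fig. 10.
z_t = quarter_wave_impedance(50.0, 100.0)
print(f"Transformer impedance: {z_t:.1f} ohm")   # 70.7 ohm

# Physical quarter-wave length at 28 GHz, assuming (as an illustrative
# placeholder) an effective dielectric constant of ~1.9 for the 70-ohm line.
c, f_r, eps_eff = 299_792_458.0, 28e9, 1.9
L_quarter = c / (4.0 * f_r * math.sqrt(eps_eff))
print(f"Quarter-wave length: {L_quarter * 1e3:.2f} mm")  # ~1.9 mm
```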
Application of Defected Ground Structure (DGS) Technique for 1×4 Array RMA
In order to increase the bandwidth of the 1×4 RMA array, the ground is defected by four concentric, double ring-shaped slots, one located under each patch. As seen in Fig. 11, for each ring pair the inner and outer ring radii are denoted by R1 and R2, with widths w R1 and w R2, respectively. The widths and radii of the slots are optimized with the HFSS software and the most suitable dimensions are defined for the best bandwidth.
The bottom view of the 1×4 RMA array is given in Fig. 11, and the widths and radii of the slots can be found in Tab. 9. All the other dimensions of the design are the same as in the previous one, given in Tab. 8. Like the previous design, the antenna resonates at two frequencies, 28.12 GHz and 30.63 GHz. The return loss and antenna gain are -31.53 dB and 13.34 dB, respectively. As expected, the gain slightly decreases due to the defected ground surface. However, the bandwidth of the RMA increases to 4.53 GHz.
RESULTS AND DISCUSSION
To see the effect of the bandwidth enhancement techniques, a comparison of all design cases with respect to the antenna output parameters is given in Tab. 10, and the important conclusions of the study are summarized as follows:

1) For all design cases, the defected ground surface and parasitic patch element provide wider bandwidth at the operating frequency, as seen from Tab. 10.

2) According to Tab. 5, if only DGS is applied to the single RMA, the antenna gain is reduced significantly due to the complex shape imperfection and radiation loss. To avoid this situation, the combination of DGS and the stacked patch technique with multiple dielectric substrate layers is used in the design, and it is observed that a wider bandwidth can be achieved with only a slight decrease in gain. Therefore, it can be said that the overall performance of the single RMA is improved by using these bandwidth enhancement techniques.

3) To provide higher gain, one well-known method is to increase the antenna directivity by arraying. To do that, 1×2 and 1×4 RMA arrays are designed and analyzed. Both array designs improve the antenna gain with respect to the single RMA, as expected.

4) By applying DGS and adding parasitic patch elements with multiple substrate layers to the 1×2 RMA array structure, the best bandwidth (≈7.5 GHz), which is more than twice the bandwidth of the design without DGS and parasitic patch elements (≈3.5 GHz), is obtained. The gain for this design is around 10.3 dB, which is slightly less (≈1.3 dB) than the 1 × 2 RMA array design without any bandwidth enhancement methods.

5) The highest gain (≈14 dB) is obtained with the 1 × 4 RMA array configuration, as expected. By applying DGS to the design, a slight increase in bandwidth (9.7%) and decrease in gain (0.2 dB) are observed. The stacked patch technique is not implemented in this design to avoid a high degree of design complexity, high loss, and a very dispersed radiation pattern that causes multiple resonance frequencies.

6) For all designs, regardless of single or array RMA configuration, the application of DGS and stacked patches increases the loss because of the added components, such as multiple dielectric substrates or parasitic patch elements, and the deformation of the ground plane. However, for all designs, the loss level is less than -10 dB, so it is within the acceptable range for application.
As a last step of this study, the obtained results of the analyses of the single and 1×2/1×4 array RMAs are compared with some similar publications in the literature, as expressed in Tab. 11 and 12. As seen from Tab. 11, using the combination of DGS and the stacked patch technique obviously increases the bandwidth and prevents a significant decrease in gain.

Ref. | Frequency | Return loss | Bandwidth | Gain
[15] | 28 GHz | -39.36 dB | 2.48 GHz | 6.37 dB
[16] | 28.3 GHz | ≈-38 dB | 2.4 GHz | 5.56 dB
[17] | 28 GHz | -40 dB | 1.3 GHz | 7.6 dB
[18] | 28 GHz | ≈-28 dB | 2.66 GHz | 5.82 dB
[19] | 28 GHz | -24 dB | 2.24 GHz | 7.86 dB
[20] | 27.9 GHz | -15.35 dB | ≈0.5 GHz | 6.92 dB
[21] | 28.96 GHz | -19.12 dB | 3.93 GHz | 6.05 dB
[22] | 28 GHz | -59.17 dB | n/a | n/a

Of course, it is possible to obtain wider bandwidth with lower gain, or vice versa, by applying different design techniques, depending on the application and the major considerations of the design. However, the overall aim of this work is to improve antenna performance by using bandwidth enhancement techniques in the design, and the results show that, in the desired frequency range, efficient designs with wide bandwidth and good gain for single and array type RMAs are achieved for 5G applications.
Newly diagnosed multiple sclerosis in a patient with ocular myasthenia gravis
Abstract Rationale: Patients with myasthenia gravis may also have comorbid autoimmune diseases. Since both myasthenia gravis and neuromyelitis optica spectrum disease are mediated by antibodies, they are likely to occur together. However, since multiple sclerosis is an autoimmune disease that is not mediated by a specific antibody, it has fewer immune mechanisms in common with myasthenia gravis than neuromyelitis optica spectrum disease. We encountered a case of newly developed multiple sclerosis in a patient with myasthenia gravis. Patient concerns: A 46-year-old man was diagnosed with ocular myasthenia gravis 6 years ago and had been taking pyridostigmine to control his symptoms. Diagnosis: The patient developed right optic neuritis, and multiple sclerosis was suspected based on the brain magnetic resonance imaging findings. However, the required diagnostic criteria were not met. Interventions: Disease-modifying therapy was not initiated, and clinical progression of the disease was monitored. Outcomes: One year after the onset of optic neuritis, the patient developed myelitis and was diagnosed with multiple sclerosis, prompting treatment with disease-modifying therapy. Lessons: When optic neuritis occurs in patients with myasthenia gravis, careful evaluation is necessary while considering the possibility that it may be the first symptom of a demyelinating central nervous system disease. Therefore, it is important to conduct shorter-interval monitoring and symptom screening for patients with neurological autoimmune diseases, such as myasthenia gravis, even if multiple sclerosis is not initially suspected, to achieve early detection of multiple sclerosis.
Introduction
Myasthenia gravis and multiple sclerosis are autoimmune diseases that affect the neuromuscular junctions and central nervous system (CNS), respectively. The main mechanism of myasthenia gravis is antibody-mediated, while that of multiple sclerosis is T cell-mediated. In part, both diseases are caused by immune dysregulation induced by numerical, functional, and migratory deficiencies of T regulatory cells, which play an important role in immunologic tolerance. [1] Approximately 25% of patients with autoimmune diseases tend to develop additional autoimmune diseases. [2] The co-occurrence of myasthenia gravis and demyelinating disorders is more common than expected by chance. [3] We report a patient with myasthenia gravis, an antibody-mediated disease, who developed multiple sclerosis, a non-antibody-mediated disease, several years later.
Case presentation
A 46-year-old man presented with decreased vision in the right eye that had occurred suddenly 2 days prior. Six years ago, the patient had undergone several tests for ptosis and diplopia. A significant decremental response was observed in the repetitive nerve stimulation test performed on the orbicularis oculi muscle. The anti-acetylcholine receptor antibody test result was positive. Consequently, he was diagnosed with ocular myasthenia gravis and was prescribed 180 mg/d of pyridostigmine to be taken orally. The patient had a mean quantitative myasthenia gravis score of 2. Chest computed tomography performed at that time did not reveal any thymic abnormalities. He was able to continue his daily life without much discomfort, despite taking only pyridostigmine with no immunosuppressants. Therefore, he was followed up on an outpatient basis.
During this visit, neurological examination revealed pupils identical in size with normal pupillary reflexes. However, a relative afferent pupillary defect was observed in the right eye. The visual acuity of the right eye enabled recognition of fingers at a distance of 30 cm from the eye, while that of the left eye was 0.8. Fundus examination revealed swelling of the right optic disc. There were no defects in the extraocular muscle movements and no orbital pain. The results of all the other cranial nerve tests were normal. Moreover, there were no motor or sensory deficiencies in the upper and lower extremities.
Blood workup including complete blood count, electrolytes, and liver and renal function tests showed normal results. Erythrocyte sedimentation rate, C-reactive protein and fluorescent antinuclear antibody test results were normal as well. The results of the anti-aquaporin 4 antibody, anti-myelin oligodendrocyte glycoprotein antibody, and cerebrospinal fluid oligoclonal band tests were negative, while the immunoglobulin G index did not show an increase from the baseline of 0.7.
High-signal intensity and enhancing lesions were observed on T2-weighted magnetic resonance imaging (MRI) of the orbit in the intraorbital segment of the right optic nerve ( Fig. 1A and B). Brain MRI showed multiple contrast-enhancing and noncontrast-enhancing lesions in the left periventricular region (Fig. 1C-F); no lesions were observed in the cortical, juxtacortical, infratentorial, or spinal cord regions. Intravenous corticosteroid (methylprednisolone 1000 mg/d) was administered for 3 days to treat optic neuritis of the right eye, which gradually and partially improved the decreased vision. Since the patient showed symptoms of unilateral optic neuritis, other diseases such as neuromyelitis optica spectrum disease (NMOSD) were excluded based on blood tests.
The patient was assumed to have a clinically isolated syndrome so the 2017 McDonald diagnostic criteria were used to diagnose multiple sclerosis. The patient fulfilled the "dissemination in time" criterion because both contrast-enhancing and nonenhancing lesions were seen in the left periventricular region on brain MRI. However, the "dissemination in space" criterion was not fulfilled, because lesions were not observed in any other characteristic regions other than the periventricular region. Hence, the patient was monitored during the follow-up period without any additional treatment.
While he was receiving treatment for myasthenia gravis, the patient followed up in the outpatient department after 1 year and developed acute sensory numbness in his right trunk. On wholespine MRI, high-signal intensity and contrast-enhancing lesions were observed at the second thoracic vertebral level ( Fig. 2A and B). It was also observed that the number of high-signal intensity lesions in the periventricular and juxtacortical regions had increased in the follow-up T2-weighted brain MRI as compared to the findings from the past year ( Fig. 2C and D). Based on these findings, the patient was diagnosed with recurrent multiple sclerosis. After administering high-dose steroids for 5 days, teriflunomide was initiated as a long-term disease-modifying therapy, and the progress was monitored ( Figure S1, Supplemental Digital Content, http://links.lww.com/MD2/A901).
Discussion
This patient was diagnosed with ocular myasthenia gravis 6 years ago, and his symptoms were well controlled with pyridostigmine. He suffered from acute visual deterioration in the right eye and was administered with steroid treatment for optic neuritis in the same eye. Initially, he was not diagnosed with multiple sclerosis because, despite fulfilling the dissemination in time criterion of the 2017 McDonald diagnostic criteria, the lesions observed on brain MRI did not fulfill the dissemination in space criterion. Hence, the patient was monitored for disease progression.
After 1 year, the patient complained of sensory abnormality in the right trunk, and a whole spine MRI was conducted to investigate it. Myelitis was confirmed at the thoracic level of the spinal cord. Thus, a diagnosis of multiple sclerosis was made, and treatment was initiated accordingly. Strict diagnostic criteria should be implemented when diagnosing multiple sclerosis to avoid misdiagnosis, which is reportedly 25% in the United States. [4] Thus, this case highlights the need for conducting shorter-interval monitoring and symptom screening in patients with autoimmune diseases, such as myasthenia gravis, considering that they have a higher probability of developing other autoimmune diseases than the general population.
Autoimmune diseases can occur with other autoimmune diseases, and 25% of patients with autoimmune diseases tend to develop additional autoimmune diseases. [2] Myasthenia gravis and multiple sclerosis are autoimmune diseases affecting the neuromuscular junctions and the CNS, respectively. Myasthenia gravis is caused by the destruction of neuromuscular junctions by an acetylcholine receptor-specific antibody, whereas multiple sclerosis is caused by neuronal antigen-specific T-lymphocytes. There are many studies and case reports on the co-occurrence of CNS demyelinating diseases and myasthenia gravis. The most recent results of these studies showed that 0.34% of patients with multiple sclerosis and 5% of patients with NMOSD were also diagnosed with myasthenia gravis, which is higher than the prevalence of myasthenia gravis (0.024%) in the general population. [5] Myasthenia gravis and NMOSD are caused by the actions of the anti-acetylcholine receptor antibody and the anti-aquaporin 4 antibody, respectively. Since both diseases are mediated by the action of immunoglobulin G1 antibodies against distinct proteins, it can be speculated that a common immune mechanism is involved in both diseases. [5] In contrast, since multiple sclerosis is not mediated by a specific antibody, it has fewer immune mechanisms in common with myasthenia gravis than NMOSD does. Consequently, the co-occurrence rate of multiple sclerosis and myasthenia gravis may be lower than that of NMOSD and myasthenia gravis. When optic neuritis occurred in this patient, the possibility of NMOSD was higher than that of multiple sclerosis. However, the investigation revealed a diagnosis of multiple sclerosis. When optic neuritis occurs in a patient with myasthenia gravis, it is necessary to examine closely whether the diagnostic criteria for multiple sclerosis are satisfied.
When thymectomy is performed for treating myasthenia gravis, the incidence of CNS demyelinating diseases such as multiple sclerosis tends to increase. [6] In experimental euthymic mice fed myelin basic protein, the onset of autoimmune encephalomyelitis was inhibited owing to oral tolerance. However, thymectomized mice did not achieve oral tolerance because of the absence of deletion of autoreactive T cells. [7] In this case, the patient did not have severe myasthenia gravis symptoms; hence, only symptomatic treatment with pyridostigmine was administered. Immunosuppressants were not administered, and thymectomy was not performed. Nevertheless, he developed multiple sclerosis, which differentiates this case report from previous ones.
Both myasthenia gravis and multiple sclerosis result from a loss of self-tolerance due to quantitative and functional defects of T regulatory cells, which are important in immune tolerance. Moreover, they share some common immunopathological mechanisms. [1] A study conducted in British Columbia [8] reported that 3 out of 8 patients with co-occurring myasthenia gravis and multiple sclerosis developed multiple sclerosis approximately 6 to 8 years after the onset of myasthenia gravis. Moreover, the first manifestation of myasthenia gravis in these cases was ocular symptoms, such as ptosis or diplopia. Similarly, in this case, myasthenia gravis developed approximately 6 years prior, with ptosis and diplopia as the initial symptoms. In the British Columbia study, 5 out of the 8 patients developed optic neuritis as the first symptom of multiple sclerosis, whereas facial palsy, paresthesia, and motor weakness were the first symptoms in the remaining patients. It is noteworthy that optic neuritis was the first symptom of multiple sclerosis in the aforementioned study, similar to the patient in this case report. However, when the patient developed optic neuritis, his symptoms did not meet the McDonald diagnostic criteria for multiple sclerosis. Therefore, follow-up observation was the only option. Hence, multiple sclerosis was diagnosed only after the occurrence of myelitis one year later, which delayed the treatment of multiple sclerosis.
Conclusion
It is extremely important to diagnose multiple sclerosis accurately and as early as possible so that prompt treatment can be given. As multiple sclerosis progresses, the severity of axonal damage increases, and the disease load accumulates even in the absence of clinical symptoms. However, as shown in this case, even if multiple sclerosis is suspected, the diagnosis may be delayed and appropriate treatment may not be initiated because the diagnostic criteria are not satisfied. Therefore, when optic neuritis occurs in patients with a neurological autoimmune disease, such as myasthenia gravis, it should always be considered as a possible first symptom of a CNS demyelinating disease such as multiple sclerosis rather than idiopathic optic neuritis. Accordingly, shorter-interval monitoring and symptom screening should be conducted in patients with myasthenia gravis to achieve early detection of multiple sclerosis. Furthermore, additional studies are needed to determine whether separate diagnostic criteria for multiple sclerosis should be established for patients with comorbid autoimmune diseases such as myasthenia gravis.
Rate coefficients for the reaction of methylglyoxal (CH3COCHO) with OH and NO3 and glyoxal (HCO)2 with NO3
Rate coefficients, k, for the gas-phase reactions of CH3COCHO (methylglyoxal) with the OH and NO3 radicals and of (CHO)2 (glyoxal) with the NO3 radical are reported. Rate coefficients for the OH + CH3COCHO (k1) reaction were measured under pseudo-first-order conditions in OH as a function of temperature (211-373 K) and pressure (100-220 Torr, He and N2 bath gases) using pulsed laser photolysis to produce OH radicals and laser induced fluorescence to measure its temporal profile. k1 was found to be independent of the bath gas pressure with k1(295 K) = (1.29 ± 0.13) × 10−11 cm3 molecule−1 s−1 and a temperature dependence that is well represented by the Arrhenius expression k1(T) = (1.74 ± 0.20) × 10−12 exp[(590 ± 40)/T] cm3 molecule−1 s−1, where the uncertainties are 2σ and include estimated systematic errors. Rate coefficients for the NO3 + (CHO)2 (k3) and NO3 + CH3COCHO (k4) reactions were measured using a relative rate technique to be k3(296 K) = (4.0 ± 1.0) × 10−16 cm3 molecule−1 s−1 and k4(296 K) = (5.1 ± 2.1) × 10−16 cm3 molecule−1 s−1. k3(T) was also measured using an absolute rate coefficient method under pseudo-first-order conditions at 296 and 353 K to be (4.2 ± 0.8) × 10−16 and (7.9 ± 3.6) × 10−16 cm3 molecule−1 s−1, respectively, in agreement with the relative rate result obtained at room temperature. The atmospheric implications of the OH and NO3 reaction rate coefficients measured in this work are discussed.
Introduction
Methylglyoxal, CH3COCHO, and glyoxal, (HCO)2, are dicarbonyls that play an important role in atmospheric chemistry as tracers of atmospheric biogenic and anthropogenic organic chemistry. They also play a role in tropospheric ozone production and secondary organic aerosol (SOA) formation on local to regional scales (Ervens and Volkamer, 2010). Methylglyoxal and glyoxal are short-lived species that are removed from the atmosphere primarily by UV/visible photolysis, gas-phase reaction, and heterogeneous processes. Studies of the OH radical reaction with glyoxal and its UV/visible photolysis quantum yields have been reported in previous work from this laboratory (Feierabend et al., 2008, 2009). In this work, rate coefficients for the OH radical reaction with methylglyoxal and the NO3 radical reactions with glyoxal and methylglyoxal are presented.
Methylglyoxal is formed in the degradation of volatile organic compounds including isoprene and the aromatic hydrocarbons toluene, xylene, and trimethylbenzene. Methylglyoxal is also emitted directly into the atmosphere via the incomplete combustion of fossil fuels and biomass and, to a lesser extent, in automobile emissions as a result of biofuel usage. Approximately 30 % of the atmospheric oxidation of isoprene, the biogenic hydrocarbon with the greatest global emission, leads to the formation of methylglyoxal (Paulot et al., 2009; Paulson and Seinfeld, 1992), which accounts for ∼79 % of the methylglyoxal atmospheric budget. The atmospheric degradation of acetone is the next largest source of methylglyoxal and accounts for ∼7 % of its budget (Fu et al., 2008). The atmospheric abundance of methylglyoxal varies depending on location and season, with gas-phase values of ∼0.15 ppb and particle-phase concentrations in the range 0.1-8.0 ng m−3 reported in urban and rural areas (Grossmann et al., 2003; Ho et al., 2006; Liggio and McLaren, 2003; Moortgat et al., 2002).
The general atmospheric degradation scheme for methylglyoxal given in Fig. 1 shows that the competition between its reaction with the OH radical and its UV photolysis plays an important role in determining HOx production, which affects the oxidation capacity of the atmosphere, and the CH3C(O)OONO2 (PAN) yield, which impacts ozone production in remote locations (Atkinson et al., 2006; Baeza-Romero et al., 2007; Staffelbach et al., 1995). Reaction with the OH radical leads to no net HOx radical production,

OH + CH3COCHO → CH3COCO + H2O (R1a)

while degradation via UV photolysis produces HOx (Atkinson et al., 2006),

CH3COCHO + hν → CH3CO + HCO, λ ≤ 387 nm (R2a)

where the heats of reaction, ΔrH0, and photolysis thresholds were calculated using available thermochemical parameters (Sander et al., 2006). The CH3COCO radical formed in channels (R1a) and (R1d) spontaneously dissociates to CH3CO and CO in <15 µs (Green et al., 1990). The CH3CO radical from channel (R1a) has sufficient energy to dissociate further to CH3 and CO (Baeza-Romero et al., 2007). PAN, which enables the long-range transport of NOx (NOx = NO + NO2) and ultimately ozone production in remote areas, is an end-product of both the OH reaction and the UV photolysis mechanisms. It is important to quantify the degradation pathways to fully evaluate the impact of methylglyoxal on tropospheric chemistry. Several studies of the rate coefficient for Reaction (R1), k1, have been reported to date with room temperature values falling in the range (7-16) × 10−12 cm3 molecule−1 s−1 (Baeza-Romero et al., 2007; Kleindienst et al., 1982; Plum et al., 1983; Tyndall et al., 1995). Rate coefficient data at atmospherically relevant temperatures, ≤298 K, are, however, more limited. In fact, only one study has reported rate coefficient data at temperatures below 260 K (Baeza-Romero et al., 2007). The current IUPAC kinetic data evaluation recommends k1(T) = 1.9 × 10−12 exp((575 ± 300)/T) cm3 molecule−1 s−1 for use in atmospheric models (Atkinson et al., 2006). The large uncertainty in the activation energy, E/R, is primarily due to a lack of experimental data for the temperature dependence of Reaction (R1). Additional measurements of k1(T), particularly at reduced temperatures, are therefore warranted and were addressed in the present study.
Nighttime atmospheric loss processes of methylglyoxal and glyoxal are also of interest for modeling tropospheric chemistry and possible SOA formation, but at present they are not well characterized. The reactions of methylglyoxal and glyoxal with NO3 and O3, as well as their heterogeneous processing on atmospheric aerosol, represent the most likely nighttime loss processes. Currently there are no experimental kinetic data available for the NO3 radical reactions with glyoxal and methylglyoxal,

NO3 + (CHO)2 → products (R3)

NO3 + CH3COCHO → products (R4)

At present, atmospheric chemistry models rely on estimated rate coefficient values for Reactions (R3) and (R4) (Myriokefalitakis et al., 2008). In the present study, rate coefficients for the reaction of the NO3 radical with glyoxal and methylglyoxal are reported.
Experimental details
Rate coefficients for the gas-phase reaction of OH with CH3COCHO were measured as a function of temperature (211-373 K) and pressure (100-200 Torr in He and N2) by producing OH via pulsed laser photolysis (PLP) and measuring its temporal profile using laser-induced fluorescence (LIF). Rate coefficients for the reactions of NO3 with glyoxal (k3) and methylglyoxal (k4) were measured at 630 Torr and 296 K via a relative rate technique using Fourier transform infrared spectroscopy (FTIR) to monitor the extent of reaction. k3(T) was also measured at 296 and 353 K in a flow tube reactor at 3-6 Torr that was coupled to a chemical ionization mass spectrometer (FT-CIMS). The experimental apparatus and methods used have been described in detail elsewhere (Talukdar et al., 1995, 2003; Vaghjiani and Ravishankara, 1989; Zhu et al., 2008). Here, we only present the essentials needed to understand the present work.
OH reaction rate coefficients
Rate coefficients were measured under pseudo-first-order conditions in OH, [OH] ≪ [CH3COCHO], using the PLP-LIF experimental apparatus. A schematic of the apparatus is provided in the Supplement. The key components of the apparatus were (1) a temperature controlled reactor where OH was produced by pulsed laser photolysis and its temporal profile measured by laser-induced fluorescence, (2) pulsed lasers used to generate and detect OH, (3) a gas handling manifold, and (4) UV and infrared absorption setups to determine the methylglyoxal concentration on-line using UV absorption at 184.9 nm and Fourier transform infrared (FTIR) spectroscopy.
OH radicals were produced by 248 nm pulsed laser photolysis (KrF excimer laser) of H2O2 for kinetic measurements at temperatures ≥255 K. At temperatures <255 K, condensation of H2O2 interfered with the rate coefficient measurements. For kinetic experiments performed at temperatures between 211 and 373 K, OH was produced in the 248 nm pulsed photolysis of tert-butyl hydroperoxide, (CH3)3COOH. Photolysis of HNO3 at 248 nm was also used in limited cases. The initial OH radical concentration, [OH]0, was estimated to be in the range of (0.3-2.7) × 1011 molecule cm−3 based on the photolyte concentration, absorption cross section and quantum yield, and the photolysis laser fluence (Baasandorj et al., 2010; Sander et al., 2011; Taylor et al., 2008). The OH radical was detected by fluorescence following excitation of the A2Σ+ (v′ = 1) ← X2Π (v″ = 0) transition at 282 nm using the frequency-doubled output of a pulsed Nd:YAG-pumped dye laser (Vaghjiani and Ravishankara, 1989).
The OH decay obeyed the integrated rate expression

ln([OH]0/[OH]t) = k′t, with k′ = k1[CH3COCHO] + kd

where [OH]0 and [OH]t are the OH concentrations at reaction times 0 and t, and kd is the first-order rate coefficient for OH loss in the absence of methylglyoxal (i.e., due to diffusion out of the detection volume and reaction with the photolytic precursor). Methylglyoxal was introduced into the PLP-LIF gas flow from dilute gas mixtures of methylglyoxal in He (0.5-2.0 %) that were prepared manometrically in darkened 12 l Pyrex bulbs at total pressures of ∼1000 Torr. The methylglyoxal concentration in the LIF reactor was determined using the measured gas flow rate in addition to on-line optical absorption measurements. The UV absorption of methylglyoxal was measured using a Hg Pen-Ray lamp light source, a 100 cm long (2.5 cm dia.) absorption cell, a 184.9 nm narrow band-pass filter, and a solar blind phototube detector. Infrared absorption spectra were recorded between 500 and 4000 cm−1 at a spectral resolution of 1 cm−1 using a Fourier transform spectrometer. A multi-pass absorption cell (485 cm optical path length, 550 cm3 volume, and KBr windows) was used for all infrared measurements. UV absorption was measured before the LIF reactor, while infrared absorption spectra were measured either before or after the LIF reactor. The methylglyoxal concentration in the LIF reactor determined from the optical measurements was scaled for gas flow dilution and differences in temperature and pressure between the LIF reactor and the absorption cells. The methylglyoxal concentration was varied over the range (5-174) × 1013 molecule cm−3 during the course of the kinetic measurements.
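The two-stage analysis described above (exponential fits of the OH decays, then a linear fit of k′ versus [CH3COCHO]) can be illustrated with the following sketch on synthetic data. All numbers are placeholders chosen near the reported values, not the measured data.

```python
import numpy as np

# Stage 1: fit each OH decay to S(t) = S0*exp(-k't); stage 2: the slope of
# k' versus [CH3COCHO] gives k1 and the intercept gives kd.
rng = np.random.default_rng(0)
k1_true, kd_true = 1.29e-11, 150.0              # cm3 molecule-1 s-1 and s-1
conc = np.array([5, 20, 60, 120, 174]) * 1e13   # molecule cm-3 (range used here)

kprime = []
for c in conc:
    kp = k1_true * c + kd_true
    t = np.linspace(0.0, 3.0 / kp, 100)          # cover ~3 decay lifetimes
    signal = np.exp(-kp * t) * (1 + 0.01 * rng.standard_normal(t.size))
    slope, _ = np.polyfit(t, np.log(signal), 1)  # ln(S) vs t is linear
    kprime.append(-slope)

k1_fit, kd_fit = np.polyfit(conc, kprime, 1)
print(f"k1 = {k1_fit:.2e} cm3 molecule-1 s-1, kd = {kd_fit:.0f} s-1")
```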
Absorption cross-section measurements
Infrared and UV (184.9 nm) absorption cross sections of CH3COCHO were determined as part of this work. Cross sections of CH3COCHO at 296 K were determined using the Beer-Lambert law, A = Lσ[CH3COCHO], from a linear least-squares analysis of the measured absorbance versus [CH3COCHO]. The infrared (IR) and ultraviolet (UV) measurements were made simultaneously using a multi-pass cell (path length = 485 cm) for the IR and a 100 cm path length for the UV. The cells were connected in series, and the gas flow velocity was varied as part of the measurements. Cross sections were measured both under static fill and flowing conditions, with the CH3COCHO concentration determined from absolute pressure measurements of manometrically prepared CH3COCHO/He mixtures (0.5-2 %). At least 10 different CH3COCHO concentrations, varied over at least an order of magnitude, were used in the cross section determinations. No difference was observed for different flow velocities or flow directions, which indicates no loss of methylglyoxal in the flow through the apparatus.
The absorption cross section of CH3COCHO at 184.9 nm was determined to be (5.21 ± 0.16) × 10−18 cm2 molecule−1, where the error limit represents the 2σ precision of the measurements.
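The Beer-Lambert analysis amounts to a straight-line fit of absorbance against column density. A minimal sketch with synthetic numbers (chosen near the reported 184.9 nm cross section, not the actual measurements) is:

```python
import numpy as np

sigma_true = 5.21e-18            # cm2 molecule-1, illustrative "true" value
L = 100.0                        # cm, UV absorption path length used here
N = np.linspace(2e14, 2e15, 10)  # molecule cm-3, varied over an order of magnitude
rng = np.random.default_rng(1)
A = sigma_true * L * N * (1 + 0.01 * rng.standard_normal(N.size))  # A = sigma*L*N

# The slope of absorbance versus column density (L*N) is the cross section.
sigma_fit, intercept = np.polyfit(L * N, A, 1)
print(f"sigma(184.9 nm) = {sigma_fit:.2e} cm2 molecule-1")
```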
The infrared absorption spectrum of methylglyoxal agrees with those reported in earlier studies and is given in the Supplement (Plum et al., 1983; Tuazon and Atkinson, 1989). The infrared cross sections determined using methylglyoxal samples obtained from different syntheses agreed to within 2 %. The integrated band intensities and the peak cross sections obtained in this work are given in Table 1. The methylglyoxal infrared cross sections obtained in this work are 8 to 15 % greater, depending on the spectral region, than those reported by Staffelbach et al. (1995). After our ACPD paper was published, an IR absorption study of several dicarbonyls was published (Profeta et al., 2011). The agreement in absolute intensities between their results and the present work is excellent (within 3 %).
NO 3 reaction rate coefficients
Two independent experimental techniques were used to determine k3 and k4: (a) an absolute method using a flow tube reactor coupled to a chemical ionization mass spectrometer (FT-CIMS) to measure k3 at 296 and 353 K, and (b) a relative rate technique using Fourier transform infrared spectroscopy (RR-FTIR) to measure k3 and k4 at 296 K.
Flow tube -chemical ionization mass spectrometer (FT-CIMS) method
Details of the experimental apparatus are given in a previous publication from this laboratory (Talukdar et al., 2003). The temperature regulated, halocarbon wax coated flow tube reactor was a 150 cm long Pyrex tube, 2.54 cm i.d., with a moveable injector. The outside of the moveable injector (120 cm long, 0.64 cm o.d.) was also coated with halocarbon wax. The reaction zone of the flow tube was ∼50 cm. The reaction time in the flow tube was between 16 and 85 ms, with total gas flow rates of 10 to 25 STP cm3 s−1 at pressures between 2 and 6 Torr. A chromel-alumel thermocouple inserted through the injector was used to measure the temperature of the gas in the reaction zone; the variation in the temperature along the reaction zone was ≤1 K. Rate coefficients were measured under pseudo-first-order conditions in NO3, [glyoxal]/[NO3]0 ∼ 1000, with NO3 radicals produced by the thermal decomposition of N2O5 at 400 K (Rudich et al., 1996). NO3 was introduced either through the moveable injector or through a side arm into the flow tube. Glyoxal was added to the flow tube opposite to the NO3 addition point. The initial NO3 radical concentration in the flow tube was in the range (1-5) × 1011 molecule cm−3. With a signal-to-noise ratio of ∼1000 and a detection sensitivity of ∼2 × 108 molecule cm−3 for one second integration, changes in [NO3] of less than 1 % could be measured.
The effluent of the flow tube passed through a Pyrex valve into the ion flow tube, at ∼0.5 Torr, approximately 50 cm downstream of the ionization source. N2O5 and NO3 were detected by a quadrupole mass spectrometer as NO3− following their reaction with the I− reagent ion.
The variation of the NO3 concentration with the relative injector position was used to derive the pseudo-first-order rate coefficient, k′, which was measured at various glyoxal concentrations to obtain k3 via

k′ = k3[glyoxal] + kw

where kw (typically in the range ∼0.2-0.5 s−1) represents the pseudo-first-order wall loss of NO3. A linear least-squares fit of k′ vs. [glyoxal] yielded the second-order rate coefficient, k3, which was determined at 296 and 353 K.
Relative rate method (RR-FTIR)
A relative rate method was used to determine k3 and k4 using ethene (CH2=CH2) and iso-butane ((CH3)2CHCH3) as reference compounds. In this method, if the reactant of interest, R, and a reference compound (Ref) are removed solely by reaction with NO3, the ratio of their reaction rate coefficients, kR/kRef, is given by

ln([R]0/[R]t) = (kR/kRef) ln([Ref]0/[Ref]t)

where the subscripts 0 and t refer to the initial reactant concentration and its concentration at time t. The slope of ln([R]0/[R]t) plotted versus ln([Ref]0/[Ref]t) therefore yields kR/kRef. Experiments were carried out in a 22 l Pyrex reactor under dark conditions at 296 K. Experiments were performed by first adding N2O5 to the reactor by flowing zero air over a solid N2O5 sample at 230 K. The reactant and reference compounds were then added to the reactor from dilute mixtures: 0.1 % ethene/N2, 2-6 % glyoxal/He or 2 % methylglyoxal/He.
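A minimal sketch of this relative-rate analysis on synthetic data follows. The ln-ratio values, the assumed kR/kRef, and the reference rate coefficient are illustrative placeholders, not the measured quantities.

```python
import numpy as np

# ln([R]0/[R]t) versus ln([Ref]0/[Ref]t) is linear with slope kR/kRef.
ln_ref = np.array([0.00, 0.12, 0.25, 0.39, 0.52, 0.66])   # ln([Ref]0/[Ref]t)
ratio_true = 1.9                                           # assumed kR/kRef
rng = np.random.default_rng(2)
ln_r = ratio_true * ln_ref + 0.01 * rng.standard_normal(ln_ref.size)

slope, intercept = np.polyfit(ln_ref, ln_r, 1)
k_ref = 2.1e-16   # placeholder reference rate coefficient, cm3 molecule-1 s-1
print(f"kR/kRef = {slope:.2f} -> kR = {slope * k_ref:.1e} cm3 molecule-1 s-1")
```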
Synthetic air was then added to bring the reactor total pressure to 630 Torr. NO 3 radicals were produced in situ by the thermal decomposition of N 2 O 5 :

NO 2 + NO 3 + M ⇌ N 2 O 5 + M (R9, −R9)

where k −9 (296 K, 630 Torr) = 0.04 s −1 is the N 2 O 5 decomposition rate coefficient and K eq (296 K) = 2.9 × 10 −11 cm 3 molecule −1 (Sander et al., 2011). The reaction was monitored by periodically transferring a portion of the reaction mixture from the reactor into the multi-pass absorption cell of the FTIR. An experiment typically lasted ∼4 h, with the contents of the reactor sampled every 30 min.
In the data analysis, the reactant and reference compound concentrations were corrected for the small change in reactor pressure, ∼3 %, at each stage of sampling.
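The relative rate analysis of Eq. (3) reduces to a slope determination; the sketch below illustrates it with synthetic normalized concentrations, an assumed ratio of 1.9, and an assumed reference rate coefficient for NO 3 + C 2 H 4 (all values hypothetical, not measurements).

```python
import numpy as np

# Hypothetical reactant and reference concentrations (normalized to t = 0),
# sampled every 30 min over ~4 h; these are made-up values, not measurements.
true_ratio = 1.9                               # assumed k_R / k_Ref
ref = np.array([1.0, 0.95, 0.90, 0.85, 0.80, 0.76, 0.72, 0.68, 0.65])
reactant = ref ** true_ratio                   # follows Eq. (3) exactly

x = np.log(ref[0] / ref)                       # ln([Ref]_0 / [Ref]_t)
y = np.log(reactant[0] / reactant)             # ln([R]_0 / [R]_t)
ratio = np.polyfit(x, y, 1)[0]                 # slope = k_R / k_Ref

k_ref = 2.1e-16   # assumed NO3 + C2H4 rate coefficient, cm^3 molecule^-1 s^-1
print(f"k_R = {ratio * k_ref:.2e} cm^3 molecule^-1 s^-1")   # ~4.0e-16
```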
Materials
He (UHP, 99.999 %), N 2 (UHP, >99.99 %), and O 2 (UHP, >99.99 %) were used as supplied. Concentrated H 2 O 2 (>95 %) was prepared by bubbling N 2 for several days through a sample initially at 60 wt %. A small flow of bath gas was passed through the H 2 O 2 bubbler and was then diluted by the main bath gas flow before entering the reactor. The H 2 O 2 reservoir was kept at 273 K during the kinetic measurements to avoid condensation of H 2 O 2 in the reactor. A tert-butylhydroperoxide solution (70 % in water) was degassed and used without further purification. A small flow of N 2 or He bath gas was bubbled through the solution at 273 K to sweep tert-butylhydroperoxide into the main gas flow. N 2 O 5 was synthesized by the reaction of ozone with NO 2 as described elsewhere (Papadimitriou et al., 2011; Rudich et al., 1996). Methylglyoxal samples were prepared from commercial 40 % aqueous solutions. A 25 ml aliquot of the solution was transferred into a 500 ml round-bottom flask partially filled with small pieces of glass tubing. The flask was kept in the dark and pumped on for 16-20 h to remove water. The remaining viscous liquid was then covered with ∼6 g of P 2 O 5 and heated to 323-333 K. A yellow oily liquid was collected in a trap at 195 K for ∼5 min. The distillate was then pumped on for approximately one hour with the sample at dry ice temperature. The trap was then quickly warmed to ∼283 K, the volatile impurities, such as formaldehyde, were pumped off, and the sample was re-cooled to dry ice temperature. This process was repeated three times. No FTIR-detectable impurities were observed in the final sample. The formaldehyde impurity upper limit was estimated to be <1 %. Dilute mixtures of methylglyoxal in He bath gas (0.5-2 %) were prepared in a darkened 12 l Pyrex bulb. The composition of the dilute gas mixture was tested periodically using FTIR and found to be stable for a period of several weeks. After ∼3 weeks of storage, weak unidentified infrared absorption peaks in the range 800-1000 cm −1 were observed.
Gas flow velocities through the reaction zone in the LIF reactor were in the range 6-15 cm s −1 , which ensured a fresh gas mixture for each photolysis laser pulse. Gas flows were measured using calibrated electronic mass flow meters. Pressures were measured using calibrated 10, 100, and 1000 Torr capacitance manometers. The photolysis and probe lasers were operated at 10 Hz.
Results and discussion
Rate coefficients for the OH reaction with methylglyoxal, k 1 (T ), and the NO 3 reaction with glyoxal (k 3 ) and methylglyoxal (k 4 ) are presented separately below.
OH + CH 3 COCHO
A summary of the experimental conditions used in our rate coefficient measurements and the obtained k 1 (T ) values are given in Table 2. A potential complication in the rate coefficient measurement of Reaction (R1) arises from the unavoidable formation of the CH 3 CO radical as a secondary reaction product. The CH 3 CO radical is known to react with O 2 to produce OH radicals as a reaction product (Baeza-Romero et al., 2007; Tyndall et al., 1995), which could influence the determination of k 1 (T ) under certain conditions. In this work, rate coefficients were measured at pressures >100 Torr with He and N 2 bath gases, where the OH radical yield in Reaction (R10) is known to be small (Tyndall et al., 1995). The OH yield in Reaction (R10) in a N 2 bath gas is less than in He. The formation of OH was observed, as expected, in test experiments performed with and without O 2 added to the reaction mixture. OH radical temporal profiles measured at low pressure (20-50 Torr, He) were found to be non-exponential, indicating regeneration of OH on the time scale of the measurement. The measured OH temporal profiles were exponential, within the precision of the measurement, when ∼2 Torr of O 2 was added to the reaction mixture. The measured pseudo-first-order rate coefficient in the presence of O 2 was, however, ∼13 % less at 50 Torr (He) than that obtained in the absence of added O 2 . This is consistent with ∼13 % OH generation via Reaction (R10) (Talukdar et al., 2006). We assume that the non-exponential behavior observed in the absence of added O 2 may in part be due to a small O 2 impurity in the system. At greater bath gas pressures, >100 Torr N 2 , the OH temporal profiles were exponential, with and without added O 2 , and yielded indistinguishable pseudo-first-order decay rate coefficients, within the precision of the measurement (∼2 %). This was the case over the entire temperature range, 211-373 K, of our study. The rate coefficients for Reaction (R1) reported in this work were measured at total pressures >100 Torr, where OH regeneration was negligible.
Figure 2 shows representative OH temporal profiles measured at 295 K in 210 Torr N 2 and at 211 K in 120 Torr N 2 , obtained using 248 nm photolysis of H 2 O 2 and tert-butyl hydroperoxide, respectively, as the OH radical source. In most cases, the OH temporal profiles were measured with high precision over a two-orders-of-magnitude decay in the OH signal.
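For reference, the pseudo-first-order decay rate is obtained from each OH temporal profile by fitting a single exponential; the sketch below demonstrates this on a synthetic profile spanning roughly two decades of decay (the signal amplitude and decay rate are made-up values).

```python
import numpy as np
from scipy.optimize import curve_fit

def oh_profile(t, s0, k):
    """Single-exponential OH temporal profile: S(t) = S0 * exp(-k * t)."""
    return s0 * np.exp(-k * t)

t = np.linspace(0.0, 5e-3, 50)              # reaction time, s
signal = oh_profile(t, 1000.0, 950.0)       # synthetic profile, ~2 decades decay

popt, _ = curve_fit(oh_profile, t, signal, p0=(900.0, 800.0))
print(f"fitted first-order decay rate = {popt[1]:.1f} s^-1")   # ~950
```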
Figure 3 summarizes the k′ data obtained over a range of experimental conditions at temperatures between 211 and 373 K. The pseudo-first-order rate coefficients obtained with both OH radical sources were linearly dependent on [CH 3 COCHO] at all temperatures. k 1 was obtained at each temperature by fitting all measured (k′ − k d ) values versus [CH 3 COCHO] together using an unweighted linear least-squares analysis. The room temperature rate coefficient obtained was k 1 (295 K) = (1.29 ± 0.05) × 10 −11 cm 3 molecule −1 s −1 , where the quoted uncertainty is the 2σ (95 % confidence level) precision of the fit. The measured rate coefficients were independent of the bath gas (He or N 2 ) and total pressure over the range 100-220 Torr. The measured rate coefficients were also independent of [OH] 0 , varied by a factor of ∼7, the concentrations of the OH precursors, varied by a factor of ∼2, and the photolysis laser fluence, varied by a factor of ∼3. The k 1 (T ) values obtained at each temperature are given in Table 2 and plotted in Fig. 4. A weighted linear least-squares fit of these data to the Arrhenius expression yielded the k 1 (T ) parameters listed in Table 3.
Uncertainty evaluations
The absolute uncertainty in the measured rate coefficients originates from uncertainties in the measurement parameters, the precision of the rate coefficient determinations, and potential systematic errors. Uncertainties arising from the pressure, temperature, and flow rate measurements were small and contribute less than 2 % to the overall uncertainty in [CH 3 COCHO]. The precision of the k 1 (T ) measurements was very high, with the error in the fits of the data to Eq. (1) being <5 % at the 95 % confidence level.
A potential source of systematic error in our experiments involves the determination of the CH 3 COCHO concentration in the reactor. The uncertainty in the infrared and UV absorption cross sections of methylglyoxal determined in this work was estimated to be ∼5 % at the 95 % confidence level. We estimate the uncertainty of [CH 3 COCHO] in the reactor to be 8 %. The CH 3 COCHO concentrations determined using FTIR before and after the reactor were in excellent agreement, <2 %, at all temperatures. This indicates that there was no measurable loss of methylglyoxal in the reactor due to decomposition at high temperature or condensation at low temperature. The measured first-order rate coefficients, k′, showed a linear dependence on [CH 3 COCHO], even at the lowest temperature in our experiments (see Fig. 3). This observation confirmed that dimerization of methylglyoxal, if any, which would most likely appear as non-linear behavior of k′ vs. [CH 3 COCHO], did not influence the kinetic measurements.
The presence of reactive impurities in the methylglyoxal sample could also influence the determination of k 1 . The most likely impurities generated during the synthesis of methylglyoxal were CO (∼2 × 10 −13 cm 3 molecule −1 s −1 at 298 K and 200 Torr N 2 ) and formaldehyde, H 2 CO (9 × 10 −12 cm 3 molecule −1 s −1 ), where their OH reaction rate coefficients are given in parentheses. CO, H 2 CO, and other unidentified volatile impurities were removed by pumping on the methylglyoxal sample until their levels were below the FTIR detection limit, as discussed earlier. At these levels, CO and H 2 CO would not contribute significantly to the measured loss of OH at the temperatures included in this work. There was no significant loss of methylglyoxal in the prepared mixtures over a period of 3 weeks. In addition, the rate coefficients obtained with the older mixtures were identical, within the uncertainty of the measurements, to the values obtained with freshly prepared samples.
Comparison with previous studies
A summary of previous Reaction (R1) rate coefficient results, along with the parameters determined in this work, is given in Table 3. The rate coefficient data from the previous studies, which extend over the temperature range 223-500 K, are included in Fig. 4 for comparison with the present work. The k 1 (298 K) values reported by Baeza-Romero et al. (2007) and Tyndall et al. (1995) are in good agreement with our results, while Plum et al. (1983) report a value that is ∼30 % greater. The rate coefficient measured by Kleindienst et al. (1982) at 297 K is roughly a factor of 2 less than that reported here. The presence of significant levels of low-reactivity impurities in their samples could have led to lower measured rate coefficients. However, in the absence of a sample analysis, it is not clear why their k 1 value is significantly lower than that obtained in the present work.
Baeza-Romero et al. (2007) and Tyndall et al. (1995) have reported on the temperature dependence of Reaction (R1). The data of Baeza-Romero et al. are in reasonable agreement with the present work in the overlapping temperature range, but show more scatter. Their data for temperatures >366 K deviate greatly from the rest of their data (see Fig. 4). The larger scatter may in part be the result of the method used to extract values of k 1 from bi-exponential fits of their measured non-exponential OH temporal profiles. Bi-exponential fitting of simulated OH temporal profiles, under their conditions, confirmed that a larger uncertainty in the returned values of k 1 should be expected. On the other hand, the Arrhenius parameters reported by Baeza-Romero et al. agree very well with those derived from our data (Table 3).
The rate coefficient values reported by Tyndall et al. (1995) below 298 K are systematically greater than those obtained in this work, leading to a substantially larger negative value of E/R. Tyndall et al. reported observing reversible sticking of methylglyoxal on the walls of their flow tube at temperatures <298 K, although it is not clear if this would account for the difference in the rate coefficients at low temperatures. Tyndall et al. (1995) reported a value of k 1 (298 K) at low pressure (2-3 Torr) that agrees well with the present work and the Baeza-Romero et al. (2007) value. The agreement in k 1 (298 K) over a broad range of pressure, 2-200 Torr, implies that there is no pressure dependence of Reaction (R1) under relevant atmospheric conditions. Galano et al. (2004) calculated k 1 (T ) using quantum chemistry and canonical variational transition state theory including small-curvature tunneling, and their values are included in Table 3 for comparison with the experimental results. The theoretically calculated value of k 1 (298 K), 1.35 × 10 −11 cm 3 molecule −1 s −1 , is in excellent agreement with that determined in this work and reported by Tyndall et al. (1995) and Baeza-Romero et al. (2007) (Table 2 and Fig. 4). However, the theoretically calculated temperature dependence, E/R = −(1060 ± 8) K, is much greater than that reported in our study and by Baeza-Romero et al. (2007), but is somewhat closer to that reported by Tyndall et al. (1995).
Combining all the previous temperature-dependent data in the overlapping temperature range with ours (211-373 K), except the data of Tyndall et al. (1995) at temperatures <298 K, we obtain, by a weighted fit, the Arrhenius expression k 1 (T ) = (1.82 ± 0.33) × 10 −12 exp[(577 ± 50)/T ] cm 3 molecule −1 s −1 , where the errors are at the 95 % confidence interval. Figure 4 includes the estimated error range in k 1 (T ) calculated with the expression used in the NASA/JPL evaluation (Sander et al., 2011), f (T ) = f (298 K) exp[g |1/T − 1/298|], where f (298 K) is the uncertainty in the rate coefficient at 298 K and g is an additional uncertainty term that accounts for increased uncertainty at temperatures other than 298 K. We recommend 2σ values of f (298 K) = 1.10 and g = 40 K.
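For readers who want to evaluate the recommended expression, the following sketch computes k 1 (T ) and the NASA/JPL-style uncertainty factor; the formulas are those given above, and the implementation itself is only an illustration.

```python
import numpy as np

def k1(T):
    """Recommended k1(T) for OH + CH3COCHO, cm^3 molecule^-1 s^-1."""
    return 1.82e-12 * np.exp(577.0 / T)

def f_unc(T, f298=1.10, g=40.0):
    """NASA/JPL-style uncertainty factor: f(T) = f(298) * exp(g*|1/T - 1/298|)."""
    return f298 * np.exp(g * abs(1.0 / T - 1.0 / 298.0))

for T in (211.0, 298.0, 373.0):
    print(f"T = {T:5.1f} K: k1 = {k1(T):.2e}, uncertainty factor = {f_unc(T):.2f}")
```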
All available studies of Reaction (R1) report a negative temperature dependence for k 1 . The temperature dependence is slightly larger than that reported for the reaction of OH with aliphatic aldehydes, e.g., E/R = −330 K for the OH + CH 3 CHO reaction at temperatures <300 K. Unlike glyoxal, which exhibits weak non-Arrhenius behavior (Feierabend et al., 2008), Reaction (R1) follows Arrhenius behavior, within the precision of the measurements, over the temperature range 211-373 K. The negative temperature dependence is consistent with Reaction (R1) proceeding via a hydrogen-bonded pre-reactive complex (Smith and Ravishankara, 2002). Theoretical calculations (Galano et al., 2004) found a reaction mechanism involving the formation of six- and seven-membered hydrogen-bonded adducts, [CH 3 COCHO...OH] * , as reaction intermediates in the H atom abstraction from the -CHO and -CH 3 groups, respectively. Galano et al. (2004) calculated the stabilization energies of the adducts for H abstraction from the -CHO and -CH 3 groups to be −3.28 and −2.82 kcal mol −1 , respectively. Although the stabilization energies of the adducts are very close, the calculated overall activation energy for aldehydic H atom abstraction is negative (−2.39 kcal mol −1 ), while that for H atom abstraction from the -CH 3 group is substantially positive (3.65 kcal mol −1 ), making the former the most probable reaction pathway (Galano et al., 2004). H atom abstraction from the -CH 3 group could contribute at most ∼1 % of k 1 , based on a comparison of the rate coefficient for the OH + CH 3 COCH 3 (acetone) reaction, k(298 K) = 1.8 × 10 −13 cm 3 molecule −1 s −1 , with that of the OH + CH 3 CHO reaction, k(298 K) = 1.5 × 10 −11 cm 3 molecule −1 s −1 . In summary, the experimental and theoretical results point to the formation of a pre-reactive complex and abstraction of the aldehydic H atom in Reaction (R1), leading to the observed negative activation energy.
NO 3 + glyoxal
Figure 5 shows the measured values of k′ as a function of [glyoxal] obtained at 296 and 353 K using the absolute flow tube kinetic method. The measured pseudo-first-order rate coefficients, k′, are small, reflecting the small rate coefficients for this reaction; the wall loss of NO 3 was 0.2-0.5 s −1 . The k 3 values obtained are k 3 (296 K) = (4.2 ± 0.8) × 10 −16 and k 3 (353 K) = (7.9 ± 3.6) × 10 −16 cm 3 molecule −1 s −1 , which show an increase in reactivity with increasing temperature that is consistent with an abstraction reaction mechanism. However, due to the large uncertainty in k 3 (353 K), the temperature dependence of k 3 (T ) is not well established, and it is not advisable to derive an activation energy from these two data points.
Figure 6 shows the relative rate data for Reaction (R3) with CH 2 =CH 2 as the reference compound. The rate coefficient ratio, k 3 /k Ref , obtained from a linear least-squares fit of the data to Eq. (3) is 1.9 ± 0.2, which yields k 3 (296 K) = (4.0 ± 1.0) × 10 −16 cm 3 molecule −1 s −1 . The possible formation of HO 2 radicals in the presence of NO 3 would lead to the formation of OH radicals, so experiments were also performed with an OH radical scavenger added to the reaction mixture. CF 3 CF=CHF, 1.7 × 10 16 molecule cm −3 , was used as the OH scavenger due to its slow reaction with NO 3 . The measured k 3 was identical to that obtained in the absence of CF 3 CF=CHF, which indicates that secondary OH radical chemistry did not influence the determination of k 3 . Experiments were performed using iso-butane as the reference compound to evaluate possible interference from NO 2 and N 2 O 5 reactions with the ethene, CH 2 =CH 2 , reference compound. NO 2 is known to react slowly with conjugated dialkenes (Barnes et al., 1990), while it is possible that N 2 O 5 may react with alkenes. Therefore, k 3 was also measured relative to iso-butane, a saturated hydrocarbon that is not likely to react with N 2 O 5 or NO 2 . The ratio k 3 /k iso-butane was determined to be 3.0 ± 0.2, which yields k 3 = (3.3 ± 0.85) × 10 −16 cm 3 molecule −1 s −1 , in agreement, within the measurement uncertainty, with k 3 obtained using C 2 H 4 as the reference. The apparatus and methods were also tested by measuring the rate coefficient ratio for the reactions of NO 3 with CH 2 =CH 2 and iso-butane. The measured ratio was 1.53 ± 0.38, which is lower than the recommended literature rate coefficient ratio of 1.91 ± 0.50 but falls within the current estimated uncertainties for these relatively slow reactions (Atkinson et al., 2006; Barnes et al., 1990; Canosa-Mas et al., 1988). Thus, it appears that there was no significant interference from the reactions of NO 2 or N 2 O 5 in our experiments.
NO 3 + methylglyoxal
k 4 was measured using the relative rate technique with CH 2 =CH 2 as the reference compound. Two sets of experiments were performed using different methylglyoxal samples, with CF 3 CF=CHF, 1.7 × 10 16 molecule cm −3 , added as an OH radical scavenger. The experimental results are shown in Fig. 7. The rate coefficient ratios, k 4 /k Ref , determined from linear least-squares fits of the data to Eq. (3), were 2.9 ± 0.5 and 1.9 ± 0.2, where the quoted error limits are 2σ from the precision of the fits. Taking an average rate coefficient ratio of 2.4 ± 1.0 yields k 4 (295 K) = (5.1 ± 2.1) × 10 −16 cm 3 molecule −1 s −1 . The agreement between the two experiments is rather poor compared with the results obtained in the glyoxal + NO 3 reaction study described above; we have no definitive explanation for the less reproducible results in the methylglyoxal experiments. The total reactant and reference compound losses (10-40 %) were relatively small over the long experiment duration (4 h), which led to greater uncertainty in the measured rate coefficients.
Comparison of NO 3 rate coefficients
Rate coefficients for the reactions of NO 3 with glyoxal and methylglyoxal have not been reported previously. Therefore, we compare the present results with rate coefficients reported for other aldehydes and ketones. The measured values of k 3 and k 4 from this work, along with the rate coefficients for the acetaldehyde (CH 3 CHO), formaldehyde (HCHO), and acetone reactions, are listed in Table 4.
k 3 and k 4 are slow and similar in magnitude; k 3 is ∼20 % less than k 4 . That is, although glyoxal has two identical -C(O)H groups, its reactivity is actually slightly less than that of methylglyoxal. The lower reactivity of glyoxal could be attributed to mutual deactivation of the aldehydic H atoms by the adjacent electron-withdrawing -C=O groups. The presence of the -CH 3 group in methylglyoxal may offset the electron withdrawal of the α-carbonyl group, thereby making the aldehydic H atom in methylglyoxal more reactive. This could, in part, account for the similar reactivities observed for glyoxal and methylglyoxal.
k 3 and k 4 are a factor of 5-7 less than the NO 3 + acetaldehyde reaction rate coefficient and, therefore, do not follow the trend seen in the OH radical reactivity. The room temperature OH rate coefficients of glyoxal and methylglyoxal are ∼40 % and ∼13 % less, respectively, than that of acetaldehyde, which is 1.5 × 10 −11 cm 3 molecule −1 s −1 at 298 K. However, the OH reaction rate coefficient of glyoxal compares well with that of formaldehyde (HCHO), and so do their NO 3 rate coefficients at room temperature. Based on the C-H bond energies of the -C(O)H group alone, the rate coefficients for the reactions of the NO 3 radical with glyoxal and methylglyoxal would be expected to be greater than that for the acetaldehyde reaction. The estimated C-H bond energies of the -C(O)H group in glyoxal, methylglyoxal, formaldehyde, and acetaldehyde are 84.8 (Feierabend et al., 2009), 74, 88.6, and 89.4 kcal mol −1 , respectively (Sander et al., 2011; Galano et al., 2004). The measured rate coefficients do not seem to correlate with the bond energies of the -C(O)H group, which is the most likely site for H atom abstraction: the C-H bond energy in the -CH 3 group (95.9 kcal mol −1 ) is much greater, and the barrier for H abstraction from the -CH 3 group is higher, ∼7 kcal mol −1 compared to ∼3 kcal mol −1 for the -C(O)H group (D'Anna et al., 2003). For acetone, abstraction of H from the -CH 3 group (C-H bond energy = 96.4 ± 1.0 kcal mol −1 ) (Espinosa-Garcia et al., 2003) is the only reaction pathway and is much slower than abstraction from an aldehydic -C(O)H group. Based on the thermochemical data, the major pathway in the reactions of NO 3 with methylglyoxal and acetaldehyde is expected to be abstraction of the aldehydic H atom rather than abstraction from the -CH 3 group. Another factor that can influence the trend in reactivity is the steric effect of the much larger NO 3 radical compared to OH, which could alter the reactivity trend. At present, it is not clear why k 3 and k 4 are substantially less than the NO 3 + acetaldehyde rate coefficient. High-level quantum chemistry calculations may shed some light on the reactivity trend of the NO 3 radical with dicarbonyls and other aldehydes.
Atmospheric implications
The primary atmospheric loss processes for methylglyoxal include reaction with the OH radical, UV/vis photolysis, and uptake on clouds and aerosols, as outlined in Fig. 1. The reaction with Cl atoms is expected to be a minor loss process. Dry deposition losses are not well defined and are often ignored. Recently, Karl et al. (2010) estimated, based on their laboratory and field measurements and transport modeling, that dry deposition of glyoxal and methylglyoxal can be ∼100 % and ∼20 % greater, respectively, than previous estimates (Goldstein and Galbally, 2007; Hallquist et al., 2009; Zhang et al., 2002), which were used by Fu et al. (2008). In addition, ground-level measurements of oxygenated VOCs, particularly at night under high relative humidity, could be significantly impacted by dry deposition. Dry and wet deposition and uptake on clouds and aerosols are a function of location and are likely to be significant nighttime loss processes. Fu et al. (2008) calculated the lifetime of methylglyoxal due to uptake on clouds and aerosols to be ∼17 h using the uptake coefficient for methylglyoxal on sulfuric acid solutions (50-85 weight percent, T: 250-298 K) of 2.3 × 10 −3 reported by Zhao et al. (2006). On the other hand, Kroll et al. (2005) did not observe any net growth of ammonium sulfate aerosols in the presence of methylglyoxal at ∼50 % relative humidity. This is contrary to the growth of particles observed in the presence of other dicarbonyls (Jang et al., 2003, 2005; Jang and Kamens, 2001). The large discrepancy could be due to the low relative humidity (50-55 %) used in the Kroll et al. study, leading to low uptake. Fu et al. (2008) also calculated the global lifetimes due to UV/vis photolysis and OH reaction to be 2.2 and 20 h, respectively. They also report a negligible effect of the NO 3 reaction on the atmospheric loss of methylglyoxal, even though a value of k 4 greater than that measured in the present work was used in their analysis (Fu et al., 2008; Myriokefalitakis et al., 2008).
Using the rate coefficient data from this work, we estimate the lifetime of methylglyoxal with respect to NO 3 reactive loss to be ∼9 days, assuming ∼100 pptv NO 3 . The NO 3 reaction, therefore, represents only a minor atmospheric loss process for methylglyoxal. Including dry deposition would decrease the lifetimes of both dicarbonyls. However, their atmospheric loss would still be dominated by UV photolysis, OH reaction, and uptake on clouds and aerosols.
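This lifetime estimate follows from τ = 1/(k 4 [NO 3 ]); a minimal sketch, assuming a surface air number density of 2.46 × 10 19 molecule cm −3 to convert 100 pptv to an NO 3 concentration:

```python
# Lifetime of methylglyoxal against NO3 reaction: tau = 1 / (k4 * [NO3]).
k4 = 5.1e-16                  # cm^3 molecule^-1 s^-1 (this work, 295 K)
m_air = 2.46e19               # assumed air number density, molecule cm^-3
no3 = 100e-12 * m_air         # 100 pptv NO3 -> ~2.5e9 molecule cm^-3

tau_days = 1.0 / (k4 * no3) / 86400.0
print(f"tau(NO3) = {tau_days:.1f} days")   # ~9 days
```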
The CH 3 COCO radical, which is formed in channel (R1a) with a ∼99 % yield (see the comparison section on the OH reaction), decomposes promptly to CH 3 CO and CO under atmospheric conditions, in <15 µs (Green et al., 1990). The nascent CH 3 CO radical has sufficient internal energy that as much as 40 % may dissociate further to CH 3 and CO even under atmospheric conditions, thereby potentially reducing the effective yield of CH 3 CO (Baeza-Romero et al., 2007). CH 3 CO reacts with O 2 to form the peroxyacetyl (PA) radical, CH 3 C(O)O 2 , which in turn reacts with NO 2 to produce peroxyacetyl nitrate (PAN), CH 3 C(O)O 2 NO 2 . The NO reaction with the PA radical produces the CH 3 C(O)O radical and, eventually, HO x . The atmospheric lifetime of PAN is predominantly controlled by its temperature-dependent thermal decomposition and is therefore highly altitude dependent; at altitudes >7 km, PAN lifetimes can exceed several months. Thus, methylglyoxal, which has a short lifetime, leads to the formation of PAN, which is potentially much longer lived, can be transported over long distances, and can therefore impact atmospheric chemistry in remote locations. The loss of methylglyoxal due to the OH reaction leads to a null HO x production cycle if the products of the reaction do not leave the region. The OH reaction with methylglyoxal and the subsequent formation of PAN lead to the local loss of OH and NO 2 radicals, while the transport of PAN to remote areas represents a HO x and NO 2 source. Thus, the OH reaction with methylglyoxal acts as a local radical sink and a remote source of HO x . The PAN yield from the degradation of methylglyoxal will depend on the overall yield of the CH 3 CO radical, the rate coefficients of the PA radical reactions with NO and NO 2 (2.0 × 10 −11 and 1.2 × 10 −11 cm 3 molecule −1 s −1 , respectively, at 298 K and atmospheric pressure), and the NO 2 /NO ratio (Atkinson et al., 2006). Plum et al. (1983) qualitatively observed the formation of PAN following the irradiation of a methylglyoxal-NO x -air mixture using a solar simulator, but quantitative yields are currently not available. A direct measurement of molecular yields in the degradation of methylglyoxal under atmospheric conditions is needed.
UV/vis photolysis of methylglyoxal leads to net HO x production because HO x is not consumed in the initial methylglyoxal destruction step. PAN is also formed following photolysis of methylglyoxal via the mechanism described above. The NO 3 reaction contributes negligibly (<1 %) to the loss of methylglyoxal but would be a nighttime source of PAN. Uptake of methylglyoxal into clouds and aerosols removes reactive hydrocarbons from the atmosphere, reduces the oxidative capacity of the atmosphere, and short-circuits PAN production. The heterogeneous loss of methylglyoxal would also have an impact on secondary aerosol formation.
The global lifetimes of glyoxal due to loss via UV/vis photolysis, OH reaction, and uptake on clouds and aerosols are reported to be 4.9, 20, and 20 h, respectively (Chen et al., 2000; Fu et al., 2008; Staffelbach et al., 1995). An atmospheric modeling study, based on an estimated rate coefficient for the NO 3 + glyoxal reaction greater than that obtained in this work, showed this loss process to be negligible (Fu et al., 2008). On the basis of the rate coefficient measured in this work, the estimated glyoxal lifetime due to NO 3 reaction is ∼12 days for an NO 3 abundance of 100 pptv. The NO 3 reaction would contribute <1 % to the total loss of glyoxal.
Fig. 1 .
Fig. 1. Simplified atmospheric degradation scheme for methylglyoxal highlighting loss via UV photolysis, OH radical reaction, NO 3 radical reaction, and cloud and aerosol uptake. Approximate atmospheric lifetimes are included (see text for details).
Fig. 3 .
Fig. 3. Plots of (k′ − k d ) vs. [CH 3 COCHO], where the data points were obtained using 248 nm photolysis of H 2 O 2 as the OH radical source in the absence and presence of O 2 . The error bars for individual (k′ − k d ) values are the 2σ precision obtained from fits as shown in Fig. 2. The lines are linear least-squares fits to all the data at each temperature.
Fig. 5 .
Fig. 5. Pseudo-first-order rate coefficients for the NO 3 + (HCO) 2 (glyoxal) reaction obtained at 296 and 353 K using a fast flow reactor with chemical ionization mass spectrometer (CIMS) detection of NO 3 .
Fig. 6 .
Fig. 6. Relative rate data for the NO 3 + (HCO) 2 (glyoxal) reaction at 296 K and 650 Torr (synthetic air) with C 2 H 4 and iso-butane as the reference compounds. The solid symbols represent the data obtained in the presence of 1.7 × 10 16 molecule cm −3 of CF 3 CF=CHF. The lines are linear least-squares fits of the data to Eq. (3), where the uncertainty is 2σ of the fit precision.
Fig. 7 .
Fig. 7. Loss of methylglyoxal versus loss of the reference compound, C 2 H 4 , in the relative rate study of the NO 3 reaction at 296 K in 650 Torr of dry air.
[OH] t is the OH concentration at time t, and k d is the first-order rate coefficient for OH loss in the absence of CH 3 COCHO, which is primarily due to reaction with the OH precursor and diffusion out of the detection volume.
Table 1 .
Band strengths and peak cross sections of methylglyoxal, CH 3 COCHO, measured in this work at 296 K and compared with Staffelbach et al. (1995). The tabulated quantities are the integration range (cm −1 ) and the band strength (10 −18 cm 2 molecule −1 cm −1 ) from Staffelbach et al. (1995) and from this work*. * The uncertainties are 2σ from the precision of the linear least-squares analysis of the integrated absorbance versus concentration.
Table 2 .
Summary of experimental conditions and measured rate coefficients for the OH + CH 3 COCHO (methylglyoxal) reaction, k 1 (T ). The quoted uncertainties are 2σ from the precision of linear least-squares fits of the pseudo-first-order rate coefficients, k′, versus the methylglyoxal concentration.
Table 3 .
Summary of rate coefficient data, k 1 (T ), for the reaction OH + CH 3 COCHO → products.
Table 4 .
Comparison of NO 3 reaction rate coefficients for glyoxal and methylglyoxal measured in this work with those for other carbonyl compounds.
|
v3-fos-license
|
2021-03-13T06:16:42.816Z
|
2021-03-11T00:00:00.000
|
232208321
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0247512&type=printable",
"pdf_hash": "223a3fdda76af3c97d44d593eb705842b3d1148c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43290",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "ad935e71b43f5c91bacf55c04d765d20b7322187",
"year": 2021
}
|
pes2o/s2orc
|
An epidemic model for non-first-order transmission kinetics
Compartmental models in epidemiology characterize the spread of an infectious disease by formulating ordinary differential equations to quantify the rate of disease progression through subpopulations defined by the Susceptible-Infectious-Removed (SIR) scheme. The classic rate law central to the SIR compartmental models assumes that the rate of transmission is first order regarding the infectious agent. The current study demonstrates that this assumption does not always hold and provides a theoretical rationale for a more general rate law, inspired by mixed-order chemical reaction kinetics, leading to a modified mathematical model for non-first-order kinetics. Using observed data from 127 countries during the initial phase of the COVID-19 pandemic, we demonstrated that the modified epidemic model is more realistic than the classic, first-order-kinetics based model. We discuss two coefficients associated with the modified epidemic model: transmission rate constant k and transmission reaction order n. While k finds utility in evaluating the effectiveness of control measures due to its responsiveness to external factors, n is more closely related to the intrinsic properties of the epidemic agent, including reproductive ability. The rate law for the modified compartmental SIR model is generally applicable to mixed-kinetics disease transmission with heterogeneous transmission mechanisms. By analyzing early-stage epidemic data, this modified epidemic model may be instrumental in providing timely insight into a new epidemic and developing control measures at the beginning of an outbreak.
Introduction
In epidemic control, speedy action guided by knowledge of the pathogen is crucial. Mathematical models have become essential tools for understanding infectious diseases since the early 20th century [1]. A critical challenge in modeling epidemics is how to gain insight into the intrinsic mechanism of disease transmission during the early stages of an epidemic, when only limited data are available [2,3].
The Susceptible-Infectious-Removed (SIR) model is based on a scheme that compartmentalizes the population into susceptible (S), infectious (I), and removed (R) subpopulations [4]. Coefficients and ordinary differential equations are used to quantify the transformation of subjects from one subpopulation to another (Fig 1). Generally, coefficients in these equations are solved numerically or analytically, and the course of epidemics can be predicted via simulation.
Model coefficients, such as β and γ, illustrate the properties of infectious diseases by quantifying the progression rates. In addition, epidemiological indices, such as the basic reproduction number R 0 , can be derived from these coefficients [1,5,6]. R 0 is defined as the average number of secondary cases produced by one infectious agent during the whole infectious period in a fully susceptible population. It quantifies the transmission potential of an infectious disease and is easy to understand conceptually. As such, R 0 provides a point of reference for other epidemics and helps detect heterogeneous conditions and populations, in which R 0 may take on different values [7].
Environmental factors (e.g., contact structure heterogeneity) and intervention measures (e.g., social distancing and contact tracing) introduce complexity to the natural course of an epidemic, which makes it challenging to estimate R 0 [8,9]. Therefore, the effective reproduction rate Re has often been used instead. Re is a dynamic index of real-time disease status, and when used with R 0 , it can provide a comparative reference [9,10].
The current study proposes a modified mathematical model based on the modified SIR scheme. Like many previous studies, the mathematical model proposed in the current study derives inspiration from chemical reaction kinetics [11][12][13], with the critical difference that the transmission "reaction" is not assumed to be first-order with respect to the infectious population. The modified model provides two disease-describing parameters: transmission rate constant k and transmission reaction order n. k responds to external intervention measures, such as disease control measures, whereas n is conceptually linked to the intrinsic properties of the epidemic agent, such as the reproduction number. Fig 2 provides an overview of the current study.
The SIR model
The differential equations in Fig 1 depict a disease progression dynamic parallel to an autocatalytic chemical reaction, with the subpopulations S, I, and R representing different reactive molecular species in the reaction mixture. The disease spreading process can be treated as a reaction converting the "reactant" S into I, where the infectious agent I is both the product and the catalyst:

S + I → 2I (1)

I → R (2)

Based on the reaction schemes shown in Eqs (1) and (2), the classic rate law for the change in the number of infectious cases is expressed in Eq (3):

d[I]/dt = k[S][I] − k r [I] (3)

where [I] and [S] are the population densities of the infected (or infectious) and susceptible individuals, respectively, while k and k r are the reaction rate constants for infection and removal, respectively. In chemical reaction terms, Eq (3) describes first-order reaction kinetics in the infectious agent I and, in the early stages of an epidemic, leads to exponential growth of [I] over time t. However, it has been noted that this exponential growth trajectory does not fit real-world data well [14][15][16][17]. Various statistical strategies have been used to address this discrepancy, including the adoption of a time-dependent rate constant k [10,17].
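For concreteness, the classic first-order scheme of Fig 1 can be integrated numerically; the sketch below uses scipy with illustrative (not fitted) values of β, γ, and the initial conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR rate equations: transmission first order in S and I."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

beta, gamma = 0.4, 0.1                     # illustrative parameters
y0 = [0.999, 0.001, 0.0]                   # initial S, I, R fractions
sol = solve_ivp(sir, (0.0, 120.0), y0, args=(beta, gamma), dense_output=True)

days = np.array([0, 10, 20, 40, 80])
print(sol.sol(days)[1])                    # I(t); early growth is near-exponential
```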
The current paper draws from a more general framework pioneered by Wilson and Worcester [18], in which the transmission rate is not linearly proportional to S and I (i.e., exponential growth) but instead follows a more generally applicable rate law: the rate of transmission is not β[S][I] but β[S] q [I] p [19,20], where p and q are positive constants. It has been discussed why infectious disease outbreaks do not grow exponentially [20,21]. Some later studies have adopted this approach and incorporated p and q as "decelerating" parameters to improve model fit [22,23]. In the current study, we provide a theoretical rationale for the modified rate law by drawing from chemical kinetics and demonstrate the modified model by analyzing observed data from 127 countries during the initial phase of the COVID-19 pandemic.
A modified mathematical model with non-first-order transmission kinetics
In a reaction rate equation, the power to which the concentration of a species is raised is called the order of the reaction with respect to that species [24]. Reaction order is an empirical value deduced from observed data, and reaction mechanism analysis is based on it [25]. In chemical reaction terms, Eq (3) describes a first-order reaction kinetics in the infectious agent I. While Eq (3) points to a one-to-one transmission mechanism that is appealing in its simplicity, the reason that disease transmission may not be first-order regarding [I] is twofold: 1. Due to contact structure heterogeneity, multiple transmission modes are more likely, with each having its own kinetics and reaction order. As in a chemical reaction with mixed kinetics [26], the overall, apparent reaction order would be a moving average of the mixed reaction orders, shifted by reaction conditions. During epidemics, movement restriction measures (e.g., travel restrictions) could change the kinetics of disease transmission, which is analogous to chemical reaction situations where an impediment of mass transfer changes the kinetics in diffusion-controlled chemical reactions and, subsequently, the overall reaction order [27].
2. As depicted in the SIR scheme, disease transmission is an autocatalytic process. The kinetic behavior of a wide range of autocatalytic systems can generally be expressed by the following scheme (see [28]):

A + nB → (1 + n)B

where A is the reactant, B is the product and catalyst, and n is the order of the autocatalytic reaction with respect to B. Note that in this expression, n is not a stoichiometric coefficient as in a mass-balance reaction equation. The chemical kinetics of autocatalytic reactions has been extensively studied, with the reaction orders deduced from experimental data. Often, the reaction turned out to be non-first-order. For example, the decomposition of nitrobenzene derivatives manifested reaction orders (n values) ranging from 0.6 to 1.8 [29]. Available data on the COVID-19 pandemic suggest that there may be multiple transmission modes, including one-to-one and one-to-many [30,31]. Thus, the assumption contained in Eq (3) that the transmission is first-order with respect to [I] may not always hold and needs to be made more general. Furthermore, mechanistically, viral transmission does not always follow the molecularity [24] implied in Eq (3). To better reflect this, Eqs (1) and (2) can be modified as follows:

S + nI → (1 + n)I (6)

I → R (7)

The rate law for the number of infectious cases after this modification becomes:

d[I]/dt = k[S][I] n − k r [I] (8)

where the reaction order n is an empirical value to be extracted from observed data. Note that Eq (3) can be regarded as a special case of Eq (8), in which disease transmission follows first-order reaction kinetics with n = 1.
To further develop the modified mathematical model, it is necessary to establish a clear definition of [I]. In analogous chemical reaction terms, this is the concentration or density of infectious agents that are active in the population at a given time. The cumulative case density is not appropriate for this purpose, except at the beginning of an epidemic. During an epidemic, the infectious population is continuously filled by newly infected cases while simultaneously being drained by those who recover, are quarantined, or pass away. The effect of this draining on [I] must be taken into consideration. The draining starts when infected individuals begin to recover or are quarantined after developing symptoms. However, in the early phase of an epidemic, almost no significant draining occurs, and consequently k r [I] is negligible. Therefore, we can eliminate the term k r [I] from Eq (8) and use the cumulative confirmed case density for [I].
In addition, the infected population is extremely small relative to the total population in the early stage of an epidemic. Therefore, Eq (8) can be simplified further by approximating [S] as a constant and treating the reaction as pseudo-n th order in [I]:

d[I]/dt = k′[I] n , with k′ = k[S] (9)

Eq (9) can be solved analytically. Integration gives the following equation:

[I] = [(1 − n)k′t + [I] 0 1−n ] 1/(1−n) (10)

Note that Eq (10), when n is 1, becomes an exponential growth function. When n is not 1, and the initial case density [I] 0 is negligibly small, population-level epidemic data can be fitted as follows:

ln[I] = a + b ln t (11)

where

b = 1/(1 − n) and a = b ln[(1 − n)k′]

Thus, the modified SIR model shown in Eqs (10) and (11) expands the classic SIR model to include epidemic episodes with non-first-order transmission kinetics. Critical model coefficients, such as the rate constant k′ and the reaction order n, can be obtained by fitting this model to observed data. In the next section, we demonstrate this approach using the COVID-19 data.
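A brief sketch of the fitting procedure implied by Eqs (10) and (11): synthetic case counts are generated with assumed values n = 0.33 and k′ = 0.5, and the regression of ln[I] on ln t recovers them; the back-transformation uses b = 1/(1 − n) and a = b ln[(1 − n)k′] as derived above.

```python
import numpy as np

# Synthetic cumulative case density from Eq (10) with assumed n and k'
n_true, k_true = 0.33, 0.5
t = np.arange(1, 15)                                   # epidemic age, days
I = ((1.0 - n_true) * k_true * t) ** (1.0 / (1.0 - n_true))

# Eq (11): ln[I] = a + b*ln(t), with b = 1/(1-n) and a = b*ln((1-n)*k')
b, a = np.polyfit(np.log(t), np.log(I), 1)
n_fit = 1.0 - 1.0 / b
k_fit = b * np.exp(a / b)
print(f"n = {n_fit:.2f}, k' = {k_fit:.2f}")            # recovers 0.33 and 0.50
```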
Data and methods
COVID-19 data are compiled by Our World in Data (ourworldindata.org) and available at https://github.com/owid/covid-19-data/tree/master/public/data. The data downloaded for the current study span January 1, 2020 to June 30, 2020 and include the number of infected cases from 210 countries and independently administered regions. The terms countries and regions are used interchangeably for simplicity in the current study. The variables included in the data set are the numbers of confirmed cases, deaths, and tests conducted, as well as country-level variables concerning the demographics, economy, health, and disease control measures of each country.
All analyses were conducted using Python [32]. Random Forest (RF) regression analyses were conducted using Scikit-Learn (v0.22) developed for Python. Scikit-Learn is a free software machine learning library for the Python programming language (for more information, https://scikit-learn.org/stable/faq.html). Data and computing codes for the analyses reported in this article are available in an online repository [33].
COVID-19 pandemic modeling
The relationship between ln[I] and ln t was examined using the COVID-19 data from 127 countries. Of the total 210 countries in the Our World in Data set, countries with a population of at least one million and with at least one confirmed case per million were included (156 countries). Of those, we excluded 29 countries with fewer than three cases per million on the 14th day after the first day with one confirmed case per million, due to concerns about data accuracy, resulting in N = 127 countries.
We defined the age of the epidemic, t, as the number of days since the day when there was at least one confirmed case per million population. In Fig 3, each line represents one country or region. Starting at around t = 14 days (~2.6 on the ln scale), the curves of many countries show downward bumps or inflection points. This timing coincides with the two-week isolation period recommended for people exposed to the SARS-CoV-2 virus. Conceivably, only after this time does the pool of infected individuals start to be drained by removal (i.e., recovery or quarantine). Therefore, data from the first 14 days were used in all subsequent analyses.
The modified model in Eq (11) was fitted to data from each country. The resulting R 2 values were mostly between 0.95 and 1 (Fig 5, blue). In contrast, the linear relationship between ln[I] and t entailed by the classic exponential model fitted the same data comparatively less well, as shown by the distribution of R 2 (Fig 5, orange). Both models estimate the same number of regression coefficients (i.e., an intercept and a regression slope); therefore, the R 2 distributions in Fig 5 are directly comparable. Parameter estimates a and b for each country were obtained from the ln[I] ~ ln t regression: a and b are the intercept and slope, respectively, of the regression line. The n and k′ values for each country were then calculated from the a and b values using the equations previously defined (see Eq (11)). The transmission rate constant k for each country was further calculated by dividing k′ by the population density [S] of that country (see Eq (9)).
A visual inspection of the distributions of n and k showed some potential outliers. We calculated the multivariate Mahalanobis distance metric to assess how far each country deviates from the center of the multivariate normal distribution, applied chi-square tests to all distance metric values (df = 1, p < 0.05), and consequently removed the following seven outliers: Estonia, Puerto Rico, Palestine, Jamaica, Belarus, Papua New Guinea, and Togo. The supporting information (see the S1 Appendix in S1 File) provides detailed coverage of the outliers, outlier detection methods [34], and outcomes. Fig 6 shows the summary results after removing the seven countries identified by the multivariate Mahalanobis distance metric. The average transmission reaction order n from 120 countries was 0.33, with a standard deviation of 0.14. The average transmission rate constant k was 0.31, with a standard deviation of 0.56.
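A sketch of the outlier screening described above, using the squared Mahalanobis distance with a chi-square cutoff; the data here are randomly generated stand-ins for the country-level (n, k) estimates, and the degrees of freedom are set to the number of variables as a conventional choice.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.05):
    """Flag rows whose squared Mahalanobis distance from the sample mean
    exceeds the chi-square critical value (df = number of variables)."""
    centered = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return d2 > chi2.ppf(1.0 - alpha, df=X.shape[1])

rng = np.random.default_rng(0)
nk = rng.normal([0.33, 0.31], [0.14, 0.56], size=(127, 2))  # stand-in (n, k)
print(np.flatnonzero(mahalanobis_outliers(nk)))             # indices to remove
```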
For a chemical reaction, the reaction rate constant k is expressed by the Arrhenius equation:

k = A e −Ea/RT

where the pre-exponential factor A is a measure of how frequently collisions occur, and the exponential factor, e −Ea/RT , indicates the fraction of collisions with enough kinetic energy to lead to a reaction. Hence, the rate constant k gives the rate of successful collisions [24]. Intuitively, the counterpart parameter k in our model is likewise linked to the rate of "successful" disease transmissions via interactions between infectious and susceptible individuals. Country-level differences in disease control measures, such as social distancing, as well as pre-COVID-19 conditions, would then result in noticeable country-to-country variation in k. The variance of k provides an opportunity to use statistical modeling or machine learning techniques to discover associations of transmission rates with country-level characteristics and country-specific disease-fighting measures.
We conducted a Random Forest (RF) analysis [35] using each country's sociodemographic data and disease control measures as variables to predict epidemic-defining parameters k and n. RF uses a nonparametric, ensemble learning algorithm, which has been shown to yield significant improvements in prediction accuracy, compared with other algorithms, especially with a small sample with many features as we have here (see [36] for more explanations on this method). Significantly, RF can provide the relative Gini importance of each feature in predicting a dependent variable. In other words, RF can illustrate which country-level properties are relatively more closely associated with epidemic growth rates. Such information can be critical for formulating effective infection control strategies.
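The following sketch shows the kind of Random Forest regression used here with scikit-learn, applied to a randomly generated stand-in for the 14-feature country matrix; the sample values and the synthetic relationship between features and k are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 14))                 # stand-in country feature matrix
k = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=120)

rf = RandomForestRegressor(n_estimators=25, oob_score=True, random_state=0)
rf.fit(X, k)
print(f"training R^2   = {rf.score(X, k):.2f}")
print(f"out-of-bag R^2 = {rf.oob_score_:.2f}")
print(rf.feature_importances_)                 # Gini importance per feature
```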
Using RF models (25 trees with automatic selection of the number of features), country-level features predicted k values well (training R 2 = 0.92; out-of-bag R 2 = 0.49). Of all 14 features entered, the number of COVID-19 tests conducted and the population density had the most substantial impact on k (see Fig 7; horizontal orange bars). All other country-level features had less influence on k.
In contrast, country-specific data proved to be relatively weak predictors of n (175 trees, selecting the square root of the number of features; training R 2 = 0.91 and out-of-bag R 2 = 0.32). No feature stood out as critically important in predicting n (Fig 7; horizontal blue bars).
Discussion
The prevailing pandemic modeling approaches contain the assumption of first-order kinetics with respect to [I], which mathematically leads to an exponential growth function. However, this assumption is unrealistic in many cases, as demonstrated here with real data. The current study developed a modified mathematical model (Eq (11)) derived from a different rate law (Eq (8)) with n th reaction order, where n is deduced from data. The results of the analysis of COVID-19 pandemic data suggest that this model provides a more accurate and inclusive description of the virus transmission dynamics. The modified mathematical model provides two parameters describing an epidemic: transmission rate constant k and transmission reaction order n. When group-level heterogeneity exists, statistical or machine learning models can uncover hidden associations between group-specific features and outcome differences represented by k. For the COVID-19 pandemic data, the value of k for each country appears to be strongly associated with control measures, such as testing, and environmental factors, such as population density. Although Random Forest results should not be interpreted as causal inference, such information provides valuable guidance for selecting effective disease control measures.
Transmission reaction order n, like its chemistry counterpart, appears to be more related to the reaction mechanism and less impacted by environmental factors. We observed that the n values of 120 different countries with very different characteristics were distributed narrowly around a mean of 0.33, with a standard deviation of 0.14 and a coefficient of variation of 0.42 (0.14/0.33). Meanwhile, country-level features were demonstrated to be relatively weak predictors of n. These findings suggest that n is a quantity pertaining to the intrinsic properties of the epidemic agent.
Furthermore, Eq (6) can be transformed into the following equivalent form:

mS + I → (1 + m)I (12)

where m = 1/n. The parameter m has its origin rooted in the "molecularity" of the disease transmission "reaction" through n. For the 120 countries in this study, m values ranged from 1.64 (Ireland) to 43.65 (Gabon), with a mean of 4.50. With the coefficient m, Eq (12) better illustrates the process of disease transmission that occurs from one infectious individual to many susceptible individuals.
With traditional SIR models, data collected well into the epidemic development timeline are needed for establishing coefficients, β and γ [37,38]. In contrast, the modified model in this current study is uniquely suitable for extracting epidemic-defining parameters from sparse data at the onset of an epidemic outbreak. Insights afforded by the new mathematical model may be particularly valuable in guiding timely interventions at the most critical period of an epidemic.
More broadly, the non-first-order transmission kinetics model provides a general theoretical framework for epidemic modeling that complements the classic SIR model, where the rate law is implicitly assumed to be first order in the epidemic agent. In the current study, the country-level data used had a limited sample size. However, local-level data sharing similar environmental characteristics or data from different epidemics may provide a further test of the general applicability of the modified model. We also note that the range of k estimates was large, which may reflect important heterogeneity and can be explored further in future studies. Incomplete and noisy data might be other sources of the apparent heterogeneity, which makes it challenging to predict full epidemic trajectories [39]. Finally, the modified SIR model shown in the current study may be further fine-tuned and provide a building block for more elaborate epidemic models.
Conclusion
The present paper provides a theoretical rationale for a modified mathematical epidemic model that removes an implicit assumption on reaction order in the classic SIR compartmental models to be more general, flexible, and accurate. More specifically, the modified mathematical model accommodates mixed-kinetics epidemics that are non-first-order and incorporates transmission heterogeneity. With this modified model, it is possible to derive critical epidemic-defining parameters early, which would be instrumental for understanding new epidemics and developing control measures.
|
v3-fos-license
|
2022-02-07T16:19:04.138Z
|
2022-02-05T00:00:00.000
|
246620534
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00605-022-01678-1.pdf",
"pdf_hash": "792c98d5fd439fd176f43ae84523c5a22cfaf080",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43291",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "d34ecdc9bde23f3c6deb324c2e0cb5ac168934a9",
"year": 2022
}
|
pes2o/s2orc
|
Expansion of eigenvalues of the perturbed discrete bilaplacian
We consider the family $\widehat{H}_\mu := \widehat{\Delta}\widehat{\Delta} - \mu\widehat{V}$, $\mu \in \mathbb{R}$, of discrete Schrödinger-type operators on the d-dimensional lattice $\mathbb{Z}^d$, where $\widehat{\Delta}$ is the discrete Laplacian and $\widehat{V}$ is of rank one. We prove that there exist coupling constant thresholds $\mu_o, \mu^o \ge 0$ such that for any $\mu \in [-\mu^o, \mu_o]$ the discrete spectrum of $\widehat{H}_\mu$ is empty, and for any $\mu \in \mathbb{R} \setminus [-\mu^o, \mu_o]$ the discrete spectrum of $\widehat{H}_\mu$ is a singleton $\{e(\mu)\}$, with $e(\mu) < 0$ for $\mu > \mu_o$ and $e(\mu) > 4d^2$ for $\mu < -\mu^o$. Moreover, we study the asymptotics of $e(\mu)$ as $\mu \searrow \mu_o$ and $\mu \nearrow -\mu^o$, as well as $\mu \to \pm\infty$. The asymptotics depends strongly on d and $\widehat{V}$.
Introduction
In this paper we investigate the spectral properties of the perturbed discrete biharmonic operator on the d-dimensional lattice Z^d, where Δ is the discrete Laplacian and V is a rank-one potential with a generating potential v. This model is associated with a one-particle system in Z^d in a potential field v, in which the particle "jumps" freely from a node X of the lattice not only to one of its nearest neighbors Y (as in the discrete Laplacian case), but also to the nearest neighbors of the node Y. From the mathematical point of view, the discrete bilaplacian represents a discrete Schrödinger operator with a degenerate bottom, i.e., ΔΔ is unitarily equivalent to a multiplication operator by a function e which behaves as o(|p - p_0|^2) close to its minimum point p_0. The spectral properties of discrete Schrödinger operators with non-degenerate bottom (i.e., e behaves as O(|p - p_0|^2) close to its minimum point p_0), in particular the discrete Laplacian, have been extensively studied in recent years (see e.g. [1,2,7,8,10,11,20,21,23,26,28] and references therein) because of their applications in the theory of ultracold atoms in optical lattices [16,24,35,36]. In particular, it is well known that the existence of the discrete spectrum is strongly connected to the threshold phenomenon [18,20-22], which plays an important role in the existence of the Efimov effect in three-body systems [31,32,34]: if no two-body subsystem of a three-body system has a bound state below its essential spectrum and at least two two-body subsystems have a zero-energy resonance, then the corresponding three-body system has infinitely many bound states whose energies accumulate at the lower edge of the three-body essential spectrum.
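As a quick numerical illustration of this degeneracy, the following sketch (Python/NumPy; it assumes the standard dispersion relation e(p) = Σ_j (1 - cos p_j) for the discrete Laplacian, consistent with the spectrum [0, 2d] quoted below, which is an assumption about the paper's normalization) compares the symbols of the Laplacian and the bilaplacian near the minimum point p_0 = 0:

import numpy as np

d = 3

def e_laplacian(p):
    # symbol of the discrete Laplacian on Z^d (assumed normalization, spectrum [0, 2d])
    return np.sum(1.0 - np.cos(p), axis=-1)

def e_bilaplacian(p):
    # symbol of the discrete bilaplacian: the square of the Laplacian symbol
    return e_laplacian(p) ** 2

for r in (1e-1, 1e-2, 1e-3):
    p = r * np.ones(d) / np.sqrt(d)   # point at distance r from the minimum p_0 = 0
    print(r, e_laplacian(p) / r**2, e_bilaplacian(p) / r**2)

# e_laplacian(p)/r^2 tends to 1/2 (non-degenerate bottom, O(|p - p_0|^2)),
# while e_bilaplacian(p)/r^2 tends to 0 (degenerate bottom, o(|p - p_0|^2))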
Recall that the Efimov effect may appear only for certain attractive systems of particles [29]. However, recent experimental results in the theory of ultracold atoms in an optical lattice have shown that two-particle systems can have repulsive bound states and resonances (see e.g. [36]); thus, one expects the Efimov effect to hold also for some repulsive three-particle systems in Z^3.
The rigorous mathematical justification of the Efimov effect, including the asymptotics for the number of negative eigenvalues of the three-body Hamiltonian, has been successfully established in three space dimensions (for both R^3 and Z^3); see e.g. [1,4,13,19,29,31,32,34] and the references therein. In particular, the non-degeneracy of the bottom of the (reduced) one-particle Schrödinger operator played an important role in the study of resonance states of the associated two-body system [1,31]. Another key point in the proof of the Efimov effect in Z^3 was the asymptotics of the (unique) smallest eigenvalue of the (reduced) one-particle discrete Schrödinger operator, which creates a singularity in the kernel of a Birman-Schwinger-type operator that is used to obtain the asymptotics of the number of three-body bound states.
To the best of our knowledge, there are no published results on the Efimov effect in lattice three-body systems in which the associated (reduced) one-body Schrödinger operator has a degenerate bottom.
We also recall that fourth-order elliptic operators in R^d, in particular the biharmonic operator, play a central role in a wide class of physical models such as linear elasticity theory, rigidity problems (for instance, the construction of suspension bridges) and the streamfunction formulation of Stokes flows (see e.g. [9,25,27] and references therein). Moreover, recent investigations have shown that the Laplace and biharmonic operators have high potential in image compression with optimized and sufficiently sparse stored data [15]. The need for corresponding numerical simulations has led to a vast literature devoted to a variety of discrete approximations to the solutions of fourth-order equations [5,12,33]. The question of stability of such models is basically related to their spectral properties, and therefore numerous studies have been dedicated to the numerical evaluation of the eigenvalues [3,6,30].
The aim of the present paper is to study the existence and asymptotics of eigenvalues, as well as threshold resonances and bound states, of H_μ defined in (1.1), which corresponds to a one-body Schrödinger operator with degenerate bottom. Namely, we study the discrete spectrum of H_μ depending on μ and on v. For simplicity we assume that the generator v of V decays exponentially at infinity; however, we emphasize that our methods can also be adjusted to less regular cases (see Remark 2.6). Since the spectrum of Δ consists of [0, 2d] (see e.g. [1]), by the compactness of V and Weyl's theorem the essential spectrum of H_μ fills the segment [0, 4d^2] independently of μ. Moreover, the essential spectrum does not give birth to a new eigenvalue while μ runs in some real interval [-μ^o, μ_o], and it turns out that as soon as μ leaves this interval through μ_o resp. through -μ^o, a unique negative resp. a unique positive eigenvalue e(μ) is released from the essential spectrum (Theorem 2.2). We are then interested in the absorption rate of e(μ) as μ → μ_o and μ → -μ^o. The associated asymptotics depend not only on the dimension d of the lattice (as in the discrete Laplacian case [20,21]), but also on the multiplicities 2n_o and 2n^o of 0 ∈ {v = 0} (if v(0) = 0) and π ∈ {v = 0} (if v(π) = 0), respectively. More precisely, depending on d and n_o, e(μ) has a convergent expansion near μ_o. Furthermore, observing that the top e(π) = 4d^2 of the essential spectrum is non-degenerate, one expects the asymptotics of e(μ) as μ → -μ^o to be similar to the discrete Laplacian case [20,21], now depending on d and n^o. The threshold analysis for a more general class of nonlocal discrete Schrödinger operators with δ-potential, of the type φ(-Δ) - μδ_{x_0}, can be found in [14], where φ is some strictly increasing C^1-function and δ_{x_0} is the Dirac delta-function supported at x_0. Besides the existence of eigenvalues, the authors of [14] classify (embedded) threshold resonances and threshold eigenvalues depending on the behaviour of φ at the edges of the essential spectrum of -Δ and on the lattice dimension d. The eigenvalue expansions for the discrete bilaplacian with δ-perturbation have been established in [17] for d = 1 using complex analytic methods.
The paper is organized as follows. In Sect. 2, after introducing some preliminaries, we state the main results of the paper. In Theorem 2.2 we establish necessary and sufficient conditions for the non-emptiness of the discrete spectrum of H_μ and, in the case of existence, we study the location and the uniqueness, analyticity, monotonicity and convexity properties of the eigenvalue e(μ) as a function of μ. In particular, we study the asymptotics of e(μ) as μ → μ_o and μ → -μ^o as well as μ → ±∞. As discussed above, in Theorems 2.4 and 2.5 we obtain expansions of e(μ) for small and positive μ - μ_o and μ + μ^o. In Sect. 3 we prove the main results. The main idea of the proof is to derive a nonlinear equation in the eigenvalue z = e(μ) and μ, and then to study the properties of this equation. Finally, in Appendix A we obtain the asymptotics of certain integrals related to this equation which are used in the proofs of the main results.
Data availability statement
We confirm that the current manuscript has no associated data.
Preliminaries and main results
Let Z^d be the d-dimensional lattice and ℓ^2(Z^d) the Hilbert space of square-summable functions on Z^d. Consider the family H_μ := ΔΔ - μV, μ ∈ R, where Δ is the discrete Laplacian and V is the rank-one operator generated by v. Let T^d be the d-dimensional torus equipped with the Haar measure and L^2(T^d) the Hilbert space of square-integrable functions on T^d. By F we denote the standard Fourier transform F : ℓ^2(Z^d) → L^2(T^d). Further, we always assume that v and its Fourier image satisfy the following assumption: (H1) there exist reals C, a > 0 and nonnegative integers n_o, n^o ≥ 0 such that v decays exponentially at infinity (with constant C and rate a) and, if the Fourier image of v vanishes at 0 resp. at π, it does so to the even orders 2n_o resp. 2n^o; here D^j f(p) is the j-th order differential of f at p, i.e. the j-th order symmetric tensor, and π = (π, ..., π) ∈ T^d. Notice that under assumption (H1) the Fourier image of v is analytic on T^d.
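The threshold picture behind Theorem 2.2 can be probed numerically. The sketch below (a rough illustration only, not part of the proofs: the lattice is truncated, d = 1, and the generating vector is taken to be v = δ_0, which is an assumption rather than the paper's general v) scans μ and watches for eigenvalues escaping the essential spectrum [0, 4d^2] = [0, 4]:

import numpy as np

n = 200                       # half-width of the truncated lattice {-n, ..., n}
N = 2 * n + 1
# Discrete Laplacian on Z, (L f)(x) = f(x) - (f(x+1) + f(x-1))/2,
# whose spectrum approximates [0, 2d] with d = 1.
L = np.eye(N) - 0.5 * (np.eye(N, k=1) + np.eye(N, k=-1))
B = L @ L                     # discrete bilaplacian; essential spectrum ~ [0, 4d^2] = [0, 4]
v = np.zeros(N); v[n] = 1.0   # generating vector v = delta_0 (an illustrative assumption)

for mu in (-3.0, -0.1, 0.0, 0.1, 3.0):
    ev = np.linalg.eigvalsh(B - mu * np.outer(v, v))
    print(f"mu={mu:+.1f}  min={ev.min():+.4f}  max={ev.max():+.4f}")

# For mu > mu_o a single eigenvalue e(mu) < 0 detaches below the essential
# spectrum, and for mu < -mu^o a single eigenvalue e(mu) > 4 appears above it;
# the thresholds mu_o, mu^o may vanish, depending on d and v (Theorem 2.2).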
Before stating the main results let us introduce the constants C_v and C^v given by (2.5); their values and signs follow from Propositions A.1 and A.2.
Main results
First we are concerned with the existence of the discrete spectrum of H_μ.
Next we study the threshold resonances of H_μ.
We recall that in the literature the non-zero solutions of the equations H_μ f = 0 and H_μ g = 4d^2 g not belonging to ℓ^2(Z^d) are called resonance states [1,2]. Now we study the rate of the convergence in (2.6).
where {c_{9,n}} are some real coefficients.
(b) Suppose that d is even. Then μ_o = 0 and, for sufficiently small positive μ, e(μ) admits an expansion with some real coefficients {C_{5,n}}.
where {C_{6,nm}} are some real coefficients.
Here C_v and C^v are given by (2.5).
Proof of main results
In this section we prove the main results. By the Birman-Schwinger principle and the Fredholm theorem, the eigenvalue problem for H_μ reduces to a scalar equation in z and μ.
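For a rank-one perturbation this reduction is completely explicit. The following display is a standard sketch (it assumes the convention Vf = ⟨f, v⟩v for the rank-one operator, which may differ from the paper's normalization by a constant): for z outside [0, 4d^2],

\[
(\Delta\Delta - \mu V)f = zf
\;\Longleftrightarrow\;
f = \mu\,\langle f, v\rangle\,(\Delta\Delta - z)^{-1}v,
\]

and taking the inner product with v yields the scalar Birman-Schwinger condition

\[
1 = \mu\,\bigl\langle(\Delta\Delta - z)^{-1}v,\, v\bigr\rangle
  = \mu \int_{\mathbb{T}^d} \frac{|(Fv)(p)|^{2}}{e(p)^{2} - z}\,dp,
\]

where e(p) is the symbol of the discrete Laplacian. Every solution z = e(μ) is a simple eigenvalue, since the corresponding eigenspace is spanned by (ΔΔ - z)^{-1}v.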
Remark 3.2
In view of Lemma 3.1 and the proof of Theorem 2.2, their assertions still hold for any v ∈ ℓ^2(Z^d).
Proof of Theorem 2.3 We prove only (a), the proof of (b) being similar. Repeating the proof of the continuity (resp. differentiability) of l_f at z = 0 in Proposition A.1, one arrives at the assertion.
Proof of Theorem 2.5 From (3.3) it follows that the map p ∈ T^d ↦ |v|^2(π + p) is even. Now the expansions of e(μ) at μ = -μ^o can be proven along the same lines as Theorem 2.4, using Proposition A.2 with f = |v|^2.
Since both p ∈ T^d ↦ f(p) and p ∈ T^d ↦ f(π + p) are even analytic functions, we can still apply Propositions A.1 and A.2 to find the expansions of z ↦ ∫_{T^d} f(p) dp / (e(p) - z), and thus, repeating the arguments of the proofs of Theorems 2.4 and 2.5, one can obtain the corresponding expansions of e(μ).
Remark 3.4 When |v(x)| = O(|x|^{-(2n_0+d+1)}) as |x| → ∞ for some n_0 ≥ 1, in view of Remark A.3 we need to solve equation (3.4) with respect to μ using only the fact that the left-hand side is an asymptotic sum (not a convergent series). This can still be done using an appropriate modification of the Implicit Function Theorem for differentiable functions. As a result, we obtain only (Taylor-type) asymptotics of e(μ).
and by the analyticity of f in B_π(0) ⊂ R^d, the series converges absolutely for p ∈ B_π(0). By the definition of ϕ, ϕ(rw) ∈ B_π(0) for any r ∈ (0, γ) and w = (w_1, ..., w_d) ∈ S^{d-1}, where S^{d-1} is the unit sphere in R^d. Then letting p = ϕ(rw) and using the Taylor series ϕ_i(rw) = 2rw_i + r^3 w_i^3/3 + ...
Dose Estimation of the BNCT Water Phantom Based on MCNPX Computer Code Simulation
Amanda Dhyan Purna Ramadhani, Susilo, Irfan Nurfatthan, Yohannes Sardjono, Widarto, Gede Sutresna Wijaya, Isman Mulyadi Triatmoko
Department of Physics, Mathematics and Natural Science Faculty, State University of Semarang, Jalan Taman Siswa, Sekaran, Gunungpati, Kota Semarang, Indonesia; Department of Nuclear Engineering, Universitas Gadjah Mada, Jl. Grafika No. 2, Senolowo, Sinduadi, Kec. Mlati, Kabupaten Sleman, Daerah Istimewa Yogyakarta 55281, Indonesia; Centre of Accelerator Science and Technology, National Nuclear Energy Agency, Jalan Babarsari Kotak Pos 6101 ykbb, Yogyakarta, Indonesia
INTRODUCTION
Cancer is a malignant tumor, which destroys the healthy cells of the body. Cancer cells are abnormal cells that grow more than they should. The process of cancer cells spreading in the body is called metastasis, and it can result in the death of a person. Cancer cells can grow in any kind of tissue, from skin tissue to important organs such as the brain and lungs. To exacerbate the threat, cancer cells can spread from one tissue to another [1].
There are several methods to treat cancer, such as surgery, chemotherapy, and radiotherapy. Every method has its specification, benefit, and risk. The surgical method is done by removing the cancerous tissue inside the body. This method leaves marks in the body, since a surgical operation must be performed to reach the cancerous cells. In this method, normal tissue may be removed along with the cancerous tissue, or possibly not all cancerous tissue is removed during the process. The remaining cancerous tissue can therefore still grow and destroy the normal tissue surrounding it. Chemotherapy is a type of cancer therapy that uses drugs. The drugs are delivered by injection into a vein or swallowed in pill form. Chemotherapy works by slowing the growth of the cancer cells [1].
Another type of cancer therapy is radiotherapy. Radiotherapy uses high-energy particles or waves such as X-rays to kill the cancer cells. It works by damaging the DNA of the cells, thereby causing them to die [2]. Radiotherapy usually uses radiation in the form of X-rays, gamma rays, or other high-energy charged particles, and it also poses a risk to normal tissue [3]. Radiotherapy utilizes the ionization produced by radiation sources in the areas exposed to radiation. Radiation sources can be either internal or external [4].
BNCT is a branch of radiotherapy that uses the boron-10 isotope to capture neutrons [4]. This kind of radiotherapy is applied to the human body over a certain period of time rather than daily, and each patient can receive a personalized dose. BNCT works when the boron-10 carrier is injected into the body and then irradiated with neutrons. The reaction produces an alpha particle and gamma energy along with lithium-7.
To apply the therapy to a patient, the dose given must first be estimated. This must be performed to maintain the safety of the patient. Nevertheless, applying the estimate directly to the patient is not necessarily safe; therefore, it must first be applied to a phantom. One of the phantoms that can be used is a water phantom, because the human torso consists of about 75% water; thus, the water phantom can represent the human torso [5].
Boron Neutron Capture Cancer Therapy
Boron neutron capture therapy (BNCT) is a binary cancer treatment modality that involves the selective accumulation of boron-10 carriers in tumors, followed by irradiation with a thermal or epithermal neutron beam [6]. BNCT works selectively to reach the target cell. It was introduced by G. L. Locher in 1936. BNCT has the possibility of selectively targeting cancerous cells loaded with boron-10 while avoiding damage to normal cells. Boron-10 is non-toxic to the body and a non-radioactive material which absorbs thermal neutrons, resulting in the nuclear reaction 10B(n,α)7Li [7].
BNCT is a two-step procedure: (1) the patient is injected with a tumor-localizing drug containing boron, and (2) the target volume is irradiated with thermal or epithermal neutrons. In this therapy, the 10B(n,α)7Li nuclear reaction is used. The ranges of the α and Li-7 particles generated in this reaction are approximately 8 and 5 µm in tissue, respectively [8].

Table 1. IAEA recommended values of the air beam-port parameters for epithermal BNCT and the corresponding neutron beam energy limits [9]:
  Φ_epi (n/cm^2.s): ≥ 1.0×10^9
  Φ_th/Φ_epi: ≤ 0.05
  D_f/Φ_epi (Gy.cm^2/n): ≤ 2.0E-13
  D_γ/Φ_epi (Gy.cm^2/n): ≤ 2.0E-13
  J_epi/Φ_epi: ≥ 0.7
  Φ_th energy range: E < 0.5 eV
  Φ_epi energy range: 0.5 eV ≤ E ≤ 10 keV
  Φ_f energy range: E > 10 keV
where Φ_epi is the epithermal neutron flux, Φ_th is the thermal neutron flux, Φ_f is the fast neutron flux, J_epi is the epithermal neutron current, D_f/Φ_epi is the fast neutron dose per epithermal neutron flux ("specific fast neutron dose"), and D_γ/Φ_epi is the gamma dose per epithermal neutron flux ("specific gamma dose") [10].

The advantage of BNCT is that boron at the concentrations used is non-toxic, and only the tissue located around the cancerous tissue and exposed to the combined effect of neutron irradiation and boron-10 needs to be considered. Another reason is that boron-10 has a greater thermal neutron capture cross section than any other element in the body.
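As a simple illustration of how the criteria in Table 1 are applied, the sketch below (Python; the numerical beam values in the example dictionary are invented for demonstration and are not results of this study) checks a candidate beam against the IAEA limits:

# Hypothetical beam values (illustrative placeholders only).
beam = {
    "phi_epi": 1.2e9,             # epithermal neutron flux (n/cm^2.s)
    "phi_th": 4.0e7,              # thermal neutron flux (n/cm^2.s)
    "D_f_per_phi_epi": 1.5e-13,   # specific fast neutron dose (Gy.cm^2/n)
    "D_g_per_phi_epi": 1.0e-13,   # specific gamma dose (Gy.cm^2/n)
    "J_epi_per_phi_epi": 0.8,     # beam directionality
}

checks = {
    "phi_epi >= 1e9": beam["phi_epi"] >= 1e9,
    "phi_th/phi_epi <= 0.05": beam["phi_th"] / beam["phi_epi"] <= 0.05,
    "D_f/phi_epi <= 2e-13": beam["D_f_per_phi_epi"] <= 2.0e-13,
    "D_gamma/phi_epi <= 2e-13": beam["D_g_per_phi_epi"] <= 2.0e-13,
    "J_epi/phi_epi >= 0.7": beam["J_epi_per_phi_epi"] >= 0.7,
}
for name, ok in checks.items():
    print(f"{name}: {'pass' if ok else 'fail'}")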
Neutron sources
The neutron source of this study is the Kartini Reactor. Neutrons are passed through a conceptually designed collimator; the conceptual design of the collimator on the Kartini Reactor thermal column uses several materials [12]. The Kartini Reactor (a TRIGA MARK-II reactor), located in the Center of Research on Pure and Instrumentation Materials (PPBMI)-BATAN Yogyakarta, is one of three nuclear research reactors in Indonesia [4]. The Kartini reactor, with its Lazy Susan, pneumatic and beam-port facilities as well as a thermal column, can be used for irradiation for neutron activation analysis, gamma irradiation for radiation chemistry, and neutron radiography, as well as for basic research connected to Boron Neutron Capture Therapy.
Radiation Interaction with Matter
Radiation is defined as the emission and propagation of energy through matter or space, in the form of electromagnetic waves or of charged particles emitted in the decay of a radioactive substance. Charged-particle radiation can be detected by utilizing the interaction of radiation with matter [13].
Neutron Interaction with Matter
Neutron interaction with matter can be broadly divided into two types: scattering and absorption. A more detailed description of the various reactions follows.
Scattering
Scattering is a collision of a neutron with an atomic nucleus that is almost always initially in its quiescent or ground state, the lowest-energy state of an atom under normal circumstances. After the collision, the neutron leaves the nucleus, and the nucleus may be left in a state other than its ground state.
There are two kinds of neutron scattering: elastic and inelastic. In elastic scattering, the neutron is scattered by the atomic nucleus while the internal state of the nucleus remains unchanged. Inelastic scattering occurs when the nucleus struck by the neutron is left in an excited state, so that the atom has more energy than in its ground state. Part of the collision energy is stored in the nucleus, so the interaction is endothermic.
Capture / Absorption
A capture or absorption reaction happens when the atomic nucleus absorbs a neutron and then emits one or more so-called capture gamma rays. The capture reaction is symbolized by the exothermic interaction (n, γ), while absorption with alpha emission is symbolized as (n, α).
Gamma Interaction with Matter
Gamma rays are photons produced by unstable nuclides. There are many possible interactions between gamma rays and matter, but the most important are the photoelectric effect, Compton scattering, and pair production.
Photoelectric Effect
The photoelectric effect is an interaction of a photon with a strongly bound electron in an atom, that is, an electron in an inner shell, usually the K or L shell. The photon strikes such a strongly bound electron, and the electron absorbs the entire energy of the photon. The result is that the electron is ejected from the atom with a kinetic energy equal to the difference between the photon energy and the electron's binding energy.
Compton Scattering
Compton scattering occurs when a photon is deflected from its original path by interaction with an electron. The electron recoils and is ejected from its orbital position, and the amount of energy transferred depends on the scattering angle and on the nature of the scattering medium. The scattered photon has less energy, a longer wavelength, and less penetrating power than the incident photon.
Pair Production
Pair production is an interaction between a photon and an atomic nucleus. It occurs when the energy of the photon is 1.022 MeV or above. The photon energy is absorbed completely, and the photon is converted into an electron-positron pair. The rest mass of each particle is equivalent to 0.511 MeV.
Dosimetry
There are four components of the dose that need to be considered.
Gamma Dose (Dγ)
The gamma dose arises from the rate of thermal neutron capture by hydrogen: thermal neutrons react with hydrogen-1 in body tissue to produce hydrogen-2, emitting a gamma ray with an energy of 2.22 MeV. The reaction is shown in equation (1).
Neutron Scattering Dose (Dn)
The neutrons produced by the reactor are not only thermal neutrons but also fast neutrons. The interaction between neutrons and matter can produce recoil nuclei and photon radiation, the dominant contribution coming from scattering reactions. The neutron flux characteristics at the beam outlet depend mainly on the neutron cross-section data of the materials used. Resonance scatterers are promising materials for epithermal neutron beams [14].
Proton Dose (DP)
The proton dose is determined from the interaction of nitrogen-14 capturing a neutron, generating carbon-14 and a 0.66 MeV proton in the process. The reaction is shown in equation (2).
To calculate the proton dose, equation (3) is used.
Boron Dose (DB)
The boron dose accounts for the thermal neutrons captured by boron-10, an interaction which produces lithium-7 and an alpha particle with a total kinetic energy of about 2.3 MeV. The reaction between thermal neutrons and boron-10 in tissue has the highest probability of occurring because boron has a high thermal neutron capture cross section compared to other elements. The reaction is provided in Equation (1). To calculate the dose rate from this interaction, equation (4) is used.
Total Dose
The total dose received by the organ is based on equation (5):

D = w_B D_B + w_p D_p + w_n D_n + w_γ D_γ    (5)

where
D_B: boron absorbed dose
D_p: proton (thermal neutron) absorbed dose
D_n: fast neutron absorbed dose
D_γ: gamma absorbed dose
w_B: weight factor for alpha particles, 3.8 (tumor) and 1.35 (normal tissue)
w_p: weight factor for protons, 3.2
w_n: weight factor for neutrons, 3.2
w_γ: weight factor for gammas, 1

The weight factor is a coefficient expressing the damaging capability of the absorbed radiation. Its value differs for each type of radiation and is influenced by the radiosensitivity of the target tissue.
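A minimal sketch of this bookkeeping (Python; the weight factors are those listed above, while any dose-rate inputs are placeholders) applies equation (5) and the irradiation-time estimate used later in the paper:

def total_dose_rate(D_B, D_p, D_n, D_g, tumor=True):
    # Equation (5) with the weight factors from the text:
    # alpha/boron 3.8 (tumor) or 1.35 (normal tissue), proton 3.2, neutron 3.2, gamma 1.
    w_B = 3.8 if tumor else 1.35
    return w_B * D_B + 3.2 * D_p + 3.2 * D_n + 1.0 * D_g  # Gy/s

def irradiation_time(dose_rate_gy_per_s, target_dose_gy=30.0):
    # time needed to accumulate the minimum tumor-destroying dose of 30 Gy (see text)
    return target_dose_gy / dose_rate_gy_per_s

# Check against the GTV total dose rate reported in the conclusion:
print(irradiation_time(2.0814e14))  # ~1.4414e-13 s, matching the reported value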
MCNPX (Monte Carlo N-Particle Extended) Program
MCNPX is a program to simulate particle transport, including theoretical experiments, developed at Los Alamos National Laboratory (LANL) and distributed by the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory [15]. The MCNPX program is used here to determine the neutron flux in air. The photon source strength is converted to particles per second by a multiplier factor of 1×10^-9 c/s.
Geometry Modeling in MCNPX
The phantom of the experiment is water, whose size represents the size of a liver. To generate the geometry, the code is written in a text editor. The code to generate the object needs three blocks of cards.
The cell card block defines the area of a selected cell together with the material used, which is defined on the data card; the concentration of the material is also included. In this block, the material parameters of a cell are set.
The surface card generates the form of the object. The water phantom was modeled as a cube with a size approximately similar to a liver; the dimensions are 7.5 × 7.5 × 7.5 cm. The cancer cell is included in three layers within the phantom: a GTV (gross tumor volume) with a radius of 1.5 cm, a CTV (clinical tumor volume) with a radius of 1.75 cm, and a PTV (planning tumor volume) with a radius of 2 cm. The cancer cell is spherical. The neutron source is set at the left side of the phantom. Around the outside of the phantom, a sphere containing air is generated. Outside of the cancer cell is the water-containing cube. At the left side of the cube is the tip of the Kartini reactor; the neutron source is defined as a radial plate with the neutron direction along the x-axis, directed into the cancer cell.
The material card defines the material used in each cell. The innermost cell is the GTV, which contains cancer cells with the full boron concentration. The second layer is the CTV, which contains cancer cells with half of the boron concentration of the GTV. The third layer is the PTV, which contains normal cells with one tenth of the boron concentration of the GTV.
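The nested cell-card logic can be summarized by a small helper (Python; an illustrative re-expression only: the radii and relative boron fractions come from the text, while centering the spheres at the origin is an assumption):

import math

def region(x, y, z):
    """Classify a point by the nested spheres of the model (radii in cm)."""
    r = math.sqrt(x * x + y * y + z * z)
    if r <= 1.5:
        return "GTV", 1.0    # gross tumor volume, full boron concentration
    if r <= 1.75:
        return "CTV", 0.5    # clinical tumor volume, half concentration
    if r <= 2.0:
        return "PTV", 0.1    # planning tumor volume, one tenth concentration
    return "water", 0.0      # surrounding water phantom (7.5 cm cube)

print(region(0.5, 0.0, 0.0))  # ('GTV', 1.0)
print(region(1.9, 0.0, 0.0))  # ('PTV', 0.1)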
Tally
The tally specifies the information required to obtain the neutron flux. The cells selected for calculating the neutron flux are the cancer-cell layers (GTV, CTV and PTV) and the water phantom. MCNPX provides several kinds of tallies, as listed in Table 3. To determine the neutron flux through a cell, the f4 tally was used.
Calculation of dosimetry
The dose is calculated by summing the boron dose, proton dose, neutron dose, and gamma dose, which are simulated and then computed using the aforementioned equations.
When using MCNPX, the problem must be identified first. In this research, the dose estimation for cancer therapy needs to be known before the therapy is applied to the patient, so the dose simulation must be performed on a phantom. The phantom used here is water, as a substitute for the human body.
After obtaining the neutron flux in the cancer cell, the gamma, neutron, proton and boron doses were calculated to obtain the total dose. The phantom is built in a cubic shape divided into 30 sections with intervals of 0.5 cm on each surface. This is aimed at determining the flux through the water at each depth, in order to learn the behavior of neutrons passing through the cells. The neutron flux coming out of the source is 1.14228×10^7 neutron/cm^2.s. The flux distribution in the water phantom cells decreases with the depth of the phantom, apart from the cancer cell. The neutron flux in the cancer cell increases because the cancer cell contains boron, with relative concentrations of 10% in the PTV, 50% in the CTV, and 100% in the GTV. The highest neutron flux is in the GTV cell, as it contains the highest boron concentration; therefore, the largest number of neutrons is captured in the GTV.
Total Dose
The total dose is obtained by summing the weighted particle doses: the proton dose, neutron dose, gamma dose, and alpha (boron) dose, each multiplied by its weight factor. Assuming a boron concentration of 20 µg/g tissue, we calculated the dose rate for each cell (in Gy/s). According to a previous study, the minimum dose needed to destroy cancer cells is 30 Gy. The effective irradiation time to reach 30 Gy is obtained by dividing 30 Gy by the total dose rate. The GTV captured more neutrons than the other cells, as much as 76.38% of the total neutron flux, because the GTV contains the highest concentration of boron. The GTV needs an irradiation time of 1.4414×10^-13 s. This extremely short time is enough to destroy the cancer cells in the liver.
CONCLUSION
BNCT is promising as one form of cancer therapy because it works selectively and does not have adverse effects on normal tissue. Water can represent the soft tissue of humans and can be used as an experimental research object. The total dose rate to destroy the cancer cells in the GTV is 2.0814×10^14 Gy/s (76.38%), with an irradiation time of 1.4414×10^-13 s. In the PTV it is 5.2295×10^13 Gy/s (19.19%), with an irradiation time of 5.7367×10^-13 s. In the CTV it is 1.1866×10^13 Gy/s (4.35%), with an irradiation time of 2.5283×10^-12 s. Lastly, in the water it is 1.9128×10^11 Gy/s (0.07%), with an irradiation time of 1.5684×10^-10 s. The irradiation times are extremely short because the modeling is based on a water phantom rather than a human-body phantom.
ACKNOWLEDGMENT
The authors would like to thank PSTA-BATAN Yogyakarta for allowing the authors to carry out the research. Thanks to Dr. Zaenuri, Dean of the Mathematics and Natural Sciences Faculty, State University of Semarang, for supporting this research, and to Laely and Imam from UNNES and all colleagues in the engineering room of PSTA-BATAN for working on the research together.
Microphysical Process Rates and Global Aerosol-Cloud Interactions
Cloud microphysical process rates control the amount of condensed water in clouds and impact the susceptibility of precipitation to drop number and aerosols. The relative importance of different microphysical processes in a climate model is analyzed, and the autoconversion and accretion processes are found to be critical to the condensate budget in most regions. A simple steady-state model of warm rain formation is used to illustrate that the diagnostic rain formulations typical of climate models may result in excessive contributions from autoconversion, compared to observations and large eddy simulation models with explicit bin-resolved microphysics and rain formation processes. The behavior does not appear to be caused by the bulk process rate formulations themselves, because the steady state model with bulk accretion and autoconversion has reduced contributions from autoconversion. Sensitivity tests are conducted to analyze how perturbations to the precipitation microphysics for stratiform clouds impact process rates, precipitation susceptibility and aerosol-cloud interactions (ACI). Corrections to the diagnostic rain assumptions in the GCM, based on the steady state model, that boost accretion over autoconversion indicate that the radiative effects of ACI may decrease by 20% in the GCM for the same mean liquid water path. Links between process rates, susceptibility and ACI are not always clear in the GCM. Better representation of the precipitation process, for example by prognosing precipitation mass and number, may help better constrain these effects in global models with bulk microphysics schemes.
Introduction
Aerosols have many direct, semi-direct and indirect effects on clouds. The indirect effects, or Aerosol-Cloud Interactions (ACI), result from more Cloud Condensation Nuclei (CCN) creating a population of more and smaller particles for a given amount of cloud water. This makes the clouds brighter (first indirect effect, Twomey (1977)), as well as affecting the resulting lifetime of the clouds in complex ways (second indirect or lifetime effect (Albrecht, 1989)). The effects on cloud lifetime are complex, and depend upon precipitation processes in clouds. We will focus in this paper on stratiform clouds. Convective clouds, with strong vertical motions, create their own complex challenges in understanding aerosol effects (Rosenfeld et al., 2008).
Many global models of the atmosphere (General Circulation Models or GCMs) have started to treat aerosol indirect effects (e.g., Boucher and Lohmann, 1995; Quaas et al., 2008). The resulting global effects of aerosols on radiative fluxes appear larger than many observational estimates from satellites (Quaas et al., 2008) or inverse methods (Murphy et al., 2009). Satellite studies and more detailed models indicate that a likely culprit is too large a change in liquid water path with the changing drop number induced by aerosols, resulting in too large a radiative effect (e.g., Wang et al., 2012).
The formation of precipitation, as a primary sink for liquid water, is critical in this process. Also important are entrainment processes (e.g. Ackerman et al., 2004; Guo et al., 2011). The evolution of precipitation in clouds is affected by aerosols through their impact on the droplet size distribution. Increases in aerosol are seen to increase cloud drop number (Martin et al., 1994; Ramanathan et al., 2001). Increased drop number means smaller mean drop size for constant liquid water path (LWP). The result is smaller drops that do not coalesce and grow into precipitation as easily. This coalescence process (described by the stochastic collection equation) is too detailed to completely represent in bulk formulations of cloud drop size distributions. Thus, the coalescence process of precipitation formation is often represented by a parameterization of the autoconversion of cloud liquid to precipitation, while the collection of cloud droplets onto existing raindrops is represented by an accretion process. Most current GCMs assume a diagnostic treatment of precipitation whereby time tendencies of precipitation are set to zero and precipitation is obtained by a vertical integration of microphysical process rates. On the other hand, Posselt and Lohmann (2008) assumed a prognostic treatment of precipitation that allowed precipitation mass to persist in the atmosphere across time steps in the ECHAM GCM, and found that it shifted rain production towards accretion. Wood (2005) notes that autoconversion should play a minor role in increasing drizzle water content.
The autoconversion and accretion rates are affected by changes in drop number. Autoconversion is sensitive to drop number (Khairoutdinov and Kogan, 2000), while accretion rates are nearly independent of drop number: they are only affected via the mass of condensate undergoing autoconversion. If accretion dominates over autoconversion, as observed for shallow clouds (Stevens and Seifert, 2008) and stratocumulus (Wood, 2005), this would tend to dampen the ACI: reducing the role of autoconversion, which depends on cloud drop number, reduces the effect of aerosols on cloud radiative properties (Wood et al., 2009). Consistent with this idea, the change in rain rate with respect to aerosols or drop number (called the 'susceptibility' of precipitation to aerosols following Feingold and Siebert (2009)) seems to decrease at higher liquid water paths where accretion dominates (Jiang et al., 2010; Terai et al., 2012). Complicating diagnosis, however, Golaz et al. (2011) found a strong co-variance between ACI and LWP with changes in process rates to achieve radiative balance in a GCM.
In contrast to previous work on microphysics processes in GCMs (Posselt and Lohmann, 2008; Wang et al., 2012), we compare GCM process rates to rates derived from in-situ observations and we explore a simple steady state model of microphysical processes. We first examine microphysical process rates in a GCM (Section 2). We analyze a simple steady state model (Section 3) to understand interactions of process rates and susceptibility of precipitation to changes in drop number. We compare the GCM to the simple model and observations in Section 4. We then use different formulations of the GCM microphysics to better understand the sensitivity of the GCM cloud aerosol interactions in Section 5. Discussion and Conclusions are in Section 6.
Balance of processes in a GCM
The GCM we use in this study is the National Center for Atmospheric Research (NCAR) Community Atmosphere Model version 5.2 (CAM5). CAM5 includes an advanced physical parameterization suite (Gettelman et al., 2010; Neale et al., 2010) that is well suited for understanding aerosol indirect effects in stratiform clouds. CAM5 has a 2-moment cloud microphysics scheme (Morrison and Gettelman, 2008; Gettelman et al., 2008), coupled to a modal aerosol model with 3 modes (Liu et al., 2012). CAM5 aerosols affect activation of stratiform cloud droplets and ice crystals. Aerosols in the standard version of CAM5 do not interact with convective cloud drops and ice crystals. A separate scheme is used to describe convective clouds and convective microphysics (Zhang and McFarlane, 1995). CAM5 has a consistent treatment of the radiative effects of cloud droplets and ice crystals, and radiatively active snow (see Gettelman et al. (2010) for details). We will also perform several sensitivity tests as noted below (see Section 5) with different CAM5 formulations.
In CAM, liquid autoconversion (auto) and accretion (accr) are defined following Khairoutdinov and Kogan (2000):

A_u = 1350 q_c^2.47 N_c^-1.79,    (1)
A_c = 67 (q_c q_r)^1.15.    (2)

Autoconversion depends on cloud water (q_c) and inversely on cloud drop number (N_c), so that increases in drop number decrease the rain rate (q_r) to a first approximation, leading to more liquid in the presence of higher number (more aerosols). Accretion depends only on q_c and q_r in this formulation. The rain mixing ratio q_r in CAM is diagnostic: it comes only from rain formed at the current time step.
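A direct transcription of equations (1)-(2) (Python; the coefficients are the standard published Khairoutdinov and Kogan (2000) values, and the unit convention (mixing ratios in kg/kg, N_c in cm^-3) is an assumption here, not stated in the text):

def autoconversion(qc, Nc):
    # KK2000 autoconversion: increases with cloud water, decreases with drop number
    return 1350.0 * qc ** 2.47 * Nc ** (-1.79)

def accretion(qc, qr):
    # KK2000 accretion: depends only on cloud and rain water, not on drop number
    return 67.0 * (qc * qr) ** 1.15

qc, qr = 5e-4, 5e-5                  # kg/kg, illustrative values
for Nc in (50.0, 100.0, 200.0):      # cm^-3
    print(Nc, autoconversion(qc, Nc), accretion(qc, qr))
# Doubling Nc reduces autoconversion by a factor of 2**1.79 ~ 3.5,
# while accretion is unchanged.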
To isolate cloud lifetime and precipitation effects of aerosols in a GCM, first we examine the key CAM microphysical process rates in Figure 1. This analysis treats evaporation and condensation as large scale (macro-physical) quantities, and here we focus only on the microphysics.
These terms are important in the overall amount of cloud water. We look at the storm track regions, where liquid water path is large, and in CAM there is a large sensitivity of cloud feedbacks in this region (Gettelman et al., 2012). Over the storm track regions (S. Hemisphere shown in Figure 1 A), autoconversion of liquid to precipitation, accretion of liquid by snow and the transition from liquid to ice (Bergeron process) are the largest sink terms for liquid. Autoconversion is the largest process rate from 500-900 hPa, with the Bergeron vapor deposition larger below that. Accretion is lower than autoconversion. In the S. E. Pacific off the coast of S. America (Figure 1 B), there is a large sedimentation term, but the dominant microphysical processes after that are accretion and autoconversion. Both are nearly equal, but there is more autoconversion near cloud top (~800 hPa). Over the Tropical Western Pacific (20S to 20N and 120-160 longitude), the dominant processes are similar. Autoconversion and accretion onto both rain and snow are the dominant sink terms for cloud liquid (Figure 1 C). Several other terms are important due to ice processes at high altitudes (homogeneous freezing and accretion of liquid onto snow). Accretion and autoconversion have similar magnitudes. Figure 1 shows that regardless of the cloud regime or region, accretion and autoconversion largely determine the sink of cloud liquid water.

Table 1 (recovered fragment). Steady state model cases: Qcv = 2 (process rates modified with CAM sub-grid variability); DiagQr (diagnostic rain, with accretion from autoconverted liquid only); DiagQr 0.5 (DiagQr plus scaled rain mixing ratio for accretion).
Steady State Model
Given the dominance of the autoconversion and accretion processes, we explore a simple model that represents these essential features in much the same way as the GCM. We use the steady state model of Wood et al. (2009), which captures many of the qualitative and quantitative features of warm rain processes. Time tendencies of precipitation mass (and number) mixing ratios are explicitly calculated and precipitation quantities are prognosed across time steps. The model calculates an equilibrium state for rain rate, rain number and cloud water concentration given an input cloud height, replenishment rate and drop number concentration. The essential processes are autoconversion and accretion, combined with sedimentation and removal of cloud water. The model treats rain prognostically, and uses autoconversion from Khairoutdinov and Kogan (2000) as in Equation 1. We use the accretion calculation of Khairoutdinov and Kogan (2000) as in Equation 2, to be consistent with the GCM simulations, and keep all parameters the same. The standard case, seen in black in Figure 2, reproduces the sensitivity of precipitation to LWP and N_d in Wood et al. (2009), their Figure 1b. Steady state model cases are described in Table 1.
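To make the equilibrium idea concrete, here is a deliberately stripped-down fixed-point sketch (Python; a toy single-layer balance, not the full Wood et al. (2009) model: the cloud depth H, fall speed Vt and air density are invented round numbers) in which rain production by autoconversion plus accretion balances sedimentation:

def steady_rain(qc, Nc, H=300.0, Vt=2.0, iters=200):
    """Fixed point of (A_u + A_c(q_r)) * H = q_r * Vt for a single layer."""
    qr = 0.0
    for _ in range(iters):
        Au = 1350.0 * qc ** 2.47 * Nc ** (-1.79)   # KK2000 autoconversion
        Ac = 67.0 * (qc * qr) ** 1.15              # KK2000 accretion
        qr = (Au + Ac) * H / Vt                    # rain mixing ratio balancing fallout
    return qr, Au, Ac

for qc in (2e-4, 5e-4, 1e-3):                      # increasing liquid water content
    qr, Au, Ac = steady_rain(qc, Nc=100.0)
    print(f"qc={qc:.0e}  qr={qr:.2e}  Ac/Au={Ac / Au:.2f}")
# In this prognostic-rain toy balance the Ac/Au ratio grows with liquid water,
# qualitatively matching the observed and LES behavior discussed below.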
The bulk microphysics in the GCM differs in several important respects from this steady state model. As described by Morrison and Gettelman (2008), the bulk microphysics treats the impact of the sub-grid variability of total water in a grid box by assuming a standard deviation and analytically adjusting the process rates by integrating over an assumed gamma distribution. The result, for a relative variance of 2, is an increase in autoconversion by a factor of 2.02, and of accretion by 1.04. It is straightforward to apply these terms to the steady state model (simulation Qcv=2), but the results do not change much. The precipitation rate is very similar to the base case (not shown), and the ratio between accretion and autoconversion (Figure 3, with R being the rain rate) is also very similar, with slightly lower values at high LWP. In addition, the GCM does not have rain mass and number mixing ratios that are carried from time step to time step (prognostic rain), but assumes instead that rain only depends on the prognostic cloud quantities over the model time step (typically 20-30 minutes). Thus, rain profiles are found by integration of the microphysical process rates over height but not time (diagnostic rain), as described in (Morrison and Gettelman, 2008, section 2b). In the steady state model, however, rain mass mixing ratios increase over time at a given vertical level, leading to an increase in accretion. In the GCM, accretion is caused only by rain which is created (through autoconversion) diagnostically at each time step and falls through cloud water at lower levels. In order to reflect this behavior in the steady state model, we can assume that accretion is affected only through rain created at the current time step, thus:

A_c = 67 (q_c q_a)^1.15,    (3)

where q_a is the 'autoconverted' liquid from the autoconversion rate (q_a = A_u ρ dt). This formulation (in blue in Figure 2 and Figure 3) changes the balance dramatically in the steady state model, causing a significant reduction in rain rate and a constant relationship between autoconversion and rain rate across all values of LWP (Figure 3 B). The accretion is much less important (Figure 3 C), and susceptibility to drop number (Figure 3 D) is increased at high liquid water paths (it does not decrease as in the standard steady state model). This is consistent with previous work (Posselt and Lohmann, 2008; Wang et al., 2012) indicating that the prognostic rain formulation reduces the impact of autoconversion. Note that q_a in Equation 3 is dependent on the time step. We have tested a range of time steps from 5-30 seconds in the steady state model, and the time step does not change the susceptibility with LWP or the slope of the A_c/A_u ratio with LWP.
Next we explore ways to recover the steady state model behavior with the 'diagnostic' rain rate (only from q_a, the autoconverted liquid, as in equation 3). Boosting accretion by a factor of 10 alters the accretion/autoconversion (A_c/A_u) ratio, but not significantly (experiments with this change look identical to the diagnostic rain in Figure 3). As a second experiment, we assume that because q_r increases lower in the cloud, there is increased efficiency of accretion over autoconversion as the rain builds in the lower part of a cloud. We express this as a power law q_a,mod = q_a^x, where for x < 1 accretion is boosted (since the rain mixing ratio q_r < 1). For illustrative purposes, we choose x = 0.5 in Figure 3 (DiagQr 0.5: blue lines). This method significantly increases the rain rate, matching the steady state model base case for moderate LWP (100-300 g m^-2). It also increases the accretion/autoconversion ratio (Figure 3 A) and the role of accretion in rain formation (Figure 3 C), and reduces the impact of autoconversion (Figure 3 B), while also uniformly lowering susceptibility (Figure 3 D). The results do not fully reproduce the base case steady state model, particularly susceptibility (S_p) with respect to varying LWP. In Figure 3 D, for the simulations that give the two extremes of A_c/A_u ratios (DiagQr 0.5 in blue and DiagQr in red), S_p is nearly constant with LWP. Note that the susceptibilities in the steady state model correspond to the exponent for autoconversion (~1.79) for all simulations except DiagQr 0.5, where S_p is around half that of autoconversion (~0.9). In the equilibrium model, the slope of the rain rate with specified droplet number is dominated by the exponent for autoconversion, but slightly less so when accretion dominates. These results are consistent with previous studies finding that susceptibility is related to the ratio between accretion and autoconversion and the A_u/R ratio (Wang et al., 2012). Note the similarity of Figure 3A (inverse) and Figure 3B with Figure 3D.

Fig. 2. Results from the steady state model of Wood et al. (2009). Rain rate (mm/day), contour lines at 0.03, 0.1, 0.3, 1, 3, 10 and 30 mm day^-1; thicker lines are higher rain rates. (A) Base case (black) and diagnostic rain case (DiagQr, red). (B) Base case (black) and diagnostic rain with vertical variation of rain rate from autoconversion (DiagQr 0.5: blue) as described in the text.

Fig. 3 (caption fragment). Cases shown: the base case (black), the case with sub-grid variability (Qcv = 2, green), the diagnostic rain case (DiagQr, red), and diagnostic rain with vertical variation of rain rate from autoconversion (DiagQr 0.5: blue) as described in the text.
GCM Results
We now focus on these process rates in the GCM, analyzing the ratio of accretion to autoconversion (Jiang et al., 2010), the ratio of autoconversion (and accretion) to precipitation (Wang et al., 2012), and the susceptibility of precipitation to aerosols (or drop number) (Sorooshian et al., 2009; Terai et al., 2012). We composite the diagnostics by liquid water path (LWP) and by aerosol optical depth (AOD). Note that the LWP is that used in estimating microphysical process rates immediately before the microphysics calculation, not the diagnostic LWP in CAM used by the radiation code (the latter is the traditional GCM output).
The autoconversion of cloud condensate to precipitation and the accretion or collection of falling condensate by precipitation are the dominant terms in most places for the microphysical sink of cloud water (Figure 1).
Figure 4 shows zonal cross sections and maps of the autoconversion and accretion rates in CAM5. As expected, autoconversion (Figure 4 D) and accretion (Figure 4 B) rates are both larger in the midlatitudes, where stratiform liquid water paths are higher, than in the tropics. Note that these processes and diagnostics do not treat convective clouds (because the simplified convective microphysics does not have these rates), so results for the tropics need to be interpreted with caution. The ratio between accretion and autoconversion (Figure 4 E) is large in the tropical troposphere below the freezing level. Because of the different vertical altitudes and sedimentation, and because it is the vertical integral that is relevant for the surface precipitation rate, the vertically averaged A_c and A_u rates (over all altitudes, but essentially just the troposphere) are used for a ratio (Figure 4 F). In CAM5, accretion (A_c) dominates, with the A_c/A_u ratio typically between 1-10. The A_c/A_u ratio is lower (more autoconversion) in the mid-latitude regions where the liquid water path is high. In general, the A_c/A_u ratio is larger than 1, indicating that accretion is more important.
In LES simulations (Jiang et al., 2010), the ratio of accretion to autoconversion increases with LWP. Figure 5 shows an estimate of accretion and autoconversion rates based on observations. The autoconversion and accretion rates are estimated from the droplet size distributions measured on the NCAR/NSF C-130 during the VOCALS experiment off the west coast of South America, on profile legs flown through the depth of the boundary layer (Wood et al., 2011). A mean droplet size distribution is calculated over ten-second segments and, after interpolating any gaps in the size distribution, the mass conversion of cloud to drizzle by autoconversion and accretion is calculated using the stochastic collection equation given the size distribution, following the method described by Wood (2005). The 10-second-average process rates are averaged over continuous layers of liquid water content (LWC) exceeding 0.01 g m^-3. The LWPs (drizzle+cloud) are estimated only over the cloud layer. A size (radius) cutoff of 25 microns is used to distinguish cloud and drizzle drops, following Khairoutdinov and Kogan (2000). Measurements of the droplet size distribution come from the CDP (Cloud Droplet Probe) for the cloud drops and the 2D-C probe for drizzle drops (Wood et al., 2011). Here, the ratio of accretion to autoconversion increases sharply with LWP, as in the LES simulations.
In CAM, the ratio of accretion to autoconversion (A_c/A_u) decreases with LWP (Figure 5A), in contrast to the observations and LES models. This appears to be mostly because autoconversion increases with LWP (Figure 5B) faster than accretion (Figure 5C). The A_c/A_u ratio also decreases with increasing AOD (Figure 5D). Autoconversion increases with AOD in every region (Figure 5E), which is not what would be expected from the formulation in Khairoutdinov and Kogan (2000). It may result from the fact that LWP increases with AOD in CAM, and the convolved variables make it difficult to separate AOD-driven effects in this analysis (the positive correlation between AOD and LWP does not imply causation).
Precipitation and Autoconversion
To investigate the impact of microphysical processes and aerosols on precipitation, we look at the non-dimensional ratio of the vertical integral of autoconversion (A_u) or accretion (A_c) to the rain rate (R) in Figure 6. Previous studies (e.g., Wang et al., 2012) note that the autoconversion/rain ratio is important in determining the LWP response to CCN. In drizzling stratocumulus, this ratio is small (Wood, 2005). Wang et al. (2012) highlight that precipitation occurrence is related to the A_u/R ratio (since autoconversion is the initial formation of precipitation), whereas the precipitation amount is more dependent on the accretion process and the A_c/R ratio. Note that in CAM there is an additional avenue for rain formation that is not accounted for in this analysis of autoconversion and accretion (for liquid): the formation of frozen precipitation (snow) that melts to form rain. Hence there can be zero autoconversion or accretion for a non-zero rain rate in this analysis.
In Figure 6 from the GCM, the A_u/R ratio increases with LWP, from 0.0 to 0.7 globally (Figure 6A). There does not appear to be a clear relationship between the A_u/R ratio and AOD (Figure 6C). The A_c/R ratio increases rapidly and then decreases with increasing LWP (Figure 6B), and decreases in many regions with higher AOD (Figure 6D). The A_u/R and A_c/R ratios need not add to one (i.e. A_c + A_u = R) because of the evaporation of precipitation (sum > 1) or ice phase processes (sum < 1). These ratios from CAM are consistent with other work (Wang et al., 2012). Autoconversion/rain ratios are much higher than seen in embedded cloud resolving model simulations by Wang et al. (2012) and in stratocumulus observations by Wood (2005), where autoconversion played a smaller part in determining rain rates. The A_u/R ratios (Figure 6A) are very different from those in the steady state model with prognostic rain (Figure 3 B, red and black), where the A_u/R ratio decreases with LWP. The GCM A_u/R ratio is more consistent with the increase in the A_u/R ratio with LWP in steady state model simulations using modified accretion (Figure 3 B, blue). The relationship between accretion and rain rate is also very different in the steady state model (Figure 3 C), where accretion increases relative to rain rate for increasing LWP but decreases in the GCM (Figure 6C).
Precipitation Susceptibility
The susceptibility of precipitation (S_p) to aerosols is a part of the cloud lifetime effect (Jiang et al., 2010; Feingold and Siebert, 2009). S_p is defined in the GCM similarly to the steady state model, but using the column cloud drop number (CDN) concentration for N_d. Thus, S_p = -∂ln(R)/∂ln(CDN).
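S_p can be estimated from binned output with an ordinary log-log regression. A minimal sketch (Python/NumPy; the synthetic data simply encode a power law R ∝ CDN^-1.79 to check the estimator, and are not CAM output):

import numpy as np

rng = np.random.default_rng(0)
cdn = rng.uniform(20.0, 300.0, size=2000)                  # column drop number samples
rain = 1e-3 * cdn ** (-1.79) * rng.lognormal(0.0, 0.2, size=2000)  # synthetic rain rates

mask = rain > 5e-9                                         # keep significant rain only
slope, intercept = np.polyfit(np.log(cdn[mask]), np.log(rain[mask]), 1)
Sp = -slope                                                # S_p = -dln(R)/dln(CDN)
print(f"estimated S_p = {Sp:.2f}")                         # ~1.79 by construction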
In the GCM, we look at instantaneous output from the model at each point, and consider only points with significant (>5×10^-9 kg m^-2 s^-1) rain rates. The output is binned by region and LWP, and then slopes are calculated. In LES simulations of trade cumulus by Jiang et al. (2010), when binned by LWP, susceptibility increases with LWP, and then decreases at high LWP (> 1000 g m^-2). The high values of susceptibility at higher LWP are consistent with the results above showing a strong impact of autoconversion on rain rate at higher LWP (Figure 6A): since autoconversion depends on drop number, changes in drop number will have a large impact on autoconversion and hence rain rates. Maps of S_p in CAM for warm rain (Figure 8), composited by LWP, illustrate that at low and moderate LWP the patterns are fairly uniform and susceptibility is low (Figure 8A and B). At high LWP (Figure 8C), precipitation susceptibility (S_p) is larger over the sub-tropical equatorward parts of the oceanic storm tracks, and is higher in the N. Pacific and the SH storm track than over land; S_p is lower over land at moderate and high LWP. This is consistent with the A_c/A_u ratio being lower (more autoconversion) over oceanic storm tracks (Figure 4F), where the susceptibility (S_p) is higher (Figure 8C). We have also looked at the ratio of the timescales for drizzle (τ_driz) and condensation (τ_cond) in the GCM. These are defined following Wood et al. (2009) as τ_driz = q_l/(A_c + A_u) and τ_cond = q_l/A_cond, where A_cond is the total condensation rate. Wood et al. (2009) found τ_driz/τ_cond to be a good predictor of susceptibility (S_p). We do not see strong relationships between τ_driz/τ_cond and S_p. In general τ_driz/τ_cond is low, and condensation dominates. Unlike the steady state model, τ_driz/τ_cond does not seem to determine the susceptibility (S_p) in the GCM.

The base simulation has an aerosol indirect effect for liquid clouds of -1.4 W m^-2 and +0.4 W m^-2 for ice (Table 3).
Based on the results of the steady state model tests, we construct several different modifications to the microphysical process rates from the base model in Section 4. In one experiment, we reduce autoconversion by a factor of 10 (Au/10). In another we increase accretion by a factor of 10 (Ac*10), and in a third we scale the rain mixing ratio for accretion by an exponent of 0.75 (QrScl 0.75). The QrScl 0.75 simulation is similar to the DiagQr 0.5 steady state model experiment. In order to ensure that the level of liquid water in the simulated clouds does not decrease too much, we also scale back autoconversion in this simulation by a factor of 10.
We also explore the impact of the coupling between condensation and microphysics in the simulations by reducing the time step by a factor of 4 from 1800 to 450 s (dT/4). The dynamics time step in the CAM5 finite volume core in standard (dT = 1800 s) simulations is sub-cycled 4 times, and this sub-stepping is set to 1 in the dT/4 simulation, so the dynamics has a similar effective time step, but the physics is running with a shorter time step (and affecting the dynamics more often). There are many couplings between the different physical processes that are altered in this simulation, so this is not a clean experiment for changing the microphysics time step. The intent is to try to reduce the amount of time for microphysics to deplete the condensation which occurs. We also perform an experiment where Ac is increased (*10) and Au lowered (/10) so that LWP is nearly constant (AcAu2). This experiment used a slightly different code (on a different supercomputer), so it is comparable only to its own base case (Base2). These cases are detailed in Table 2.
Global Results
First we report basic statistics for the radiative and precipitation impact of anthropogenic aerosols in the CAM5 simulations. Table 3 shows differences from the different aerosol emissions in the simulations. The total aerosol effect is the Radiative Flux Perturbation or RFP, the change in top of atmosphere net radiative flux (RFP = dR). The quantitative radiative indirect effect (or ACI) can be isolated in several ways, following Gettelman et al. (2012). The change in cloud radiative effect (dCRE) is representative of the indirect effect and can be broken into LW and SW components. CRE is the difference between the top of atmosphere flux for all sky and clear sky conditions, for both shortwave (SWE) and long-wave (LWE). Alternatively, the change in clear sky shortwave flux (dFSC) is a measure of the direct scattering from aerosols, so the indirect effect (ACI) can also be RFP − dFSC. In general these measures are similar. The base case features an indirect effect for liquid clouds of −1.4 W m⁻² and +0.4 W m⁻² for ice (Table 3). We note that there are correlations between the change in short wave cloud radiative effect (dSWE) and the mean LWP. An examination of differences in each simulation indicates that the magnitude of the ACI as defined by dCRE scales roughly inversely with the mean liquid water path: the largest radiative effect (and change in cloud radiative effect) occurs for the boosted accretion (Ac*10) simulation, which also has the smallest mean LWP, the largest change in LWP (Table 3), and the largest percent change in the cloud drop number (CDN). Similar conclusions can be drawn from defining ACI = dR − dFSC.
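The bookkeeping behind these measures is simple arithmetic on paired runs; a minimal sketch (Python; the dictionary keys are placeholders for global annual-mean TOA diagnostics from the year-2000 and year-1850 emissions simulations):

    def aci_metrics(y2000, y1850):
        # R = net TOA flux, SWE/LWE = SW/LW cloud radiative effects, FSC = clear-sky SW
        d = {k: y2000[k] - y1850[k] for k in y2000}
        rfp = d["R"]                   # total aerosol effect (RFP = dR)
        aci_cre = d["SWE"] + d["LWE"]  # indirect effect via the change in CRE
        aci_res = d["R"] - d["FSC"]    # indirect effect as RFP minus direct scattering
        return rfp, aci_cre, aci_res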
The results illustrate a fairly narrow range of changes in CRE due to aerosol cloud interactions, with a spread between simulations on the order of ∼25%. The change in cloud radiative effect varies slightly, despite large differences (a factor of 2) in mean LWP in Table 3. The ACI defined by dSWE is correlated with the mean LWP (r² = 0.85) and dLWP (r² = 0.85). Platnick and Twomey (1994) note that low LWP clouds have higher albedo susceptibility (∂ln[α]/∂ln[CDN]), and these radiative effects are seen here: lower mean LWP results in higher SW effects. The QrScl0.75 case increases the accretion rate through scaling the rain mixing ratio (reducing LWP) while decreasing autoconversion to increase it again: the overall effect is to decrease LWP from the base case. The dT/4 case has 10% higher LWP than the base simulation: this is expected since a shorter time step means less time for large amounts of cloud water to build up after macrophysics but before microphysics; thus microphysical process rates (sinks) are smaller, explaining the increase in LWP. As shown by Golaz et al. (2011), these changes in LWP may affect ACI, since changing Au and Ac affects LWP as well. We test this with the AcAu2 simulation, which has the same mean LWP as its control, Base2 (Table 3). ACI in this simulation is ∼15% lower than Base2, indicating that boosting accretion over autoconversion does have effects on ACI independent of mean LWP. Note that the change in LWP in AcAu2 is lower than in Base2, so that the SW radiative ACI seems to be related to dLWP. In the GCM, autoconversion is too large, and accretion increases less with LWP; Figure 9 illustrates the process rates in the different simulations. For increased accretion (Ac*10) and scaled diagnostic rain (QrScl0.75), the Ac/Au ratio is significantly reduced, and the slope with LWP is slightly reduced (Figure 9 A). This occurs because of a reduction in accretion with respect to LWP, even if the accretion is boosted (Ac*10 and QrScl0.75), as in the steady state model. In the GCM, boosting accretion tends to decrease LWP (shifting the curves to the left in Figure 9 A). The estimates based on VOCALS observations are also included in Figure 9 (blue), and the behavior of the model is very different from the observations for all cases, as noted for the base case in Figure 5.
There are significant changes in the precipitation susceptibility in the different simulations with altered process rates. Figure 10 illustrates the susceptibility for different simulations. The CAM5 base simulation (solid) features increasing susceptibility globally and in the VOCALS region up to an LWP of about 800 g m⁻². The reduced autoconversion (Au/10) case has increased susceptibility at higher LWPs. However, for the increased accretion cases (Ac*10 and QrScl0.75), with a lower slope to the accretion/autoconversion ratio (Figure 9 A), the susceptibility is reduced significantly and approaches zero for higher liquid water paths. The simulation with a smaller time step only (dT/4) features the strongest increase in susceptibility. It is not clear that this affects the radiative impact of the aerosol cloud interactions (Table 3) significantly. The QrScl0.75 simulation does have 20% or so lower ACI, and has the lowest susceptibility at high LWP. The Au/R ratio in these simulations (not shown) does not appear to predict the precipitation susceptibility (Sp), in contrast to the steady state model (Figure 3 B and D).

Autoconversion and accretion processes are dominant in controlling the liquid water path with bulk two-moment microphysics in the GCM. This is seen in microphysical budget calculations (Figure 1) as well as in sensitivity tests, where altering these process rates has direct impacts on liquid water path (Table 3). The mean state of the GCM climate (base LWP) is quite sensitive to the formulation of the microphysical process rates: accretion and autoconversion have direct impacts on sources and sinks of liquid. The coupling of these processes to the rest of the model, by altering the time step, also impacts the mean state. These results are consistent with previous work, but use analysis of observations and a steady state model. The steady state model (Figures 2 and 3) reproduces the expected relationships; the GCM instead behaves like the steady state model with the modified accretion formulation. In the model, autoconversion is much more important, and increases relative to accretion at higher liquid water paths (Figure 5). The microphysical behavior seems fairly consistent across regions. The proportion of rain from autoconversion also increases as LWP increases (Figure 6). Because autoconversion is dependent strongly on drop number, it links aerosols to cloud lifetime increases and the decrease in precipitation. Susceptibility increases with LWP in CAM up to large values of LWP (Figure 7), and higher susceptibility is found in regions with higher LWP (Figure 8) and lower Ac/Au ratios (Figure 4F). Posselt and Lohmann (2008) showed that diagnostic rain leads to overestimating the importance of autoconversion, and Wang et al. (2012) showed that the Au/R ratio correlated with the sensitivity of LWP to aerosols. Here we illustrate that using a 'diagnostic-like' formulation in the steady state model can drastically shift the rain formation from accretion to autoconversion. Attempting to correct the GCM by boosting accretion (as in Ac*10 or QrScl0.75), the GCM still has a difficult time capturing the expected behavior of the Ac/Au ratio (as evidenced by how the Ac/Au ratio for QrScl0.75 is actually lower than the Base run in Figure 9 A). While the tendency for increasing autoconversion with LWP is still present in the GCM, susceptibility does appear to be modified (reduced) when the process rates are modified (Figure 10), or when the time step is shortened. In the GCM, spatial changes in the accretion/autoconversion ratio (Figure 4) appear to be reflected in the precipitation susceptibility (Figure 8), but this is not apparent in the global averages (Figure 10).
In the GCM, the susceptibility does not correlate with the Ac/Au ratio as strongly as in the steady state model (when comparing DiagQr and DiagQr0.5 in Figure 3). Sp seems related to the slope of the Ac/Au ratio. Comparing Figure 9 and Figure 10, the runs with higher Ac/Au ratios do not have lower susceptibilities at high LWP as expected. The QrScl0.75 and Ac*10 cases have lower LWP and a lower Ac/Au ratio, but reduced Sp at high LWP. Note that this might be related to the LWP: in the steady state model base case in Figure 3, the Ac/Au ratio increases substantially, but susceptibility does not really respond until around LWP ∼ 500 g m⁻². This highlights the complexity of the interactions in the GCM, where multiple processes are affecting LWP in multiple regimes. There are also ice processes in many GCM regions that complicate the analysis.
The radiative Aerosol-Cloud Interactions (ACI, also called indirect effects) are sensitive to these process rate changes, but the changes may be convolved with differences in base state LWP, similar to Golaz et al. (2011). The larger radiative impacts of cloud-aerosol interactions occur for the largest percent changes in liquid water path and drop number in the Ac*10 simulation with enhanced accretion. When the steady state model diagnostic rain 'correction' is applied to accretion in the GCM (QrScl0.75), ACI is reduced by 20% between this calculation and the enhanced accretion case (Ac*10) with similar mean LWP (Table 3). Or stated another way: with half the LWP of the Base case, essentially the same ACI is predicted. A different experiment with reduced autoconversion and increased accretion (AcAu2) to maintain the same LWP as the base case also reduced ACI by ∼15%. This is consistent with the reduced susceptibility in Figure 10.
We conclude that the simple steady state model reproduces many of the features seen in cloud resolving (LES) models and observations. The steady state model can also be made to produce relationships similar to those in the global model, which we attribute to the differences between prognostic and diagnostic precipitation. It does not appear that the bulk, semi-empirical formulations of the process rates derived from fits to a CRM by Khairoutdinov and Kogan (2000) cause the relative increase in autoconversion over accretion with higher LWP, since this does not occur in the steady state model with these formulations. This is an important conclusion for many scales of modeling. It appears that radiative ACI in the GCM may be sensitive to the formulation of the diagnostic precipitation. CAM5 is conceptually similar to many other GCMs in how it treats cloud microphysics and aerosols, so these results might be generally applicable across models. Possible sensitivities to LWP confound this interpretation, consistent with radiative effects (Platnick and Twomey, 1994) and recent GCM tuning experiments (Golaz et al., 2011). It appears that reductions of ACI of 20% or so, and decreases in precipitation susceptibility (Figure 10), result from these process rate changes.
These conclusions will need further testing in both GCM and off-line frameworks, including in other GCMs. We are continuing this research by extending the Morrison and Gettelman (2008) microphysics scheme to include prognostic precipitation. The possibility also exists that the numerics may have an impact. The combination of the diagnostic precipitation assumption with relatively long time steps (20 min, with 10 min iterations for precipitation), as well as coarse vertical grid spacing (500-1000 m in the free troposphere), may impact the simulations. We intend to explore these numerical issues further with a detailed 1-D model as a step on the way to more robust formulations of microphysics that work across different time and space scales.
Fig. 3. Results from the steady state model of Wood et al. (2009). (A) LWP v. accretion to autoconversion ratio (Ac/Au), (B) LWP v. autoconversion to rain rate (Au/Rain), (C) LWP v. the ratio of accretion to rain rate (Ac/Rain), and (D) LWP v. precipitation susceptibility. Cases shown: the base case (black), the case with sub-grid variability (Qcv = 2) in green, the diagnostic rain case (DiagQr, red), and diagnostic rain with vertical variation of rain rate from autoconversion (DiagQr0.5, blue), as described in the text.
Accretion decreases with AOD (Figure 5F) in the S. Ocean and S. E. Pacific, but is nearly constant with LWP globally. CAM has a fundamentally different relationship between the Ac/Au ratio and LWP than seen in the steady state model in Figure 3. The Ac/Au ratio increases with LWP (Figure 3A) in the steady state model and in the observationally based estimates in Figure 5. However, in the steady state model with modified accretion, following Equation 3 (DiagQr), the Ac/Au ratio is 3 orders of magnitude lower than in the basic steady state model, and increases less with liquid water path, similar to the GCM.
Fig. 4. Zonal mean latitude-height sections (A,C,E) and vertically averaged maps (B,D,F) of accretion rate (Ac: A,B), autoconversion rate (Au: C,D), and the ratio of accretion to autoconversion rate (Ac/Au: E,F) for all liquid water paths.
Fig. 6. Regional averages of the ratio of (A,C) autoconversion and (B,D) accretion to surface precipitation rate for different regions (colors, see Figure 5 for description), binned by (A,B) LWP and (C,D) AOD.
Fig. 7. Regional averages of precipitation susceptibility (Sp), as described in the text, for different regions (colors, see Figure 5 for description), binned by LWP.
Table 1. Description of steady state simulations.
Table 2. Description of global simulations used in this study.
Table 3. Radiative property changes (year 2000-1850) from simulations. Illustrated are the change in top of atmosphere radiative fluxes (R), net cloud radiative effect (CRE) as well as the long-wave effect (LWE) and shortwave effect (SWE) components, the change in clear-sky shortwave radiation (FSC), ice water path (IWP), and year 2000 liquid water path (LWP). Also shown are changes to in-cloud ice number concentration (INC) and column liquid drop number (CDN).
|
v3-fos-license
|
2018-04-03T04:23:07.660Z
|
1997-08-15T00:00:00.000
|
19400075
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/272/33/20443.full.pdf",
"pdf_hash": "a87128b95766054c29c555304887850425ecc401",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43294",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "deea467adb69c5cd60d686e9f9236d7cff75fc8f",
"year": 1997
}
|
pes2o/s2orc
|
Gelsolin binding to phosphatidylinositol 4,5-bisphosphate is modulated by calcium and pH.
The actin cytoskeleton of nonmuscle cells undergoes extensive remodeling during agonist stimulation. Lamellipodial extension is initiated by uncapping of actin nuclei at the cortical cytoplasm to allow filament elongation. Many actin filament capping proteins are regulated by phosphatidylinositol 4,5-bisphosphate (PIP2), which is hydrolyzed by phospholipase C. It is hypothesized that PIP2 dissociates capping proteins from filament ends to promote actin assembly. However, since actin polymerization often occurs at a time when PIP2 concentration is decreased rather than increased, capping protein interactions with PIP2 may not be regulated solely by the bulk PIP2 concentration. We present evidence that PIP2 binding to the gelsolin family of capping proteins is enhanced by Ca2+. Binding was examined by equilibrium and nonequilibrium gel filtration and by monitoring intrinsic tryptophan fluorescence. Gelsolin and CapG affinity for PIP2 were increased 8- and 4-fold, respectively, by microM Ca2+, and the Ca2+ requirement was reduced by lowering the pH from 7.5 to 7.0. Studies with the NH2- and COOH-terminal halves of gelsolin showed that PIP2 binding occurred primarily at the NH2-terminal half, and Ca2+ exposed its PIP2 binding sites through a change in the COOH-terminal half. Mild acidification promotes PIP2 binding by directly affecting the NH2-terminal sites. Our findings can explain increased PIP2-induced uncapping even as the PIP2 concentration drops during cell activation. The change in gelsolin family PIP2 binding affinity during cell activation can impact divergent PIP2-dependent processes by altering PIP2 availability. Cross-talk between these proteins provides a multilayered mechanism for positive and negative modulation of signal transduction from the plasma membrane to the cytoskeleton.
Phosphoinositides are important in signal transduction, both as precursors to signaling molecules and as physical anchors and regulators of proteins (1, 2). Among these, the D4 phosphoinositide, phosphatidylinositol 4,5-bisphosphate (PIP2), 1 has been implicated as a potential mediator of actin cytoskeletal rearrangements (3, 4). PIP2 modulates many actin regulatory proteins. These include the following: actin severing and/or capping proteins (gelsolin (5), CapG (6), and capping protein (also known as Cap Z) (7)), monomer-binding proteins (profilin (8) and cofilin (9)), and other actin-binding proteins (α-actinin (10) and vinculin (11)). It has been hypothesized that PIP2 induces explosive actin assembly by dissociating capping proteins from filament ends and releasing actin monomers from actin-sequestering proteins (3, 7, 12). The involvement of PIP2 in actin polymerization is supported by recent experiments that show that Rac1 and RhoA, monomeric GTPases of the Rho family that have well defined effects on the cytoskeleton (13), stimulate the synthesis of PIP2 (14-16). Furthermore, manipulations that alter the availability of PIP2 in cells have profound effects on agonist and/or Rac1-induced filament end capping, actin polymerization, and cell motility (16, 17). However, although the time courses of PIP2 hydrolysis and recovery correlate in some cells (16, 18), they do not in most of the cells examined (19-21). Particularly puzzling is the finding that, in many cells, actin polymerizes at a time when the PIP2 level is reduced, rather than increased, as would be expected if uncapping and monomer desequestration are initiated by PIP2. To explain this discrepancy, it is often hypothesized that local PIP2 availability can be enhanced by compartmentalization or differential turnover (22-24), even as the bulk PIP2 mass is reduced. The equally attractive possibility that PIP2 binding is regulated by signals generated during agonist stimulation has not been considered.
Agonist-stimulated cells exhibit complex Ca2+ oscillations and pH transients. These signals alter the binding of gelsolin and CapG to actin by inducing a conformational change (6, 25-27). In this study, we tested the effect of Ca2+ and pH on the binding of the gelsolin family proteins to PIP2 and found that they affect PIP2 binding in an interdependent manner. We identified the domains in gelsolin that impart such regulation and elucidated the relation between the NH2-terminal and COOH-terminal halves of the protein. Since gelsolin modulates the activity of many PIP2-regulated proteins with important signaling functions in vivo (28) and in vitro (29-31), our results have important implications for how the gelsolin family proteins are regulated during agonist signaling and how the activity of other PIP2-dependent cytoskeletal and noncytoskeletal proteins can be coordinated.
EXPERIMENTAL PROCEDURES
Expression and Purification of Recombinant CapG, Gelsolin, and Gelsolin Domains-Gelsolin has six semihomologous domains (S1-6), which can be further divided into two functional halves (32). The expression vectors for the gelsolin NH2-terminal half (S1-3), gelsolin S1, gelsolin S2-3, and CapG have been described previously (33-35). The full-length gelsolin expression vector (encompassing the entire human plasma gelsolin coding sequence) was constructed by ligating gelsolin cDNA to pet3a via the BamHI site. Recombinant proteins were expressed in bacteria and purified using sequential anion and cation exchange chromatography (34). Protein concentration was determined by the method of Bradford (36), and protein purity was assessed by SDS-polyacrylamide gel electrophoresis.
The COOH-terminal half expression vector was constructed by using polymerase chain reaction to generate a fragment encompassing human plasma gelsolin nucleotides 1298-1753. The forward primer contains a XhoI site (ACC TCC ACT CTC GAG GCC GCC), and the reverse primer has a SmaI site (CAA CAG CCC GGG TGG CT). The polymerase chain reaction product was cloned into Bluescript KS+ via the XhoI/SmaI sites. This construct was digested with SmaI and blunt end-ligated with a downstream gelsolin fragment. The fragment was excised with BamHI from full-length gelsolin cDNA in Bluescript KS+ (gelsolin SmaI site at nucleotide 1750 and vector multiple cloning SmaI site downstream of the termination codon). The resultant cDNA was digested with SpeI (in the 3′ multiple cloning region, downstream of SmaI) and filled in with CT nucleotides to create a site with a two-base overhang compatible with that of HindIII. The other end was released by digestion with XhoI and ligated to PGEX K6 vector that was linearized with HindIII (site partially filled in with nucleotides AG to generate a two-base overhang compatible with the partially filled in SpeI) and XhoI. The fusion protein contained a 30-kDa GST followed by a 40-kDa gelsolin COOH-terminal half. The COOH-terminal gelsolin was cleaved from GST bound to a column with thrombin.
Phospholipid-PIP2 was purchased from Calbiochem. Micelles were prepared by dissolving the dried lipid in water to a final concentration of 2 mg/ml and sonicating for 5 min at maximum power (model W185; Heat Systems Ultrasonics, Inc., Farmingdale, NY). Large unilamellar vesicles at a 5:1 phosphatidylcholine:PIP2 ratio were made with an extruder (Lipex Biomembranes, Vancouver, Canada) as described by Machesky et al. (37).
Small Zone Gel Filtration-The assay was similar to that described previously for studying lipid binding to most actin regulatory proteins (33, 35, 38), and exploits the fact that small proteins bound to PIP2 micelles or mixed vesicles migrate faster than the unbound proteins. Proteins were incubated with lipid for 30 min at room temperature, and 100 µl of the mixture was chromatographed at 4 °C through a Superdex 75 HR 10/30 column (Pharmacia Biotech Inc.), equilibrated with pH 7.0 or 7.5 buffers containing 25 mM Hepes, 100 mM KCl, 0.5 mM β-mercaptoethanol, and 0.4 mM EGTA with or without CaCl2. Lipid was not included in the elution buffer. Fractions were eluted at 0.5 ml/min, and 0.5-ml fractions were collected. The elution profile was monitored by absorbance at 280 nm. The amount of unbound protein was determined from the protein absorbance peak. The lipid-bound protein was calculated as the difference between the total protein applied and the unbound protein. The apparent dissociation constant (Kd) was calculated as follows.
Kd = (Bmax − r)[free protein]/r (Eq. 1)

where r is the ratio of protein bound to each PIP2 molecule at a given PIP2 concentration and Bmax is the maximum number of protein bound per PIP2 at saturation.

Quenching of Intrinsic Tryptophan Fluorescence-Fluorescence spectra were recorded at 30 °C with a QM-1 fluorometer (Photon Technology International, Canada). 2 ml of a protein solution (0.3 µM, 30 °C) in 25 mM Hepes, 100 mM KCl, 0.4 mM EGTA, 0.5 mM β-mercaptoethanol, pH 7.5, with or without 36 µM free Ca2+ were placed in a 1-cm square quartz cuvette and stirred with a mini magnetic stirrer. After allowing 5 min for equilibration, the tryptophan fluorescence spectrum was recorded by excitation at 292 nm. The excitation and emission beam slits were set at 3 and 2 nm bandwidth, respectively. PIP2 micelles (at final PIP2 concentrations ranging from 0.042 to 32.3 µM, depending on the protein studied) were added in 2-µl increments, and the fluorescence spectra were recorded 5 min after each addition. The total volume of micelles added did not exceed 2% of the initial protein solution volume. The decrease in fluorescence emission at 320 nm was plotted as a function of PIP2 concentration, and the fluorescence change was assumed to be proportional to the concentration of the protein-phosphoinositide complex. Data were analyzed as described by Ward (40). The apparent dissociation constant, Kd, was calculated using the equation

ΔF = ΔFmax[lipidT]/(Kd + [lipidT]) (Eq. 3)

where ΔF is the fluorescence quenching at a given PIP2 concentration, ΔFmax is the total fluorescence quenching of the protein saturated with ligand, and [lipidT] is the concentration of PIP2. ΔFmax is estimated by curve fitting of the binding data using the Hyperbol.fit program in SigmaPlot. Alternatively, the intrinsic association constant (Ka) as well as the stoichiometry of binding (p) can be derived using the graphical method of Stinson and Holbrook (41),

1/(1 − ν) = Ka([lipidT]/ν) − Ka·p·[proteinT] (Eq. 4)

where ν is the fractional binding (ΔF/ΔFmax), p is the stoichiometry of binding, [lipidT] is the total concentration of PIP2, and [proteinT] is the total acceptor concentration. When 1/(1 − ν) is plotted against [lipidT]/ν, a straight line with a slope of Ka and an intercept of −Ka·p[proteinT] is obtained. The stoichiometry of interaction (p) can be calculated by dividing the intercept by the slope and the protein concentration.
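For concreteness, both analyses can be reproduced on synthetic data (Python with NumPy/SciPy; the titration values below are invented for illustration, and the linearization is written in the standard Stinson-Holbrook form):

    import numpy as np
    from scipy.optimize import curve_fit

    def quench(lipid, dF_max, Kd):
        # Eq. 3: hyperbolic quenching, dF = dF_max * [lipid] / (Kd + [lipid])
        return dF_max * lipid / (Kd + lipid)

    lipid = np.linspace(0.5, 40.0, 12)               # PIP2, in µM (synthetic titration)
    rng = np.random.default_rng(1)
    dF = quench(lipid, 1.0, 8.4) + rng.normal(0, 0.02, lipid.size)

    (dF_max, Kd), _ = curve_fit(quench, lipid, dF, p0=[dF.max(), 10.0])

    # Eq. 4: 1/(1 - v) = Ka*[lipid]/v - Ka*p*[protein]
    v = np.clip(dF / dF_max, 1e-6, 1 - 1e-6)         # fractional binding
    Ka, intercept = np.polyfit(lipid / v, 1.0 / (1.0 - v), 1)
    protein = 0.3                                    # µM, as in the titrations
    p = -intercept / (Ka * protein)                  # stoichiometry of binding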
Measurement of Free Ca2+ Concentration and pH-
The concentrations of free Ca2+ in EGTA-containing solutions with varying amounts of Ca2+ were measured with Ca2+-sensitive dyes. 5 µM Fura-2 was used to determine Ca2+ concentrations below 1 µM. Free Ca2+ concentration was calculated (26) assuming the Kd of the Fura-2-Ca2+ complex is 229 nM at pH 7.0 and 144 nM at pH 7.5. Calcium Green-5N (Molecular Probes, Eugene, OR) was used to measure Ca2+ concentrations higher than 1 µM, and free Ca2+ concentration was calculated assuming a Kd of 14 µM.
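The ratiometric conversion cited here (ref. 26) conventionally follows the Grynkiewicz equation; a generic sketch (Python; all calibration values except the quoted Kd are hypothetical):

    def free_ca(R, Rmin, Rmax, sf2_sb2, Kd):
        # Grynkiewicz: [Ca2+] = Kd * (R - Rmin)/(Rmax - R) * (Sf2/Sb2)
        return Kd * (R - Rmin) / (Rmax - R) * sf2_sb2

    # Fura-2 Kd = 144 nM at pH 7.5 (229 nM at pH 7.0), per the text:
    ca_nM = free_ca(R=1.5, Rmin=0.3, Rmax=9.0, sf2_sb2=8.0, Kd=144.0)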
RESULTS
CapG Binding to PIP2-Small zone gel filtration analyses showed that CapG bound to PIP2 micelles in a dose-dependent manner. Micelle-bound CapG eluted in the void volume, which was well separated from the free protein peak (Fig. 1). Binding to phosphatidylcholine-PIP2 vesicles gave similar results (data not shown), suggesting that micelles could be used to assess binding, although they are not a physiological substrate. To facilitate comparison under different binding conditions and between different proteins, we attempted to calculate a Kd. Equilibrium binding studies suggest that each CapG binds two PIP2 molecules (see below). Assuming this stoichiometry, the apparent Kd values for binding to PIP2 micelles were calculated using Equation 1 (Table I). These values represent the upper limit, since measurements were not made under equilibrium conditions.
To determine if there is indeed a Ca2+-induced change, equilibrium binding studies based on the quenching of CapG intrinsic tryptophan fluorescence by PIP2 were performed. This method has been used to study the binding of profilin (43), phospholipase Cδ (44), and the dynamin pleckstrin homology domain (45) to PIP2. CapG had an emission maximum of 327 nm, and 36 µM Ca2+ produced a small reduction in fluorescence intensity (the ratio of peak fluorescence in EGTA/Ca2+ is 0.92 ± 0.05; mean ± S.E., n = 5) (Fig. 2, A and B). PIP2 induced a dose-dependent and saturable decrease in intrinsic fluorescence, without shifting the emission maximum. Micelles alone without CapG did not have significant emission (data not shown). A plot of CapG fluorescence quenching versus PIP2 concentration showed that saturation was reached at a lower PIP2 concentration in the presence of Ca2+ than in EGTA (Fig. 3A). The Kd values for binding at pH 7.5, calculated according to Equation 3, were 31.9 and 8.4 µM in EGTA and Ca2+, respectively, for the experiment shown. Similar values (24.4 and 6.0 µM) were obtained when the data were analyzed using Equation 4 (Table I). These Kd values were 3-4 times lower than the small zone gel filtration values, suggesting that CapG-PIP2 complexes dissociate during nonequilibrium gel filtration. Using a similar protocol, human platelet profilin binds PIP2 with a Kd of 35 µM (43), and binding is not affected by Ca2+.
The stoichiometry of CapG binding was 1.7 in either Ca2+ or EGTA (Table I). Since CapG has one known PIP2 binding site (6, 33), this site appears to bind two PIP2 molecules. The two PIP2 bound independently and noncooperatively, as indicated by Hill coefficients close to 1 (1.02 ± 0.05 and 1.09 ± 0.02 in EGTA and Ca2+, respectively) (Fig. 3B, Table I). The exact meaning of this stoichiometry is not clear, because each micelle contains multiple PIP2 and CapG can potentially bind more than one micelle. Nevertheless, the calculated stoichiometry is useful for comparison among different proteins. Equilibrium gel filtration validated the Kd derived by fluorescence titration. The column was preequilibrated with CapG, and PIP2 incubated with CapG in the equilibrating buffer was added. The column was then developed with CapG-containing equilibration buffer. CapG bound to PIP2 migrated faster, increasing the CapG content above the equilibration level (peak) and depleting the amount in the trailing fractions (trough) (Fig. 4, A and B). Assuming that each CapG bound two PIP2 molecules (see Table I), the Kd obtained from five experiments performed with a range of CapG and/or PIP2 concentrations was 8.1 ± 0.9 µM (mean ± S.E.). This is comparable with the spectroscopic titration result, affirming the validity of the two independent methods.
Gelsolin Binding to PIP2-Tryptophan titration could not be used to study gelsolin binding to PIP2 because the full-length gelsolin signal (without phosphoinositide) fluctuated and did not reach a steady level even after 20 min. The reason for this instability was not investigated further. Gel filtration experiments showed that gelsolin binding to PIP2 was enhanced by Ca2+ (Fig. 5, A-F). At pH 7.5, the apparent Kd values were 305.4 and 40.2 µM without and with Ca2+, respectively (Table II). The latter value is similar to that of CapG, indicating that gelsolin and CapG have comparable PIP2 binding affinity in the presence of Ca2+. However, in EGTA, gelsolin has a much higher Kd than CapG, suggesting that Ca2+ induces a larger change in binding affinity. This could be due to a disproportionate increase in koff relative to kon. 10 µM Mg2+ did not substitute for Ca2+ (data not shown), consistent with previous results (46).
The effect of Ca2+ was amplified when the pH was shifted from 7.5 to 7.0 (Fig. 5, compare A-C with D-F). The relations among Kd, Ca2+, and pH are shown in Fig. 5G. In the absence of Ca2+, decreasing pH from 7.5 to 7.0 had minimal effect (Kd of 300 and 350 µM, respectively). This is not surprising, since PIP2 protonation is not expected to change substantially within this narrow pH range (47), and a broader pH range does not affect binding of profilin to PIP2 either (37). However, at pH 7.0, less Ca2+ was required to increase binding. 0.2 µM Ca2+ decreased the Kd by half at pH 7.0, while 4.5 µM Ca2+ was required to produce the same effect at pH 7.5. Both Ca2+ concentrations are well within the range achieved following agonist stimulation, particularly in the cytoplasm immediately subjacent to the plasma membrane.
Ca2+ and pH Regulation of Gelsolin Domains-To determine which part of gelsolin contributes to the Ca2+ and/or pH dependence of PIP2 binding, we examined the PIP2-binding characteristics of several gelsolin domains. Gelsolin contains six segmental repeats, S1-6 (32). The NH2-terminal half encompassing S1-3 binds actin independently of Ca2+ (48) and has two known PIP2 binding sites and potentially a third unmapped site (33, 49, 50). The COOH-terminal half (S4-6), which requires Ca2+ to bind actin (51), has not been examined previously for PIP2 binding.
Unlike full-length gelsolin, the gelsolin NH2-terminal half behaved well during fluorescence titration (Fig. 6A). It bound PIP2 with high affinity, and saturation was reached at a slightly lower PIP2 concentration in EGTA than in Ca2+ (the opposite of full-length gelsolin and CapG). The Kd values for the experiment shown in Fig. 6A were 1.2 and 2.9 µM, respectively. The stoichiometry of binding, derived from Fig. 6B, was 3.4. This value is twice that of CapG, confirming that the gelsolin NH2-terminal half has more PIP2 binding sites (33). Gel filtration studies confirmed that Ca2+ increased the Kd. The Hill coefficient of 1.1 ± 0.03 (Fig. 6C, Table II) suggested that binding was noncooperative and that the sites bound PIP2 independently. S1, which has one PIP2 site, bound 1.6 mol of PIP2 (Table II).
The gelsolin COOH-terminal half bound PIP2 with much lower affinity (approximately 7-fold higher Kd by fluorescence measurements) than the NH2-terminal half (Table II). It is therefore probably not involved in PIP2 binding per se. As with the NH2-terminal half, binding to the COOH-terminal half was reduced in Ca2+ (Fig. 7C). This is in sharp contrast to the large Ca2+ enhancement of PIP2 binding to full-length gelsolin. The opposite effects of Ca2+ on full-length and half-length gelsolins therefore cannot simply be due to nonspecific lipid aggregation. The pronounced enhancement of PIP2 binding to full-length gelsolin most likely reflects a Ca2+-dependent exposure of the NH2-terminal half PIP2 binding sites through a change in the COOH-terminal half. This conclusion is based on the observation that neither the NH2- nor COOH-terminal halves are activated by Ca2+ to bind PIP2, and only the COOH-terminal half is known to undergo Ca2+-induced conformational change (51). Gelsolin NH2-terminal half binding to PIP2 was enhanced by lowering pH. The Kd dropped from 8.2 to 3.4 µM between pH 7.5 and 7.0 in the presence of EGTA (Fig. 7A). In contrast, the gelsolin COOH-terminal half was not affected by pH (Fig. 7B).

DISCUSSION

Actin polymerization in response to agonist activation is frequently associated with a rise in cytosolic Ca2+, changes in PIP2 content, and changes in intracellular pH. There is also compelling evidence that gelsolin, which severs and caps actin filaments in response to changes in Ca2+ and PIP2 concentration and pH, is involved in actin remodeling (17, 52-54). In this paper, we show that gelsolin and CapG binding to PIP2 is affected by physiologically relevant changes in Ca2+ and pH. The effects are not due to alterations in PIP2 structure per se but reflect changes in the proteins. This is the first report that PIP2 binding to any protein is directly modulated by signals generated during agonist stimulation, and it has implications for divergent PIP2-dependent processes beyond a direct effect on the cytoskeleton.
The finding that gelsolin binding to PIP2 is promoted by Ca2+ is consistent with the current model for how gelsolin is activated by Ca2+ to bind actin (48, 51). Our deletion studies suggest that the extreme COOH terminus of gelsolin is critical to the inhibition of the NH2-terminal actin binding sites, because gelsolin lacking the COOH-terminal 23 residues no longer requires Ca2+ to bind actin (56). We do not know at present whether actin binding and PIP2 binding are regulated identically. This question can now be addressed, because the actin and PIP2-binding sites of gelsolin have been mapped (33, 50, 56-58) and the crystal structures of gelsolin S1 complexed with actin (57) and full-length gelsolin in EGTA 2 have been solved recently.
Less is known about how pH affects gelsolin conformation. Selve and Wegner (59) first reported that pH 6 increases the rate of gelsolin binding to actin in the presence of Ca2+. Lamb et al. (26) subsequently showed that the Ca2+ requirement for gelsolin severing is reduced at pH 6.5 and abolished at pH below 6.0. pH 5 induces gelsolin unfolding, as determined by dynamic light scattering (26). We find that a less extreme pH drop potentiates Ca2+ activation of PIP2 binding to full-length gelsolin. Acidic pH increases NH2-terminal half binding to PIP2 even without Ca2+ but has no effect on COOH-terminal half binding. Therefore, mild acidification probably promotes PIP2 binding by directly altering the NH2-terminal PIP2 binding sites.
The significance of the increase in PIP2 affinity described here depends on the PIP2 concentration in the plasma membrane. This is difficult to estimate precisely because PIP2 may be partitioned and sequestered. One estimate, based on PIP2 accounting for 1% of plasma membrane lipid, suggests that the PIP2 concentration in the plasma membrane of a spherical cell with a radius of 10 µm is 10 µM (44). In platelets, the PIP2 concentration is estimated to be about 300 µM when averaged over the entire cell volume (internal and plasma membranes) (60), and PIP2 concentration decreases by 30% following stimulation (16). Cytosolic [Ca2+] rises during agonist stimulation, and the 4-8-fold increase in CapG and gelsolin binding affinity described here is sufficiently large to promote their increased association with the plasma membrane despite a modest decrease in membrane PIP2. The magnitude of the increase depends on the PIP2 concentration before and after stimulation. Immunogold labeling studies show that 4 and 6.5% of gelsolin is associated with the plasma membrane in resting and activated platelets, respectively (42). This represents a 63% increase in membrane association after stimulation. Our finding that Ca2+ increases PIP2 binding affinity can explain how PIP2 uncaps gelsolin and CapG even as the plasma membrane PIP2 content decreases following agonist stimulation.
Since only a handful of the currently identified PIP2-binding proteins are Ca2+- and pH-sensitive, our finding is consistent with a selective regulation of the gelsolin family. Nevertheless, increased gelsolin and CapG binding will impact multiple PIP2-dependent processes by altering PIP2 availability to other binding proteins, especially when the PIP2 concentration is decreased during agonist stimulation. Some actin-binding proteins are inhibited by PIP2 (profilin, cofilin, capping protein), while others are activated (α-actinin and vinculin). Gelsolin and CapG can therefore exert positive as well as negative effects indirectly by controlling PIP2. We postulate that as the cytosolic [Ca2+] rises during stimulation, gelsolin severs filaments and PIP2 dissociates it from the filament end. Increased gelsolin binding to PIP2 displaces capping protein and profilin, neither of which is Ca2+-sensitive, from the plasma membrane. Profilin catalyzes polymerization (16), and the reaction is terminated by capping protein-mediated filament capping (55). Multiple rounds of severing, uncapping, and facilitated actin addition at the barbed ends fuel the explosive amplification of filament growth observed during lamellipodial extension and membrane ruffling.
Our findings also have implications beyond a direct effect on the cytoskeleton. Many important signaling proteins are regulated by PIP2 as well. It is significant that several pleckstrin homology proteins (reviewed in Ref. 2) bind PIP2 with similar affinity as the gelsolin family. For example, the Kd values of β-adrenergic receptor kinase type 1, pleckstrin, dynamin, and phospholipase Cδ are 50, 50, 4, and 1 µM, respectively. Therefore, gelsolin and CapG can potentially compete with them for PIP2, particularly when the [Ca2+] rises and the PIP2 level drops during agonist stimulation. This possibility is supported by in vitro and in vivo experiments. In vitro, gelsolin stimulates and inhibits inositol-specific phospholipase C isozymes in a biphasic manner (29). 3 Gelsolin stimulates phosphoinositide 3-OH kinase (31), although we find that gelsolin and CapG also inhibit it. 4 Gelsolin activates phospholipase D (30) in a PIP2-dependent manner. Modest overexpression of CapG (28) or gelsolin has profound effects on phospholipase C and phospholipase Cγ activated through two distinct receptor-mediated pathways. 3 In conclusion, these observations show that gelsolin and CapG binding to PIP2 is selectively regulated by second messengers. This regulation provides an additional level of control above that of a bulk change in PIP2 content. Differential modulation and cross-talk between the PIP2-binding proteins allow control to be exerted at multiple points in the signaling cascade.
|
v3-fos-license
|
2023-02-22T15:52:41.376Z
|
2022-02-10T00:00:00.000
|
257055857
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-022-06192-w.pdf",
"pdf_hash": "5ee2969e609c106c3cf0a5ac435ea34b587152aa",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43296",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "5ee2969e609c106c3cf0a5ac435ea34b587152aa",
"year": 2022
}
|
pes2o/s2orc
|
Tissue miR-200c-3p and circulating miR-1290 as potential prognostic biomarkers for colorectal cancer
Epithelial–mesenchymal transition (EMT)-related cancers generally elicit low immune responses. EMT is regulated by several microRNAs (miRNAs) in cancers. Thus, this study aimed to evaluate the prognostic potential of EMT-related miRNAs as biomarkers in colorectal cancer (CRC). Formalin-fixed paraffin-embedded tumor and normal tissue and plasma samples were obtained from 65 patients with pathologically confirmed CRC. In addition, plasma samples were obtained from 30 healthy volunteers. Immunohistochemical staining for E-cadherin, ZEB1, PD-1, PD-L1, CD3, CD4, CD8, Foxp3, and CD68 was conducted on tissue samples. Droplet digital polymerase chain reaction (ddPCR) analysis was performed to evaluate miR-21-5p, 34a-5p, 138-5p, 200a-3p, 200b-5p, 200c-3p, 630, 1246, and 1290 expression in tissue samples and miR-630, 1246, and 1290 expression in plasma samples. miR-21-5p, 34a-5p, 630, 1246, and 1290 expression was higher in tumor tissues than in normal tissues (P < 0.05). EMT was significantly associated with reduced tumor-infiltrating T cells. Moreover, miR-21-5p, miR-34a-5p, miR-200a-3p, and miR-200c-3p expression was negatively correlated with T cell density (P < 0.05). High tissue levels of miR-200c-3p were associated with poor overall survival (OS) (P < 0.001). CRC patients with the EMT phenotype had poor OS; however, PD-L1 positivity and abundant PD-1 positive immune cells were correlated with better OS (P < 0.05). miR-1246 and miR-1290 levels were significantly higher in the plasma of patients with CRC than in the plasma of healthy controls (P < 0.05). High plasma levels of miR-1290 were correlated with advanced stage and poor OS (P < 0.05). The tissue expression of miR-200c-3p and plasma levels of miR-1290 measured by ddPCR indicate their potential as prognostic biomarkers for CRC.
Abbreviations: TNM, tumor-node-metastasis; ZEB1, zinc finger E-box-binding homeobox 1. Colorectal cancer (CRC) contributes significantly to the global cancer burden, ranking third in incidence and second in mortality 1 . In recent years, advances in diagnostic and therapeutic strategies have resulted in a decrease in the incidence of CRC and improvement in patient survival 2 . However, the treatment of metastatic CRC remains a considerable challenge 3 . With the development of new chemotherapeutic and immunotherapeutic agents, the cost of CRC treatment has increased considerably, while survival rates have remained limited. Multiagent approaches have been developed based on availability and not on the basis of validated and refined treatment algorithms 4 . To overcome such challenges, it is necessary to identify biomarkers that enable accurate prognosis and a personalized approach in the treatment of CRC. The epithelial-mesenchymal transition (EMT) is a key cellular process in CRC progression and metastasis 5 . Molecular pathways (including EMT) vary widely, and different investigators have used different methods to classify EMT. Several commonly used EMT markers include the loss of E-cadherin expression and increased expression of EMT-related transcription factors, such as ZEB1 6,7 . Recent findings have suggested that relationships may exist between EMT and microRNAs (miRNAs) 8 or tumor-associated immune cells 9 .
miRNAs are non-coding RNAs comprising 20 to 22 nucleotides that regulate gene expression 10 . Several studies suggest that miRNAs are involved in cancer progression, because miRNAs regulate the expression of tumor-suppressor genes, oncogenes, and other regulatory molecules involved in cell differentiation, apoptosis, and tumorigenesis 11-13 . Furthermore, recent reports show that miRNAs are involved in EMT regulation. miR-21-5p, an important miRNA in cancer, is located on chromosome 17q23.2, which frequently has a copy-number gain in metastatic CRCs 14 . Downregulation of miR-21-5p has been reported to reverse EMT and the cancer stem cell phenotype 15 . Moreover, the miR-200 family has been reported to target and downregulate ZEB1, an EMT activator 16 . However, these EMT-related miRNAs likely have several different targets and function at various levels.
The tumor immune microenvironment is another key factor in CRC progression and metastasis 17 . With an increasing interest in immunotherapy, targeting the tumor immune microenvironment has emerged as a therapeutic strategy. Previous findings have suggested that the immunological synapse between PD-1 (which is expressed on lymphocytes) and PD-L1 (which is expressed on tumor cells) causes cytotoxic T cell anergy in the tumor microenvironment, enabling further tumor progression 11 . miRNAs are also known to contribute to immune evasion of neoplastic cells through the regulation of various pathways 18 , as well as PD-1 and PD-L1 expression 19 . Thus, miRNAs have the potential to serve as diagnostic and prognostic markers.
Although miRNAs are typically expressed in tissue samples, they can also be detected in blood samples as they are released from tumor cells into the circulation 20 . Despite their small quantity, circulating miRNAs have apparent merits over tissue miRNAs as biomarkers because blood samples are easier and less invasive to obtain than tissue samples. Therefore, analysis of circulating miRNAs in addition to that of tissue miRNAs is necessary to assess their potential as novel biomarkers.
Although the roles of the EMT, tumor immune microenvironment, and related miRNAs in CRC progression have been widely studied, the clinical significance of the related miRNAs remains unclear. This is partly owing to the extensive number of targets and functional roles of miRNAs. In addition, studies involving human subjects are rare. Consequently, this study aims to illustrate the relationship among miRNAs, EMT, and the tumor immune microenvironment in CRC and determine the clinical potentials of several miRNAs as prognostic biomarkers. We quantified the expression levels of miRNAs previously reported to be related to EMT in various cancers, using tissue and plasma samples from 65 patients with CRC. We then investigated the clinicopathologic significance of the measured miRNA expression levels.
Methods
Study population and clinical specimens. This study included plasma specimens from 30 healthy blood donors and tissue and plasma specimens from 65 patients with CRC who underwent radical surgical resection at the Seoul National University Bundang Hospital between March 2011 and March 2012. Patients who received preoperative radiotherapy or chemotherapy were excluded from the study. Clinicopathologic features of CRC patients are summarized in Table 1. Tissue specimens were obtained during resection, whereas plasma samples were obtained approximately 1 to 20 days before resection. Tissue samples were fixed in 4% buffered formalin solution and embedded in paraffin. Blood samples were processed within 2 h of collection and centrifuged at 3,000 rpm for 10 min. Each plasma sample was filtered through a Fisherbrand Standard Serum Filter (13 mm × 4″; Fisher HealthCare, Houston, TX, USA) and stored at -80 °C until use. Clinicopathologic data such as age, sex, histological grade, and patients' overall survival (OS) were obtained from electronic medical records. OS was defined as the period from surgery to death from any cause or to the date of the last follow-up. Cancer stages were determined based on the guidelines from the American Joint Committee on Cancer (8th edition).
Quantification of miRNAs by droplet digital polymerase chain reaction (ddPCR) analysis.
Total RNA was extracted from paired normal and tumor formalin-fixed paraffin-embedded (FFPE) tissue samples. Four 8-µm-thick FFPE tissue sections were used for RNA extraction. Tissue sections were deparaffinized by incubation at 70 °C for 10 min and centrifugation for 10 min at maximum speed. After deparaffinization, RNA extraction was performed using the RecoverAll™ Total Nucleic Acid Isolation Kit (Invitrogen, Waltham, MA, USA) according to the manufacturer's instructions. For blood samples, total RNA was extracted using a High Pure Viral Nucleic Acid Kit (Roche, Indianapolis, IN, USA) according to the manufacturer's instructions with 300 µL of plasma.
miRNA expression levels were measured via ddPCR. Each ddPCR mixture contained 10 µL of ddPCR Supermix for Probes, 1 µL of template DNA synthesized from RT reactions, 1 µL of a FAM-labeled probe for the miRNA of interest, and 8 µL of distilled water. Oil droplets were generated using Droplet Generation Oil for Probes (Bio-Rad; catalog #1863005). The C1000 Touch™ Thermal Cycler, equipped with a deep-well block, was used for PCR analysis with the following thermocycling conditions: 95 °C for 10 min, 40 cycles of 94 °C for 30 s and 60 °C for 1 min, and 98 °C for 10 min. All data were interpreted using the Bio-Rad QX200 droplet reader and analyzed using the QuantaSoft program (version 1.7.4). Representative ddPCR fluorescence plots are shown in Additional file 1. Quantification was performed by determining the number of copies of target miRNA per 1 ng total RNA for tissue samples or the number of copies of target miRNA per 1 µL cDNA for plasma samples, as previously described 25,26 .
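To make the reported units explicit, a small helper can convert the QuantaSoft concentration (copies/µL of reaction) into the two normalizations used above (Python; the 20-µL reaction volume follows the recipe given, while the input amounts are placeholders):

    def tissue_copies_per_ng(quantasoft_copies_per_ul, rna_input_ng, reaction_ul=20.0):
        # Tissue samples: total copies in the ddPCR reaction per ng of total RNA input
        return quantasoft_copies_per_ul * reaction_ul / rna_input_ng

    def plasma_copies_per_ul_cdna(quantasoft_copies_per_ul, cdna_ul=1.0, reaction_ul=20.0):
        # Plasma samples: total copies normalized to the 1 µL of cDNA template
        return quantasoft_copies_per_ul * reaction_ul / cdna_ul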
Tissue microarray (TMA) construction and immunohistochemistry (IHC). A representative tissue core with a 2-mm diameter was obtained from each patient as an FFPE block, and sets of TMA blocks were made of these tissue cores. TMA blocks were constructed from the tumor center (TC) and the invasive margin (IM). For E-cadherin IHC, membranous expression was considered to reflect a positive result, and the area (%) of positive staining was examined. The area (%) and intensity of nuclear ZEB1 expression were recorded, and the intensities were classified as indicating negative (0), weak (1), or strong (2) expression. The EMT phenotype was defined as any loss of E-cadherin expression or a score greater than 20, as determined by multiplying the ZEB1 expression intensity by the % area positive for ZEB1 expression.
PD-L1 was interpreted with a combined positive score (CPS). CPS is defined as the percentage of PD-L1-staining cells (tumor cells, lymphocytes, and macrophages) relative to the total viable tumor cells. CPS ≥ 1 was used as the cut-off for PD-L1 positivity 27 .
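The two scoring rules read directly as code; a sketch (Python; interpreting "any loss" of E-cadherin as membranous staining in less than 100% of tumor cells is our assumption):

    def emt_phenotype(ecad_pos_area_pct, zeb1_intensity, zeb1_area_pct):
        # EMT if any loss of membranous E-cadherin,
        # or if ZEB1 intensity (0/1/2) x % positive area exceeds 20
        return ecad_pos_area_pct < 100 or zeb1_intensity * zeb1_area_pct > 20

    def pdl1_positive(stained_cells, viable_tumor_cells):
        # CPS: PD-L1-stained cells (tumor cells, lymphocytes, macrophages)
        # per 100 viable tumor cells; positive when CPS >= 1
        cps = 100.0 * stained_cells / viable_tumor_cells
        return cps >= 1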
Image analysis for immune cell densities. The digitalized IHC-stained images were analyzed using QuPath software (version 0.2.0-m4). For each IHC-stained image for the immune checkpoint marker PD-1 or the immune cell markers (CD3, CD4, CD8, CD68, and Foxp3), the number of positively stained cells was counted using the cell count function in QuPath, and the density was calculated as the number of cells/mm². Statistical analyses. All statistical analyses were performed using R software (version 4.1.0; http://cran.r-project.org/). Wilcoxon rank-sum tests and Kruskal-Wallis tests were used to determine correlations between miRNA expression levels and the clinicopathological features of the patients with CRC. The maximal χ² method was used to define the optimal cut-off values for continuous variables. Kaplan-Meier survival analysis was performed to determine the associations of variables with survival. Spearman's correlation coefficient was used to determine the relationship between miRNAs and the tumor immune microenvironment in the tumor tissues. To compare immune cell densities or miRNA expression levels between the two groups, Wilcoxon rank-sum tests were performed after verifying that the groups in the test did not follow normal distributions. The tests were unpaired, except for the comparison of miRNA expression levels between tumor and normal tissues, which was performed using a paired test. P values < 0.05 were regarded as statistically significant.
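The maximal χ² cutoff search can be sketched with a log-rank scan (Python using the lifelines package; an illustration of the idea, since the authors' exact R implementation is not given):

    import numpy as np
    from lifelines.statistics import logrank_test

    def maximal_chi2_cutoff(marker, time, event, n_grid=33):
        # Scan candidate cutoffs; keep the one maximizing the log-rank chi-square
        best_cut, best_stat = None, -np.inf
        for cut in np.quantile(marker, np.linspace(0.1, 0.9, n_grid)):
            hi = marker > cut
            if hi.sum() < 5 or (~hi).sum() < 5:
                continue  # avoid degenerate splits
            res = logrank_test(time[hi], time[~hi],
                               event_observed_A=event[hi], event_observed_B=event[~hi])
            if res.test_statistic > best_stat:
                best_cut, best_stat = cut, res.test_statistic
        return best_cut, best_stat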
Relationships between tissue miRNAs and the tumor immune microenvironment. After classifying the patients into two groups on the basis of their EMT status, we observed a significant difference between the groups in terms of their T cell densities in both TC and IM tissues. T cell markers (CD3, CD4, and CD8) were expressed at a lower density in CRCs with the EMT phenotype in both TC and IM tissues (Fig. 2a-f,h, P < 0.05; Fig. 2g, P = 0.063). Similarly, the EMT phenotype was associated with a lower density of PD-1 in both TC (Fig. 2d; P = 0.031) and IM (Fig. 2h; P = 0.026) tissues. However, no significant correlation was found between the EMT status and a PD-L1 CPS of ≥ 1 (data not shown; P = 0.294 and 0.923, respectively). The tissue expression levels of the nine miRNAs included in this study did not show statistically significant correlations with the EMT status of the patient (P > 0.05; data not shown). We investigated correlations between miRNA expression levels and tumor-infiltrating immune cell densities in tumor tissues (Fig. 3). The expression levels of miR-21-5p, miR-34a-5p, miR-200a-3p, and miR-200c-3p were negatively correlated with the densities of most tumor-infiltrating immune cells in both TC and IM tissues, as shown in Table 2. The results suggest weak but statistically significant (P < 0.05) correlations 28 ; exact correlation coefficients are recorded in Table 2. Specifically, the CD8-positive T cell density was inversely correlated with miR-21-5p, miR-34a-5p, miR-200a-3p, and miR-200c-3p levels in IM tissues.
Correlations with immune checkpoint markers were also investigated. The PD-1-positive immune cell density was negatively correlated with miR-21-5p (P = 0.002) and miR-34a-5p (P = 0.035) expression in IM tissues. miR-200c-3p expression showed a negative relationship with a PD-L1 CPS of ≥ 1 in IM tissues (Wilcoxon test, P < 0.05). Because the amount of nucleic acids in cell-free plasma is generally very low, we chose three miRNAs, which had tissue expression levels suggesting relatively abundant quantities, for further evaluation in plasma samples (Fig. 1). Specifically, we measured the expression levels of miR-630, 1246, and 1290 in cell-free plasma from healthy individuals and patients with CRC. The plasma samples were analyzed in triplicate by ddPCR to validate the reproducibility of the results. The intraclass correlations (ICCs) for all samples were > 0.950 except for miR-630 expression in healthy individuals (ICC = 0.514), suggesting that the results were highly reproducible (Additional file 2). The plasma expression levels of miR-1246 and 1290 in patients with stage II-IV CRC were significantly higher than those in healthy volunteers (P < 0.05). The miR-630, miR-1246, and miR-1290 expression levels in patients with stage I CRC did not differ significantly from those in healthy volunteers (Additional file 3). In addition, the plasma levels of miR-630, 1246, and 1290 did not show statistically significant correlations with tumor-infiltrating immune cell densities or immune checkpoint markers (data not shown).
Clinicopathologic correlations among EMT, tumor immune responses, and miRNAs.
As expected, patients with CRC and the EMT phenotype had significantly worse OS (Additional file 4a; P < 0.001). PD-L1 positivity in IM and TC tissues predicted significantly better OS (Additional file 4b and 4c; P = 0.012 and 0.024, respectively). In addition, a high density of PD-1-positive immune cells in both IM and TC tissues was strongly associated with better OS (Additional file 4d and 4e; P < 0.001 and 0.002, respectively). Table 3 summarizes the correlations between clinicopathologic parameters and miRNA expression levels. Higher expression levels of miR-21-5p and 200c-3p were observed in CRC cases with lymphatic invasion and an advanced tumor-node-metastasis (TNM) stage (P < 0.05) and were associated with worse OS (Fig. 4a and b; P = 0.053 and < 0.001, respectively). No significant difference in miRNA expression levels was found according to the KRAS mutational status (P > 0.05). High plasma expression of miR-1290 showed a strong association with an advanced TNM stage (Table 3; P < 0.001). Furthermore, high plasma expression of miR-1290 was associated with worse OS (Fig. 4c; P = 0.029).
Table 3. Relationship between the concentrations of tissue microRNA-21-5p and 200c-3p and plasma microRNA-1290 and clinicopathologic features. Data are presented as median (range). *P value < 0.05, **P value < 0.01. T, tissue; P, plasma. SD, standard deviation; WD, well differentiated; MD, moderately differentiated; PD, poorly differentiated.
Discussion
miRNAs have many advantages as biomarkers, including easy extraction through liquid biopsies and high tissue-type specificity. Many researchers have attempted to establish new miRNAs as biomarkers for various diseases, with promising results, although such research is still in the early stages. For example, data from a previous study suggested that circulating miR-200c-3p may be useful as a diagnostic and prognostic biomarker for gastric cancer 31 . The results of another study suggested miR-148a as a biomarker for predicting the efficacy of chemotherapy in patients with advanced colorectal cancer 32 . Using such biomarkers may enable the customization of treatment strategies for individuals, which holds implications in personalized medicine.
Similar to some previous studies measuring multiple miRNA expression levels 33 , we also observed a great variance in the detected amounts of miRNA molecules in this study. For example, tissue levels of miR-138-5p and miR-200b-5p were detected at fewer than 5 copies/ng of RNA in both normal and tumor tissues; therefore, the difference between paired tissues was small in scale, although statistically significant (Fig. 1). However, tissue levels of miR-1246 and miR-1290 were detected at more than 1 × 10⁶ and 1 × 10⁴ copies/ng of RNA, respectively; consequently, the observed difference between paired tissues was greater in scale while statistical significance was comparable (Fig. 1). As the biological relevance of miRNAs as potential biomarkers includes not only the statistical significance but also the actual detected amount, the quantitative analysis provided by this study aids in the assessment of the potential clinical utility of the investigated miRNAs.
Our results suggest that tissue levels of miR-200c-3p and circulating levels of miR-1290 are potential prognostic biomarkers for CRC. In addition, the high expression level of each miRNA was associated with worse OS. Furthermore, as suggested by our current results and previous findings, such associations may reflect the key roles played by miR-200c-3p and miR-1290 in the regulation of EMT-immune crosstalk in CRC.
One seemingly counterintuitive result is that while the expression level of miR-200c-3p is lower in tumor tissue, patients with a high expression level of miR-200c-3p show poor survival. Previous studies suggest that the switch between EMT and mesenchymal-epithelial transition (MET) is a transient and dynamic process wherein EMT plays a critical role in the first stages of metastasis, such as tumor cell dissemination, while MET drives the later stages of metastasis, such as the colonization of the metastatic site 34 . miR-200c-3p downregulation is associated with EMT and miR-200c-3p overexpression is associated with MET, which may partially explain the seemingly unreasonable observation: the EMT-related expression pattern reflects the metastatic nature of the tumor tissue, but the MET-related expression pattern is associated with poorer survival owing to its role in more advanced metastatic stages. However, such an explanation is limited. Our current understanding of the role of miRNAs in cancer progression is rather oversimplified, and the complex mechanisms of multiple miRNAs regulating the expression of various genes are not thoroughly reflected. In fact, previous studies on the biomarker potential of miR-200c-3p in various cancer types have produced controversial results 35,36 . A more sophisticated understanding of the workings of miRNAs in the regulation of cancer progression is necessary for a more exhaustive explanation.
Crosstalk between EMT and the tumor immune microenvironment has been widely suggested for various types of tumors. In an ovarian carcinoma study, the mesenchymal subtype with an EMT-related gene signature correlated with a lower density of CD8-positive tumor-infiltrating lymphocytes 37 . Altered expression of EMT markers was associated with decreased tumor infiltration of CD4- and CD8-positive T cells in a study of non-small cell lung cancer 38 and with upregulated inhibitory immune checkpoint molecules such as PD-L1 in a study of lung adenocarcinoma 39,40 . Such results indicate that EMT is a process involving evasion of the host immune system, and our results demonstrate a similar trend in CRC. The EMT statuses of the patients were highly correlated with a low density of tumor-infiltrating CD3-, CD4-, and CD8-positive lymphocytes. Similarly, low densities of PD-1-positive cells in both TC and IM tissues were correlated with an EMT phenotype in CRC patients. These results not only clarify EMT as an immune-related process but also hold implications for immunotherapy, as data from many previous studies have identified the tumor immune microenvironment as a key factor in predicting and explaining the success of immunotherapy in many cancer types 41-43 . A deeper understanding of the relationship between EMT and the tumor immune microenvironment may lead to refined immunotherapies.
We also identified miRNA markers associated with the EMT phenotype that had effects on the tumor immune microenvironment. Notably, tissue levels of miR-200c-3p were negatively correlated with cells positive for immune cell markers such as CD3, CD4, and CD8 and negatively correlated with cells positive for the immune checkpoint marker PD-L1. These results agree with data from previous functional studies of the miR-200 family in regulating the immune system 44 , which may explain the worse OS observed in patients with high miR-200c-3p expression. Similarly, the circulating plasma levels of miR-1290 showed a negative association with CD3-, CD8-, and PD-1-positive cells. Considering that recent findings elucidated the role of miR-1290 in the immune escape of cancer cells in gastric cancer 45 , this finding may explain the worse OS observed in patients with high plasma levels of miR-1290.
This study does have some limitations. Notably, it was conducted with a retrospective cohort at a single institution, and only 65 samples from patients with CRC were analyzed. Therefore, the potential utility of the miRNAs suggested by this study as biomarkers needs to be further validated in an independent cohort as well as in a larger, multicenter study.
In conclusion, we identified tissue levels of miR-200c-3p and plasma levels of miR-1290 as potential prognostic markers of CRC, which may reflect functional associations of the miRNAs with EMT or the tumor immune microenvironment. Clinically, these miRNA markers could be used to accurately evaluate the prognosis and metastatic potential of CRC in an individual patient and to adjust the therapeutic strategy accordingly.
Data availability
The datasets supporting the conclusions of this article are included within the article and its additional files.
|
v3-fos-license
|
2023-03-03T05:53:29.349Z
|
2023-04-27T00:00:00.000
|
263471499
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jmir.org/api/download?alt_name=resprot_v12i1e46281_app1.pdf&filename=82da26f145c687e9e8619330dd29d83d.pdf",
"pdf_hash": "697039a614673532015e309426bec6658ec1a1ed",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43299",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"sha1": "e0971ca9e75265aaa4c26617472c9813a262c1d7",
"year": 2023
}
|
pes2o/s2orc
|
Telehealth-Based Music Therapy Versus Cognitive Behavioral Therapy for Anxiety in Cancer Survivors: Rationale and Protocol for a Comparative Effectiveness Trial
Background Cancer survivors represent one of the fastest growing populations in the United States. Unfortunately, nearly 1 in 3 survivors experience anxiety symptoms as a long-term consequence of cancer and its treatment. Characterized by restlessness, muscle tension, and worry, anxiety worsens the quality of life; impairs daily functioning; and is associated with poor sleep, depressed mood, and fatigue. Although pharmacological treatment options are available, polypharmacy has become a growing concern for cancer survivors. Music therapy (MT) and cognitive behavioral therapy (CBT) are evidence-based, nonpharmacological treatments that have demonstrated effectiveness in treating anxiety symptoms in cancer populations and can be adapted for remote delivery to increase access to mental health treatments. However, the comparative effectiveness of these 2 interventions delivered via telehealth is unknown. Objective The aims of the Music Therapy Versus Cognitive Behavioral Therapy for Cancer-related Anxiety (MELODY) study are to determine the comparative effectiveness of telehealth-based MT versus telehealth-based CBT for anxiety and comorbid symptoms in cancer survivors and to identify patient-level factors associated with greater anxiety symptom reduction for MT and CBT. Methods The MELODY study is a 2-arm, parallel-group randomized clinical trial that aims to compare the effectiveness of MT versus CBT for anxiety and comorbid symptoms. The trial will enroll 300 English- or Spanish-speaking survivors of any cancer type or stage who have experienced anxiety symptoms for at least 1 month. Participants will receive 7 weekly sessions of MT or CBT delivered remotely via Zoom (Zoom Video Communications, Inc) over 7 weeks. Validated instruments to assess anxiety (primary outcome), comorbid symptoms (fatigue, depression, insomnia, pain, and cognitive dysfunction), and health-related quality of life will be administered at baseline and at weeks 4, 8 (end of treatment), 16, and 26. Semistructured interviews will be conducted at week 8 with a subsample of 60 participants (30 per treatment arm) to understand individual experiences with the treatment sessions and their impact. Results The first study participant was enrolled in February 2022. As of January 2023, 151 participants have been enrolled. The trial is expected to be completed by September 2024. Conclusions This study is the first and largest randomized clinical trial to compare the short- and long-term effectiveness of remotely delivered MT and CBT for anxiety in cancer survivors. Limitations include the lack of usual care or placebo control groups and the lack of formal diagnostic assessments for psychiatric disorders among trial participants. The study findings will help guide treatment decisions for 2 evidence-based, scalable, and accessible interventions to promote mental well-being during cancer survivorship. International Registered Report Identifier (IRRID) DERR1-10.2196/46281
Strengths:
• Both interventions are evidence-based and backed by guidelines. Reviewers perceived the proposed virtual delivery of the interventions as practical and appropriate, especially given the context of the current pandemic. These are major strengths.
• Recruitment sites are culturally diverse, which will assist with generalizability. This is a major strength.
• The investigative team is strong and has experience with studies of this scope, size, and complexity. Their track record of successfully completing similar studies with low attrition, limited missing data, and high rates of participation among underrepresented groups was perceived as a major strength by reviewers.
• Strong patient-centeredness is evident in the responsiveness to patient input through the choice of comparators (exploring a non-CBT option), the focus on anxiety symptoms rather than diagnosis, and the plans to use the Spanish version of the Patient-Reported Outcomes (PRO) instruments.
Weaknesses:
• The reviewers identified minor to moderate weaknesses in the study design.
o The definition of "cancer survivor" lacks detail (e.g., is it defined by remission or by a cancer-free period?). This can also vary across the different types of cancers being assessed. This is a minor, fixable weakness.
o The definition of what constitutes music therapy needs additional clarification (e.g., whether the intervention is appropriate for non-musicians, where/whether music therapy overlaps with CBT, and whether the choice of music will be tailored culturally). This is a minor, fixable weakness.
• The reviewers identified minor weaknesses in the analytic plan:
o The proposed test for assessing the assumption of normality may be inappropriate and skew results. This can impact how easily the mixed-effects model can be applied.
o Sample size calculations are unclear.
o It is unclear why the study proposes stratifying by language. Stratification by site was discussed as an alternative.
• The focus on Memorial Sloan Kettering and its affiliate site in Miami may limit generalizability and scalability in lower-resourced institutions/settings. This is a minor weakness.
• Effort designated for the senior statistician is low, and the expertise of the lead music therapist and CBT therapist could not be evaluated because specific individuals to fill these roles have not been named yet. These are minor, fixable weaknesses.
• Access to the virtual interventions may be problematic for persons in low-resourced communities (e.g., those not having Internet access or a device), and the financial incentive for participation is low. These are minor weaknesses.
• The study lacked caregiver or family stakeholder involvement, and the number of stakeholder engagement meetings is low. The two patient partners are paid different rates for the same expertise. These are minor weaknesses.
Additional Comments:
It is unclear why 7 sessions/doses were chosen, given that the background literature cited used 14 sessions. There was significant discussion of the appropriateness of the 7-session dose. The rationale was not justified, and reviewers were inconclusive on whether this dose was adequate.
NOTE:
The above In-Person Review Discussion Notes are a summary of the in-person reviewer group discussion.
Online Reviewer Critiques
The below reviewer critiques were written by individual panel members assigned to review this application prior to the in-person discussion and were not altered post discussion. These reviewer critiques might not necessarily reflect the position of the reviewers at the close of the group discussion.
Reviewer 1:
Criterion 1: Potential for the study to fill critical gaps in evidence (Scientist Reviewers)
Strengths:
• The proposal indicates that anxiety is commonly experienced by many cancer survivors (major).
• The proposal indicates that cognitive behavioral therapy (CBT) can be effective for anxiety but there can be stigma around CBT. The proposal notes how CBT requires mental and emotional stamina and can therefore be too demanding. CBT can be effective, but not for all people, and the application proposes that an engaging comparison condition is warranted (major).
• The proposal highlights that there is general evidence for music therapy (MT) in the treatment of anxiety, and results of a Cochrane review (conducted by one of the authors of the proposal) support music therapy in oncology populations. However, there is a lack of MT research for anxiety in cancer survivors, a growing group of people due to improvements in technology and treatment. While the proposal notes that CBT has strong empirical support, many people may not want CBT as it can be demanding, stigmatizing, and requires mental and emotional stamina. MT may require less of these factors, and cancer survivors may be more likely to request, accept, and be actively involved in MT (as indicated in the MT literature and noted in the proposal). This application seeks to address the treatment of psychosocial factors cancer survivors experience using music therapy in a virtual delivery format, thereby enhancing accessibility to underserved communities (major).
• Based on the cited Cochrane Review of MT for oncology and other related research, the proposal identifies that there is a critical gap in the literature about virtual MT for anxiety with cancer survivors and that virtual MT may be an optimal intervention to compare with virtual CBT. The results of the proposed study would fill that critical gap in the literature (major).
Weaknesses:
• None in this section.
Criterion 2: Potential for the study findings to be adopted into clinical practice and improve delivery of care (All Reviewers)
Strengths:
• The application noted that music therapists, cancer centers, and cancer survivors would be able to use study findings and advocate for virtual MT as a treatment (major).
• The proposal notes how CBT requires mental and emotional stamina and can therefore be too demanding. As such, the application provides information on the need for this study from end-users.
• The proposal's research findings have the potential to provide valuable data that would be used to inform key stakeholders' decision making. Administrators would hire music therapists and end-users would request services. The treatment design is based on evidence and, if positive results occur, other practitioners could reproduce those positive results (moderate).
• The application notes a comprehensive plan to disseminate findings through local and national media, social media, newsletters, and professional conferences. Dr. Bradt will present results at the American Music Therapy Association conference. The dissemination plan will reach patients and stakeholders through the About Herbs newsletter (over 10,000 subscribers and millions of website visits each year) (moderate).
Weaknesses:
• MT is sometimes considered a "best buy": music therapists, while highly trained, are typically less expensive than other psychosocial interventionists. As such, MT could lead to improvements for cancer survivors across the country and result in greater access to effective treatment. These points were not made explicit in the application but constitute fixable items (minor).
• A potential barrier to this study is that there are only about 9000 Board-Certified Music Therapists in the United States; this could limit access to effective treatment should the virtual MT intervention be non-inferior to virtual CBT (minor).
• The study will take place virtually but is based in New York, New Jersey, and Miami. While New York, New Jersey, and Miami have diverse populations, the proposal does not provide opportunities for other parts of the United States (e.g., the Midwest and West Coast) to be represented (minor).
• The application did not explicitly mention publishing results in specific refereed journals (e.g., Cancer or another related high-impact oncology journal), but this is inferred due to the academic publishing track records of the authors. While the application did not mention specific academic journals, the authors are experienced scholars and have the expertise to select the best journals for the most impact (minor).
Criterion 3: Scientific merit (research design, analysis, and outcomes) (Scientist Reviewers)
Strengths:
• CBT has strong theoretical and empirical support, as noted in the application. The virtual MT in the study will be informed by the social cognitive processing model of emotional adjustment to cancer. The variables are relevant and germane to cancer survivorship (major).
• The proposal is a 2-arm randomized comparative effectiveness study of virtual MT and virtual CBT on anxiety. Other related secondary variables include depression, fatigue, insomnia, and quality of life. These are all relevant constructs for cancer survivors. The use of virtual delivery through HIPAA-compliant Zoom may increase access to services (major).
• The patient population (adult cancer survivors) and setting (virtual delivery) is appropriate for this study. The virtual delivery format is especially relevant given COVID and participants who may be immunocompromised (major).
• Randomization will be 1:1 (CBT:MT) in permuted blocks of random length and stratified by anxiety medication (yes or no) and language (English or Spanish) (major).
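For context, the scheme the application describes can be sketched in a few lines of R; the per-stratum target of 75 participants and the block lengths of 2, 4, and 6 below are illustrative assumptions, not figures from the application.

set.seed(42)

# 1:1 permuted-block sequence for one stratum: each block holds equal numbers
# of CBT and MT assignments in random order, and block length varies at random
permuted_block_sequence <- function(n_target) {
  assignments <- character(0)
  while (length(assignments) < n_target) {
    block_size <- sample(c(2, 4, 6), 1)
    block <- sample(rep(c("CBT", "MT"), block_size / 2))
    assignments <- c(assignments, block)
  }
  assignments[seq_len(n_target)]
}

# One independent sequence per stratum (anxiety medication x language)
strata <- c("med_yes.English", "med_yes.Spanish",
            "med_no.English", "med_no.Spanish")
schedules <- setNames(lapply(strata, function(s) permuted_block_sequence(75)),
                      strata)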
• CBT and MT have standardized semi-fixed protocols with definitions (major).
• The application specifies rigorous methods, including masking when possible, that are congruent with PCORI Methodology Standards. The overall study design (a 2-arm comparative effectiveness study with two follow-up measures to determine maintenance of treatment gains) is justified given the existing literature and the gap in the literature (moderate).
• MT and CBT were adequately described. Standardized semi-fixed protocols were included in the appendix with definitions (major).
• The measures of anxiety, depression, fatigue, insomnia, and mental and physical health are relevant to the study population. Cronbach's alpha values were provided for each measure and these values were strong. Most of the measures are relatively brief, indicating that the application recognizes fatigue may be a factor given the population. Participants are more likely to complete brief instruments and this aspect of the study may lessen attrition (moderate).
• There is also a qualitative component to better understand the service users' experiences in CBT and MT.
Theoretical thematic analysis will be used to analyze these data (major).
• Power analyses were conducted, sample sizes are appropriate, and, given the experience and expertise of the proposal authors and the clinical sites, the project is feasible. Providing a financial incentive to participants will help reduce attrition. All plans are realistic (major).
• Thorough application and literature review and supportive scholarship; there were 170 different references (minor).
Weaknesses:
• The application did not specifically state if MT and CBT were to be delivered in group or individual formats. It is inferred from the application and protocols (appendix) that treatments will be delivered in individual formats. Group MT and group CBT could potentially allow for more normalization, universalization, and social connectedness between participants who are cancer survivors. There are major logistical hurdles in group formats and it is more difficult to control the treatment, so this is not a major concern in the application. Moreover, individual therapy also allows the therapist to better tailor the treatment to the service user and removes many confounding variables associated with group dynamics (minor).
• While not a health-related outcome, including therapeutic alliance as an outcome measure might provide an interesting comparison between MT and CBT. Although not part of the research question, an alliance measure could quantitatively distinguish the MT and CBT conditions, particularly as there are barriers associated with CBT. In the literature, alliance is often a predictor of therapeutic outcome. Relational factors may emerge in the qualitative/interpretivist arm of the study (minor).
• The dose of each of CBT and MT is 7 sessions over 7 weeks. There was no information in the application on why 7 doses were chosen. Usual evidence-based treatment courses of CBT for major depressive disorder are 14 weeks. While the intervention protocols are theoretically informed, the dose may not be enough to induce positive, and enduring, change for participants. There was no supporting literature in the proposal that supported the rationale for the 7-dose treatment. Perhaps this omission was an oversight and constitutes a fixable item in the application (minor).
• The virtual delivery format limits the MT interventions (i.e., improvisation and instrument playing do not typically work well in virtual delivery formats) but the application focuses on receptive music therapy interventions (including song discussion, playlist construction, music-based self-management) or active music therapy interventions (songwriting) that can be delivered in a virtual format (minor).
• Congruent with other psychosocial intervention studies, it is not possible to mask participants to condition (minor).
• Cronbach's alpha values were not provided for the Spanish versions of the instruments (minor).
Criterion 4: Investigator(s) and environment (Scientist Reviewers)
Strengths:
• The study team is comprised of leading experts in integrative medicine, oncology, CBT, MT, psycho-oncology, telehealth, statistics, and qualitative analysis. The group has considerable clinical and research expertise that is documented in the application (major).
• The PI has experience in PCORI research and has received funding from PCORI, the National Institutes of Health (NIH), the Department of Defense, and the American Cancer Society (major).
• Other investigators have considerable experience in securing funding and conducting and publishing high-impact research. The multidisciplinary team is very strong and each member brings their expertise to the group. The governance structure is justified and the percent effort for all team members is appropriate (major).
• The settings, facilities, agreements, and resources are absolutely sufficient. There are a multitude of resources at the Memorial Sloan Kettering Cancer Center (ranked as one of the top two cancer centers in the US) and at Drexel University and Thomas Jefferson University. There is considerable support for this project from all who are involved, as evidenced by multiple letters in the appendix (major).
• The research team has experience delivering services in virtual formats (minor).
Weaknesses:
• None, thank you.
• The application notes the many problems that cancer survivors experience, including anxiety, depression, fatigue, insomnia, and quality of life. These are the primary (anxiety) and secondary (depression, fatigue, insomnia, and quality of life) outcomes of the proposed study and constitute critical gaps in the existing literature that are highly relevant to patients and other stakeholders. Additionally, the proposal includes a qualitative/interpretivist arm for both conditions that may provide additional patient-centric insights into virtual CBT and virtual MT (major).
• The application includes information about the benefits to patients and this is supported by the existing literature. As with all research, there is potential for harm, but harm is unlikely given the nature of CBT and MT. The application has a plan should such harm occur (major).
• The application contains five letters of support from cancer survivors who are patient partners. There are also letters of support from the American Cancer Society and the Society for Integrative Oncology (major).
• While there are far more CBT practitioners than qualified music therapists, CBT and MT are currently offered to cancer survivors. Comparing MT to CBT, a treatment that has a large base of empirical support, has the potential to provide convincing evidence to the scholarly community and stakeholders and enable MT to be a treatment of choice for cancer survivors. In the literature, patients generally accept and favor MT (major).
• The study is available in both English and Spanish (major).
• The inclusive study will involve underserved and marginalized communities (the goal is that 30% of participants will identify as a minority) and can be delivered in English or Spanish to survivors of any type of cancer (major).
Weaknesses:
• Communities who are not as affluent may have difficulty attaining a device to operate Zoom (minor).
• The application notes the degree to which the researchers have already actively engaged and involved patients and stakeholders as well as a plan for continued stakeholder involvement during the data collection period. The application describes how the study has involved patients, clinicians, and hospital and health system representatives to ensure the study receives a wide range of input. The authors are obviously committed to patients and stakeholders through soliciting their feedback and involving them through the various stages of the project. These engagements are ongoing and germane to the study (major).
• The proposed engagement plan is appropriate and tailored to the study (minor).
• Roles are clearly described in Figure 2, with patient co-investigators being equal to the PI. The organizational structure and resources will ensure that the research team will be able to engage patients and stakeholders through all aspects of the study (major).
• Table 3 on page 14 lists patient and stakeholder advisory board members. People have various types of cancer and roles, and all will have input throughout the study via regular engagement meetings (major).
• The application has budgeted funds allocated for patient partner co-investigators to attend local and national conferences to share results. This increases the number of people who will have access to the results (minor).
• The proposal included budgeted funds for publishing results in a peer-reviewed journal. While the application did not explicitly state which journals the authors would submit the results to, it is implied the open access option will be used so that all people would be able to access results. The application does not explicitly state this but this is assumed. In addition, as this project would be federally funded, perhaps open access is assumed (minor).
• The design and protocols were informed and endorsed by patient participants. The researchers had a meeting on December 14, 2020 to solicit input from patients and other stakeholders (major).
• The advisory board will ensure language is accessible and appropriate for various interested people (minor).
• The application was informed by cancer survivors. The application noted a patient/engagement stakeholder meeting on December 14, 2020 wherein feedback was received about the research questions, inclusion criteria, measures, and protocol. Two patient partners are co-investigators (moderate).
• The study includes people from underserved and marginalized communities who speak English and Spanish. On page 21, Table 7 provides estimated racial/ethnic and gender enrollment demographic data (major).
Weaknesses:
• The application does not contain a letter of support from the American Music Therapy Association (www.musictherapy.org) or the Certification Board for Music Therapists (www.cbmt.org) (minor).
• The application does not include input from cancer survivor caregivers or family members. This is likely due to the population itself being cancer survivors, who may not rely on their caregivers or family members as much as people who are receiving cancer treatments. Since the proposal contains a great deal of input from patient groups, it seems that this group will be able to take caregivers' and family members' perspectives into account (minor).
Does the application have acceptable risks and/or adequate protections for human subjects?
Yes
There are plans to refer participants to Memorial Sloan Kettering Cancer Center psychiatry services if participants experience distress; plans for adverse event reporting.
Overall Comments:
Due to ongoing improvements in oncology treatment, more people are able to survive cancer. However, cancer survivors still encounter significant distress, including anxiety, depression, fatigue, and insomnia, as well as reduced quality of life. While CBT can positively impact these factors, CBT has limitations, including stigma, being too demanding, and requiring mental and emotional stamina. MT is a preferred and effective intervention for people with cancer, but high-quality MT research with cancer survivors is necessary. As such, MT has the potential to fill a consequential gap in the literature base for cancer survivors. Therefore, the purpose of this 2-arm randomized effectiveness study is to compare virtual CBT and virtual MT on measures of anxiety, depression, fatigue, insomnia, and quality of life in adult cancer survivors ("MELODY trial").
The study includes follow-up at 8 and 16 weeks to determine maintenance of potential treatment gains. Virtual delivery ensures participants will be able to receive services during COVID and augments accessibility. The study also contains a qualitative/interpretivist arm to better understand the lived experience of participants receiving CBT and MT. The design is strong, congruent with PCORI guidelines, and the multidisciplinary study team are all experts in their respective areas. There are adequate resources, facilities, and prospective participants. The inclusive study will involve underserved and marginalized communities (the goal is that 30% of participants will identify as a minority) and can be delivered in English or Spanish to survivors of any type of cancer. Patients and stakeholders have been involved in all aspects of the design of the study and will continue to be involved throughout all stages.
While the authors provided thorough and theoretically supported rationales for the treatments, a potential weakness of the application is the dose: seven virtual sessions of CBT or MT may not be adequate to induce a measurable change at posttest and follow-up. The application did not provide a rationale or supporting literature for the dose. Overall, the study has strong potential to enable additional access to both virtual CBT and virtual MT as psychosocial treatments of choice for cancer survivors.
Reviewer 2:
Criterion 1: Potential for the study to fill critical gaps in evidence (Scientist Reviewers)
Strengths:
• The application defines a need for evidence to support the use of music therapy (MT) to support cancer survivors, as 1 in 3 suffer anxiety due to consequences of cancer treatment. This anxiety impacts daily functioning and quality of life. With COVID-19 exacerbating symptoms for many, there is an even greater need for treatment, and for treatment alternatives that are effective and accessible virtually. Moderate Strength
• While MT and Cognitive Behavioral Therapy (CBT) have been shown to be delivered virtually on a wide scale, they have not been compared to one another for effectiveness, virtually or otherwise. If successful, this study will fill a gap in evidence as well as support interventions to decrease burden. Moderate Strength
• Some patients do not wish to receive CBT, or for various reasons cannot, and MT can play an important treatment role.
Thus, proving that MT is at least non-inferior to CBT while being delivered virtually is important. Additionally, measuring which intervention is best at improving co-occurring symptoms, such as fatigue, adds strength.
Moderate Strength
• With a multitude of remote treatment options entering the health care system due to COVID-19, there is a need for information on effective treatments to help patients choose one over another, as well as for providers to recommend them. It will also be important for payers to have evidence to support MT's use virtually. Moderate Strength
Weaknesses:
• None noted.
Criterion 2: Potential for the study findings to be adopted into clinical practice and improve delivery of care (All Reviewers)
Strengths:
• Demand has been seen from the American Psychological Association (APA), which has called for comparative effectiveness research (CER) of CBT and other psychotherapeutic interventions. Furthermore, the APA states that such interventions have been disproportionately targeted for women with breast cancer (p. 3), and more interventions need to be explored to target patients with other types of cancer. The National Academy of Medicine (NAM) recognizes psychosocial services as "essential components of high-quality cancer care" (p. 2). Moderate Strength
• Support for this study to compare virtual treatment with MT or CBT is witnessed in numerous letters from providers, who also express that the virtual delivery of such treatments is "critical during and beyond the COVID-19 pandemic" (letters p. 7). One practicing oncologist expressed a "major need for non-pharmacological treatment options such as MT and CBT" and stated that the research team's study is "crucial to improve psychological well being and quality of life (QOL) for cancer survivors" (letters p. 10). Moderate Strength
• Patients have expressed the importance of this study in letters of support. One breast cancer survivor stated how well both MT and CBT worked for her, that finding ways to reach patients in underserved populations with integrative therapy through this study is a must, and that the dissemination of the results will significantly impact how survivors approach treatment for anxiety (letter p. 13).
• Patients report that CBT requires a lot of mental and emotional energy and may be demanding and taxing for survivors. MT may provide a more easy-going, personally tailored treatment that allows for less patient dropout. Major Strength for end use.
• Results should be easily replicated and implemented across a vast number of health systems. MT is already a proven and recognized intervention, and if found to be as effective as CBT when delivered virtually, it should be no more difficult to adopt and implement virtually. Furthermore, many health systems already have many social and psychosocial interventions running virtually, so adding MT should be simple. Major Strength
• Dissemination plans beyond traditional means have been well thought out and planned. Stakeholder partners will provide key information on how to get an easy-to-remember message out and have already drafted a multichannel dissemination plan that will include social media, internet blogs, support groups, brochures, newsletters and patient websites to provide study results and information for patients. The American Music Therapy Association will also play a key role in disseminating results. Major Strength
Weaknesses:
• None noted.
Criterion 3: Scientific merit (research design, analysis, and outcomes) (Scientist Reviewers)
Strengths:
• A randomized controlled trial (RCT) was chosen for validity of effectiveness, controlling large variables and measuring confounders between exposure and the primary outcome. Moderate Strength
• The researchers state they used the PICOTS (population, interventions, comparator, outcomes, time, setting) framework to guide the study. Minor Strength
• The study will test the non-inferiority of MT to CBT for anxiety while also testing which treatment is superior in addressing other symptoms that occur with anxiety, such as fatigue. Major Strength (the sample-size arithmetic behind such a non-inferiority test is sketched after this list)
• Subgroups by individual characteristics such as age, sex, race and education will be used to explore the differences of these attributes on decision making. Major Strength
• Patient-reported outcomes will be collected either by Research Electronic Data Capture (REDCap) online or over the phone, by choice of participant. All assessments and study materials will be given in English and Spanish. Major Strength
• Scales used to collect patient-reported outcomes seem appropriate. They include the Hospital Anxiety and Depression Scale (HADS), the Brief Fatigue Inventory (BFI), the Patient-Reported Outcomes Measurement Information System (PROMIS) and the Insomnia Severity Index (ISI). The ISI and the BFI have proven success and high validity when used amongst cancer patients. Major Strength
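The non-inferiority framing has a concrete design consequence worth noting: the chosen margin (how much worse MT could be than CBT on the anxiety measure while still being declared non-inferior) largely drives the required sample size. The sketch below shows the standard normal-approximation arithmetic for such a calculation; the margin, standard deviation, alpha, and power values are purely illustrative assumptions, not figures taken from the application.

```python
# A minimal sketch (not the investigators' actual calculation) of a per-arm
# sample-size estimate for a non-inferiority comparison of two mean changes
# in an anxiety score.
from scipy.stats import norm

def noninferiority_n_per_arm(margin, sd, true_diff=0.0, alpha=0.025, power=0.90):
    """Normal-approximation sample size per arm for a one-sided
    non-inferiority test of a difference in mean change scores."""
    z_alpha = norm.ppf(1 - alpha)   # critical value for the one-sided test
    z_beta = norm.ppf(power)        # quantile for the desired power
    effect = margin - true_diff     # distance from the margin under H1
    return 2 * ((z_alpha + z_beta) * sd / effect) ** 2

# Illustrative only: a 1.5-point margin on an anxiety scale with SD of 4.
n = noninferiority_n_per_arm(margin=1.5, sd=4.0)
print(f"~{n:.0f} participants per arm before accounting for attrition")
```

Under these assumed numbers the formula gives roughly 150 participants per arm; the application's own margin and variance assumptions would of course yield different totals.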
Weaknesses:
• Incentive for participants is very low. Participants will receive nothing for participating in the actual intervention for 7 weeks, and will receive a total of $100 only after completing assessments at 8 and 26 weeks as well as a 45-minute phone interview at week 8. The assessments will consist of 5 scales which are estimated to take a minimum of 30 minutes each time. Participants taking medication for anxiety will also need to complete medication diaries at 0, 8 and 26 weeks. This may lead to difficult enrollment and/or dropout. Minor Weakness
• It is unclear how the researchers will account for medication use. The application only says they will track medication use by "asking patients to complete weekly medication diaries at 0, 8, and 26 weeks" (p.10). It is also unclear how a weekly diary is to be collected during only 3 weeks of a 26-week study. Minor Weakness
Criterion 4: Investigator(s) and environment (Scientist Reviewers)
Strengths:
• Very impressive, well-qualified team. Dr. Mao, lead principal investigator (PI), is an integrative medicine and oncology provider with a focus on integrative complementary therapies, and has led large trials for mental health with a focus on cancer, including past PCORI awards. Dr. Bradt is a music therapist with a PhD in health studies who has been PI on NIH studies on mental health and pain, and is the chief editor of a MT journal. Other team members include a creative arts director who is a physician of integrative medicine, a biostatistician with previous PCORI experience, and patient partners with lived experience with cancer survival, MT, research and advocacy.
• If successful, the study will provide evidence that MT is no less effective in treating anxiety virtually than CBT, but will also focus on which treatment is better at addressing other symptoms that occur with anxiety, such as fatigue. Very patient centered in addressing co-occurring symptoms. Major Strength
• Both CBT and MT are available today and have been made available virtually due to the current pandemic. While both are proven effective as non-pharmacological treatments for anxiety in cancer patients, CBT is more widely recognized as the first-line treatment for anxiety. More evidence is needed to support that MT is just as effective for those who do not prefer CBT or are unable to keep up with the demands of CBT due to illness. It is very clear that patients are concerned about fatigue due to treatment demands, furthering the patient centeredness of this proposal. Major Strength
• The proposed study plans to answer questions that are important to patients (such as "which intervention is more effective for anxiety with co-occurring symptoms such as fatigue, and given my personal situation, which treatment delivered virtually will be better for me?"). By examining variables such as age, sex, race and education, the study will be very patient centered and allow for answering the second question with even more precision. Major Strength
• The study hopes to provide evidence to support the use of MT as a non-pharmacological treatment for anxiety, citing that one in six survivors report using more than 2 psychotropic medications, which are associated with poor QOL, financial drain, and higher risk of side effects and interactions. Finding evidence to support an alternative to medications may reduce these negative effects, which is very patient centered. Major Strength
• Interventions and comparators have been chosen based on demonstrated effectiveness as well as patient endorsement. Patients who have recently participated in MT virtually have expressed that the program has reduced barriers to access, fostered social connections and helped deal with stress, while increasing energy. Measuring all of these pieces in the proposed study to formally prove this is very patient centered. Major Strength
• The decision to focus on anxiety symptoms, rather than anxiety disorder, for the proposed study was based on conversations with patient stakeholders as well as on guidelines from the American College of Surgeons Commission on Cancer that focus on symptom severity rather than psychiatric diagnoses. (p.6) Very patient centered. Moderate Strength
• Interviews with participants after the intervention period will focus on patient-centered aspects of treatment such as acceptability, impact on anxiety and coping, digital experience, and any unanticipated benefits and/or harms. Major Strength
• Patient partners expressed the need for evidence to help cancer survivors choose treatment that is best suited to their personal experience with anxiety and other co-occurring symptoms. Major Strength
Weaknesses:
• None noted.
• Patients and stakeholders were engaged prior to this application to inform the proposal and identify the research questions and aims of the study. Moderate Strength
• Two patient partners with lived experience with cancer survival and expertise in research and advocacy are part of the research team. They will help with recruitment of, and dissemination to, underrepresented groups. They have already drafted a far-reaching multi-channel dissemination plan via social media and across patient support groups. They will co-present results at local and national advocacy conferences. Major Strength
• An 8-person advisory board has been formed and will include 5 patients with lived experience across 4 different cancer groups, 2 clinical providers, and 1 community stakeholder from the American Cancer Society. They are outlined to provide input throughout the study on protocols and outcomes, recruitment, engagement and dissemination. Minor Strength
Weaknesses:
• Payers are an overlooked stakeholder group. Minor Weakness
• While music therapists are on the study team, there are none on the advisory board. Minor Weakness
• Decision making amongst partners is not described. Minor Weakness
• The advisory board will only meet twice a year. This is not sufficient for engagement and input throughout the study in all areas as outlined. This is also not consistent with the researchers' statement in the application that "hosting regular engagement meetings will ensure observance of the PCORI Principle….and allow all major decisions regarding study." (p.14) Minor Weakness
• Advisors will only be paid $50 a meeting. Additionally, they will be required to take a 6-hour research training for which they will only be compensated $150.00. Minor Weakness
Does the application have acceptable risks and/or adequate protections for human subjects?
Yes
Overall Comments:
The proposed study will compare two interventions (CBT and MT, delivered on virtual platforms) for treating anxiety symptoms in cancer survivors. While both interventions have proven effective, CBT is the recommended first-line treatment, and the interventions have not been compared virtually or otherwise. If successful, the study hopes to prove that MT is no less effective than CBT when delivered virtually, and hypothesizes that MT will be more effective at decreasing fatigue. The researchers have chosen to study these interventions with cancer survivors specifically because they cite that this group of patients has the fastest-growing rate of anxiety and co-occurring symptoms such as fatigue.
Letters of support and meetings with survivors confirmed the need for an alternative treatment to CBT for their anxiety, as CBT can drain energy and increase fatigue. This proposal could fill a gap in evidence to support the use of MT virtually on a wider scale and increase adoption, implementation, and coverage. With many health systems delivering care virtually due to the current pandemic, it should be easy to replicate the results and implement them into practice. With many choices for virtual treatments today, this study may also help patients and their providers choose a program that is best for them, based on their personal preferences and needs, knowing that one choice is no less effective on anxiety symptoms.
The study team is very strong, with an experienced PI who is an integrative medicine provider and oncologist who focuses on complementary therapies such as MT, acupuncture and massage. Co-Is consist of experienced researchers, music therapists and biostatisticians, and most have successful past PCORI- and/or NIH-funded research. Two very experienced patient partners round out the team and are a strong asset. Support is seen from all partners, as well as from multiple sites, providers and organizations. The proposal is very patient centered with its focus on symptoms and interventions that are important to patients.
The engagement section of the application, however, is very small and mostly outlines players and vague activities, which is what brings the score down from excellent to very good. The advisory board may be missing a payer representative, it will only meet twice a year to inform all aspects of the study, and how decisions will be made has not been addressed. Additionally, board members will only receive $50 compensation per meeting and will be expected to take a 6-hour research training for which they will receive $150. Pay is similarly low for study participants. If the engagement plan were strengthened and better described, it would make for a stronger proposal for what seems to be a very promising study otherwise. If successful, the study would be generalizable to a broad range of patients experiencing anxiety symptoms, not just cancer survivors, and MT could join the ranks of first-line treatment recommendations and increase its use and reach.
Reviewer 3:
Criterion 1: Potential for the study to fill critical gaps in evidence (Scientist Reviewers)
Strengths:
• The proposal notes there will likely be over 22 million cancer survivors living in the United States by the end of this decade and that nearly one in three of these survivors suffer from anxiety symptoms, indicating a substantial clinical burden. (Moderate)
• While a growing body of evidence indicates that both cognitive behavioral and music therapies (CBT and MT) are associated with greater reduction in anxiety among cancer survivors compared with usual care, the two therapies have not been directly compared. The American Psychological Association (APA) has called for "continued and further research on the comparative effectiveness" of CBT and other psychotherapeutic interventions, identifying a critical gap in current knowledge. (Moderate)
• The application identifies several gaps in knowledge that affect clinical decision making. First, not all individuals are able to complete a full course of CBT, a first-line treatment for anxiety, complicating treatment decisions. Second, people may be reluctant to pursue CBT due to the socio-cultural stigma surrounding psychotherapy in different communities. Finally, for people who do not respond to or do not wish to pursue CBT, it remains unclear whether MT is an effective treatment option that is non-inferior to CBT, further complicating treatment decisions. (Moderate)
• Another key evidence gap is the lack of diverse representation in CBT trials. Most trial participants have been well-educated and white. (Major)
• The proposed randomized clinical trial would provide high quality evidence to compare the effectiveness of cognitive behavioral and music therapies to treat anxiety among cancer survivors. The proposal plans to enroll a racially/ethnically diverse, heterogeneous population from urban, suburban, and rural settings to ensure that findings are applicable to diverse cancer survivors. (Major)
Weaknesses:
• None noted.
Criterion 2: Potential for the study findings to be adopted into clinical practice and improve delivery of care (All Reviewers)
Strengths:
• Stakeholder organizations the investigators are connected with, such as the American Cancer Society, may include the study findings in their discussions of treatment options for anxiety. (Moderate)
• The primary end-users of the comparative effectiveness results of this study will be the patients with anxiety who receive the cognitive behavioral or music therapy and the clinicians who recommend therapies, and the proposal includes extensive dissemination plans to engage both groups. (Moderate)
• The investigators cite studies that show that 20-25% of CBT participants fail to complete a full treatment course and that patient stakeholders commented that a full treatment course requires "significant mental and emotional stamina" that may be too demanding and taxing for some survivors, indicating the need for alternative therapies for anxiety. The investigators also cite studies that support the use of MT to treat anxiety in cancer populations. Finally, several studies demonstrate that both cognitive behavioral and music therapies can be successfully delivered virtually. (Moderate)
• Demonstrating that virtual MT is as effective as virtual CBT to treat anxiety in cancer survivors could inform treatment decisions of both clinician and patient stakeholders by providing a treatment option that avoids the stamina and energy that CBT requires. Since the interventions of interest will be delivered virtually, it is very likely that others could reproduce the findings. (Moderate)
• The investigators propose to take advantage of the existing infrastructure at Memorial Sloan Kettering (MSK) and their patient and stakeholder partners to disseminate using social media, internet blogs, support groups, community outreach, informational brochures, newsletters, and patient websites to provide other patients with treatment-related information based on study results. The investigators will also leverage their pre-existing relationships with stakeholder organizations such as the American Cancer Society to disseminate information to their members. These are solid plans that are likely to succeed. (Major)
Weaknesses:
• The proposal does not explicitly identify who will make decisions based on the comparative effectiveness results this study will produce, but one would assume that clinician and patient stakeholders would use the results to guide treatment decisions. (Minor)
Weaknesses:
• The investigators do not describe or cite the methods they used in their sample size calculations. The proposal also needs sample size calculations to describe the magnitudes of the heterogeneity of treatment effects their proposed sample will be able to detect with high power. (Minor)
• The plans to plot the outcome measure trajectories by randomization arm over time and summarize each outcome measure at each assessment time by treatment arm do not seem to focus on the trajectories of individual subjects, which is the objective of the statistical analysis. (Minor)
• The proposed linear mixed effects model analyses are not completely clear. For example, the investigators do not clearly describe exactly which time points they will include in the analyses nor how they intend to model the time-by-intervention arm interaction. Overall, the proposal focuses on statistical tests and does not mention plans to assess magnitudes of intervention group differences using quantities such as 95% confidence intervals. (One conventional specification is sketched after this list.)
• The investigators note that discussions with patient stakeholders revealed that anxiety symptoms represent an outcome that survivors notice and care about. (Moderate)
• The investigators note that when they engaged their patient stakeholders, those with prior CBT experience commented that a full treatment course requires "significant mental and emotional stamina" that may be too demanding and taxing for some survivors, suggesting that patients are interested in alternative, effective treatments. Further discussions with patient stakeholders revealed that many of them used music to cope with difficult emotions during their cancer journeys, indicating interest in MT. (Minor)
• The video-based music and cognitive behavioral therapy protocols were developed based on prior research of the study team and the extensive literatures on music and cognitive behavioral therapies for anxiety. The investigators suggest that there is an important need for effective mental health treatments that everyone can easily access, such as the interventions to be studied in this project.
• The project will appropriately include two patient co-Investigators, Ms. Macleod and Ms. Walker, as well as an advisory board consisting of patients representing diverse cancer experiences and key clinical and community stakeholders. All patient and stakeholder partners provided letters of support describing their roles for the proposed project and these roles are appropriate. (Major)
• The investigators actively engaged patient and clinical stakeholders in the development of the project and will continue this engagement during the execution of the project. Patient and clinical stakeholders provided guidance in areas such as the subject inclusion and exclusion criteria, follow-up time and study outcomes. The project scientists and patient stakeholders also have joint publications. The investigators appropriately plan to hold bi-annual meetings with their patient/stakeholder partners to review study progress, develop or revise recruitment and engagement strategies, and plan for dissemination efforts. The bi-annual meetings seem frequent enough to allow the clinical and community stakeholders to provide input and perspective. (Moderate)
• The proposed Engagement Plan appropriately includes cancer survivor co-Investigators as well as an appropriately chosen advisory board consisting of patients representing diverse cancer experiences and key clinical and community stakeholders. Of note, the PI and research team (including two patient partner co-Is) have extensive experience with patient and stakeholder engagement in past and ongoing research projects. (Moderate)
• The proposal clearly describes the roles of the patient co-Investigators and the advisory board. Ms. Macleod and Ms. Walker will contribute their patient perspectives and play key roles in patient engagement and outreach efforts. (Moderate)
• The proposal clearly describes the organizational structure of the study team and provides appropriate financial support to both patient co-Investigators throughout the project. (Moderate)
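To make the reviewer's point about model specification concrete, one conventional way to write down such an analysis is a mixed model with a subject-level random intercept, categorical assessment times, and a time-by-arm interaction, summarized with confidence intervals rather than bare p-values. The following is a minimal sketch under assumed column names (subject, arm, week, hads_anxiety); it is not the investigators' actual analysis plan.

```python
# A minimal sketch of a linear mixed-effects analysis of repeated anxiety
# scores; 'df' is a hypothetical long-format table with one row per subject
# per assessment.
import pandas as pd
import statsmodels.formula.api as smf

def fit_anxiety_model(df: pd.DataFrame):
    # Treating week (e.g., 0, 8, 26) as categorical gives each assessment
    # its own arm contrast instead of forcing a linear trajectory.
    model = smf.mixedlm(
        "hads_anxiety ~ C(arm) * C(week)",
        data=df,
        groups=df["subject"],  # random intercept per participant
    )
    result = model.fit()
    print(result.summary())
    # Report estimated differences with 95% confidence intervals, not just
    # test statistics, per the concern raised above.
    print(result.conf_int(alpha=0.05))
    return result
```

The interaction terms in this specification directly estimate the arm difference at each assessment, which is the quantity a non-inferiority conclusion would be read from.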
Weaknesses:
• None noted.
Does the application have acceptable risks and/or adequate protections for human subjects?
Yes
Overall Comments:
The objective to compare the effectiveness of virtual cognitive behavioral and music therapies to reduce anxiety among cancer survivors using a randomized clinical trial is well-motivated by patient concerns and will provide high quality results. The research team is strong, with complementary expertise in integrative medicine, cognitive behavioral and music therapies, statistics and telemedicine. The PI, Dr. Mao, is experienced in designing and executing clinical trials aimed at alleviating physical and psychological symptoms, as well as improving quality of life in cancer patients, and has led previous PCORI studies. Patients provided useful input into the development of the project, which will include two cancer survivor co-Investigators. The plans to disseminate the study results are solid, and the comparisons of treatments that can be delivered virtually are timely with the move toward telemedicine, especially given the current pandemic.
On the downside, the roles of some of the co-Investigators are not well-justified due to duplication of effort with other investigators. In addition, the proposal fails to identify the Lead Music and Lead Cognitive Behavioral Therapists, who will play major roles in the delivery of the therapies. The project also does not devote sufficient resources to statistics. Finally, the proposal provides insufficient detail concerning the sample size calculations and an inadequate description of the proposed linear mixed effects model analysis.
Reviewer 4:
Criterion 1: Potential for the study to fill critical gaps in evidence (Scientist Reviewers)
Strengths:
Weaknesses:
Criterion 2: Potential for the study findings to be adopted into clinical practice and improve delivery of care (All Reviewers)
Strengths:
• The decision makers who will be able to utilize the results of this research project to make health care choices will be cancer survivor patients who are experiencing anxiety, their families and caregivers, and their treatment providers such as oncologists, counselors, psychologists and doctors. Major Strength
• The results of this study will also be utilized by end-users at auxiliary professional organizations such as the American Music Therapy Association (AMTA), the American Psychological Association (APA), the American Society of Clinical Oncology (ASCO), the National Comprehensive Cancer Network (NCCN), the American College of Surgeons Commission on Cancer, the American Cancer Society, Memorial Sloan Kettering Cancer Center (MSKCC), the Drexel College of Nursing, the Sidney Kimmel Cancer Center at Thomas Jefferson University and various medical and patient advocacy groups. Major Strength
• Representative end-users such as these will be engaged partly through the Patient/Stakeholder Advisory Group, which will participate in all phases of the research project, and partially through the dissemination of results. Moderate Strength
• The Co-Investigators, Dr. Bradt and Dr. Trevino, are affiliated with many of these organizations and other cancer centers and plan to leverage these organizations to disseminate results to these end-users as well. Major Strength
• Patients, providers and professional organizations contributed to the selection of Music Therapy (MT) and Cognitive Behavioral Therapy (CBT) for a comparative effectiveness study with virtual treatment delivery and had input throughout the formulation of the research question and methods. These end-users expressed concern about the adverse effects of polypharmacy in current practice and about the effectiveness of CBT with underserved and diverse population groups (the highest success rates have previously been associated with white, highly educated males), and desired further research comparing virtual CBT with other virtual methods for treatment of anxiety. Major Strength
• Stakeholder concerns with the increasing population of cancer survivors, the increasing use of virtual health care delivery throughout the medical community, and the lack of research on CBT for a diverse population contributed to the selection of these study treatment modalities and delivery methodology. Major Strength
• Selection of a virtual treatment modality will facilitate easier usage by providers and patients and avoid close contact in healthcare in the context of the COVID pandemic. Major Strength
• This research plan includes a dissemination plan focused on many layers of end-users and many modalities of delivery. The patient partners have drafted a multi-channel dissemination plan utilizing various forms of social media to target other patients. The Integrative Medicine Service at MSKCC has an award-winning website, "About Herbs", and a monthly e-newsletter that targets patients, clinicians and the public throughout the country and will be utilized to disseminate study updates and findings. Study findings will be distributed to study participants at the study's conclusion and they will be invited to the annual cancer survivor conference held each year. Major Strength
• The protocols and implementation plan for the project will be formalized and available for replication, and the outcome measurement tools are already validated and in practice, making replication easier. Moderate Strength
Weaknesses:
• The Music Therapy intervention is explained as initially developing a relationship of trust and then moving from a passive role involving music (listening) to a more active role such as developing a playlist, singing and writing songs. Singing and writing songs require an integrated cognitive activity level, and it would be difficult to engage all personalities in this method. One of the aims of this study is to include underrepresented populations and diverse groups of patients, and it may be difficult to engage all randomly selected patients in this level of social and mental activity. Minor Weakness
• A barrier to reproduction would be the challenge of standardizing CBT for comparison purposes. While the time frame and schedule have been standardized, the methodology is much harder to replicate for transference. Minor Weakness
• Prior to and during the formulation of the proposal, patients expressed a desire to have at least some qualitative outcome measures to address the quality of life issues associated with anxiety, and also a desire to focus on the symptomatology of anxiety as opposed to being evaluated for a clinical disorder of anxiety. Those concerns were incorporated into the research plan and will be partially addressed with semi-structured interviews instead of quantitative outcome measures exclusively. Major Strength
• Patients and stakeholders also expressed the opinion that CBT might require a more significant effort, may be too demanding, and may contribute to social stigma, and that members of underrepresented populations are more likely to drop out and would be interested in an alternative treatment. Major Strength
• Patients requested that patients currently taking medications be included in the study, and this was incorporated into the final proposal. Major Strength
• Patients and stakeholders noted the lack of representation of underserved populations in previous studies and sought to close the evidence gap by advocating for recruitment of a representative sample of actual patients including different sexes, different ethnic and racial backgrounds, socio-economic conditions, and different types of cancer. Major Strength
• The proposed interventions of MT and CBT have both been utilized in the field of behavioral health, and both have been found to have positive outcomes. The comparison of these treatments has not been done using a virtual modality. The increased use of virtual treatment, driven by COVID and by the accessibility and cost of healthcare, has created the need for more research using virtual modalities. This proposal stands to leverage the increase in virtual treatment and to test its effectiveness with two existing therapies. Major Strength.
• Patients and stakeholders contributing to this project expressed the barriers some patients face in engaging with and adhering to CBT and welcome an alternative treatment modality. Moderate Strength.
• Involved patients expressed the desire that the follow-up cover a longer-term time frame, and that was also incorporated into the proposal. Minor Strength.
Weaknesses:
• MT is not available in all cancer centers. Minor weakness.

Strengths:
• This proposal's research team will build upon an existing patient, stakeholder, and community organizational relationship with MSKCC. The research plan includes a Patient/Stakeholder Advisory Board, which met on December 14, 2020 to initiate plans and procedures for this proposal, including patient-centered research questions, appropriate inclusion and exclusion policies, the research protocol, and appropriate outcome and measurement tools. Patients representing different cancer experiences were included in the Board's makeup. This Board will communicate regularly but meet in person twice a year with the research team and the patient Co-Investigators (co-Is). Their responsibilities are indicated to include providing ongoing feedback throughout the project, including on the research protocol, ongoing recruitment and engagement strategies, and dissemination efforts. Major strength.
• The application indicates that the patient/stakeholder advisory board will monitor adherence to the PCORI Principles of Partnerships guidelines. Major strength.
• The organizational structure is adequate to support activities and include patients throughout the process of implementation. There are two patient co-investigators named that have had experience in previous research. Moderate strength.
Weaknesses:
• Even though the patients have been designated as co-Is, the description of their responsibilities focuses on recruitment and dissemination. There is no mention of how they will interface with the research team on implementation factors. Minor weakness.
• There isn't a description of how the Advisory Board will initiate ongoing communication and oversight, since they meet with the research team and patient co-Is twice a year. There's no mention of how ongoing communication will take place. Minor weakness.
• There's no mention of auxiliary stakeholders, such as payers, purchasers, psychiatrists, nurses, mental health counselors, and caregivers, being included on the Advisory Board or consulted for input. Minor weakness.

Does the application have acceptable risks and/or adequate protections for human subjects?
Overall Comments:
Due to increased success in cancer treatment, the numbers of cancer survivors have increased greatly and are projected to continue to increase at great rates. The physical and psychological burden of a cancer diagnosis is great initially, but those who survive, and their families, are typically left with considerable anxiety inherent in the seemingly random nature of cancer, factors involving capacity, fatigue, financial burden, and perhaps new limitations. No certain method of recognizing, or certain modality of treating, this anxiety has been established.
Patients and stakeholders indicate that the usual first line of mental health treatment, cognitive therapy and psychopharmacology, isn't always effective or desired. In conjunction with this project they called for an alternative approach, a more diverse research population, and attention to desired qualitative outcomes such as quality of life and common symptoms such as fatigue. This study addresses and incorporates investigating these requests. In general, virtual treatments are increasing at a great rate and in many cases are highly desirable due to COVID, procedure costs, accessibility, and ease. This proposal leverages this trend and tests its approach between two existing treatments and their virtual delivery. It stands to advance several different concepts of treatment.
There is some weakness in the proposal regarding the availability and standardized use of MT, the ability to replicate, and a lack of specific language regarding the methodology for incorporating patient and stakeholder involvement in implementation, with a seeming emphasis on recruitment and dissemination in their roles. This proposal stands to benefit many people who have already suffered through a cancer diagnosis and are deserving of help with the prolonged effects. Testing two potential treatments and advancing the field of virtual medicine make this a strong proposal.
Reviewer 5:
Criterion 1: Potential for the study to fill critical gaps in evidence (Scientist Reviewers)
Strengths:
• Major: The application indicates that anxiety is one of the most common mental health issues facing cancer survivors. About 30% of cancer survivors experience significant anxiety symptoms, which impair functioning and are associated with poor treatment adherence and worse quality of life.
• Moderate: The application indicates that no study has compared music therapy (MT) to cognitive behavioral therapy (CBT) for anxiety symptoms in cancer survivors.
• Minor: The American Psychological Association (APA) and other organizations have called for more comparative effectiveness research of CBT and other psychotherapeutic interventions. This is only a minor strength because these guidelines do not specifically identify the comparison of CBT vs. MT.
• Major: The recruitment of a racially and ethnically diverse sample could address the research gap regarding the effectiveness of CBT and other treatments in minority groups.
Weaknesses:
• None noted.
Criterion 2: Potential for the study findings to be adopted into clinical practice and improve delivery of care (All Reviewers)
Strengths:
• Major: The application notes the surge in telehealth, especially during COVID-19. The proposed virtual delivery of interventions is especially timely, potentially more accessible, scalable, and likely to improve care delivery.
• Major: The findings are likely to be reproducible given the rigorous methods. The findings are likely to be generalizable given the recruitment of a more diverse sample.
• Major: The inclusion of Spanish increases the reach of these interventions to a growing minority group.
• Minor: The dissemination plan includes generic themes such as multi-channel dissemination, social media (e.g., Instagram, Facebook, Twitter, and YouTube), internet blogs, support groups, community outreach, informational brochures, newsletters, and patient websites. Among these, the application explicitly indicates the website "About Herbs".
Weaknesses:
• Minor: The application does not provide information that supports the demand for this kind of study from end users and does not identify who will use the study's findings. It seems assumed that clinicians and other stakeholders will use the study's findings.

Criterion 3: Scientific merit (research design, analysis, and outcomes) (Scientist Reviewers)
Strengths:
• Major: The application identifies an appropriate study population (cancer survivors experiencing anxiety symptoms) and clearly describes the inclusion and exclusion criteria.
• Major: The primary (anxiety subscale from the Hospital Anxiety and Depression Scale [HADS]) and secondary outcomes are well justified and assessed with valid and reliable measures.
• Major: The application provides clear and convincing evidence that the two comparators are justified. For example, they report on the effectiveness of CBT and MT. Professional cancer societies and clinical guidelines endorse the treatments.
• Major: The sample size and power analysis are based on estimates from past studies. The analytic framework (linear mixed-effects models) and approaches to missing data are clearly described and justified.
• Major: The study seems feasible, and it has realistic assumptions about subject enrollment, timeline, and attrition (15%). Evidence from a previously funded PCORI trial (recruitment of 27.5% Black participants; 10% withdrew from CBT; <9% missing data) further supports the proposed plan's feasibility.
Weaknesses:
• Minor: The randomization should be stratified by site to reduce risk of bias.
• Minor: The application does not describe the strategies used to recruit a diverse sample (besides recruiting from large and diverse metropolitan areas in NY and Miami).
• Minor: The exclusion of patients with ongoing cancer treatment is not justified. While the inclusion of individuals with ongoing treatment would complicate the trial, these patients are likely to experience high anxiety and would benefit from the treatments.

Strengths:
• Major: The investigators have conducted a similar PCORI-funded trial for cancer survivors (comparing CBT to acupuncture). They have also published on MT. They have the required research, statistical, and clinical expertise.
• Moderate: The level of effort seems adequate (e.g., 20% for the principal investigator (PI)). Figure 2 describes the organizational structure of the study team. The role and responsibility of each team member are clearly defined, complementary, and integrated.
• Moderate: The application indicates access to the planned study population, institutional resources and support, and collaborative agreements.
Weaknesses:
• None noted.
Criterion 5: Patient-centeredness (All Reviewers)
Strengths:
• Major: The application indicates that reducing anxiety, even if not necessarily an anxiety disorder, is important to patients.
• Moderate: The virtual assessment and delivery of treatments can address barriers to access to care. The application also noted the stigma associated with psychotherapy, which the virtual delivery could reduce. MT is likely to be less stigmatized than CBT.
• Moderate: CBT is a first-line treatment for anxiety and one of the most common psychotherapeutic interventions. The application notes that, according to public-facing websites, most Comprehensive Cancer Centers offer some MT services.
Weaknesses:
• Minor: The application does not provide clear information that comparing MT to CBT is important to patients.

Strengths:
• Moderate: Two patient partners are included as co-investigators (co-Is) and five are on the patient/stakeholder advisory board. The patient/stakeholder advisory board is tasked with providing input on all aspects of the study and meets biannually with the research team.
• Minor: The application indicates that patient partners will receive appropriate compensation.
Weaknesses:
• Moderate: The engagement plan provides little information on the level of engagement from patients. Even the date (December 14, 2020) raises questions about the actual contributions of patients. Given the admirable intention to recruit from minority populations, the application provides little information on how the team will engage these groups. Also missing is information on whether the patient/stakeholder advisory board includes members from underserved communities.
• Moderate: The patient/stakeholder advisory board does not include caregivers or family members, who play a vital role in cancer survivors' lives. There is limited engagement with mental health advocacy organizations.
• Minor: The application provides limited information on the resources and roles of study partners. The engagement plan does not seem tailored to the study.

Does the application have acceptable risks and/or adequate protections for human subjects?
Yes
Overall Comments:
The proposed randomized clinical trial (RCT) tests whether music therapy (MT) is not inferior to cognitive behavioral therapy (CBT) for reducing anxiety among 300 cancer survivors. Professional cancer societies recommend the two treatments, but the application indicates that no comparative effectiveness research (CER) has compared these two interventions, particularly when delivered remotely. The trial will also test whether MT is superior to CBT at reducing fatigue. The application will explore additional aims, such as investigating whether baseline characteristics moderate treatment effects, and will conduct semi-structured interviews for a qualitative study.

The patient and stakeholder engagement seems weak, with little or no involvement of caregivers/family members, mental health advocacy groups, and other stakeholders besides a few patients.

Despite some minor and fixable weaknesses, the application is excellent, it is scientifically rigorous, and it has several notable strengths. The two well-matched treatments consist of seven 60-minute sessions delivered remotely. Even without a pandemic, such remote interventions can increase access and reduce treatment barriers for those who need or prefer therapy online. The planned inclusion of minority groups (>30%) and the Spanish-language version will help reach a broader population and provide more generalizable findings.
other co-Is have vast experience with integrative therapies, cancer, and research. Many of the team members have previously worked together. Major Strength.
• Memorial Sloan Kettering and its multiple sites have provided detailed support, and proposed plans for support seem very appropriate. Major Strength.
• All partners and team members have pledged major enthusiasm and support for this project in their letters of support, and roles are clearly delineated. Major Strength.
Weaknesses:
• Two patient partners on the team, Macleod and Walker, will receive substantially different compensation for what appears to be equal qualification, expertise, and effort. Minor Weakness.
• There is no one on the team representing Miami Cancer Center. Minor Weakness.
Criterion 5: Patient-centeredness (All Reviewers)
Strengths:
• Since the interventions require internet access, cancer survivors without such access will not be able to use the therapies. (Minor)

Criterion 3: Scientific merit (research design, analysis, and outcomes) (Scientist Reviewers)
Strengths:
• The proposal clearly describes the proposed randomized clinical trial design, the cognitive behavioral and music therapy interventions, and the anxiety and other symptom outcomes, and discusses relevant literature that supports the investigators' research design. (Moderate)
• With a few minor exceptions, the overall Research Plan, including the study design, subjects studied, outcomes investigated, and statistical analysis plan, closely adheres to the PCORI Methodology Standards. The focus of the statistical analysis on the time-by-intervention arm interaction is appropriate. (Moderate)
• The proposal well justifies the choice of the randomized clinical trial study design as the most appropriate way to obtain a valid measure of effectiveness and control for confounding. (Major)
• The investigators appropriately propose to gather a diverse sample of cancer survivors from the MSK regional network in New York and New Jersey and from the Miami Cancer Institute in South Florida. The investigators aim to enroll >30% non-white participants and will rely on the Miami Cancer Institute to enhance Hispanic accrual. These are appropriate patient populations, and the investigators have experience working with them. (Moderate)
• The investigators based their choice of outcomes on the scientific literature and input from patient partners on what aspects of their cancer symptom experience are important to them and will use measurement instruments that have been previously validated in both English and Spanish versions. (Moderate)
• The proposal clearly describes the cognitive behavioral and music therapy interventions and justifies the objective to compare them in a diverse sample. (Major)
• The inputs into the sample size calculations are appropriate, so the proposed sample size is likely appropriate as well. (Moderate)
• The study plan seems feasible, and the investigators have pilot tested the data collection and management system. The investigators are experienced with trials of cancer survivors and calculated reasonable estimates of potential patient pools and recruitment rates. The project timelines and milestones are all realistic. (Moderate)
• The investigators appropriately base their sample size calculations on tests of the time-by-intervention arm interaction. (Minor)
• The research team, consisting of the principal investigator (PI), Dr. Mao, the co-PIs, Drs. Bradt and Trevino, and co-investigators Drs. Lopez and Panageas, is well qualified to carry out the proposed research, and several of the investigators have worked together in the past. The research team includes complementary expertise in integrative medicine, cognitive behavioral and music therapies, statistics, and telemedicine. (Major)
• The PI, Dr. Mao, is experienced in designing and executing clinical trials aimed at alleviating physical and psychological symptoms and improving quality of life in cancer patients. These trials were of a similar size, scope, and complexity as the proposed trial. (Moderate)
• The proposed levels of support for Drs. Mao, Bradt, Trevino, and Lopez are appropriate and well justified. (Moderate)
• The Physician-in-Chief at MSK, Dr. DeAngelis, wrote a letter of support for this project but did not offer any specific resources. (Minor)
• The research facilities and resources at Memorial Sloan Kettering Cancer Center (MSKCC) and the Miami Cancer Institute are excellent and will support the successful completion of the proposed research. (Moderate)
Weaknesses:
• The proposed 1.2 calendar months of effort for Dr. Panageas is not sufficient, since she will be responsible for conducting complex statistical analyses as well as overseeing the work of Mr. Baser. (Minor)
• The proposed 3.6 calendar months of effort for Dr. Liou is not well justified, since his duties duplicate the roles of Dr. Mao and Ms. Seluzicki. (Minor)
• The Lead Music and Lead Cognitive Behavioral Therapists will play major roles, but the application does not identify the people who will perform this work, and their expertise cannot be evaluated. (Moderate)
• The proposal does not mention that these interventions are available to patients right now. (Minor)
• Major: The application provides a clear conceptual framework for the proposed study, supported by relevant background literature and the investigators' past experiences with CBT and MT.
• Major: The proposed randomized controlled study is well justified and adheres to the PCORI Methodology Standards. The trial will follow the Consolidated Standards of Reporting Trials (CONSORT) guidelines for nonpharmacological interventions. The blinding and the matching of time for MT and CBT are examples of rigorous methods.
• The proposed sample size, power calculations, recruitment, and attrition are based on past research experience (including a previous PCORI RCT) or published evidence. Previous work also indicates that the study is feasible and the team can complete this work. Overall, the proposed CER will provide information on psychotherapeutic interventions that can improve cancer survivors' quality of life.

*This is the end of the Summary Statement*
|
v3-fos-license
|
2019-01-22T22:24:07.648Z
|
2018-10-31T00:00:00.000
|
57192925
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.4615",
"pdf_hash": "205a2527d0579d9da697f91ffa8ea6326ac7d812",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43300",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "08fd945d5b225019ceb1f8a2c2562f2f45e7f876",
"year": 2018
}
|
pes2o/s2orc
|
Fine‐scale population differences in Atlantic cod reproductive success: A potential mechanism for ecological speciation in a marine fish
Abstract Successful resource‐management and conservation outcomes ideally depend on matching the spatial scales of population demography, local adaptation, and threat mitigation. For marine fish with high dispersal capabilities, this remains a fundamental challenge. Based on daily parentage assignments of more than 4,000 offspring, we document fine‐scaled temporal differences in individual reproductive success for two spatially adjacent (<10 km) populations of a broadcast‐spawning marine fish. Distinguished by differences in genetics and life history, Atlantic cod (Gadus morhua) from inner‐ and outer‐fjord populations were allowed to compete for mating and reproductive opportunities. After accounting for phenotypic variability in several traits, reproductive success of outer‐fjord cod was significantly lower than that of inner‐fjord cod. This finding, given that genomically different cod ecotypes inhabit inner‐ and outer‐fjord waters, raises the intriguing hypothesis that the populations might be diverging because of ecological speciation. Individual reproductive success, skewed within both sexes (more so among males), was positively affected by body size, which also influenced the timing of reproduction, larger individuals spawning later among females but earlier among males. Our work suggests that spatial mismatches between management and biological units exist in marine fishes and that studies of reproductive interactions between putative populations or ecotypes can provide an informative basis on which determination of the scale of local adaptation can be ascertained.
higher in the marine realm than it is for terrestrial systems (Hauser & Carvalho, 2008) such that even limited genetic exchange between populations can erode genetic divergence. Fishery and conservation management units are often established at very large spatial scales (e.g., COSEWIC 2010), although many appear to have limited conformity with independent demographic units and are often based on decades-old interpretations of the flow patterns of ocean currents (Cadrin, Friedland, & Waldman, 2004).
The Atlantic cod (Gadus morhua) is a prime example of such a species. Management units in the Northwest Atlantic have remained unchanged since the 1940s (ICNAF, 1952), the largest extending more than 1,000 km, despite data indicating that some of these large units contain multiple, smaller units that represent biologically and genetically distinct populations. For example, the Southern Scotian Shelf/Bay of Fundy fisheries management unit (Northwest Atlantic Fishery Organization Division 4X) includes groups of cod that have spatially distinct spawning locations, temporally different spawning periods, and genetically different responses to temperature change (Hutchings et al., 2007; Oomen & Hutchings, 2015). Similar mismatches between the spatial scales of management and putative local adaptation almost certainly exist in the Northeast Atlantic. All cod that inhabit Norwegian coastal waters north of 62° are managed as a single unit (www.ices.dk), despite evidence of variation in genetic structure, spawning location, and spawning time within the unit (Johansen et al., 2009). South of 62°, Norwegian cod are considered part of a single North Sea cod management unit (www.ices.dk). Yet cod that inhabit fjords and coastal waters along the southeast Norwegian coast (Skagerrak) can differ genetically from North Sea cod, a distinction that seems temporally stable, suggesting that cod spawn in, and inhabit, coastal Skagerrak waters throughout the year rather than the North Sea (Cianelli et al., 2010; Knutsen et al., 2007, 2011; Rogers et al., 2011).
There is compelling evidence that cod inhabiting the inner waters of some Skagerrak fjords are phenotypically and genetically distinct from cod inhabiting outer waters of the same fjord (Øresland & André, 2008). In this respect, the most extensive data are available for Risør Fjord (58.7°N, 9.2°E). Although they are not physically restricted from moving between the two areas, cod inhabiting inner and outer Risør waters can differ genetically from one another. Recent analyses of single nucleotide polymorphism (SNP) data have revealed that two different genotype clusters of cod coexist in various proportions along coastal Skagerrak; a "fjord" ecotype dominates the waters of the inner fjords, for which Risør is an excellent example, whereas a "North Sea" ecotype is often predominant in outer-fjord waters. Phenotypically, cod from inner Risør grow at a slower rate than those from outer Risør (Kuparinen, Roney, Oomen, Hutchings, & Olsen, 2016). Differences in life-history traits are evident among several coastal Skagerrak fjords and appear to be spatio-temporally stable.
Thus, Risør Fjord would appear to be an ideal location in which to study questions related to the spatial scale of local adaptation in marine fish with high dispersal capabilities. However, notwithstanding the phenotypic and genetic differences between the two populations described above, it is not known whether individuals would have equal reproductive success in a mixed-population spawning situation. Here, we explore this hypothesis. Equivalence in reproductive success under such conditions would imply an absence of intrinsic barriers to interbreeding attributable to potential differences in factors likely to affect mating probability, such as body size, behavior, physiology, and (or) gamete quality. In contrast, if one population experiences higher average reproductive success when cohabiting the same breeding environment, ceteris paribus it might indicate that genetic and life-history differences between inner and outer populations reflect processes contributing to reduced probability of interbreeding, and increased probability of reproductive isolation, between populations (Rundle & Nosil, 2005;Schluter, 2000).
| Parental fish collection
Skagerrak (Figure 1) is a strait bounded by southeast Norway, southwest Sweden, and Denmark's Jutland peninsula, connecting the North Sea and the Kattegat sea area (the latter being the entrance to the Baltic Sea). The Norwegian Skagerrak is highly heterogeneous, comprising a multitude of habitats ranging from sheltered mud flats and wave-exposed cliffs to semi-enclosed fjords and deep (~700 m) near-coast waters (Saetre, 2007).
Risør Fjord (Norwegian Skagerrak) encompasses ~20 km², providing habitat for two putative populations of cod, one inhabiting the inner fjord and the other the outer fjord. Four to six weeks prior to spawning (December 2014), adults were collected by fyke net from the inner and outer fjord at Sørfjorden and Østerfjorden, respectively. Fish were measured for length, tagged externally (using a T-Bar anchor tag labeled with a unique identification code), and subsequently placed in a single spawning basin at the Institute of Marine Research Flødevigen Research Station (~60 km south of Risør), where they spawned undisturbed in a ~1-m deep, 9 m × 5 m spawning basin lined with natural rock. Although the overall sex ratio (determined by postmortem inspection) was female-biased (1.6:1.0; 45 females, 28 males), sex ratio did not differ between populations (χ² = 0.40, p = 0.53), the number of females:males being 24:12 and 21:16 for outer-fjord (length range: 45-63 cm) and inner-fjord cod (45-57 cm), respectively. Water in the spawning basin, pumped regularly from a depth of 75 m, averaged 7.4°C (ambient temperature).
Lights were adjusted to mimic the natural photoperiod and cod were fed ~2 kg of frozen shrimp daily.
| Offspring sampling protocol
Eggs were sampled daily (between 08:00 and 10:00 hours), using a small container through which the spawning-basin outflow passed.
The spawning period began when eggs were first evident in the egg collector (20 January 2015) and ended when no eggs had been collected for five consecutive days (the final date of egg collection was 24 April 2015; Figure 2). Eggs were collected on 90 of the 94 days that comprised the spawning period. At each daily collection, the volume of eggs was measured, using a 4-l graduated cylinder, and placed into a separate incubation tank.
Eggs were incubated at 6.1 ± 0.5°C (mean ± SD) until they were visually assessed to be at 50% hatch (15.0 ± 0.6 days; mean ± SD) at which time genetic samples were taken for 50 individual larvae sampled at random (each whole larva was preserved individually in microtubes containing 250 µl of ThermoFisher RNAlater). During the 94-day spawning period, daily genetic samples were available for 4,500 larval individuals.
One month after completion of spawning, the 73 adults were sampled, otoliths extracted, and the following morphological traits recorded: total standard length (mm); stomach weight with and without contents (g); total weight (g); liver weight (g); gonad weight (g); and pelvic fin length (mm), a sexually dimorphic trait that appears to affect male cod mating success (Skjaeraasen, Rowe, & Hutchings, 2006). The gonadosomatic index (GSI = gonad weight/total body weight) and the hepatosomatic index (HSI = liver weight/total body weight) were calculated as proxies for body condition (Lambert & Dutil, 1997). Due to poor health, two adults were sacrificed early in the spawning season, at which time only length, weight, otolith, and sex were recorded.
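Both condition indices are simple ratios of post-mortem measurements. As a minimal R illustration (the data frame adults and its column names are hypothetical placeholders, not objects from the study):

adults$GSI <- adults$gonad_g / adults$total_g  # gonadosomatic index: gonad weight / total body weight
adults$HSI <- adults$liver_g / adults$total_g  # hepatosomatic index: liver weight / total body weight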
Age estimates were obtained from otoliths for 72 of the 73 adults.
One otolith from each individual was embedded in a black polyester resin and transversally sectioned at the Otolith Research Laboratory at the Bedford Institute of Oceanography, Canada. Images of sectioned otoliths were then obtained under reflected light, using an Axiocam Mrm camera mounted to a Zeiss SteREO Lumar v12 stereomicroscope. All images were processed to enhance local contrast between the opaque and translucent zones, after which ages were estimated by counting annuli along transects starting from the nucleus in the center of the otolith, proceeding until the edge.

FIGURE 1: Atlantic cod were collected from the inner (red-lined area) and the outer (blue-lined area) Risør Fjord on the Norwegian Skagerrak coast. Modified from Kuparinen et al. (2016).

FIGURE 2: Daily volume (ml) of eggs collected throughout the spawning period (20 January-24 April 2015). Absent estimates of egg volume were due either to an absence of eggs or an overflow of the egg collector, resulting in an inaccurate egg-volume estimate.
| Genetic analysis
Family reconstruction was based on tissue samples from offspring and parents. DNA was extracted from parental fin clips, using an OMEGA Bio-tek tissue extraction kit, and from whole offspring, using the OMEGA Bio-tek 96-well plate DNAeasy extraction kit.
All samples were amplified, using two multiplexes consisting of four loci each. Multiplex 1 comprised three tetranucleotide repeat loci (Gmo8, Gmo19, and Tch11) and one trinucleotide repeat locus (Gmo35; Miller et al., 1988; O'Reilly et al., 2002). Multiplex 2 comprised three dinucleotide repeat loci (Gmo132, Gmo2, Tch13) and one tetranucleotide repeat locus (Gmo34; Brooker, 1994; Miller et al., 1988; O'Reilly et al., 2002). Both multiplexes were chosen based on the high levels of heterozygosity at each locus, genotyping reliability, and demonstrated efficiency for paternity studies in Atlantic cod (Dahle, Jørstad, Rusaas, & Otterå, 2006; Wesmajervi, Westgaard, & Delghandi, 2006). Loci were amplified by polymerase chain reaction, as specified by Wesmajervi et al. (2006) and Dahle et al. (2006), and then analyzed using the capillary gel electrophoresis instrument 3130xl Genetic Analyzer (Applied Biosystems). Allelic sizes were calculated with instrument-specific software.

Family reconstruction of the allelic data from both offspring and parents was performed with the programme COLONY v2.0.6.1 (Jones & Wang, 2010). Larvae were run in batches of 10 days (~500 larvae per batch). All runs used the full-likelihood method with high precision and a random seed number. Genotyping error was set to 0.02 per locus. Each analysis was repeated, using medium, long, and very long runs, to assess whether the maximum likelihood configuration had been reached. (The length of each run of the programme is determined by the user [Jones & Wang, 2010]; the longer the run, the greater the number of configurations considered in the searching process, and the greater the likelihood that the maximum likelihood configuration will be found.)
| Statistical analyses
Individual reproductive success was defined as the number of offspring to which an individual's genotype had been identified as one of those contributing to the fertilization (male) or production (female) of each offspring.
Generalized linear models (GLMs) were used to examine the relative contributions of population identity and trait morphology to individual reproductive success. Models were run separately for each sex, such that the number of offspring produced (reproductive success, or R_S) was a function of population (inner and outer fjord) and the following morphological variables, measured (with one exception) postmortem: body length (prior to spawning), body weight, HSI, GSI, age, and the residual mean pelvic fin length (calculated from the residuals of linear regressions between pelvic fin length and body length, sensu Skjaeraasen et al., 2006): R_S ~ Population + Length + Weight + HSI + GSI + Age + Residual pelvic fin length. Because of the high degree of skewness in the number of sired offspring (see below), the GLM for males incorporated a quasi-Poisson error structure. The model for females was run under the assumption of a normal distribution. Model selection was performed following the protocol suggested by Zuur, Ieno, Walker, Saveliev, and Smith (2009), using stepwise model reduction. Residual plots were examined to ensure appropriate model fits to the data. To examine the robustness of the model selection process and final models, stepwise forward model selection was also performed. All analyses were conducted with R version 3.1.0.
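As a rough sketch of this modelling setup, the following R code fits sex-specific GLMs of the stated form and performs one backward-reduction step. The data frames males and females and their column names (Rs, Population, PelvicResid, etc.) are invented for illustration; the authors' actual code is not published.

# Hypothetical data frames: one row per adult, with offspring count Rs and the
# covariates named in the text.
m_full <- glm(Rs ~ Population + Length + Weight + HSI + GSI + Age + PelvicResid,
              family = quasipoisson(link = "log"), data = males)   # skewed counts
f_full <- glm(Rs ~ Population + Length + Weight + HSI + GSI + Age + PelvicResid,
              family = gaussian, data = females)                   # normal errors

# Backward stepwise reduction: assess each term, drop the weakest, refit.
drop1(m_full, test = "F")              # F tests are appropriate for quasi-likelihood fits
m_red <- update(m_full, . ~ . - Age)   # e.g., remove a non-significant term
anova(m_red, m_full, test = "F")       # compare nested models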
The influence of body size on temporal variation in reproductive success was examined, using linear models run separately for each sex, such that the day of the spawning period was a function of the mean length of fish known to have successfully spawned on each day (as determined by genetic analysis), that is, Day ~ Mean length of spawning fish. Mean length was calculated as both the arithmetic and weighted mean lengths of spawning fish (i.e., those whose R_S > 0), the latter being the length of each spawning fish weighted by the relative number of offspring produced by that fish on a given day.
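This two-step calculation (daily means, then a linear model of day on mean length) can be sketched in R as follows, assuming a hypothetical long-format table spawn with one row per parent per spawning day and invented column names; this is an illustration, not the authors' code.

# spawn: hypothetical data frame with columns day, length, n_off
# (number of offspring assigned to each spawning parent on each day; rows with n_off > 0)
arith <- tapply(spawn$length, spawn$day, mean)          # arithmetic mean length per day
wtd   <- sapply(split(spawn, spawn$day),
                function(d) weighted.mean(d$length, w = d$n_off))  # offspring-weighted mean
day   <- as.numeric(names(arith))

summary(lm(day ~ arith))  # Day ~ arithmetic mean length of spawning fish
summary(lm(day ~ wtd))    # Day ~ weighted mean length of spawning fish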
Cumulative rank curves were used to visualize skewness in reproductive success such that the proportion of offspring fertilized was plotted against the rank of the individual in terms of highest number of offspring produced. Deviation from the 1:1 ratio line is indicative of skewed reproductive success. Population differences in reproductive phenotype were examined, using t tests.
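A cumulative rank curve of this kind is straightforward to construct; the short R sketch below (the helper name plot_rank_curve and the toy counts are invented for illustration) shows the idea.

# rs: vector of per-individual offspring counts for one sex (zeros add nothing to the curve)
plot_rank_curve <- function(rs) {
  p <- cumsum(sort(rs, decreasing = TRUE)) / sum(rs)
  plot(seq_along(p) / length(p), p, type = "s", xlim = c(0, 1), ylim = c(0, 1),
       xlab = "Proportion of individuals, ranked by success",
       ylab = "Cumulative proportion of offspring")
  abline(0, 1, lty = 2)  # 1:1 line; deviation above it indicates skewed reproductive success
}
plot_rank_curve(c(987, 351, 320, 100, 45, 10, 0, 0))  # toy example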
| Parentage analysis
Microsatellite genotypes were successfully obtained for all putative parents with a minimum of two successful replicate amplifications per locus per adult. There was some evidence of potential null alleles at Gmo19 (frequency = 0.068) and Tch11 (0.070) for cod from the inner- and outer-fjord populations, respectively, although it is possible that the MICRO-CHECKER software misinterpreted minor deviations from Hardy-Weinberg equilibrium (HWE) as evidence of null alleles. Given the lack of consistency of null alleles between populations, and that failure to meet HWE is not typically grounds for discarding a locus (Selkoe & Toonen, 2006), these loci were retained in the analyses. None of the other markers exhibited evidence of scoring error, large allele dropout, or null alleles.

Short and medium runs in COLONY v2.0.6.1 produced variable parentage results, whereas results from long and very long runs were nearly identical. Thus, the results from long runs were used for parental assignment. The maximum likelihood was clearly obtained during long runs, providing further indication that the long run provided sufficient time for the programme to reach the best configuration. In the instance where an offspring was assigned to an unknown parent, the genotype of "unknown" parents was compared to the known parental genotypes. If an unknown parental genotype matched at least 5 of 8 loci of a known parent, the unknown parent was reassigned as the known parent. The final parentage analysis resulted in successful paternal assignment for 94.0% of the larvae (4,221 of 4,489) and successful maternal assignment for 93.5% of the larvae (4,198 of 4,489).
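The at-least-5-of-8-loci reassignment rule amounts to a small genotype-matching routine. The R sketch below is a hypothetical illustration only: the function names and the genotype representation (a named list of 8 loci, each holding the two allele sizes) are assumptions, not the authors' implementation.

# Count loci at which two 8-locus genotypes carry identical allele pairs
n_matching_loci <- function(g1, g2) {
  sum(mapply(function(a, b) identical(sort(a), sort(b)), g1, g2))
}

# Reassign a COLONY "unknown" parent to a known adult when >= 5 of 8 loci match;
# 'known' is a named list of known parental genotypes
reassign_parent <- function(unknown, known, min_match = 5) {
  hits <- vapply(known, n_matching_loci, numeric(1), g2 = unknown)
  if (max(hits) >= min_match) names(which.max(hits)) else NA_character_
}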
| Population differences in reproductive success and reproductive phenotype
Reproductive success was significantly influenced by population origin (Table 1). For both males and females, outer-fjord individuals experienced, on average, lower reproductive success than their inner-fjord counterparts. Comparing males, the average number of offspring fertilized by inner-fjord males (228.4; range: 0 to 987) was more than three times that of outer-fjord males (66.1; range: 0 to 351). A threefold difference in reproductive success was also evident among females, inner-fjord females producing an average of 152.5 fertilized eggs (range: 0 to 319) compared to an average of 48.6 eggs (range: 0 to 266) for outer-fjord females. Differences in average reproductive success between populations were not reflected by differences in reproductive phenotype.
Neither post-spawning condition (males: p-value = 0.27; females: p-value = 0.68) nor HSI (males: p-value = 0.11; females: p-value = 0.12) differed between populations. The same was true for average male pelvic fin length (p-value = 0.22). Populations did not differ in the initial day of spawning for males (p-value = 0.864) or females (p-value = 0.114). The only trait that differed between populations was average post-spawning GSI, which was higher among males from the inner fjord (1.2%) compared to those from the outer fjord (0.2%; p-value = 0.014). The same was true for females, the GSI for those from the inner fjord (1.2%) exceeding that for females from the outer fjord (0.7%; p-value < 0.001). As noted previously, neither sex ratio nor number of spawners differed between populations.
| Individual differences in reproductive success
Of the 73 fish in the spawning basin, offspring were assigned to 57 during the spawning period. Overall, males exhibited a greater skew in reproductive success than females (Figure 3). Of the 24 males who contributed gametes, the top-ranked individual (inner fjord) sired 23.0% of the offspring and the top three males (two from inner fjord, one from outer fjord) were responsible for 50.5% of the fertilized offspring. Females exhibited less of a reproductive skew; among the 33 females who produced fertilized eggs, the top-ranked female contributed only 7.5% of the offspring. The top three females were responsible for 20.2% of the offspring, substantially less than the male equivalent. When the cumulative reproductive success curves were examined for each population separately, the males and females exhibited skews similar to those evident when the data were pooled (with the exception of the outer-fjord females, who exhibited a more pronounced skew; Figure 3).
| Correlates of reproductive success
Following model simplification, the primary correlates of male reproductive success (number of offspring sired) included body weight, population identity, and GSI (Table 1). Body weight was a significant predictor with a slightly positive coefficient (0.002), indicating that increases in weight had a positive additive effect on the number of offspring sired. As noted above, males from the outer fjord experienced lower reproductive success. The GSI was the least significant predictor, its negative regression coefficient indicating that lower GSI at the end of the spawning period was associated with a higher number of offspring sired.

TABLE 1: Output from generalized linear models, after model simplification, between individual reproductive success (number of offspring sired/fertilized) and several fixed effects, all but length having been measured post-spawning: population identity; length; weight; hepatosomatic index (HSI); gonadosomatic index (GSI); residual mean pelvic fin length. Notes: Data are shown for three groups of spawners: (a) males; (b) females; and (c) a subset of females (for which the two largest individuals were excluded from the analysis). Only significant (α < 0.05) fixed effects are presented; the population fixed effect is represented by "PopOuter", the outer-fjord population.
Regarding body weight, although the regression coefficient was only slightly positive (0.002; Table 1), the statistical significance was high. Upon further examination, it was evident that the three most successful males were among the largest, resulting in a strong positive correlation, given the sample size. However, beyond these three top males, there was little or no relationship between reproductive success and body size (Figure 4). The pattern in these data indicates that there is not a continuous pattern of association between male body size and male reproductive success; if a male was not among the heaviest, weight had no demonstrable effect on reproductive success.
Reproductive success in females was explained by maternal length, population identity, and weight (Table 1). Length had the largest effect, indicative of a positive additive effect on the number of offspring produced. Although the full data set indicated that weight had a negative additive effect on the number of offspring produced, the negative correlation appeared to be heavily influenced by the two largest females, both of whom had very low reproductive success (Figure 4b). When these two females were excluded from the analysis, weight was dropped from the model as a non-significant variable, leaving length, population identity, and HSI as the remaining correlates (Table 1).
| Effect of body size on reproductive timing during the spawning period
The relationship between body size and timing of reproduction differed between sexes. Among males, there was a significant negative relationship between day of the spawning period and body size, such that larger males were dominant at the beginning of the spawning period (Figure 5a). Among females, however, larger individuals tended to spawn later than smaller females (Figure 5b). Both models suggest that smaller females were comparatively more active at the beginning of the spawning season. Notwithstanding the statistical significance of most of the associations, the explained variation, as reflected by r², was low, being less than or equal to 0.10 for all models. This might be attributable to the observation that individuals of an intermediate size (500-540 mm; both sexes) did not exhibit an obvious temporal pattern in spawning activity (Figure 5).
| DISCUSSION
The present study examined correlates of reproductive success in a broadcast-spawning marine fish at an exceptionally fine spatial and temporal scale. Based on daily estimates of parentage for almost 4,500 offspring, several broad-scale patterns emerged. Firstly, despite their small (<10 km) spatial separation, average individual reproductive success differed between the two populations, after accounting for phenotypic variability in several traits. Secondly, reproductive success was skewed within both sexes, albeit much more so among males. Thirdly, body size affected reproductive success differently between sexes, being a strong positive predictor among females but much less so among males. Lastly, body size influenced the timing of reproduction, larger individuals spawning later among females but earlier among males.
The results of the present study indicate that genetically distinctive populations of Atlantic cod can differ considerably in individual reproductive success when competing for mating and reproductive opportunities. These differences appear not to be attributable to phenotypic variability between populations. Despite similarity in terms of number of individuals, average body size, sex ratio, initiation of spawning period, and body condition, cod originating from outer Risør fjord were less reproductively successful than those from inner Risør fjord, a finding consistent for both males and females. Based on data reported in a separate study, the duration of the spawning period was the same for males but longer for inner-fjord females.
Our work suggests that there are intrinsic differences between the inner and outer fjord cod populations that affect individual reproductive success. These might be related to population differences in agonistic behavior; males can be significantly more aggressive in some populations than others (Rowe, Hutchings, Skjaeraasen, & Bezanson, 2008), and aggression contributes to a dominance hierarchy that is associated with fertilization success (Hutchings, Bishop, & McGregor-Shaw, 1999). Population differences in reproductive success might also be related to differences in sound production by males, mate choice by females, or both. There might also be genetically based differences in reproductive success related to local adaptation, given a high correlation between genetic origin and the presence of three inversion zones in the genome (Sodeland et al., 2016). Cod in the outer fjord are dominated by a North Sea genomic signature, whereas a coastal-fjord genotype is predominant in inner Risør.
The skew in fertilization probability observed here is well within previously reported estimates of male cod reproductive success. The top three of 28 males in our study fertilized 50% of the total number of eggs produced during the spawning period, an estimate that falls within the range (48%-93%) for the top three males (range in number of males: 18-37) reported among four Northwest Atlantic spawning groups.
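Reproductive skew of this kind can be quantified as the cumulative share of offspring attributable to the top-ranked males; a minimal sketch with made-up parentage counts:

# Cumulative share of offspring sired by the top-k males.
# 'sired' holds per-male offspring counts (synthetic example values).
import numpy as np

sired = np.sort(np.array([900, 650, 700, 60, 45, 30] + [10] * 22))[::-1]
share_top3 = sired[:3].sum() / sired.sum()
print(f"top 3 of {sired.size} males sired {share_top3:.0%} of offspring")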
The skew in male reproductive success lends firm support to the existence of a duality of male spawning strategies in cod. The release of gametes by a spawning pair is preceded by a ventral mounting of the female by a single male (Brawn, 1961;Hutchings et al., 1999).
Males have also been observed to adopt a satellite strategy and to release milt alongside a spawning pair (Hutchings et al., 1999; Rowe et al., 2008). Studies suggest that males who participate in paired-spawning events are afforded this opportunity because of their rank within a dominance hierarchy, established by factors such as size and aggressive behavior (Brawn, 1961; Hutchings et al., 1999). Indeed, the most highly successful males in the present study were among the heaviest, lending credence to the hypothesis that they were the top-ranked males within the dominance hierarchy and, thus, were most likely to participate in paired-spawning events. For individuals not among the top-ranked males, body size had little to no effect on reproductive success. Thus, small to moderately heavy males were likely to be lower-ranked individuals who, failing to obtain mating opportunities, would be more likely to adopt the satellite spawning behavior, resulting in lower, but presumably non-trivial, levels of fertilization success.
For females, a skew in reproductive success was evident, although much less so than that observed in males, consistent with the strong dependence of egg production on female body size (Kjesbu, Solemdal, Bratland, & Fonn, 1996; McIntyre & Hutchings, 2003).
An unanticipated finding was that the two largest spawning females had unexpectedly low reproductive success, so much so that they were solely responsible for a negative relationship between weight and the number of offspring in the initial model. Not only were these two females among the three heaviest fish in the spawning basin; the only other fish within their size range was also a female and had zero reproductive success. (Exclusion of these three females from our analyses still yielded highly significant population-origin effects in the GLM.) The lack of success among the largest females might be attributable to a lack of suitably sized males for paired spawning, possibly reflecting a physical limitation (males failing to grasp the females during the ventral mount) or a behavioral choice on the females' part.
The mean size of reproductively successful males decreased over time, suggesting that larger males dominated mating opportunities during the early part of the spawning period. As the spawning season progressed, and larger dominant males presumably began to exhaust their energy and sperm reserves, smaller males were perhaps better able to secure reproductive success. The temporal shifts in size ranks of reproductively successful males reported here, and elsewhere (Bekkevold, Hansen, & Loeschcke, 2002; Skjaeraasen & Hutchings, 2010), for cod might have consequences for the strength and direction of sexual selection in the presence of size-selective fisheries.
Based on fisheries-independent survey data, Hutchings and Myers (1993) reported that younger males initiated (and completed) spawning earlier than older males. This might be interpreted as conflicting with our results, although we note that the range in age of cod analyzed by Hutchings and Myers (1993) (6-16 years) was considerably greater than the range considered here (4-8 years). In contrast to males, the mean size of reproductively successful females increased throughout the spawning period, a finding concordant with the experimental work by Hutchings et al. (1999) and the age-based meta-analysis by Hutchings and Myers (1993). In contrast, Marteinsdottir and Bjornsson (1999) suggested that larger females begin spawning earlier than smaller females, based on the relative prevalence of females in spawning condition on spawning grounds.
It has been hypothesized that older, potentially more experienced, spawners achieve greater reproductive success. However, age was not a significant correlate of individual reproductive success in the present study. As noted above, this might be attributable to the limited range in parental age (mean: 5.0 ± 1.0 SD year).

FIGURE 5 Day of spawning period versus mean length (mm) of reproductively successful (a) male and (b) female Atlantic cod. In the right-hand plots, mean length is proportionately weighted by the relative number of offspring produced on a particular day. Output from linear models between day of spawning and length: unweighted mean length (males: p-value = 0.001, r² = 0.10; females: p-value = 0.036, r² = 0.04); mean length weighted by the relative number of offspring sired/fertilized on a given day (males: p-value = 0.003, r² = 0.08; females: p-value = 0.076, r² = 0.02).
We cannot discount the possibility that the spawning basin might have altered spawning behavior and subsequent levels of reproductive success relative to those that cod would experience under natural conditions. The average depth of the sloped spawning basin (1 m, although deeper in places) was comparatively shallow when compared to the reported depths of many spawning locations in the wild (Rowe & Hutchings, 2003). Environmental variables, such as temperature, were also held invariant in the spawning basin.

The present study provides novel insights into the spatial scale at which reproductive success can vary within a marine fish species that exhibits high dispersal capabilities. Differential reproductive success between spatially disparate groups of the same species is consistent with the hypothesis that these groups represent different populations and (or) ecotypes at a spatial resolution thought to be uncommon in highly mobile, broadcast-spawning fish. It is also noteworthy that the inner- and outer-fjord groups of cod examined here are likely to each be predominantly comprised of genomically different cod ecotypes. This raises the intriguing hypothesis that the populations might be diverging because of ecological speciation, that is, the evolution of reproductive isolation between populations resulting from ecologically based but divergent natural selection (Rundle & Nosil, 2005; Schluter, 2000).
Our work contributes to a growing body of research highlighting the influence that the mating system in broadcast-spawning fish can have on individual reproductive success (Rowe & Hutchings, 2003).
Our conclusion of small-scale population differentiation is also consistent with the finding that the temporal population dynamics of coastal Norwegian cod can be spatially structured, differing among fjords and between sheltered/exposed areas (Rogers, Storvik, Knutsen, Olsen, & Stenseth, 2017). Recent establishment of small-scale (1 km²) marine protected areas provides field-experimental support for extremely local demographic processes in terms of survival and size structure of coastal Skagerrak cod (Fernández-Chacón, Moland, Espeland, & Olsen, 2015).
A fundamental challenge to achieving successful resource-management and conservation outcomes is to correctly identify the spatial scale at which strategies for harvesting and threat mitigation are developed (Cianelli et al., 2010; Conover, Clarke, Munch, & Wagner, 2006; Kuparinen et al., 2016). A mismatch between the spatial scale of a management unit and the spatial scale of a biological unit may result in ineffective actions. Our work suggests that such spatial mismatches exist in marine fishes and that studies of reproductive interactions between putative populations or ecotypes can provide an informative basis for ascertaining the scale of adaptation.

ACKNOWLEDGMENTS

Funding was provided by a grant to RAO, the Prediction and Observation of the Marine Environment network (mobility grants to NER and RAO), and the European Regional Development Fund (Interreg IVa "MarGen" project). We are grateful for the constructive criticisms proffered by referees on an earlier version of the manuscript.
CONFLICT OF INTEREST
None declared.
AUTHORS' CONTRIBUTIONS
NER, RAO, and JAH conceived the ideas and designed methodology; NER collected the data with assistance from RAO, HK, and EMO; NER and JAH analyzed the data; NER and JAH led the writing of the manuscript. All authors contributed critically to the drafts and gave final approval for publication.
|
v3-fos-license
|
2023-01-18T15:02:35.753Z
|
2017-11-14T00:00:00.000
|
255948019
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1186/s12985-017-0893-3",
"pdf_hash": "f9a35cb27bcfb19a107d8af038a25d86be5a5a81",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43303",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "f9a35cb27bcfb19a107d8af038a25d86be5a5a81",
"year": 2017
}
|
pes2o/s2orc
|
A cheap and open HIV viral load technique applicable in routine analysis in a resource limited setting with a wide HIV genetic diversity
HIV infection in Cameroon is characterized by a great viral diversity, with all HIV-1 groups (M, N, O, and P) and HIV-2 in circulation. HIV group determination is very important if tailored viral load analysis and treatments are to be applied. In our laboratory, HIV viral load is carried out using two platforms, Biocentric and Abbott, depending on the HIV group identified. Biocentric, which quantifies HIV-1 group M, is a cheap and open system useful in resource limited settings. The objective of this study was to compare the viral load analyses of serologically group-indeterminate HIV samples using the two platforms with the view of reducing cost. Consecutive samples received between March and May 2014, and between August and September 2014, in our laboratory for HIV viral load analysis were included. All these samples were analyzed for their HIV groups using an in-house ELISA serotyping test. All HIV-1 group M samples were quantified using the Biocentric test while all other known atypical samples (HIV-1 groups N, O and P) were analyzed using the Abbott technique. HIV group-indeterminate samples (by serotyping) were quantified with both techniques. Among the 6355 plasma samples received, HIV-1 group M was identified in 6026 (94.82%) cases; HIV-1 group O, in 20 (0.31%); HIV-1 group M + O, in 3 (0.05%); and HIV-2, in 3 (0.05%) cases. HIV group-indeterminate samples represented about 4.76% (303/6355), and only 231 of them were available for analysis by the Abbott Real-Time HIV-1 and Generic HIV Viral Load techniques. Results showed that 188 (81.39%) samples had undetectable viral load in both techniques. All the detectable samples showed high viral load, with a mean of 4.5 log copies/ml (range 2.1–6.5) for Abbott Real-Time and 4.5 log copies/ml (range 2–6.4) for Generic HIV Viral Load. The mean viral load difference between the two techniques was 0.03 log10 copies/ml and a good correlation was obtained (r2 = 0.89; P < 0.001). Our results suggest that cheaper and open techniques such as Biocentric could be useful alternatives for HIV viral load follow-up quantification in resource limited settings like Cameroon, even with its high viral diversity.
Background
Human immunodeficiency virus (HIV) infection is a major public health problem in the world, particularly in sub-Saharan Africa where the majority of patients live. An outstanding characteristic of the virus is its genetic variability, which has been attributed to high rates of mutation [1], recombination and viral turnover [2]. To date, HIV is divided into two types: HIV-1 and HIV-2. HIV-1 has been subdivided into four phylogenetically distinct groups: M for major (or main), O for outlier, N for non-M/non-O (or new), and P [3,4], while HIV-2 is subdivided into nine groups: A-I [5]. According to the demographic health survey (DHS) 2011, the seroprevalence of HIV is estimated at 4.3% in Cameroon [6]. This infection is marked by a great genetic diversity, with the cocirculation of all types and groups (HIV-1 M-P and HIV-2). This diversity has been shown to impact on diagnosis (possibility of false negative results) [7], on treatment (some studies have reported that HIV-1 O is naturally resistant to non-nucleoside reverse transcriptase inhibitors because of the presence of the Y181C mutation in the RT gene [8]), and on the follow-up of patients. Therefore, diagnostic techniques (screening and molecular biology), follow-up and treatment options really depend on the type and/or group of HIV. HIV-1 O, for example, has a broader diversity than HIV-1 M [9] and causes great difficulty for diagnosis. The high genetic diversity of HIV-1 also has a major impact on the plasma quantification of HIV-1 RNA [10,11]. HIV group determination is therefore very important if tailored viral load analysis and treatments are to be applied.
In the biological follow-up of HIV infected patients, plasma (RNA) viral load is the key parameter currently recommended by WHO. In fact, it is a good marker of therapeutic adherence, disease progression and treatment efficacy. In addition, according to recent WHO recommendations, it will be used as the main therapeutic follow-up parameter rather than CD4 counts.
For HIV plasma viral load analyses, the Virology Department of Centre Pasteur of Cameroon uses two platforms: the Generic HIV Viral Load assay (Biocentric, Bandol, France) and the Abbott Real-Time HIV-1 assay (Abbott Molecular, Wiesbaden, Germany). The choice of technique is based on the HIV-1 group harbored by the patient. The Generic HIV Viral Load assay, used for the quantification of HIV-1 group M, is a cheap and open system whose usefulness has been shown in resource limited countries like Cameroon. It has also been shown to be applicable in the context of high viral diversity among HIV-1 group M. The Abbott Real-Time HIV-1 assay is broader in terms of the range of HIV groups covered [10]; however, because of the unaffordable cost of analysis, this test is less implemented in resource limited settings. In order to choose an adequate and cheap HIV viral load technique in our laboratory for each sample, HIV type/group determination is performed routinely using an in-house ELISA assay as previously described [12]. In spite of the systematic application of this discriminatory test, the types/groups of some samples remain indeterminate. The objective of this study was to compare the viral loads of serologically indeterminate HIV samples using two techniques, the Generic HIV Viral Load assay (Biocentric, Bandol, France) and the Abbott Real-Time HIV-1 assay (Abbott Molecular, Wiesbaden, Germany), with the view of reducing cost in such genetically divergent viral populations.
Description of the study site
The Centre Pasteur of Cameroon (CPC) is one of the reference HIV laboratories in Cameroon. It plays a central role in evaluating HIV tests intended to be used for routine analyses in Cameroon. As part of this role, the CPC assessed the utility of a cheaper test, the Biocentric method, for serologically HIV-indeterminate samples. All the data were collected in the Virology service from patients who requested HIV viral load analyses at CPC. Two platforms are used for viral load quantification, selected according to HIV serotyping results: Biocentric is specifically dedicated to HIV-1 group M quantification while Abbott is used for other HIV-1 variants.
Serotyping
Between March and May 2014, and between August and September 2014, 6355 consecutive samples were registered in the Virology laboratory of Centre Pasteur of Cameroon for viral load analysis. As a routine test, prior to all viral load analysis in our laboratory, HIV-group serotyping was performed on all these samples using an in-house ELISA as previously described [12]. Briefly, this test uses HIV peptides of the V3 loop and gp41/36 regions of HIV-1 groups M, N, O and P as well as HIV-2 peptides. A first test, "two peptides" format, explores the V3 region of HIV-1 groups M (HIV-1 M) and O (HIV-1 O). Negative samples in the "two peptides" assay were re-tested in a second ELISA known as the "ten peptides" format using peptides mapping the gp41/36 region of HIV-1 groups M, O, N and HIV-2/SIVsm on one hand and V3 peptides of HIV-1 groups M, O (subgroup H and consensus), N, P, and HIV-2/SIVsm on the other hand.
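The two-step serotyping logic can be summarized as a small decision function; the peptide labels and return values below are illustrative placeholders rather than the assay's actual output codes.

# Sketch of the two-step ELISA serotyping decision flow:
# a "two peptides" V3 screen, then a "ten peptides" gp41/36 + V3
# confirmation for samples negative in the first step.
def serotype(two_peptide_hits: set, ten_peptide_hits: set) -> str:
    if "V3_M" in two_peptide_hits and "V3_O" in two_peptide_hits:
        return "HIV-1 M+O"
    if "V3_M" in two_peptide_hits:
        return "HIV-1 M"
    if "V3_O" in two_peptide_hits:
        return "HIV-1 O"
    # negative in the first ELISA: fall through to the ten-peptide format
    for label in ("HIV-1 M", "HIV-1 O", "HIV-1 N", "HIV-1 P", "HIV-2"):
        if label in ten_peptide_hits:
            return label
    return "indeterminate"  # quantify with both platforms

print(serotype(set(), {"HIV-2"}))   # -> HIV-2
print(serotype(set(), set()))       # -> indeterminate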
Plasma HIV RNA quantification
All HIV-1 M samples identified by serotyping were quantified by the Generic HIV Viral Load assay (Biocentric, Bandol-France), which targets a well-conserved LTR region of HIV-1 [13]. Non-M samples (HIV-1 N, O and P) identified by serotyping were quantified by Abbott Real-Time HIV-1 assay (Abbott molecular, Wiesbaden-Germany), which targets a highly conserved integrase-coding region of the pol gene [14]. For the purpose of this study, HIV group-indeterminate samples (by serotyping) were quantified with both techniques.
a. Generic HIV Viral Load assay (Biocentric, Bandol, France)
RNA was extracted manually from 1 mL of all HIV-1 group M plasma samples using the QIAamp viral RNA mini kit (QIAGEN, Courtaboeuf, France) according to the manufacturer's instructions. Purified RNA was eluted in 60 μL of molecular grade water. A volume of 10 μL of RNA extract was then used for quantification with the Generic HIV Viral Load assay (Biocentric, Bandol, France) as originally described by Rouet and collaborators in the ANRS HIV quantification working group [13] and previously reported by our team [15]. The cycling conditions consisted of 50°C for 10 min and 95°C for 5 min, followed by 50 cycles of 95°C for 15 s and 60°C for 1 min. Amplification and data acquisition were carried out using the ABI Prism 7300 Sequence Detection System (Applied Biosystems), and the detection cut-off value was 60 HIV-1 RNA copies/mL.
b. Abbott Real-Time HIV-1 assay (Abbott Molecular, Wiesbaden, Germany)
This assay was performed on samples that tested HIV-1 group O or N by serotyping. The test was run according to the manufacturer's instructions. Briefly, HIV-1 RNA was extracted from 0.6 mL of plasma sample using the Abbott m2000sp nucleic acid extraction system. A volume of 50 μL of purified RNA was subsequently mixed with 50 μL of master mix and run on the m2000rt Real-Time PCR system [14]. The detection cut-off value was 40 HIV-1 RNA copies/mL.
Molecular characterization
For the purpose of this study, samples that were serologically HIV type/group-indeterminate were quantified by both the Abbott Real-Time HIV-1 and Generic HIV Viral Load assays. Samples showing detectable viral loads were further characterized using molecular tests.
Molecular tests were performed using an RT-nested PCR targeting the pol gene (integrase) of HIV-1 groups M and O as previously described [16]. RNA extracts originating from samples that were detectable by only one of the quantification techniques (Generic HIV Viral Load and Abbott HIV-1 viral load assays) were further analyzed using an RT-nested PCR targeting the gp41 region of the Env gene as previously described [17]. This PCR uses primers with a broad specificity and can amplify HIV-1 (M, O, and N) as well as Simian Immunodeficiency Viruses from chimpanzees (SIVcpz). Primers were designed on the basis of conserved gp41 (gpM-Z) regions. The outer primers gp40F1 and gp41R1 and the inner primers gp46F2 and gp48R2 were used with the PCR conditions previously described [17].
Sequence and phylogenetic analyses
PCR products were sequenced by the Sanger method using BigDye® Terminator Cycle Sequencing Ready Reaction v3.1 kit (Applied Biosystems). Sequences were aligned by CLUSTALW and phylogenetic trees were inferred using a Kimura two-parameter substitution model and the neighbor-joining method with 1000 bootstrapped data sets implemented with the MEGA6.06 software [18].
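For reference, the Kimura two-parameter distance underlying the tree can be computed directly from the proportions of transitions (P) and transversions (Q) between two aligned sequences; the following is a minimal sketch of the formula, not the MEGA implementation.

# Kimura two-parameter (K2P) distance between two aligned DNA sequences:
# d = -0.5*ln((1 - 2P - Q) * sqrt(1 - 2Q)), with P = transition and
# Q = transversion proportions over compared (non-gap) sites.
from math import log, sqrt

PURINES = {"A", "G"}

def k2p(seq1: str, seq2: str) -> float:
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and (a in PURINES) == (b in PURINES))
    transversions = sum(1 for a, b in pairs
                        if a != b and (a in PURINES) != (b in PURINES))
    p, q = transitions / n, transversions / n
    return -0.5 * log((1 - 2 * p - q) * sqrt(1 - 2 * q))

print(k2p("ACGTACGTAC", "ACGTGCGTAT"))  # one transition, one transversion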
Statistical analysis
We computed the Pearson correlation coefficient to assess the association between HIV-1 viral loads obtained with the two techniques. Furthermore, Kruskal-Wallis tests were implemented to compare the correlation value with 0. All the statistical analyses were performed using R software version 2.15.

Of the 231 indeterminate samples analyzed, 188 (81.39%) showed undetectable viral load using both techniques. All the remaining 43 samples were detectable with Abbott Real-Time HIV-1, while 40 of them were also detectable with the Generic HIV Viral Load (Table 1). All samples detectable in both techniques showed high viral load, with a mean of 4.5 log copies/ml (range 2.1-6.5) and 4.5 log copies/ml (range 2-6.4) for Abbott Real-Time and Generic HIV Viral Load, respectively.
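The agreement statistics used here (mean log10 difference and Pearson correlation) can be reproduced in a few lines; the arrays below are placeholders standing in for the paired log10 viral loads from the two assays.

# Paired-assay agreement: mean log10 difference and Pearson correlation.
# 'abbott' and 'generic' are placeholder log10 copies/ml values.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
abbott = rng.uniform(2.0, 6.5, 40)               # log10 copies/ml
generic = abbott + rng.normal(0.03, 0.2, 40)     # small assay offset

mean_diff = np.mean(generic - abbott)
r, p = pearsonr(abbott, generic)
print(f"mean difference = {mean_diff:.2f} log10 copies/ml")
print(f"r2 = {r**2:.2f}, P = {p:.3g}")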
The mean viral load difference between the two techniques was 0.03 log10 copies/ml, which was not significantly different from zero. Importantly, the difference between both assays did not increase with low or high viral loads. A good correlation was found between the results of both assays (Pearson correlation coefficient r2 = 0.89; P < 0.001), as shown in Fig. 1.

Our serotyping results are consistent with the known molecular epidemiology of HIV in Cameroon [20-22], with HIV-1 M representing the broad majority of HIV in circulation. It is known that HIV-1 M is the pandemic form of HIV because it is distributed worldwide, while other variants are mostly found in Africa, especially in Cameroon [23]. Our results showed that 0.3% of samples were HIV-1 group O, and this is consistent with the actual epidemiology of HIV-1 group O, whose prevalence is estimated at approximately 0.4-1% of all HIV infections in Cameroon [20-22]. Among the 231 HIV type/group-indeterminate samples analyzed with both the Generic HIV Viral Load and Abbott Real-Time assays, as many as 81.39% had undetectable viral load in both techniques. It has been demonstrated that treatment can lower viral load, and antibody titers can subsequently decrease, leading to indeterminate serological tests [24], as observed with the indeterminate serotyping results here. However, PCR could be used to confirm HIV status in such cases; we previously described seronegative results with positive DNA PCR in children who started treatment early [25]. We found a good concordance (98.7%) between both techniques, and the majority of samples found to be detectable with Abbott Real-Time were also detectable with Generic HIV Viral Load, with a good correlation between viral load values (Pearson correlation coefficient r2 = 0.89; P < 0.001). This is consistent with results obtained by Rouet and collaborators, who described the performance of the ANRS (Agence Nationale de Recherche sur le SIDA) second-generation long terminal repeat-based real-time RT-PCR test, Generic HIV Viral Load, for the quantification of HIV in a context of high HIV genetic diversity [13]. The molecular characterization showed that all 40 samples efficiently detected by both techniques were HIV-1 group M, whereas the three samples refractory to Generic HIV Viral Load were HIV-1 group O. This result can be explained by the fact that Generic HIV Viral Load has been designed for the quantification of HIV-1 group M and has been shown to inaccurately quantify certain strains of HIV-1 group O [13]. In the phylogenetic analysis, two of the HIV-1 group O sequences were closely related and supported by a high bootstrap value (Fig. 2). Further analyses were performed to investigate a potential transmission pair. These two sequences displayed 97.43% identity and were obtained from two women from different cities. Also, the patients were received on different dates with no epidemiological link. Altogether, these results suggest that there is no transmission pair, even if the two sequences are more related to each other than to other HIV-1 group O sequences.
Even though viral load has become the main virological marker for detecting treatment failure, this test is not affordable in many countries, especially in resource limited settings. Because of inadequate laboratory capacity and the high cost of equipment, viral loads are performed in the reference laboratory; samples have to be collected in the field and shipped to the reference laboratory for testing. In this context, logistic barriers (plasma separation, storage and shipping of specimens) are major challenges to the availability of HIV viral load in peripheral regions. Therefore, there is an urgent need for the development of less costly methods, such as viral load pooling, viral load using dried blood spots (DBS), as well as point-of-care tests, to monitor patients receiving antiretroviral therapy. These methods could be helpful for the sustainability of the monitoring of patients; however, such strategies should be validated in each region/country in order to take into consideration the specificity of the designated area.
Nonetheless, our study has some limitations because the presence of non-HIV-1 variants in the 188 samples with HIV type/group-indeterminate serotyping results and undetectable viral load on both Abbott Real-Time and Generic HIV Viral Load cannot be ruled out. Since these viral load assays could not amplify non-HIV-1 variants, nested PCR for HIV-2 and SIV should be performed on these samples.
Conclusion
In conclusion, our results suggest that a cheaper and open technique such as Generic HIV Viral Load (Biocentric) could be a reliable alternative for HIV viral load follow-up quantification in settings marked by a high viral diversity, like Cameroon. In such settings, molecular characterization could therefore be performed on samples originating from patients with immuno-virological or immuno-clinical discordances, in order to detect potential non-M HIV variants. Since the price of the Abbott Real-Time assay is twice the price of the Generic assay in Cameroon, using the Biocentric platform will save 450,000 US dollars at the end of the year.
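The projected saving follows from simple arithmetic; the sketch below uses assumed per-test prices (the text states only that the Abbott assay costs about twice the Generic assay) and an assumed annual test volume, chosen so the totals are of the reported order.

# Back-of-envelope cost comparison; prices and volume are assumptions,
# chosen only so that the difference is of the reported order (~450k USD).
generic_price_usd = 30.0                   # assumed price per Generic test
abbott_price_usd = 2 * generic_price_usd   # Abbott stated as ~2x Generic
tests_per_year = 15_000                    # assumed annual volume

saving = tests_per_year * (abbott_price_usd - generic_price_usd)
print(f"annual saving if Generic replaces Abbott: {saving:,.0f} USD")
# -> 450,000 USD with these assumed figures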
|
v3-fos-license
|
2020-09-20T13:05:11.646Z
|
2019-09-19T00:00:00.000
|
221798089
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.1734",
"pdf_hash": "0c1065335b15b14b6ad7b2eb6052263ff5becf06",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43305",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"sha1": "ec4f24644e78a9360eba558881c5cab518f1c37f",
"year": 2020
}
|
pes2o/s2orc
|
Brain reactions to the use of sensorized hand prosthesis in amputees
Abstract Objective We investigated for the first time the presence of chronic changes in the functional organization of sensorimotor brain areas induced by prolonged training with a bidirectional hand prosthesis. Methods A multimodal neurophysiological and neuroimaging evaluation of brain functional changes occurring during training in five consecutive amputees participating in experimental trials with robotic hands over a period of 10 years was carried out. In particular, modifications to the functional anatomy of sensorimotor brain areas under resting conditions were explored in order to check for possible changes with respect to baseline. Results Full evidence is provided to demonstrate brain functional changes, some of them occurring in both hemispheres and others restricted to the hemisphere contralateral to the amputation/prosthetic hand. Conclusions The study describes a unique experimental experience showing that brain reactions to the prolonged use of an artificial hand can be tracked for a tailored approach to a fully embedded artificial upper limb for future chronic uses in daily activities.
| INTRODUCTION
The hand is a body segment of extraordinary importance as it is used to interact with the peri-personal environment in all the pivotal daily activities; its loss causes severe physical and psychological deficits.
Hand amputation is followed by a cascade of plastic changes in motor and somatosensory pathways and relays in the "orphan" districts of the central nervous system (CNS) connected to the amputated part. The greater availability of adequate tools for evaluating the cerebral cortex makes this district the most studied of the nervous system. The first described feature of cortical plastic reorganization following limb amputations is the invasion of the "deafferented" cortex by the cortical representation of adjacent body districts in the primary sensory and motor cortices (Merzenich, 1998; Wrigley et al., 2009). Several recent experimental studies and observations in human amputees clearly demonstrated the persistence of a certain degree of functionality of the somatosensory and motor cortices corresponding to the amputated part, even several years following amputation (Granata et al., 2018; Makin, Filippini, et al., 2015; Makin et al., 2013; Makin, Scholz, Henderson Slater, Johansen-Berg, & Tracey, 2015; Raspopovic et al., 2014; Rossini et al., 2010). Both the cortical reorganization and the persistence of the original functional topography seem to contribute to the presence and severity of the phantom limb pain (PLP) (Flor et al., 1998; Flor et al., 1999; Vaso et al., 2014). Moreover, recent research has pointed out the presence of plastic changes occurring across the whole sensorimotor network after hand amputation (Serino et al., 2017; Valyear, Mattos, Philip, Kaufman, & Frey, 2019), demonstrating an impoverishment of the network used for grasping in terms of brain areas activated during the task, intensity of activation and inter-area connectivity.
The future of prosthetics is moving toward the use of a new generation of devices giving the amputee the ongoing possibility of performing complex movements and receiving real-time somatosensory feedback in a closed loop (Raspopovic et al., 2014; Schiefer, Tan, Sidek, & Tyler, 2016; Tan, Schiefer, Keith, Anderson, & Tyler, 2015; Tan et al., 2014; Tyler, 2015). A limited literature is available on plastic brain changes accompanying the use of artificial upper limbs, and, so far, no reports have been published concerning brain reactions to the use of sensorized hand prostheses (Chen, Yao, Kuiken, & Dewald, 2013; Serino et al., 2017; Yao & He, 2001).
This study reports on a multimodal evaluation of cortical changes occurring during bidirectional hand prosthesis training of variable duration (4-36 weeks), in a series of 5 amputees over a period of 10 years. The primary endpoint was to investigate the presence of chronic changes in the functional organization of sensorimotor brain areas due to amputation at baseline and to follow up any modifications induced by training with a hand prosthesis and sensory feedback.
| SUBJECTS, MATERIALS, AND METHODS
The hand prosthesis, the stimulation unit and the dedicated software were the output of several consecutive European Union and Italian Ministry of Health funded projects involving research teams from different EU countries; some of them were described in dedicated previous papers (Petrini et al., 2019;Raspopovic et al., 2014;Rossini et al., 2010). All the human experiments reported here were carried out in Italy after approval by the local Ethics Committee and by the Italian Ministry of Health. Written informed consent was provided by the patients before trial initiation.
| Patients
The results from five consecutive left transradial amputees are included in this report.
Patient 1: 26-year-old right-handed male with a left transradial amputation that occurred in 2007. At the time of the study (2.5 years after amputation), the patient was not using any kind of prosthesis and was affected by mild phantom limb pain (scored as 4 according to the VAS). Patient 3: 37-year-old male with a traumatic transradial (proximal third of the forearm) amputation of the left arm that occurred 2 years before the trial. The patient was left-handed before the amputation and, at the time of the trial, was affected by severe PLP (score of 8 on the VAS).
Patient 4: 48-year-old female with a traumatic transradial (distal third of the forearm) amputation of the left arm that occurred 23 years before trial. She was free from phantom pain and was in chronic treatment with antidepressant drugs at a low dosage because of mild depression.
Patient 5: 53-year-old female with a left traumatic transradial (proximal third of the forearm) amputation that occurred 1 year and 6 months before the trial. She suffered from mild phantom limb pain and very frequent nonpainful phantom sensations (score of around 4 on the VAS), mainly located on the ulnar side of the phantom hand. The artificial hand model utilized for Patient 1 was too heavy to be wearable and was operated under remote control on the laboratory table; this condition decreased the mechanical/electronic noise to a level that allowed reliable recordings of the motor output signals from the motor nerve fibers, which were translated into movement commands to the hand prosthesis (Rossini et al., 2010).
The remaining patients used a more advanced model, which was directly connected to the socket on the stump. This did not allow any direct intraneural recording from motor nerve fibers of the stump due to the electronic/mechanical noise. Therefore, surface electromyographic recordings were employed to drive the motor commands to the prosthesis (Figure 1; see Raspopovic et al., 2014), as fully described in previous technical publications (Raspopovic et al., 2014; Rossini et al., 2010).
Patients 1 and 2 used the system for 1 month with daily trials.
Patient 3 used the system almost daily for 9 months while Patients 4 and 5 used the system for 5 months, almost daily in the first 2 months and 2 or 3 days a week afterward. During the training period, each patient performed different tasks aiming to recognize the physical properties of objects (including shape, consistency, and texture), to perform a variety of simple and complex movements, and to produce different levels of force and pressure. Moreover, many hours of nerve stimulation were devoted to psychophysical topographical mapping of the sensory input and to validate its stability in time.
The patients underwent a multimodal neurophysiological and neuroimaging examination, including electroencephalography (EEG), the recording of EEG responses to transcranial magnetic stimulation of the motor cortex (TMS-EEG), and structural and functional magnetic resonance imaging (MRI and fMRI).
| Procedures to analyze brain reactions
Transcranial magnetic stimulation EEG (TMS-EEG) was performed using a Magstim Rapid2 Stimulator (Magstim Company Limited) in order to stimulate the motor cortex with simultaneous EEG recording via a BrainAmp DC (Brain Product GmbH) device with 57 electrodes positioned on standardized scalp sites according to the augmented 10-20 international system. Three additional channels were simultaneously acquired for the electro-oculogram and the electrocardiogram. Four other electrodes were used for a bipolar recording of the motor-evoked potential (MEP). For the magnetic stimulation, a Magstim 200 magnetic stimulator connected to an 80-mm figure-of-eight coil was used. The stimulation was performed by positioning the virtual cathode of the coil over the site of the scalp to be stimulated, with the holder oriented at a 45° angle with respect to the approximate direction of the central sulcus. MEPs were recorded from the biceps and/or extensor and flexor muscles of the forearm on both sides via pairs of Ag/AgCl surface electrodes placed over the belly of the muscle. The resting motor threshold (RMT) was defined, according to international standards, as the minimum intensity of the magnetic field able to generate a MEP of at least 50 μV in amplitude in approximately 50% of 10 consecutive stimuli (Rossini et al., 2015, 2019). TMS was delivered with magnetic field intensity equal to 120% of RMT. In order to record the EEG responses evoked by TMS, 120 consecutive stimuli were applied, each separated by a 6-8 s random interval, with the assistance of a neuronavigator system in order to allow precise repositioning during follow-up sessions (Softaxic Navigator System, EMS).

FIGURE 1 Left lower and upper panels: the prosthetic system (the sensorized prosthesis, the EMG electrodes for the muscular activity recording, the electrical stimulator, and the TIME electrodes implanted on the nerves) and the functioning of the whole system during a grasp task (EMG from the stump muscles during voluntary contraction was recorded with surface electrodes, used to decode efferent activity, and translated into orders for movement of the hand prosthesis; the contact and force production of the prosthetic fingers against resistance triggered a rapid cascade of events within a time window of a few tens of milliseconds, generating a real-time perception of a sensation from the phantom hand/fingers, thanks to the proportional intensity of electrical stimulation of the stump nerves with the TIME connected to the stimulator). Middle lower panels: localization of TIME electrodes into the median and the ulnar nerves above the elbow, the TIME electrode inserted in the nerve during surgery, and the transcutaneous electrodes available for the connection with the stimulator at the end of surgery. Upper right panel: structure of the TIME electrodes and procedure of their insertion via a guiding needle. Right lower panel: setting of TMS-EEG performed with the neuronavigation system for exact positioning and repositioning.

Magnetic resonance imaging (MRI) was performed using a 1.5T Philips Achieva scanner equipped with an eight-channel SENSE head coil. Structural MRI acquisition of the entire brain volume was performed using a 3D TFE T1-weighted sequence (TR 8.2 ms, TE 3.8 ms, FOV 240 mm, slice thickness 1 mm, FA 12°). All images were obtained using a 240 × 240 matrix with an in-plane voxel size of 1 × 1 mm.
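The RMT criterion defined in the TMS protocol above lends itself to a small simulation; the sketch below uses a toy sigmoid recruitment model (55% "true" threshold, 50 μV criterion, 10 stimuli) that is purely illustrative and not a model fitted to the recorded data.

# Resting motor threshold (RMT) hunting: lowest intensity at which
# >= 50% of 10 consecutive MEPs exceed 50 microvolts. The MEP response
# model below is a toy stand-in for real electrophysiology.
import numpy as np

rng = np.random.default_rng(42)

def mep_amplitude_uv(intensity_pct: float) -> float:
    # Toy sigmoid recruitment curve around a "true" threshold of 55%.
    p = 1.0 / (1.0 + np.exp(-(intensity_pct - 55.0) / 3.0))
    return float(rng.normal(120.0 * p, 20.0))

def is_above_threshold(intensity_pct: float, n_stim: int = 10) -> bool:
    hits = sum(mep_amplitude_uv(intensity_pct) >= 50.0 for _ in range(n_stim))
    return hits >= n_stim // 2

rmt = next(i for i in range(30, 101) if is_above_threshold(i))
print(f"estimated RMT = {rmt}% of maximum stimulator output")
print(f"stimulation intensity for TMS-EEG = {round(1.2 * rmt)}% (120% RMT)")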
Structural MRI acquisitions were processed using the Freesurfer image analysis suite (Fischl et al., 2002; Reuter, Schmansky, Rosas, & Fischl, 2012). Cortical reconstruction and volumetric segmentation were performed for all time points. To extract values, the intersection between the gray-matter segmentation and the ROI positioned on the hand motor area of the left and right precentral gyrus was used. The hand motor cortex was identified based on its typical Ω or ε shape on the axial scan plane (Caulo et al., 2007). DTI acquisition involved two multishot spin-echo EPI diffusion acquisitions with high directional resolution and gradient overplus, with b-values of 700 and 1,200 s/mm², respectively. The isotropic voxel was 2 × 2 × 2 mm³ with an in-plane FOV of 224 × 224 mm² and 60 slices 2 mm in thickness with no gap. A single couple of TR and TE were set to 22,140 ms and 60 ms, respectively, as the optimal values for acquisition with bmax. Stack angles were 0° in AP and RL and 0° and 15° in FH. For each session, DTI acquisitions were denoised (Manjón et al., 2013), and then eddy current correction was applied. The second acquisition was rotated to the first one (Lotze, Flor, Grodd, Larbig, & Birbaumer, 2001) and then concatenated to obtain 64 gradient directions.
Functional MRI (fMRI) acquisitions (task and rest acquisitions) were obtained using a gradient-echo EPI sequence to measure the BOLD contrast over the whole brain (TR = 1,710 ms, TE = 30 ms, 34 slices acquired in ascending interleaved order, voxel size = 3.59 × 3.59 × 3.59 mm, 64 × 64 matrix, flip angle = 70°). The task for fMRI acquisition included the following: a paradigm similar to the one adopted by Macuga and Frey (2012) was performed and involved kinesthetic imagery and synchronous imitation of hand (finger tapping) movements under controlled visual drive for movement characteristics. The hand runs included two repetitions of three conditions (contralesional hand movement imitation, contra- and ipsi-lesional hand imagery) presented in random order. Each condition included 2 s of instruction.

TABLE 1 Characteristics of patients, summary, and timing of tests
| EEG
EEG data were processed in Matlab R2014b using scripts based on the EEGLAB toolbox (Swartz Center for Computational Neurosciences).
EEG signals were band-pass filtered from 0.1 to 47 Hz using a finite impulse response (FIR) filter. Imported data were divided into 2 s duration epochs, and visible artefacts in the EEG recordings were removed using an independent component analysis (Infomax ICA algorithm) procedure (Hoffmann & Falkenstein, 2008; Jung et al., 2001). The three-dimensional distribution of the EEG activity was estimated by the use of the standardized low-resolution electromagnetic tomography algorithm (sLORETA). The normality of the data was tested using the Kolmogorov-Smirnov test, and the hypothesis of Gaussianity could not be rejected. The significance level was set at p < .05. Two core measures of graph analysis were computed: characteristic path length (L) and clustering coefficient (C), representative of network global and local interconnectedness, respectively (Watts & Strogatz, 1998). Small-worldness (SW) was obtained by the ratio between normalized C and L, as it describes the balance between local connectedness and global integration of a network.
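Small-worldness as defined above (normalized clustering over normalized path length) can be computed from a connectivity graph with networkx; the sketch below substitutes a random toy graph for the EEG-derived networks.

# Small-worldness SW = (C / C_rand) / (L / L_rand), where C and L are the
# clustering coefficient and characteristic path length of the network,
# normalized against an equivalent random graph. A toy graph is used here.
import networkx as nx

G = nx.watts_strogatz_graph(n=57, k=6, p=0.1, seed=1)  # 57 "electrodes"
C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
if nx.is_connected(R):  # path length is defined only on connected graphs
    C_rand = nx.average_clustering(R)
    L_rand = nx.average_shortest_path_length(R)
    sw = (C / C_rand) / (L / L_rand)
    print(f"C={C:.3f}, L={L:.3f}, small-worldness={sw:.2f}")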
| TMS-EEG
TMS-EEG data were processed offline using the Brain Vision Analyzer (Brain Products GmbH) and the MATLAB environment (MathWorks Inc.).
TMS-evoked EEG activity was visually inspected in each channel, and trials contaminated by environmental artifacts, muscle activity, or eye movements were rejected. As a first step, a linear interpolation from 1 ms before to 10 ms after the TMS pulse was applied to remove the TMS artifact. Afterward, the signal was band-pass filtered between 1 and 80 Hz (Butterworth zero-phase filters). A 50 Hz notch filter was also applied to reduce noise from electrical sources, and identification and removal of residual artifacts completed the preprocessing.
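The pulse-artifact interpolation step (replacing the -1 to +10 ms window around the TMS pulse with a straight line) is easy to express with numpy; the sketch below assumes an epochs-by-samples array and a known pulse sample index, both placeholders.

# Linear interpolation across the TMS pulse artifact (-1 to +10 ms).
# 'epochs' is (n_trials, n_samples); 'fs' is the sampling rate in Hz.
import numpy as np

def interpolate_pulse(epochs: np.ndarray, pulse_idx: int, fs: float) -> np.ndarray:
    start = pulse_idx - int(0.001 * fs)   # 1 ms before the pulse
    stop = pulse_idx + int(0.010 * fs)    # 10 ms after the pulse
    out = epochs.copy()
    ramp = np.linspace(0.0, 1.0, stop - start)
    for trial in out:
        trial[start:stop] = trial[start] + ramp * (trial[stop] - trial[start])
    return out

fs = 5000.0
epochs = np.random.randn(120, int(2 * fs))   # 120 trials, 2 s each
clean = interpolate_pulse(epochs, pulse_idx=int(fs), fs=fs)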
| MRI
Diffusion Toolkit and Trackvis (Wedeen et al., 2008) were used to calculate the diffusion tensor and fiber tracking of the corticospinal tract. The multiple-volume DWI acquisitions were first corrected for eddy current and motion artifacts by eddy_correct, then analyzed to obtain whole-brain index values. Fractional anisotropy (FA) was the main output from the FSL Diffusion Toolkit (FDT, 24).
A first-level Feat analysis (Woolrich et al., 2001) was performed to obtain functional activation for the motor and imagery tasks. Stimuli were presented electronically using the E-Prime 2.0 software (Psychology Software Tools); the ".prt" stimuli text files were then converted by hand into FSL event (".evt") text format. The general linear model from Feat indicated brain activity during the tasks.
| TMS-EEG
The TMS-EEG performed by stimulating the hot-spot motor area of either the biceps or the forearm flexor-extensor muscles showed a visually evident interhemispheric difference at baseline (T0 = before trial).
Stimulation of the M1 cortex contralateral to the stump showed a visually remarkable amplitude reduction of the early components (from 10 to 50 ms poststimulus latency), while there were no significant differences in later ones (from 100 ms onward). At T1 (following the trial with the prosthetic hand), stimulation of M1 contralateral to the intact hand showed no visually significant changes in the main early and late TMS-induced waves (Figure 2), while a significant modulation (t test, p < .0001) in the amplitude of the wavelets between the 30 and 100 ms poststimulus epochs was recorded for stimulation of M1 contralateral to the stump, with a significant reduction in global cortical excitability in the stimulated hemisphere, but also in the one ipsilateral to the stump.
In summary, the qualitative evaluation (visual inspection) of the TMS-EEG assessment, not supported by statistical analysis, showed at baseline in almost all patients a poor representation of the early TMS-evoked potentials after M1 stimulation contralateral to the amputation, with a clear interhemispheric asymmetry that was not clearly modified at T1.
| EEG
The Clustering coefficient and characteristic path length connectivity parameters were not significantly different in the same ANOVAs.
At baseline, the small-world index showed a trend (not statistically significant) toward a more random architecture in the alpha band and the opposite behavior in the delta band in the left hemisphere (intact hand) with respect to the right hemisphere (contralateral to the stump). After the hand prosthesis use and sensory feedback trials (T1 session), the two hemispheres showed decreased differences in small-worldness for both delta and alpha bands with respect to baseline (Figure 5).
In summary, EEG recordings showed a bilateral statistically significant increase in alpha-band power and a consensual decrease in delta-band power between T0 and T1. None of the other parameters analyzed reached statistical significance, although for some parameters a trend toward significance was found when comparing the two hemispheres.
| MRI
DTI. No significant differences in the fractional anisotropy of the corticospinal tracts were found between baseline and T1 (Figure 6a).
Cortical thickness. The cortical thickness of S1 and M1 remained unchanged in the baseline versus T1 images.
Task-based fMRI. Task-based experiments were the only task-related acquisitions, being performed during kinesthetic imagery and synchronous imitation of hand movements.
| DISCUSSION
Our study provides direct evidence of brain functional changes (as addressed via different techniques) following training with a bidirectional hand prosthesis in five amputees; two of them had a relatively short-term trial (of 1 month) while the others had a longer training period (5, 6, and 9 months). All subjects had suffered from a left transradial amputation and, with one exception, were right-handed.
It should be taken into account that our primary endpoint was to reveal chronic and stable changes in the functional organization of the sensorimotor brain areas due to amputation, either contralateral to amputation or in both hemispheres, and to follow up any modifications induced by training with a sensorized hand prosthesis of the latest prototypal generation. For this reason, most of the investigations were carried out under resting conditions and not during a task.
EEG recordings showed a bilateral statistically significant increase in alpha-band power and a consensual decrease in delta-band power. The separate analysis of the two hemispheres did not show a statistically significant change in these two EEG frequency bands, with only a trend toward significance. EEG resting-state connectivity showed baseline asymmetry of the two hemispheres, especially in the alpha and delta bands, with a tendency (again not statistically significant) toward a more symmetrical representation after training with the bidirectional hand prosthesis in the T1 recordings. Even if many of our data showed only a trend toward significance, probably because of the small number of patients, some considerations can be made. Alpha rhythms reflect one of the most prominent oscillatory hallmarks of the resting/awake human brain and may arise from different cortico-cortical and cortico-thalamo-cortical circuits that play an important role in the top-down control of cortical activation (Palva & Palva, 2011) and exhibit temporal correlations and spatial coherence over a long time range (Freyer, Aquino, Robinson, Ritter, & Breakspear, 2009; Nunez, Wingeier, & Silberstein, 2001). Conversely, a localized delta frequency band in awake adults is usually associated with focal cerebral dysfunctions/lesions.
Moreover, a generalized reduction in alpha EEG and a simultaneous increase in delta rhythms are a common pattern of progressive brain diseases or loss of sensory information, for example, cognitive impairment and chronic visual deprivation (Bola et al., 2014; Vecchio et al., 2014). Recovery of the alpha/delta balance, that is, an increase in alpha activity and a consensual decrease in slow rhythms in the delta range, has been described following clinical improvement (Bola et al., 2014) that, in the case of hand amputation, could be considered as regaining a "surrogate" of the lost function via an artificial hand (movement and sensory feedback). Interestingly, the analysis of EEG connectivity showed a trend toward a focal effect predominantly on the hemisphere controlling the amputated hand. It can therefore be hypothesized that the use of a last-generation prosthesis closely reproducing the motor/sensory conditions of the natural hand has different effects on brain EEG rhythms, that is, some more diffuse and bihemispheric, others more focal and monohemispheric, but specifically related to the input/output of the M1 and S1 areas contralateral to the prosthesis. This information supports our hypothesis that the regular use of a prosthetic hand mimicking a "natural" hand activity could reverse some of the aberrant plastic changes following amputation.

FIGURE 4 EEG power density of right and left hemispheres. Upper left and right panels: ANOVA interaction of the power density values among the factors ROIs (frontal, central, parietal, occipital, temporal, limbic), Band (delta, theta, alpha 1, alpha 2, beta 1, beta 2, and gamma), and condition (baseline; T1) in the left hemisphere (left panel) and in the right hemisphere (right panel). Lower panel: lagged linear baseline versus T1 connectivity in EEG bands (delta, theta, alpha 1, alpha 2, beta 1, beta 2, and gamma) in the left (blue lines) and in the right hemisphere (red lines).
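The alpha/delta balance discussed here can be quantified from the power spectral density; the following sketch applies Welch's method to a synthetic signal, with band edges taken from the delta and alpha definitions used in this study.

# Alpha/delta power ratio from Welch power spectral density.
# 'x' is a synthetic 1-channel EEG trace; real data would replace it.
import numpy as np
from scipy.signal import welch

fs = 250.0
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 2 * t) \
    + 0.3 * np.random.randn(t.size)

f, psd = welch(x, fs=fs, nperseg=int(2 * fs))  # 2-s segments, as in the epochs

def band_power(fmin, fmax):
    mask = (f >= fmin) & (f < fmax)
    return np.trapz(psd[mask], f[mask])

alpha = band_power(8.0, 11.5)   # alpha band as defined in this study
delta = band_power(0.5, 4.5)    # delta band as defined in this study
print(f"alpha/delta power ratio = {alpha / delta:.2f}")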
Structural MRI did not show significant modifications in corticospinal tract fiber density or in S1-M1 cortex thickness either at baseline or after training, suggesting that the observed brain changes were mainly due to synaptic strength and connectivity changes without significant anatomical modifications. The reason for this could be related to several factors, including the relatively short duration of the trials and the use of the prosthetic hand mainly in a laboratory context rather than in an ecological environment.
Moreover, the connectivity analysis performed with the fMRI data allowed examining the changes occurring in two networks with different functional roles, that is, the SMN and the DMN. We found significant modifications in connectivity mainly restricted to the SMN, without significant changes in the DMN, in line with previous results (Makin, Filippini, et al., 2015); this supports the idea that, after amputation, the missing hand cortex gradually becomes functionally decoupled from the SMN, its typical network of origin, and that prolonged use of a sensorized hand prosthesis could reverse this phenomenon. Moreover, when fMRI findings were followed up during a task with phantom hand movement (repeated opening and closing movements), an increase of M1 and a decrease of cerebellar recruitment was found at follow-up, supporting the idea that the prolonged use of the bidirectional prosthetic hand can reverse some aberrant brain plastic changes. These findings, although somewhat different, could be considered similar to those of recent studies exploring brain changes after restoration of bidirectional hand function via targeted muscle and sensory reinnervation (TMSR), in which cortical activity related to movement and touch is enabled by the reinnervation, suggesting that TMSR may counteract maladaptive cortical plasticity typically found after limb loss, in M1, partially in S1, and in their mutual connectivity.

FIGURE 5 Small-world index of right and left hemispheres at baseline and T1. ANOVA interaction of the small-world index among the factors Band [delta (0.5-4.5 Hz), theta (5-7.5 Hz), alpha (8-11.5 Hz), sigma (12-15.5 Hz), and beta (16-24.5 Hz)] and side (left hemisphere, right hemisphere) at baseline (left panel) and T1 (right panel). After robotic hand use and sensory feedback trials, the two hemispheres appeared less different in small-worldness.

FIGURE 6 (a) MRI tractography: corticospinal tract reconstruction. No statistically significant changes between the two sides were observed. (b) Functional MRI: brain activation during a movement imagination task of the phantom hand before and after the training. The upper panel shows the cerebral activation during the task at baseline, while the lower panel shows the activation at the end of the trial. Activation of the occipital areas is due to visual stimulation by the virtual hand movements on a screen. Following the training trials (T1), there was a reduction of additional motor areas and cerebellum activation with a prominent and more selective activation of the contralateral M1, as a typical sign of "motor learning." Activation maps are displayed with the same statistical threshold.
TMS-EEG findings, even if not supported by statistical analysis, showed at baseline a poor representation of the early TMS-evoked potentials after M1 stimulation contralateral to the amputation, with a clear interhemispheric asymmetry. Interestingly, this phenomenon was evident almost exclusively for the early waves (before 50 ms) that are related to the direct activation of the stimulated motor cortex and the spread of activation to the contralateral hemisphere via direct transcallosal (P14) and/or deep subcortical connections (wave P30, as seen in the paper by Bonato et al. (2006)).
There was no significant modification of these waves after training, suggesting that the use of a prosthetic system was not able to reverse some of the alterations in cortico-cortical connectivity following amputation.
Modifications of the PLP are an indirect, but still powerful, reflection of the functional changes described in this study. Four out of five patients suffered from PLP, and three of them improved following training with the prosthesis. Patient 1 experienced a decrease in PLP score from 4 to 2 on the visual analogic scale (VAS), while patient 2 decreased from 9 to 4 and patient 3 from 8 to 4. Patient 5 did not experience any PLP modification during the trial, with a stable VAS score of 4 (see Table 1).
Our study is affected by several limitations: (a) the methodology was not homogeneous due to patient characteristics, patients' availability to undergo several investigations, and regulatory requirements (i.e., two of them could not undergo MRI due to claustrophobia; in one, the type of intraneural electrodes used was not compatible with use in the MRI environment); (b) the prosthetic system was not exactly the same; (c) the duration of the trial differed across patients due to health authorities' restrictions; and (d) there was no control group.
Moreover, the influence of the different clinical conditions of the patients at baseline should also be considered, especially the role of time since amputation and the presence/absence of PLP. The low number of patients combined with the just-mentioned limitations does not allow more complex conclusions, which would result in excessive speculation. Despite such limitations, this study describes a unique and unprecedented experience accumulated on this topic over 10 years, clearly showing that brain reactions to multitask (motor and sensory) artificial hand use can be tracked for a tailored approach to a fully embedded artificial upper limb for future chronic uses in daily activities.
| CONCLUSIONS
Our study provides direct evidence of brain functional changes (assessed via different techniques) following training with a bidirectional hand prosthesis in five amputees. The alterations, describing for the first time the brain reactions to multitask (motor and sensory) artificial hand use, were mainly restricted to functional connectivity as assessed with EEG, TMS-EEG, and MRI, and could represent a starting point for a future tailored approach to studying chronic use of fully embedded artificial upper limbs in daily activities.
ACKNOWLEDGMENTS
We thank Dr. Laura Taormina, Dr. Rossana Moroni, and Dr. Emanuela Ricci for their important organizational role.
CONFLICT OF INTEREST
The authors declare no competing interest.
AUTHOR CONTRIBUTION
G.G. and P.M.R. conceived the study, planned the experiments, and wrote the paper. F.M. analyzed data on EEG and brain connectivity and wrote the paper. M.C. analyzed MRI data and wrote the paper. R.D. and F.Io. performed the experiments, collected clinical data, and drafted the final version of the paper. F.V. analyzed data on EEG and brain connectivity. G.V., I.S., E.D., and F.Ib. performed the experiment. L.L. and E.F. performed surgical implantation of the electrodes. R.R. performed the experiments and drafted the paper.
F.M.P., S.R., and S.M. conceived the study and drafted the paper.
PEER REVIEW
The peer review history for this article is available at https://publons.com/publon/10.1002/brb3.1734.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
|
v3-fos-license
|
2020-09-26T13:38:12.637Z
|
2020-09-26T00:00:00.000
|
221911982
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12520-020-01184-1.pdf",
"pdf_hash": "8737be864bd10933fd0d17ff9bf90873dd9079a5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43308",
"s2fieldsofstudy": [
"Chemistry",
"History"
],
"sha1": "8737be864bd10933fd0d17ff9bf90873dd9079a5",
"year": 2020
}
|
pes2o/s2orc
|
On metal and ‘spoiled’ wine: analysing psimythion (synthetic cerussite) pellets (5th–3rd centuries BCE) and hypothesising gas-metal reactions over a fermenting liquid within a Greek pot
A Pb-based synthetic mineral referred to as psimythion (pl. psimythia) was manufactured in the Greek world at least since the 6th c BCE and routinely by the 4th c BCE. Theophrastus (On Stones, 56) describes its preparation from metallic Pb suspended over a fermenting liquid. Psimythion is considered the precursor of one of western art's most prominent white pigments, i.e. lead white (basic lead carbonate or synthetic hydrocerussite). However, so far, and for that early period, published analyses of psimythia suggest that they consisted primarily of synthetic cerussite. In this paper, we set out to investigate how it was possible to manufacture pure cerussite, to the near exclusion of other phases. We examined the chemical and mineralogical composition (pXRF/XRD) of a small number of psimythion pellets found within ceramic pots (pyxides) from Athens and Boeotia (5th–4th c BCE) in the collection of the National Archaeological Museum (NAM), Athens. Analyses showed that the NAM pellets consisted primarily of Pb/cerussite with small amounts of Ca (some samples) and a host of metallic trace elements. We highlight the reference in the Theophrastus text to 'spoiled wine' (oxos), rather than 'vinegar', as has been previously assumed, the former including a strong biotic component. We carried out DNA sequencing of the pellets in an attempt to establish the presence of microorganisms (acetic acid bacteria). None was found. Subsequently, and as a working hypothesis, we propose a series of (biotic/abiotic) reactions which were likely to have taken place in the liquid and vapour phases and on the metal surface. The hypothesis aims to demonstrate that CO2 would be microbially induced and would increase as a function of time, resulting in cerussite forming over and above hydrocerussite and other Pb-rich phases. Psimythion has long been valued as a white pigment. What has perhaps not been adequately appreciated is the depth of empirical understanding on the part of psimythion manufacturers of the reactions between abiotic and biotic components within the 'oxos'/pot, as key drivers of mineral synthesis. Ultimately, the key to psimythion manufacture may rest in understanding the nature of 'oxos', antiquity's strongest, yet relatively little researched, acid.
Introduction
Psimythion: the sources There is substantial evidence in the literary record of the Greek world of the 5th-3rd c BCE to suggest that women of all ages and across social boundaries applied a white powder, psimythion, as a cosmetic and for purposes of beautification. So extensive was its apparent use that religious temples felt obliged to list it amongst the items that women were not allowed to wear on entering their premises and/or participating in the activities within (Tsoucaris et al. 2011). For example, at the sanctuary of Andania in the Peloponnese (1st c CE), the wearing of psimythion was banned together with gold jewellery and red dye (IG5(1).1390.22).
The desire to have and/or maintain a fair skin appears to have run deeply within Greek culture, as one can gauge from the Homeric poems. The adjective λευκώλενος (white-armed) is associated mostly with the goddess Hera (for example Homer Iliad, 8.484) but also with mortal women like Andromache, Hector's wife, Penelope, Odysseus' wife, or Helen of Troy, and in association with purely female attributes (for example submissiveness, vulnerability, desirability). If a fair skin was indeed a social requisite, for women who did not have it, artificial substances must have been the only remedy.
There is no suggestion that the Homeric ladies used psimythion, but it follows that, for later periods, psimythion must have filled that gap in market demand: 'Has anyone a dark complexion, white-lead will that correct,' Athenaeus (Deipnosophistai, xiii.23) notes as late as the 3rd c CE. Before its use as a cosmetic, it is perhaps its 'whiteness' that forms the focal point of its attributes, as can be gleaned from the earliest reference to psimythion, a fragment of Xenophanes (6th-5th c BCE) (Fragment 28,978a, line 10), and indeed in later texts by Aristotle (Nicomachean Ethics 1096b, line 23).
Nevertheless, the wearing of psimythion came at a price, since it could also be a cause of ridicule, particularly by comic poets like Aristophanes, who appears to have had a particular dislike for the substance and/or the women who wore it: 'Are you an ape plastered with white lead, or the ghost of some old hag returned from the dark borderlands of death?' (Aristophanes Ecclesiazusae 1072). It was seen as a means for females to conceal their age: 'No, no! as she is there, she can still deceive; but if this white-lead is washed off her wrinkles will come out plainly' (Aristophanes Plutos 1065).
Furthermore, it appears that one way to discredit an Athenian lady's reputation for being virtuous was to allude to her wearing psimythion … at the wrong time(!): 'But it struck me, sirs, that she had powdered her face though her brother had died not thirty days before; even so, however, I made no remark on the fact, but left the house in silence' (Lysias, On the Murder of Eratosthenes 1.14). Men applying psimythion on themselves appears to have been frowned upon: 'she adorned his (Alcibiades) face like a woman's with paints and pigments' (Plutarch Alcibiades 39.2). The use of psimythion by males is also attested by Ctesias, a 5th c BCE Greek physician (Fragment 1b line 689). By the Roman period, the use of cerussa (Latin for psimythion) seems to have been more widely accepted, yet old beliefs die hard: 'You dye your head but you will never disguise your old age nor straighten out the wrinkles in your cheeks. Don't cover your face with paint so as to have a mask and not a face. For it avails nothing. Why are you so foolish? Paint and dye won't make Hecuba a Helen' (Lucilius Epigram Book II).
It is perhaps its entry into the Hippocratic Corpus of the 5th-4th c BCE that alerts us to its properties beyond the aesthetic.
Psimythion: the material culture
Lead- and copper-based minerals can be traced in Egypt in the 2nd millennium BCE (Walter et al. 1999) and in the Royal Tombs of Ur in the 3rd millennium BCE (Hauptmann et al. 2016). The Theophrastus recipe for psimythion making, but also for other synthetic minerals, like the copper-based ios xystos (Theophrastus, On Stones, 57), puts synthetic mineral manufacture at centre-stage in the Classical/Hellenistic world. Use of psimythion in contemporary art (5th c BCE) is attested only occasionally. It was identified in a single 'brush stroke' ('Pheidias' brush stroke') on the pediment of the Parthenon of Athens, consisting of hydrocerussite mixed with gypsum and a phosphate mineral (Jenkins and Middleton 1988, 204). More visible are the applications of hydrocerussite in the Hellenistic world, including 4th-3rd c BCE funerary paintings from Macedonia (Brecoulaki et al. 2014) and as an undercoat to organic pigments. At the opposite end of the chronological spectrum, there is evidence for its production, in the Early Bronze Age, at Akrotiri, Thera (Sotiropoulou et al. 2010). Recently, researchers have shown that cerussite had been applied on Early Cycladic marble figurines as a white substrate on the areas of the marble that would eventually be decorated, prior to the application of the coloured pigment. They suggested that it was the cerussite, rather than the colourful pigments themselves, which may have been responsible for the preservation of anatomical and other details, often described as 'paint ghosts' (K. Manteli, pers. comm.).
On the other hand, psimythia, in pellet or lump form, have been found within lidded ceramic vessels, in (primarily) female burials. However, such finds tend to be rather rare, when viewed in the context of the vast number of excavated female burials. A number of these pellets have already been analysed, albeit not with the same methods (Table 1). Of the 12 samples, 3 are pink coloured (Eretria, Kerameikos and Delphi) and therefore mixtures: one is cerussite mixed with iron oxide (Eretria), the second, 'PbO' (no cerussite/hydrocerussite is mentioned) mixed with HgS (Delphi), and the third, cerussite with HgS (Kerameikos). Of the remaining 9 samples, 3 are primarily hydrocerussite with small amounts of cerussite (Volos 396 and 471; Agora, Athens). Of the remaining 6, 5 are 100% cerussite (Corinth, Volos 439, Eleusis, Paestum, Kerameikos) and 1 is primarily cerussite with some hydrocerussite (Derveni). Interestingly, the Kerameikos psimythion was found not within a female burial, but rather within that of a male actor; actors were highly likely to make use of the powder, since males would take on female roles (Kapparis 2018).
The analyses of the artefacts in Table 1, by different researchers and over the last 80 years, suggest that 6 out of the 9 white samples contain cerussite over and above hydrocerussite and/or other phases. The above data set is not large and, as such, conclusions drawn are more likely to be indicative, rather than definitive. Nevertheless, the question arises as to which parameter(s) control the production of cerussite, above other phases, within the Theophrastus pot. It is the excess of CO2, and its maintenance throughout the 10-day cycle, that would have ensured the production of cerussite in preference to hydrocerussite. This paper sets out to present a working hypothesis regarding the importance of the biotic component within the oxos as a driver of CO2 production. The two sections that follow assess past reconstructions of the Theophrastus pot ('Oxos in the Theophrastus recipe: Pb metal in the presence of fermenting liquid' section) and recent work on Pb metal corrosion led by acetic acid bacteria (AAB) and other microorganisms ('Vinegar in Pb corrosion studies' section).
Oxos in the Theophrastus recipe: Pb metal in the presence of fermenting liquid
Theophrastus' (On Stones, 56) recipe for psimythion making reads as follows: 'lead (molybdos) about the size of a brick is placed in jars (pithos) over vinegar (oxos) and when this acquires a thick mass, which it generally does in ten days, then the jars are opened and a kind of mold (evros) is scraped off the lead, and this is done again until it is all used up. The part that is scraped off is ground in a mortar and decanted frequently, and what is finally left at the bottom is white lead (psimythion)' (Caley and Richards 1956, 188) (the transliterated Greek in parentheses is our addition). There have been many attempts to reproduce psimythion experimentally and in the manner of the Theophrastus set-up. Katsaros et al. (2010) reproduced this recipe in a similar pot, a pithos, resulting in the production of hydrocerussite and cerussite. When the authors analysed psimythion pellets from the Kerameikos cemetery (see Table 1), they found them to consist primarily of cerussite. Principe (2018) carried out experiments similar to those of Katsaros et al. (2010) but in a glass jar. He did not report any mineralogical analysis of his finds, so it is not clear what exactly he made, cerussite or hydrocerussite or lead acetate. Regarding the question of the source of the CO2, he suggests that the diurnal effect, i.e. the temperature variations between day and night, would have had an effect on the O2/CO2 introduced into the poorly sealed vessel of coarse fabric every night ('a daily cycle of breathing') as a result of vinegar vapour contraction. CO2 may indeed have been made available in that way, but the question is whether it would be sufficient to generate enough hydrocerussite/cerussite over a 10-day cycle. We argue in favour of a more reliable and continuous source of CO2, i.e. that which can be generated within a fermenting liquid by either aerobic (acetic acid) bacteria converting acetic acid to CO2 and water and/or facultative anaerobic yeasts (Saccharomyces) doing the same.
We have introduced the presence of the microorganisms because Theophrastus makes reference to the use of 'oxos'. Within a 5th-3rd c BCE context, oxos would have been translated as 'poor wine or vin ordinaire' or 'vinegar produced from oxos' (see entry in the Liddell-Scott Dictionary, www.perseus.tufts.edu). In the second meaning, it is made clear that vinegar is distinct from, and not synonymous with, oxos. Xenophon in his Anabasis (2.3.14) refers to oxos produced from 'boiling' (fermenting) palm wine. The issue of fermentation, both implied and stated by these authors, suggests that the biotic component plays a key role and needs to be brought forward in any psimythion-related discussion.
In the past, a number of researchers had queried whether the Theophrastus arrangement could actually produce anything other than lead acetate, the result of acetic acid vapour reacting with lead metal (Bailey 1932; Shear 1936; Stevenson 1955). Since lead acetate, as a water-soluble salt, would never have worked as a cosmetic or an artist's pigment, it follows that there must have been an obvious source of CO2 helping to convert lead acetate to lead carbonate.
In an attempt to tackle the question of the origin of the elusive CO 2 source, Caley and Richards (1956, 188) proposed that 'another source of the carbon dioxide may have been the so-called vinegar used in the process. If this was merely a spoiled grape juice undergoing both alcoholic and acetous fermentation, ample carbon dioxide would have been available'.
Grape skin hosts a number of aerobic (acetic acid bacteria (AAB)), facultative anaerobic (e.g. yeasts, like Saccharomyces spp.) and anaerobic (e.g. Acetobacterium spp.) microorganisms which play key roles in the conversion of sugars to alcohol and alcohol to vinegar. Mortimer and Polsinelli (1999) have demonstrated that one of them, Saccharomyces cerevisiae, constitutes about one-quarter of the totality of yeasts living on damaged grape skin (grapes that have been pressed and squeezed). Cavalieri et al. (2002) identified the same microorganism in the lees residue within a wine jar dated to the 4th millennium BCE from Egypt. The authors have suggested that this yeast served as an inoculum for bread and beer. Although yeasts are responsible for the transformation of grape juice to wine they are also capable of spoiling it. S. cerevisiae is the yeast primarily responsible for spoilage since it can resist high alcohol concentrations and low pH (Martorell et al. 2005).
Vinegar in Pb corrosion studies
In their study of the crystal growth of lead carbonates under different media, Sánchez-Navas et al. (2013) reproduced the 'stack' or 'Dutch' process, in which metallic lead was first oxidized by acetic acid vapours in the presence of moisture, and the resulting lead acetate was later transformed into basic lead carbonate (hydrocerussite) by the action of carbon dioxide (Gettens et al. 1967). The 'stack' process used the same raw materials, i.e. lead metal exposed to acetic acid vapours, but not in a closed pot, as in the Theophrastus case. The source of the CO2, in this case, was manure, which surrounded the stacked pots of lead/acetic acid.
Sánchez-Navas et al. (2013) used various sources of CO2, one of them being a liquid culture of Acetobacter sp., or acetic acid bacteria (AAB), which can make acetic acid from alcohol but can also oxidize acetic acid for the production of CO2 and H2O. They considered this a bio-mediated process and noted that if nitrogenated organic matter is present (for example proteins), then not only CO2 but also NH3 is produced. Ammonia (in the gas phase) would increase the pH on the metal plate, favouring, they argued, the formation of hydrocerussite. They also noted that, in the course of their reactions, the product formed was poorly crystalline hydrocerussite/cerussite at atmospheric partial pressures of CO2 (10^-3.5 atm), but when a higher CO2 pressure was used (1 atm) by flowing gas in the container, cerussite was formed. This is consistent with the higher carbonate content of cerussite, where 1 mol of Pb requires 1 mol CO2, compared with 0.67 mol of CO2 required per mole of Pb to form hydrocerussite.
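The 1 vs. 0.67 mol figures follow directly from the two mineral formulas; as a quick bookkeeping check (notation ours):

```latex
\text{cerussite } \mathrm{PbCO_3}:\quad
  \frac{n_{\mathrm{CO_2}}}{n_{\mathrm{Pb}}} = \frac{1}{1} = 1
\qquad\qquad
\text{hydrocerussite } \mathrm{Pb_3(CO_3)_2(OH)_2}:\quad
  \frac{n_{\mathrm{CO_2}}}{n_{\mathrm{Pb}}} = \frac{2}{3} \approx 0.67
```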
More recently, Gonzáles et al. (2019b) also reported that a CO2 producer is necessary to quickly form lead carbonates. They showed that the metal surface can have many layers of product, consisting of plumbonacrite (Pb5(CO3)3O(OH)2) at the lead surface, followed by outer layers of hydrocerussite (Pb3(CO3)2(OH)2) and cerussite (PbCO3), which are in direct contact with each other. The role of CO2, heat and UV light in the production of cerussite deserves further study. They noted that using vinegar as the source of acetic acid, and with no separate CO2 source beyond atmospheric CO2, no hydrocerussite or cerussite is produced, even after 1 month; yet Katsaros et al. (2010), by placing lead over vinegar in a jar sealed but with a breathable leather lid, formed hydrocerussite and cerussite after 10 days. Both systems had access to atmospheric CO2 and O2, the main difference being that Katsaros et al. heated their sample container (27-55°C) by placing it outside 'in the sun', followed by washing and drying the sample outside in sunlight. Washing removes soluble acetates, which do not convert on heating to carbonates (Martínez-Casado et al. 2016), but whether heating/sunlight converts plumbonacrite to cerussite is unknown. Finally, Gonzáles et al. (2019b) addressed the question of hydrocerussite stability in air and reported no evidence at all of cerussite formation after hydrocerussite was exposed to laboratory air for 26 months; it is therefore unlikely that the cerussite analysed here was formed during burial.
This paper is divided into two parts. In the first part, we investigate a select number of complete and fragmented pellets of psimythion in the collection of the NAM ('The NAM pellets' section and Fig. 1), on the basis of their composition as well as their metrology ('The NAM pellet metrology' section, Fig. 2, Table 2). The pellets appear to have been 'cast' in moulds and therefore it is possible to arrive at the shape of the latter by looking closely at the shape of the former (Fig. 3). The 'Method' section outlines the method of their examination, while the 'Results' section gives a description of the results. Chemically, the pellets are made of Pb with Ca as a minor element ('pXRF analyses of multiple fragments' section and Tables 3 and 4). Mineralogically, they are made primarily of cerussite ('XRD and SEM-EDAX analysis' section, Table 5 and Fig. 4a). In hypothesising a mechanism by which cerussite could be produced to the near exclusion of other phases, we focus on the biotic element within the 'oxos' as a key driver of CO2 production within the closed Theophrastus pot. For a full discussion, see the 'CO2-rich conditions prevailing within the Theophrastus pot: a hypothesis' section.
The NAM pellets
Complete and fragmented pellets of psimythion were retrieved from the contents of two pyxides, lidded ceramic vessels, NAM 13676a and NAM 13676b (Fig. 1a), recovered from burials dating to the 5th-4th c BCE, Athens. Also examined were 'loose' (without a pyxis) complete and fragmented pieces of psimythion (Fig. 1d) from a cemetery at Tanagra, Boeotia, c. 60 km north of Athens. The pyxis (NAM 11447) with which the latter are currently displayed is from Rhodes. These pyxis-less pellets from Tanagra are also dated to the 5th-4th c BCE.
NAM 13676a (Fig. 1b) Pyxis with lid, found together with pyxis B, 13676b, in the foundations of a house in a plot opposite the National Technical University, Patission Street, Athens, in the late 1890s. On the body of the pyxis (Fig. 1b) is a depiction of women's quarters, while on the lid, a band of egg-and-dot patterns surrounds a wreath of ivy leaves. The pyxis is attributed to the Painter of Athens 1585 and is dated 410-400 BCE. The 13676a collection consists of three complete pieces, one near-complete and 22 fragmented ones, a total of 26.
NAM 13676b (Fig. 1c) Miniature pyxis with lid, found together with pyxis A, 13676a, above. On the body is a depiction of a hare, a feline and swans, while on the lid, there are three female heads alternating with anthemia surrounded by a band of egg-and-dot pattern. It is also dated 410-400 BCE. Its contents include two complete pieces, two near-complete and numerous fragmentary ones.
NAM 11332 (Fig. 1d) Thirty-six white pellets from Tanagra, Boeotia: 14 of them are complete, 14 are near-complete and 8 are fragmentary. Figure 1d displays 19 of the 36. They are dated, by association, to the end of the 5th-end of the 4th c BCE, based on the chronology of the tombs in the Tanagra cemetery.
Method
pXRF
pXRF analyses took place at the National Archaeological Museum, Athens, with a Niton XL3t-GOLDD instrument, which has a 50 kV Ag X-ray tube, 80 MHz real-time digital signal processing and two processors for computation and data storage respectively. The TestAllGeo (TAG) mode was selected. Analysis time was set at 60 s and two measurements were taken on different spots on each fragment. An average of the two is shown here. Replicate analyses of the NIST SRM 2709a soil standard revealed satisfactory precision (< 10% Zr and < 5% Rb and Sr), good accuracy for Sr (< 10%), but poor accuracy for Zr and Rb, which were underestimated by over 90%, based on using NIST 2709a as the internal standard. However, the above elements are not crucial to the discussion here. The two elements of relevance here were Pb and Ca. Therefore, four external standards were prepared using reagent grade PbO and CaCO3. Results are shown in Table 3.
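Calibration against a handful of prepared standards of known composition typically reduces to fitting the instrument response; a minimal sketch of such a one-element linear calibration follows, with made-up counts and concentrations (the study's actual standards and raw readings are not reported here):

```python
# Minimal sketch of a linear pXRF calibration for one element (Pb).
# All numbers are illustrative assumptions, not the study's measurements.
import numpy as np

known_wt_pct_pb = np.array([10.0, 30.0, 60.0, 90.0])    # prepared standards
raw_counts_pb = np.array([1.1e4, 3.2e4, 6.5e4, 9.6e4])  # instrument response

# Fit wt% as a linear function of measured counts
slope, intercept = np.polyfit(raw_counts_pb, known_wt_pct_pb, 1)

sample_counts = 8.7e4                                    # an unknown sample
print(f"Pb ≈ {slope * sample_counts + intercept:.1f} wt%")
```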
SEM-EDAX
A tungsten filament Scanning Electron Microscope (W-SEM), HITACHI S-3700, combined with an Oxford Inca 350 with an 80 mm² X-Max detector, based at the University of Strathclyde's Advanced Materials Research Laboratory, was used for the elemental analysis of materials. Freshly fractured surfaces of specimens were gold-coated.
XRD
Fragments of psimythion pellets were analysed using X-ray diffraction (Bruker D8 Advance) with a Cu Kα X-ray source.
To determine the cerussite to hydrocerussite ratios within the analysed replicates, we performed quantitative Rietveld refinements on the XRD patterns using TOPAS and the crystallographic information files for cerussite (Antao and Hassan 2009) and hydrocerussite (Martinetto et al. 2002). The subsamples from NAM 11332, NAM 13676a and NAM 13676b consisted respectively of powdered psimythion only, both powdered and a single fragment of a psimythion pellet, and two fragments of one (or two) psimythion pellet(s).
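For readers unfamiliar with Rietveld quantification: packages such as TOPAS conventionally derive weight fractions from the refined scale factors via the standard Hill-Howard relation (the symbols below are the standard ones, not taken from the paper):

```latex
W_p \;=\; \frac{S_p\,(ZMV)_p}{\sum_i S_i\,(ZMV)_i}
```

where W_p is the weight fraction of phase p, S the refined scale factor, Z the number of formula units per unit cell, M the formula mass, and V the unit-cell volume.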
DNA Analyses
Given the presence and crucial role that microorganisms play in the reactions to be outlined below, it was desirable to find potential DNA signatures of the microorganisms within the samples. DNA half-life has been determined to be over 500 years (Allentoft et al. 2012), but, buried in soils, DNA can last 1,000-10,000 years (Thomsen and Willerslev 2015). DNA was extracted from approximately 100 μg of mineral material in triplicate using a Qiagen DNAeasy Soil extraction kit. DNA quantity and extraction purity were screened using UV microspectrometry (Epoch BioTek; Swindon, UK). Final extracts were further diluted 1/10 and 1/100 to be included in downstream processes along with the 'neat' (undiluted) extracts. Dilution of samples has often been applied in cases where enzyme-inhibiting materials (e.g. chlorophyll, metals or humic acids) could possibly exist. Here, the likelihood of Pb2+ posed a concern, although the extraction method does involve the precipitation removal of cationic elements. Assays for the detection of the Saccharomyces fungus and Acetobacter involved sensitive quantitative polymerase chain reactions (qPCR). Primers were based on those previously reported: Saccharomyces sp. (SFC1 forward primer: 5′ GGACTCTGGACATGCAAGAT and SCR1 reverse primer: 5′ ATACCCTTCTTAACACCTGGC; Salinas et al. 2009) and two sets of Acetobacter sp. primers (forward #1: GCTGGCGGCATGCTTAACACAT and reverse #1: GGAGGTGATCCAGCCGCAGGT; forward #2: TCAAGTCCTCATGGCCCTTATG and reverse #2: TACACACGTGCTACAATGGCG; González et al. 2005). qPCR conditions involved 10-μL reactions (5 μL GoTaq® qPCR Master Mix; Promega (Madison, WI, USA)) with 40 cycles of thermal cycling: 95°C for 10 s (DNA denaturation), 60°C for 20 s (primer annealing and elongation), on a BioRad iCycler (BioRad; Hercules, CA, USA). Genomic DNA from previously identified cultures was used as a positive control; molecular-grade water was used as a negative control.
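As a purely illustrative aside, the primer sequences quoted above can be sanity-checked for length and GC content with a few lines of plain Python (this is our check, not part of the published protocol):

```python
# Illustrative only: length and GC content of the primers quoted above,
# as a simple transcription sanity check.
primers = {
    "SFC1_fwd": "GGACTCTGGACATGCAAGAT",
    "SCR1_rev": "ATACCCTTCTTAACACCTGGC",
    "Aceto_fwd1": "GCTGGCGGCATGCTTAACACAT",
    "Aceto_rev1": "GGAGGTGATCCAGCCGCAGGT",
    "Aceto_fwd2": "TCAAGTCCTCATGGCCCTTATG",
    "Aceto_rev2": "TACACACGTGCTACAATGGCG",
}
for name, seq in primers.items():
    gc = 100 * sum(base in "GC" for base in seq) / len(seq)
    print(f"{name}: {len(seq)} nt, GC = {gc:.0f}%")
```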
Results
The NAM pellet metrology
Twelve pellets from Tanagra, Boeotia (11332) and three pellets from Athens (13676a) showed surprising homogeneity, with a mean diameter of 2.8 cm, thickness of 0.8 cm and a weight of 7 g, and with a uniform standard deviation of c. 15%. Interestingly, our data are in good agreement with those of Katsaros et al. (2010), who measured pellets from the Kerameikos. They report diameter = 2.75 cm, thickness = 0.6 cm and weight = 5.5 g, suggesting that the psimythion industry of the period operated on an accepted 'standard' of weights and measures (Table 2). The shape of the pellets varies, and as long as the dimensions and weights are kept constant, it is possible to make allowances for 'preferred' shapes by different workshops. The Tanagra pieces vary as follows: 11332-3 (Fig. 2a) is flat on both sides, while Fig. 2b and c show pellets with only one side concave and the other flat. The Athens pellets had a shallow convex/concave surface, with the concave surface facing up (Fig. 2e, right) and the convex surface in the opposite direction (Fig. 2e, left); similarly for those in Fig. 2d and f. The different cross-sections displayed by the pellets suggest that each workshop may have had its own preferred shape(s), but all were required to abide by prescribed dimensions and weight.
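A back-of-envelope check on the reported mean metrology (our calculation, treating a pellet as a flat cylinder and assuming crystalline cerussite at roughly 6.5 g/cm³ for comparison) suggests a bulk density well below that of the pure mineral, as would be expected for a compacted powder:

```python
# Back-of-envelope check on reported means: d = 2.8 cm, t = 0.8 cm, m = 7 g.
# Cylinder geometry and the ~6.5 g/cm^3 cerussite reference are our
# assumptions, not figures from the paper.
import math

d, t, m = 2.8, 0.8, 7.0            # cm, cm, g (means reported in the text)
v = math.pi * (d / 2) ** 2 * t     # cylinder volume, cm^3
print(f"volume ≈ {v:.1f} cm^3, bulk density ≈ {m / v:.1f} g/cm^3")
# ≈ 4.9 cm^3 and ≈ 1.4 g/cm^3, far below crystalline cerussite (~6.5 g/cm^3)
```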
The uniform shape and size of the NAM pellets suggest that they may have been formed within a bivalve mould, as shown in Fig. 3, the top part closing on the bottom in the manner shown here: the concave surface facing down and adhering to the bottom section of the mould, and the convex adhering to the top part. The mould may have been made of carved wood or ceramic, given the smooth surface of all pellets. The mould may have been covered by a medium, perhaps calcium-rich, preventing the adherence of the pellet to the mould and allowing each pellet to detach easily. The space left by an air bubble trapped between the surface of the psimythion and the bottom section of the mould can be seen in Fig. 2d.
pXRF analyses of multiple fragments
Non-destructive pXRF analysis was carried out on a number of samples at the NAM, both complete and fragmented (Tables 3 and 4). In total, c. 9 pellets of sample 13676b and 9 pellets of sample 11332 were analysed on both their flat and curved faces. The results were calibrated against prepared standards (PbO-CaCO3) for the only two elements, Pb and Ca (Table 3), which showed values above 0.5%. The 13676b samples contained c. 82% Pb and 2% Ca by weight on their flat face, and c. 70% Pb and 2.5% Ca on their curved faces. Ca values for the 11332 samples were low and there was no variation between the curved and the flat face. Table 4 shows uncalibrated data sets for the same sample sets with regard to Cu, Sn, Sb, Cd and Ag.
XRD and SEM-EDAX analysis
Permission was granted to take one fragmented pellet from each of the three groups, 13676a, 13676b and 11332, and subject it to destructive analysis via SEM-EDAX and XRD.
SEM-EDAX imaging and analysis (Fig. 4b) of the surface of a fragment of a pellet from 11332b showed large, well-formed cerussite crystals with a composition corresponding to c. 70% Pb, 15% O and 15% C; the results are uncalibrated but definitely point to cerussite. For XRD analysis, a number of samples were obtained. In the case of the pellet from pyxis 13676a, powder samples were taken from both surfaces, the convex and the concave. A section of that pellet was subsequently crushed, ground and mounted for XRD analysis. Similarly for pellets from pyxides 13676b and 11332. The XRD patterns for all are shown in Fig. 4a and the quantitative assessment in weight per cent of each is given in Table 5. Only two minerals are identified, namely cerussite and hydrocerussite. It is noted that the powder scraped off the concave surface has a slightly higher concentration of hydrocerussite than powder scraped off the convex surface. In one case, the hydrocerussite of that concave face is c. 11%. When the powder subsample is obtained following pulverisation of the original, the hydrocerussite concentration is c. 2%.
Summary
The entire collection of NAM psimythia consists of 103 pieces, in complete or mostly fragmentary state. XRD analysis of three pellets (various fractions) shows cerussite with hydrocerussite not exceeding c. 11%. Ca was detected by pXRF but was not visible in the XRD of the samples analysed here as part of a distinct phase. From the perspective of trace elements, there are elevated amounts of Ag, which may point to an Ag-rich Pb metal source, Laurion in Attica being the most likely candidate for the manufacture of such metal (Photos-Jones and Jones 1994).
DNA extraction and qPCR of some pellets
Community DNA was extracted from three pellets. Visually, the material had minimal evidence of organic matter, and DNA concentrations were below detection (< 2 ng/μL) and without any evidence of impurities. The lack of DNA evidence does not suffice to preclude the qPCR analysis, as the assay is innately more sensitive (detection limit: 100 DNA copies/g). However, the samples were negative (positive controls showed reaction efficiencies > 95%). As such, neither Saccharomyces nor Acetobacter was detected in any pellet. This suggests that reactions on the metal-mineral phase are likely abiotic, but the effect on the composition of gases within the pot may still have been largely microbially mediated.
CO2-rich conditions prevailing within the Theophrastus pot: a hypothesis
Theophrastus describes psimythion manufacture as consisting of two stages: (a) the preparation of synthetic cerussite and (b) the beneficiation thereof by grinding, dissolving and decanting of the soluble components, with the aim of enrichment and refinement with regard to both composition and particle size. The need to grind the flakes of psimythion into fine powder, its subsequent mixing with water allowing any soluble matter to dissolve, the settling of the insoluble parts and the decanting of the soluble ones: all of the above steps would have aimed at producing a pure final product.
Our proposed model for the reactions taking place within the Theophrastus pot is illustrated in Fig. 5. The prescribed 10-day cycle has been divided notionally into three stages to account for reactions taking place at different times and 'fronts'. These stages are not sharply divided but rather merge into one another and can also be reversible, if conditions within the pot change. First, there are reactions between the metal (lead) surface and the gaseous phase (i.e. the air space above the liquid). Second, there are reactions taking place within the oxos. The biotic component is made up of microorganisms, both aerobic (acetic acid bacteria (AAB)/Acetobacter) and anaerobic (yeast/Acetobacterium and other obligate bacterial fermenters), actively changing the chemistry of the oxos. These changes result in changes in the gas phase, via the production of O2/CO2/acetic acid vapour, which in turn have a direct effect on the reactions on the metal surface.
In stage 1, aerobes (AAB and Acetobacter) are active in an oxygen-rich environment, converting alcohol to acetic acid. But the same bacteria also respire aerobically, converting acetic acid to CO2 and water. Although at the start the gas phase in the pot is O2-rich, under the dual action of the aerobes (oxidation of alcohol and respiration/metabolic activity), it becomes increasingly rich in CO2, acetic acid and water vapour. On the metal surface, a reaction between Pb (metal), O2 and water vapour results in the formation of lead hydroxide; this, in turn, reacts with a vapour that is increasingly rich in acetic acid, forming lead acetate.
Stages 1 and 2 form a continuum, in the sense that there is no abrupt end to stage 1 before stage 2 begins. The reactions on the Pb metal surface continue to produce lead hydroxide (or lead oxide) and acetate; given the increased levels of CO2 (due to microbial respiration/metabolic activity), hydrocerussite and acetic acid are formed during stage 2. At the end of stage 1, O2 levels have been depleted due to microbial consumption and the formation of lead hydroxide stops. During stage 2, anaerobic microbes are (re)activated in the bottom of the pithos and, due to the depletion of O2, they will be active throughout the pithos during stage 3. In stage 3, the reactivation of the anaerobes within the oxos leads to the formation of additional CO2, due to the anaerobic respiration/metabolic activity of microbes converting acetic acid into CO2 and H2O. During the depletion of lead hydroxide (through the formation of hydrocerussite) and the increase in the levels of CO2, cerussite is formed at the expense of hydrocerussite. We should note, however, that we cannot rule out or confirm the formation of hydrocerussite via plumbonacrite, as an intermediate phase, as observed by Gonzáles et al. (2019a) during the Dutch process. Figure 6 gives an illustrative summary of the trend in gas and solid phase composition for stages 1-3 as described in Fig. 5. Stage 1 is characterized by a slow decrease in O2, a rapid increase in acetic acid vapour and a gradual increase in CO2. Lead hydroxide and lead acetate form on the lead metal surface. During stage 2, there is a levelling in the amount of acetic acid vapour, together with a sharp decrease in O2 and a continuing increase in CO2. On the metal surface, lead hydroxide gets depleted as hydrocerussite forms rapidly, followed by an initially slow rise in lead carbonate. Finally, in stage 3, acetic acid vapour is gradually depleted, while CO2 levels continue to rise, leading to the preferential formation of lead carbonate (cerussite), rather than basic lead carbonate (hydrocerussite), on the 'corroded' lead metal surface.
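One way to make the staged hypothesis concrete is to write out candidate balanced reactions; the scheme below is our illustrative reading of Fig. 5, not a definitive mechanism (the actual intermediates, e.g. plumbonacrite, remain uncertain, as noted above):

```latex
% Stage 1: oxidation and acetate formation on the metal surface
\mathrm{Pb + \tfrac{1}{2}O_2 + H_2O \rightarrow Pb(OH)_2}
\\
\mathrm{Pb(OH)_2 + 2\,CH_3COOH \rightarrow Pb(CH_3COO)_2 + 2\,H_2O}
\\
% Microbial CO2 supply: AAB oxidize ethanol to acetic acid, then respire it
\mathrm{CH_3CH_2OH + O_2 \rightarrow CH_3COOH + H_2O}
\\
\mathrm{CH_3COOH + 2\,O_2 \rightarrow 2\,CO_2 + 2\,H_2O}
\\
% Stage 2: hydrocerussite from acetate under rising CO2
\mathrm{3\,Pb(CH_3COO)_2 + 2\,CO_2 + 4\,H_2O \rightarrow Pb_3(CO_3)_2(OH)_2 + 6\,CH_3COOH}
\\
% Stage 3: carbonation of hydrocerussite to cerussite under CO2 excess
\mathrm{Pb_3(CO_3)_2(OH)_2 + CO_2 \rightarrow 3\,PbCO_3 + H_2O}
```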
Conclusions
The manufacture of a white lead-based pigment has had a long history, recorded in detail at least since the 4th c BCE. Given the commonality of its raw materials, i.e. lead metal and oxos, the simplicity of the installations and the relatively hands-off nature of the process, there has been a broad, albeit expressly unstated, assumption that from antiquity to the modern era 'one recipe' fitted all. This is not true, and earlier researchers took pains to recognize and report on the many variations within that long time-span, not simply chronologically, but also regionally (Pulsifer 1888, 196). We suggest that over that time-depth 'different' white lead-based minerals were produced and each period may have developed its own recipes and working conditions. Mass production of this white pigment continued well into modern times via the stack/Dutch process (Gettens et al. 1967). We argue that present archaeological evidence suggests that, for the period of concern here, synthetic cerussite was the main mineral intended to be produced. The question is how this was achieved.
The proposed hypothesis for the conditions prevailing within the Theophrastus pot is that they are dynamic, not static, throughout the 10-day cycle. Active (and inactive) microbial communities within the oxos control the composition of the gas phase and are in turn controlled by it. This dynamic state must have been well understood by the psimythion makers. Any disturbance thereof, even a mere opening of the lid at any stage in the 10-day cycle to 'check progress', or indeed any interruption of the process somewhere between stages 2 and 3 (with subsequent introduction of O2), would alter the dynamics and probably push towards the production of hydrocerussite, at the expense of cerussite. The above consistent 'push' of the equilibrium towards cerussite, combined with the standardisation of the pellet form, shape and weight (see 'The NAM pellet metrology' section), suggests an industry well on top of its own practice.
Returning to the NAM artefacts and our search for Saccharomyces and Acetobacter: as already mentioned, no such microorganisms were found. The two genera are most commonly associated with the suggested processes, but possibly not exclusively so. Their absence may be due to the concentrations of extracted DNA being practically 'nil'.
Psimythion has long been valued as an important white pigment in art and in cosmetics. In the period concerned here, it was also used as a mineral constituent of various medicines. Studying the material culture of the past on the basis of its use alone is only one way of looking at it and, as such, is usually limiting. It leaves unexplored other areas, ranging from aspects of its manufacture to its perceived value and symbolism (if any) within the cultural framework that generated it. In the case of psimythion, what is perhaps most intriguing is the implicit empirical understanding, on the part of psimythion manufacturers, not only of the range and dynamics of the chemical reactions, both biotic and abiotic, taking place within the pot, but also of their ability to control them. 'Oxos', its composition and properties, holds centre-stage, and a better understanding of its role in the early chemical synthesis of lead- and copper-based minerals is perhaps timely.
Fig. 6 Trends in gas and solid phase composition as a function of the 10-day cycle of events within the pot for making psimythion according to Theophrastus.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2014-06-11T00:00:00.000
|
13483077
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2014.00212/pdf",
"pdf_hash": "08da446ed385354527dcf8d38f5bcef53aa0eacc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43311",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "08da446ed385354527dcf8d38f5bcef53aa0eacc",
"year": 2014
}
|
pes2o/s2orc
|
The CXCL12/CXCR4 chemokine ligand/receptor axis in cardiovascular disease
The chemokine receptor CXCR4 and its ligand CXCL12 play an important homeostatic function by mediating the homing of progenitor cells in the bone marrow and regulating their mobilization into peripheral tissues upon injury or stress. Although the CXCL12/CXCR4 interaction has long been regarded as a monogamous relation, the identification of the pro-inflammatory chemokine macrophage migration inhibitory factor (MIF) as an important second ligand for CXCR4, and of CXCR7 as an alternative receptor for CXCL12, has undermined this interpretation and has considerably complicated the understanding of CXCL12/CXCR4 signaling and associated biological functions. This review aims to provide insight into the current concept of the CXCL12/CXCR4 axis in myocardial infarction (MI) and its underlying pathologies such as atherosclerosis and injury-induced vascular restenosis. It will discuss the main findings from in vitro studies, animal experiments and large-scale genome-wide association studies. The importance of the CXCL12/CXCR4 axis in progenitor cell homing and mobilization will be addressed, as will be the function of CXCR4 in different cell types involved in atherosclerosis. Finally, a potential translation of current knowledge on CXCR4 into future therapeutic application will be discussed.
INTRODUCTION
Chemokines are small 8-12 kDa cytokines that mediate cell chemotaxis and arrest by binding to their respective receptors on the cell surface (Blanchet et al., 2012) (Box 1). The chemokine receptor CXCR4 and its ligand CXCL12, also known as stromal cell-derived factor 1 (SDF-1), are attractive therapeutic targets in the treatment of cancer, as they support migration, proliferation, and survival of cancer cells (Teicher and Fricker, 2010; Domanska et al., 2013). Also, CXCR4 is intensively studied in different autoimmune diseases, including rheumatoid arthritis, systemic lupus erythematosus, and autoimmune disorders of the central nervous system such as multiple sclerosis, for its involvement in leukocyte chemotaxis in specific inflammatory conditions (Debnath et al., 2013; Domanska et al., 2013). Furthermore, the CXCL12/CXCR4 axis plays a crucial role in the homing of stem and progenitor cells in the bone marrow and controls their mobilization into peripheral blood and tissues in homeostatic conditions as well as after tissue injury or stress. Small molecule CXCR4 inhibitors are being intensively studied as mobilizers of hematopoietic stem cells for transplantation therapy of patients with specific types of cancer (Debnath et al., 2013) (Box 2). Upregulation of CXCL12 in hypoxic conditions, with subsequent mobilization of CXCR4-positive stem and progenitor cells (Ceradini et al., 2004), has prompted researchers to explore the role and therapeutic value of progenitor cells and the CXCL12/CXCR4 axis in diverse models of ischemic injury, including in heart, kidney, lung, and brain. In addition, CXCL12/CXCR4-mediated mobilization of progenitor cells has also been intensively investigated in models of vascular injury-induced restenosis, as observed after e.g. organ transplantation, balloon angioplasty, or stent implantation (Schober et al., 2006), as will be discussed in more detail later. Also, CXCR4 acts as an important coreceptor for human immunodeficiency virus (HIV), facilitating its entry into host CD4+ T-cells. In fact, the low-molecular weight CXCR4 antagonist AMD3100, the prototype of a group of so-called bicyclams, was originally identified in 1994 as a highly potent inhibitor of HIV replication in human T-cells. Only 3 years later, it was unraveled that blockade of CXCR4 was the underlying mechanism of the HIV-inhibitory function of AMD3100 (De Clercq, 2000; Debnath et al., 2013). Taken together, it is not surprising that the CXCL12/CXCR4 axis is one of the most studied chemokine ligand/receptor axes in a diversity of pathological disorders.
Here we will focus on the role of CXCR4 in coronary artery disease (CAD) (Box 3). This pathology is caused by atherosclerosis, a chronic inflammatory disease of the vessel wall characterized by the development of lesions through continuous and progressive infiltration and accumulation of lipids and leukocytes (Hansson and Hermansson, 2011; Weber and Noels, 2011).
Box 1 | Chemokines.
Chemokines constitute the largest family of cytokines and are defined as small molecules that trigger chemotaxis of cells along a concentration gradient. They are generally important in cell signaling (Bachelerie et al., 2014). Although chemokines are a large family, they all have the same basic structure, known as the chemokine fold, which consists of a short N-terminal region, an extended N-loop region, followed by three β-strands and one α-helix (Rajagopalan and Rajarathnam, 2006). To date, chemokines have been classified into four major subdivisions, being C-, CXC-, CC-, and CX3C-chemokines. This classification is based on the number and spacing of conserved N-terminal cysteine groups, which form disulfide bonds and are hence crucial for the spatial arrangement and stability of the chemokine (Rajagopalan and Rajarathnam, 2006). In addition to these classical chemokine groups, chemokine-like function (CLF) chemokines have recently been suggested to form a fifth subclass, which does not have the classical chemokine fold and N-terminal residues, but nevertheless exerts typical chemokine activities (Tillmann et al., 2013).
The chemokine receptors are classified according to the chemokines they bind and are divided into two groups, being G-protein-coupled receptors (GPCRs) and atypical chemokine receptors. GPCRs generally signal by activating G-proteins, leading to a plethora of cellular functions, whereas atypical chemokine receptors appear to shape chemokine gradients and scavenge chemokines in the context of inflammation independent of G-protein signaling (Bachelerie et al., 2014). Apart from the classical function of cell recruitment, chemokines also mediate arrest of rolling leukocytes through activation of integrins. Some chemokines, like CXCL12, also have a role in cell homeostasis, which together with chemotaxis and arrest is an important regulatory factor of atherogenesis, as discussed in detail recently (Zernecke and Weber, 2014).
Box 2 | CXCR4 Antagonists.
AMD3100, also known as Plerixafor or Mozobil (Genzyme Corp), is the first CXCR4 antagonist that has been approved by the FDA as a mobilizer of hematopoietic stem cells, in combination with G-CSF, in the treatment of patients with non-Hodgkin's lymphoma and multiple myeloma, and many other small molecule inhibitors of CXCR4 are under investigation or in clinical trials for different pathological settings, as recently discussed in detail (Debnath et al., 2013). AMD3100 is the prototype of the bis-tetraazamacrocycles (bicyclams), a class of highly potent HIV-1 antagonists. Subsequent studies examining the effect of structural modifications of AMD3100 on pharmacokinetic properties led to the discovery of AMD3465, a monocyclam analog of AMD3100, in which the second cyclam ring of AMD3100 was substituted by a pyridinylmethylene group. Like AMD3100, AMD3465 interferes with the binding of CXCL12 to CXCR4, thereby preventing CXCL12 from triggering CXCR4 endocytosis and CXCR4-induced intracellular signaling such as calcium mobilization and MAPK activation. As an advantage, AMD3465 shows a 10-fold higher effectiveness in inhibiting CXCR4 activity compared to AMD3100, but has no FDA approval to date and hence is less used in studies (Hatse et al., 2005).
Box 3 | Cardiovascular Disease.
Cardiovascular disease, including ischemic stroke and heart attack, is a leading cause of death and morbidity worldwide. Its underlying pathology, atherosclerosis, is defined as a chronic inflammatory disease of arterial walls (Hansson and Hermansson, 2011; Weber and Noels, 2011). Atherosclerotic lesion formation is initiated by dysfunction of the endothelial layer lining the arterial wall, caused by irritative stimuli such as dyslipidemia. Upon endothelial activation, monocytes start adhering to and migrating through the endothelium. Monocyte-derived macrophages in the arterial wall take up cholesterol-rich LDL particles, leading to the formation of so-called foam cells. As the atherosclerotic lesion progresses, smooth muscle cells (SMCs) migrate from the media to the intima, resident intimal SMCs proliferate and extracellular matrix molecules such as elastin, collagen and proteoglycans are synthesized. A necrotic core made of extracellular lipids derived from necrotic and apoptotic foam cells forms in advanced plaques, along with a fibrous cap consisting of collagen and SMCs. The ultimate complications of atherosclerosis are flow-limiting stenosis and plaque rupture, the latter triggering vessel occlusion through thrombus formation.
Restriction of the blood flow by extensive lesions, or thrombus formation caused by rupture of unstable plaques, is a main cause of myocardial ischemia and infarction. Interventions such as balloon angioplasty or stent implantation aim to re-open the occluded artery, but are often associated with a re-narrowing of the vessel lumen (restenosis) caused by injury-induced vascular remodeling and neointimal hyperplasia. Bypass grafting remains the gold standard therapy for severe, diffuse coronary artery occlusion, particularly in the elderly and in patients with diabetes. In this context, vein graft failure is a major problem, and late vein graft failure is associated with neointimal hyperplasia and accelerated atherosclerosis. Interestingly, recent genome-wide association studies (GWAS) revealed CXCL12 as an important candidate gene associated with CAD and myocardial infarction (MI), but the underlying mechanisms remain totally unclear (Burton et al., 2007; Samani et al., 2007; Kathiresan et al., 2009; Farouk et al., 2010; Schunkert et al., 2011) (Box 4).
To facilitate future research exploring the role of CXCL12 and CXCR4 in CAD, this review aims to discuss the current concept of the CXCL12/CXCR4 axis in atherosclerosis, injury-induced vascular restenosis and MI in relation to its role in progenitor cell mobilization and biological functions in atherosclerosis-relevant cell types. We will also introduce MIF as an alternative chemokine ligand for CXCR4, and CXCR7 as an additional receptor for CXCL12, to emphasize the complexity of identifying specific CXCL12-and CXCR4-associated functions through intertwining of chemokine (receptor) signaling.
CXCR4 AS A CHEMOKINE RECEPTOR FOR CXCL12 AND MIF
CXCR4 AND ITS CHEMOKINE LIGAND CXCL12
The chemokine receptor CXCR4 belongs to the family of seven-span transmembrane G-protein-coupled chemokine receptors (GPCRs). It is ubiquitously expressed and evolutionarily conserved, with 89% similarity between the human and mouse protein. In 1996, SDF-1, later called CXCL12, was identified as a ligand for CXCR4 (Bleul et al., 1996a; Oberlin et al., 1996). Similarly to CXCR4, CXCL12 is highly conserved, with human and mouse showing around 92% amino acid (aa) identity for both ubiquitously expressed isoforms α (89 aa) and β (93 aa) (Shirozu et al., 1995). Apart from these two classical isoforms, which are so far functionally indistinguishable and differ only in 4 amino acids at the C-terminal end, four additional isoforms have been identified in humans. These γ (119 aa), δ (140 aa), ε (90 aa), and φ (100 aa) isoforms show a more restricted expression pattern and have been much less studied. They result from differential splicing and only differ at their C-terminal region (Yu et al., 2006) (Figure 1).
Mice deficient for Cxcl12 or Cxcr4 die perinatally due to defects in hematopoiesis, vasculo-, cardio-, and neurogenesis (Nagasawa et al., 1996;Ma et al., 1998;Tachibana et al., 1998;Zou et al., 1998;Ara et al., 2005). The importance of the CXCL12/CXCR4 axis in embryonic development is associated with an essential role in homeostatic (progenitor) cell migration. This clarifies the classification of the CXCL12/CXCR4 axis into the "homeostatic/constitutive" chemokine ligand/receptor group, instead of the "inflammatory/inducible" group of chemokines, which groups chemokines that are upregulated during inflammation to drive immune responses (Bachelerie et al., 2014). The CXCR4 receptor and its ligand CXCL12 are mostly studied for their crucial role in the homing of (hematopoietic) progenitor cells in the bone marrow and their mobilization into the periphery in physiological and pathological conditions. Furthermore, the CXCL12/CXCR4 axis is involved in chemotaxis, cell arrest, angiogenesis, and cell survival. These functions explain not only the interest of oncologists in CXCR4, but also underlie the involvement of the CXCL12/CXCR4 axis in CAD, as will be discussed in more detail later.
Binding of CXCL12 to CXCR4 mediates intracellular signaling through a classical heterotrimeric G-protein, composed of a Gα, Gβ, and Gγ subunit. Of the four general classes of Gα proteins, named Gαs, Gαi, Gαq, and Gα12, CXCR4 signaling seems mainly coupled to the Gαi subunit. Receptor stimulation induces the dissociation of the heterotrimeric G-protein. The Gαi monomer inhibits adenylyl cyclase activity and triggers MAPK and PI3K pathway activation, whereas the Gβγ dimer triggers intracellular calcium mobilization through the activation of phospholipase C (Teicher and Fricker, 2010). Furthermore, CXCL12 induces the recruitment of β-arrestin to CXCR4. This mediates receptor desensitization through CXCR4 endocytosis (Orsini et al., 1999), but also reduces CXCR4 coupling to Gαi signaling, favoring β-arrestin-mediated MAPK activation. The latter was particularly shown upon overexpression-induced dimerization of CXCR4 with CXCR7, a chemokine receptor that will be discussed in more detail below (Decaillot et al., 2011) (Figure 2). Preferential signaling through G-proteins vs. β-arrestin is not only influenced through dimer formation of CXCR4 with CXCR7, but also by the oligomerization state of CXCL12 (Drury et al., 2011; Ray et al., 2012).
MIF AS AN ALTERNATIVE CHEMOKINE LIGAND FOR CXCR4
The CXCL12/CXCR4 axis was long considered a monogamous relation, until in 2007 the chemokine MIF was surprisingly identified as an alternative ligand for CXCR4 (Bernhagen et al., 2007). MIF is a ubiquitously expressed and highly conserved protein of 114 aa (excluding the N-terminal methionine, which is post-translationally removed), with 90% homology between human and mouse (Bernhagen et al., 1994). MIF plays an important role in cell recruitment and arrest through binding to the chemokine receptors CXCR2 and CXCR4 (Bernhagen et al., 2007). However, it cannot be classified into one of the four typical chemokine classes (C, CC, CXC, CX3C) due to the absence of a characteristic cysteine motif in its N-terminus, and is therefore called a chemokine-like function chemokine (Box 1). In contrast to CXCL12, MIF is secreted in response to diverse inflammatory stimuli, and has been associated with a clear pro-inflammatory and pro-atherogenic role in multiple studies of patients and animal models (Pan et al., 2004; Bernhagen et al., 2007). On the other hand, MIF can also exert protective functions, as observed after myocardial ischemia/reperfusion injury (MI/IRI) and in experimental liver fibrosis. We will address the double-edged role of MIF in myocardial ischemia below. For the involvement of MIF in chronic atherosclerosis and injury-induced restenosis, we refer to a recent review by Tillmann et al. (2013).
CXCR7, AN ALTERNATIVE RECEPTOR FOR CXCL12 HETERODIMERIZING WITH CXCR4
In 2005, CXCL12 was revealed to bind a second chemokine receptor, named CXCR7 (or RDC1), with an even 10-fold higher affinity compared with CXCR4 (Balabanian et al., 2005; Burns et al., 2006). Like CXCR4, CXCR7 is highly conserved between human and mouse. Its deletion in mice is perinatally lethal and associated with defective cardiovascular development (Sierro et al., 2007; Yu et al., 2011b). CXCR7 has been implicated in cell survival and adhesion (Burns et al., 2006) and can mediate CXCL12-directed T-cell chemotaxis independently from CXCR4 (Balabanian et al., 2005; Kumar et al., 2012). Binding of the chemokine ligands CXCL12 and CXCL11 (also called I-TAC) to CXCR7 enhances continuous CXCR7 internalization and delivery of the chemokine ligands to the lysosomes for degradation (Luker et al., 2010; Naumann et al., 2010). Such CXCR7-mediated regulation of available CXCL12 concentrations has associated CXCR7 with a function as decoy receptor, reducing acute CXCL12/CXCR4 signaling (Luker et al., 2010). Furthermore, study of a CXCR7 agonist recently suggested downregulation of CXCR4 protein levels by CXCR7 signaling as another negative regulatory mechanism of CXCR7 toward the CXCL12/CXCR4 axis (Uto-Konomi et al., 2013). Also, heterodimerization of CXCR7 with CXCR4 interferes with CXCR4-induced Gαi protein-mediated signaling and favors β-arrestin-linked signaling (Levoye et al., 2009; Decaillot et al., 2011). On the other hand, the CXCL12-scavenging function of CXCR7 can also positively influence CXCR4-mediated migration by preventing the downregulation of CXCR4 surface expression and function through excessive CXCL12 concentrations (Sanchez-Alcaniz et al., 2011). In addition to its modulatory effect on CXCL12/CXCR4 signaling, CXCR7 is able to mediate CXCL12-induced MAPK activation independently from CXCR4. Although the precise signaling mechanisms downstream of CXCR7 remain unclear, CXCR7 does not bind to or induce the activation of heterotrimeric G-proteins as typical in classical GPCR signaling, but depends on ligand-induced β-arrestin recruitment (Rajagopal et al., 2010) (Figure 2). This atypical signaling explains why CXCR7 was recently renamed atypical chemokine receptor (ACKR) 3 (Bachelerie et al., 2014).
FIGURE 2 | The CXCL12 signaling network. CXCL12 employs two distinct receptors, CXCR4 and CXCR7. CXCR4 additionally acts as a receptor for MIF, whereas CXCR7 can also bind CXCL11. Generally, stimulation of CXCR4 triggers preferentially G-protein-coupled signaling, whereas activation of CXCR7 or the CXCR4/CXCR7 complex induces β-arrestin-mediated signaling. Internalization of the receptors CXCR4 and CXCR7, and subsequent recycling to the cell membrane, is also mediated through β-arrestin. Upon binding to CXCR7, CXCL12 is internalized and subjected to lysosomal degradation. AKT/PKB, protein kinase B; MAPK, mitogen-activated protein kinase; MIF, macrophage migration inhibitory factor; PI3K, phosphatidylinositide 3-kinase; Gαβγ, heterotrimeric G-protein consisting of the subunits α, β, and γ.
As an important consideration, the widely used CXCR4 antagonist AMD3100 (Box 2) was recently revealed to be an allosteric agonist of CXCR7, being able to induce the recruitment of β-arrestin to CXCR7 at concentrations of 10 μM and above. Also, AMD3100 increased CXCL12 binding and CXCL12-triggered β-arrestin recruitment to CXCR7 (Kalatskaya et al., 2009). The CXCR4 antagonist TC14012 even showed a higher agonistic effect on CXCR7 (Gravel et al., 2010). These findings, together with the regulatory effects of CXCR7 on CXCL12/CXCR4 signaling, considerably complicate the interpretation of all studies aiming to dissect the molecular mechanisms and biological consequences of CXCL12/CXCR4 signaling specifically.
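Because the number of ligand-receptor pairs and signaling modes discussed in this section grows quickly, it may help to keep the network explicit. The snippet below simply encodes the interactions named above (and in Figure 2) as plain data for lookup; it is a bookkeeping aid, not a biological model, and the signaling annotations paraphrase the text.

# The ligand-receptor pairs and signaling modes named in this section,
# encoded as plain data (a bookkeeping aid, not a biological model).
LIGANDS = {
    "CXCL12": ("CXCR4", "CXCR7"),
    "MIF":    ("CXCR4", "CXCR2"),
    "CXCL11": ("CXCR7",),
}

SIGNALING = {  # paraphrased from the text / Figure 2
    "CXCR4":       "preferentially G-protein-coupled",
    "CXCR7":       "beta-arrestin-mediated (atypical; renamed ACKR3)",
    "CXCR4/CXCR7": "heterodimer favoring beta-arrestin-linked signaling",
}

def receptors_for(ligand):
    return LIGANDS.get(ligand, ())

def ligands_for(receptor):
    return [l for l, rs in LIGANDS.items() if receptor in rs]

if __name__ == "__main__":
    print("CXCR4 ligands:", ligands_for("CXCR4"))      # ['CXCL12', 'MIF']
    print("CXCL12 receptors:", receptors_for("CXCL12"))
    for unit, mode in SIGNALING.items():
        print(f"{unit}: {mode}")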
HOMING OF PROGENITOR CELLS IN THE BONE MARROW
Under physiological conditions, low numbers of hematopoietic stem and progenitor cells (HSPCs) constantly circulate from the bone marrow to the blood and back. The CXCL12/CXCR4 axis has been shown to play a crucial role in the homing and retention of HSPCs in the stem cell niches of the bone marrow (Mazo et al., 2011). First, CXCL12 secreted by endothelial cells of sinusoids in the bone marrow may trigger firm arrest of rolling CXCR4+ HSPCs through CXCR4-mediated integrin activation (Peled et al., 2000; Mazo et al., 2011). After vascular extravasation, HSPCs home into specialized bone marrow niches that provide optimal conditions for HSPC survival, self-renewal, and lineage differentiation and function (Jones and Wagers, 2008). High expression of CXCL12 by bone marrow stromal cells constitutes an important adhesive mechanism to retain CXCR4+ HSPCs in the bone marrow.
STRESS-INDUCED MOBILIZATION OF PROGENITOR CELLS
In conditions of stress or injury, HSPCs lose their anchorage in these niches and are increasingly mobilized into the circulation (Mazo et al., 2011). Interference with CXCL12/CXCR4-mediated retention is an important mechanism underlying HSPC mobilization. For example, granulocyte colony-stimulating factor (G-CSF or CSF3), a mobilizing cytokine frequently used in the clinic, reduces CXCL12 expression by bone marrow osteoblasts through depletion of endosteal macrophages that support osteoblast function (Semerad et al., 2005; Winkler et al., 2010). Furthermore, G-CSF has been associated with proteolytic inactivation of CXCR4 on HSPCs in the bone marrow (Levesque et al., 2003), despite enhanced surface expression of CXCR4 on bone marrow cells upon G-CSF treatment (Petit et al., 2002). In addition, G-CSF-induced mobilization of HSPCs involves activity of the protease dipeptidyl-peptidase 4 (DPP4, also known as CD26), which inactivates CXCL12 through proteolysis (Christopherson et al., 2003a,b; Campbell and Broxmeyer, 2008). A similar role in CXCL12 destabilization during HSPC mobilization has been suggested for the proteases matrix metallopeptidase (MMP) 9, neutrophil elastase and cathepsin G. However, although these proteases are upregulated in the bone marrow upon treatment with G-CSF (Levesque et al., 2002), combined inhibition of a broad range of metalloproteinases and neutrophil serine proteases (including MMP9, neutrophil elastase and cathepsin G) did not significantly affect G-CSF-triggered HSPC mobilization (Levesque et al., 2004).
A fourth mechanism proposed to interfere with CXCL12/CXCR4-mediated retention of HSPCs in the bone marrow is an increased plasma level of CXCL12, which may favor CXCL12-induced migration of HSPCs into the circulation over their retention in the bone marrow. For example, increased concentrations of CXCL12 in the blood due to injection of CXCL12-expressing adenovirus or stabilized methionine-CXCL12 induced the mobilization of hematopoietic cells and progenitors (Moore et al., 2001). However, it has been debated whether this is indeed due to CXCL12-induced mobilization, or rather to a reduced CXCL12/CXCR4-mediated retention of these cells in the bone marrow caused by CXCL12-induced downregulation of CXCR4 on their cell surface (Bonig and Papayannopoulou, 2012). Similarly, it is still debated whether the mobilization of HSPCs by CXCR4 antagonists, as recently discussed in detail (Rettig et al., 2012), is mostly the result of a direct blockade of CXCR4 on HSPCs in the bone marrow interfering with CXCR4-mediated homing (Karpova et al., 2013), or whether these antagonists may also mediate HSPC egress by altering the CXCL12 gradient in favor of mobilization. In the context of the latter, it was shown that treatment with the CXCR4 antagonist AMD3100 induces a fast release of CXCL12 from bone marrow stroma into the circulation, which may favor CXCL12/CXCR4-mediated mobilization of HSPCs into the circulation over their anchorage in bone marrow niches (Dar et al., 2011).
Clearly, the CXCL12/CXCR4 axis plays a pivotal but complex role in the homing and mobilization of progenitor cells, and its intertwinement with several other pathways further complicates the picture. For example, AMD3100-induced progenitor cell mobilization was recently shown to require the expression of MMP9 and endothelial nitric oxide synthase (eNOS, also known as NOS3) in bone marrow-derived cells (Jujo et al., 2010, 2013). Also, CXCL12-mediated homing and mobilization of progenitor cells is modulated by, e.g., fms-related tyrosine kinase 3 (FLT3) ligand, transforming growth factor (TGF) β (Basu and Broxmeyer, 2005), and CCR5 chemokine receptor ligands (Basu and Broxmeyer, 2009).
SURVIVAL AND PROLIFERATION OF PROGENITOR CELLS
In addition to homing and mobilization, the CXCL12/CXCR4 axis provides important survival and proliferative signals to progenitor cells (Lataillade et al., 2000; Lee et al., 2002; Broxmeyer et al., 2003a,b; Guo et al., 2005). Together, these functions underlie, to a considerable extent, the currently known involvement of the CXCL12/CXCR4 axis in CAD, as will be discussed in detail below.
THE CXCL12/CXCR4 AXIS IN CAD: IDENTIFYING POTENTIAL FUNCTIONS AND UNDERLYING MECHANISMS FROM IN VITRO, ANIMAL AND PATIENT OBSERVATIONS
The involvement of CXCR4 and its ligand CXCL12 in injury-induced restenosis and MI has mostly been linked to progenitor cell recruitment. In contrast, the role of the CXCL12/CXCR4 axis in native atherosclerosis remains largely unclear, with mostly only in vitro studies shedding some light on the effect of CXCR4 signaling on cell type-specific functions relevant in atherogenesis. Here, we aim to provide an overview of in vitro, animal and patient observations that may provide insight into the (cell type-specific) effects of CXCR4 signaling on native atherosclerosis, injury-induced restenosis and MI (Table 1, Supplemental Table 1).
Protective effects through cardiomyocyte protection and progenitor cell recruitment
CXCR4 and its ligand CXCL12 are expressed in cardiac myocytes and fibroblasts, and myocardial ischemia significantly upregulates CXCL12 (Pillarisetti and Gupta, 2001; Yamani et al., 2005; Hu et al., 2007). Different studies have revealed a protective role for CXCL12/CXCR4 signaling after MI and MI/IRI through survival effects on resident cardiomyocytes and recruitment of protective circulating cells. Intracardiac or intramyocardial injection of CXCL12 reduced infarction size and increased cardiac function after MI/IRI (Hu et al., 2007) and MI (Segers et al., 2007; Saxena et al., 2008), and cardioprotective effects were blocked with AMD3100 (Hu et al., 2007). CXCL12-induced cardioprotection was associated with improved survival of hypoxic myocardium and increased neo-angiogenesis, and was linked with anti-apoptotic AKT (also known as protein kinase B, PKB) and MAPK3/1 (also known as ERK1/2) signaling in cardiac myocytes and endothelial cells (Hu et al., 2007; Saxena et al., 2008). Also, delivery of CXCL12 triggered upregulation of vascular endothelial growth factor (VEGF) in the infarcted area and in cardiac endothelial cells, VEGF being an important regulator of angiogenesis and progenitor cell recruitment (Saxena et al., 2008). Furthermore, multiple studies have explored the effect of CXCL12 delivery into the myocardium through local treatment with CXCL12-overexpressing adenovirus, CXCL12-transgenic skeletal myoblasts or CXCL12-releasing hydrogels. Such exogenous CXCL12 delivery was associated with enhanced recruitment and incorporation of CXCR4+ stem and progenitor cells in the infarcted area (Abbott et al., 2004; Elmadbouh et al., 2007; Segers et al., 2007; Purcell et al., 2012). Stem cell transplantation during MI seems promising to improve cardiac outcome (Sanganalmath and Bolli, 2013), but it is debated whether this is primarily mediated through direct regeneration of cardiac myocytes or through protective paracrine effects on remodeling or preservation of injured tissue (Liehn et al., 2013b). For example, transplantation of endothelial progenitor cells (EPCs) was associated with increased neovascularization and improved cardiac function after MI and MI/IRI, despite variable effects on inflammation and apoptosis (Schuh et al., 2008, 2012). Overexpression of CXCL12 in transplanted EPCs further increased angiogenesis, however without significant improvement of cardiac function (Schuh et al., 2012). In a complementary approach, transplantation of mesenchymal stem cells (MSCs), which have been described as cardiac precursors, is widely investigated as a potential therapy after MI (Hatzistergos et al., 2010; Dong et al., 2012; Liehn et al., 2013b). Overexpression of CXCL12 in transplanted MSCs improved survival of cardiomyocytes after MI, however without evidence for cardiac regeneration (Zhang et al., 2007). MSCs with transgenic CXCR4 expression displayed increased incorporation into the ischemic area, which was associated with increased angiogenesis, myogenesis and cardiac function (Zhang et al., 2008; Huang et al., 2012).
In vitro experiments demonstrated that hypoxia increases CXCR4 expression on MSCs, and CXCR4-mediated migration of MSCs toward CXCL12 was shown to require PI3K/AKT signaling. The CXCL12/CXCR4 axis was also at least partly responsible for the beneficial effects of VEGF-overexpressing MSCs by mediating the recruitment of cardiac stem cells to the infarcted region (Tang et al., 2011). Furthermore, cardioprotective effects of transplanted MSCs were shown to require myocardial CXCR4 expression (Dong et al., 2012).
The efficiency of AMD3100 in mobilizing progenitor cells, including EPCs, from the bone marrow was associated with enhanced accumulation of progenitor cells in the infarcted tissue, enhanced neovascularization and improved cardiac function after single AMD3100 treatment in an MI and MI/IRI model (Jujo et al., 2010, 2013). Similarly, cardioprotective effects of daily AMD3100 injections were reported after MI in rats (Proulx et al., 2007). However, two independent groups revealed a reduced cardiac outcome after MI upon chronic AMD3100 administration (Dai et al., 2010; Jujo et al., 2010). This was associated either with a reduced incorporation of progenitor cells in the infarcted region despite enhanced mobilization (Jujo et al., 2010), or with an increased proliferation of resident cardiac progenitor cells. This latter observation raised the question of whether increased proliferation may be linked to reduced differentiation, i.e., whether cardioprotective CXCR4 signaling may be required to direct cardiac progenitors toward cardiac commitment to ensure their participation in repair of injured myocardium (Dai et al., 2010).
Interestingly, platelet-surface binding of CXCL12, which correlated with platelet activation, was significantly increased in patients with acute coronary syndrome (ACS) compared to patients with stable angina pectoris, and correlated with the number of circulating hematopoietic progenitor cells (Stellos et al., 2009). Similarly, surface expression of CXCR7, but not CXCR4, was found to be significantly enhanced on platelets from patients with ACS compared to subjects with stable CAD (Rath et al., 2014). These data may be supported by recent findings that CXCL12 upregulates CXCR7 surface availability on platelets, as will be discussed in more detail later. Of note, platelet CXCR7 surface expression levels above average in patients with ACS positively correlated with an increase in the left ventricular ejection fraction as a measure of recovery after MI, suggesting a beneficial effect of CXCL12/CXCR7 signaling on functional recovery in ACS patients (Rath et al., 2014).
Double-edged role of CXCR4 in the ischemic heart
Despite cardioprotective functions of the CXCL12/CXCR4 axis in the ischemic heart, Cxcr4 heterozygosity in mice reduced infarct size after MI, albeit without affecting cardiac function. This was explained by a counterbalance of, on the one hand, reduced neovascularization, and on the other hand reduced inflammation with fewer neutrophils and a preferential recruitment of Gr1-low over inflammatory Gr1-high monocytes (Liehn et al., 2011). Likewise, adenovirus-mediated overexpression of CXCR4 in the heart increased infarct size and reduced cardiac function. This was associated with an enhanced recruitment of inflammatory cells, enhanced tumor necrosis factor (TNF) α expression and increased apoptosis of cardiomyocytes (Chen et al., 2010a). On the other hand, deficiency of Cxcr4 specifically in cardiac myocytes did not affect heart function or remodeling after MI (Agarwal et al., 2010).
Together, these studies demonstrate a double-edged role of CXCR4 in the ischemic heart and call for further investigation of the role of CXCR4 and its chemokine ligands in the inflammatory processes associated with MI. In this context, the alternative CXCR4 ligand MIF is also upregulated in myocardium and plasma after MI (Yu et al., 2001, 2003) and can exert cardioprotection. Underlying mechanisms include activation of AMP-activated protein kinase (AMPK) (Miller et al., 2008), reduction of oxidative stress (Koga et al., 2011) and inhibition of c-Jun N-terminal kinase (JNK)-mediated apoptosis (Qi et al., 2009). An important role for the chemokine receptor CXCR2 on resident cardiac cells in MIF-mediated myocardial protection after ischemic injury was recently shown (Liehn et al., 2013a), but it remains unclear whether MIF/CXCR4 signaling in cardiomyocytes may also contribute to cardioprotection. Furthermore, MIF-induced recruitment and differentiation of protective EPCs through CXCR4 and CXCR2 may be involved in the cardioprotective effects of MIF (Simons et al., 2011; Asare et al., 2013; Kanzler et al., 2013). On the other hand, adverse effects of MIF through myocardial infiltration of inflammatory cells were revealed after prolonged ischemic injury in both MI and MI/IRI (Gao et al., 2011; White et al., 2013). Although these studies did not examine which MIF receptor was involved, a recent report demonstrated an important role for CXCR2 in mediating MIF-triggered monocyte recruitment in the ischemic heart (Liehn et al., 2013a). Here too, an additional involvement of MIF/CXCR4 interaction in ischemic inflammatory cell recruitment remains unclear. In conclusion, recent data indicate that, like CXCR4, MIF plays a double-edged role in myocardial ischemia. However, the relative importance of CXCR4 vs. other MIF receptors, such as CXCR2, in MIF-mediated effects remains unclear.
CXCR4 IN ARTERIAL INJURY-INDUCED RESTENOSIS
The CXCL12/CXCR4 axis has been revealed to contribute to injury-induced restenosis, which is a major problem after coronary revascularization. CXCL12 expression is increased after vascular injury through enhanced hypoxia-inducible factor (HIF) 1α expression (Schober et al., 2003; Karshovska et al., 2007) and is preceded and mediated by apoptosis in the injured vessel wall (Zernecke et al., 2005). Systemic treatment of mice with a CXCL12-blocking antibody or a CXCR4 antagonist reduced injury-induced neointimal size and content of smooth muscle cells (SMCs), which are a driving force of neointimal hyperplasia. Similar results were obtained after transplantation with Cxcr4−/− bone marrow or local treatment with a dysfunctional CXCL12 mutant (Schober et al., 2003; Zernecke et al., 2005; Karshovska et al., 2008; Hamesch et al., 2012). In these studies, reduced SMC content was associated with a reduction in injury-induced mobilization of Lin−Sca1+ progenitor cells, which were shown to be incorporated into neointimal lesions and to be capable of differentiating into SMCs (Schober et al., 2003; Zernecke et al., 2005). This corresponded with previous observations that bone marrow-derived cells can be recruited to mechanically injured arteries, where they can differentiate into vascular SMCs (VSMCs) and even endothelial cells (ECs) (Sata et al., 2002; Tanaka and Sata, 2008).
Multiple studies have linked EPCs with enhanced reendothelialization and reduced neointimal hyperplasia after vessel injury (Werner et al., 2003; Kong et al., 2004). CXCR4 was shown to contribute to the adhesion of in vitro mononuclear cell-derived EPCs to injured arteries, although to a lesser extent than CXCR2. Whereas CXCR4 blockade interfered with the capacity of infused EPCs to promote reendothelialization and reduce neointimal lesion size after carotid artery injury (Yin et al., 2010; Li et al., 2011), overexpression of CXCR4 promoted CXCL12-triggered migration and adhesion of EPCs in vitro and enhanced their capacity to promote endothelial recovery after vascular denudation in vivo (Chen et al., 2010b). However, it remains debated whether EPCs really affect injury-induced restenosis through direct incorporation into the injured vascular wall, or rather through paracrine effects on resident vascular cells by secreting mitogenic cytokines and growth factors such as VEGF (Yoshioka et al., 2006; Iwata et al., 2010; Nemenoff et al., 2011; Hagensen et al., 2012; Merkulova-Rainon et al., 2012).
Furthermore, blockade of CXCR4 was shown to reduce cellular proliferation and the macrophage content of neointimal lesions after femoral artery injury, which was associated with reduced neointimal lesion size (Olive et al., 2008). In the same mouse model, the ability of macrophage colony stimulating factor (M-CSF or CSF1) to accelerate injury-induced neointimal hyperplasia was abolished upon treatment with AMD3100. This was associated with reduced lesional incorporation of CXCR4+ cells despite an enhanced white blood cell count in the peripheral blood, suggesting a detrimental role for CXCR4 in injury-induced neointimal hyperplasia by mediating the recruitment of inflammatory cells into neointimal lesions (Shiba et al., 2007).
In conclusion, blocking the CXCL12/CXCR4 axis interferes with injury-induced neointimal hyperplasia through reduced recruitment of CXCR4+ smooth muscle progenitors and inflammatory cells to the site of injury. On the other hand, CXCR4 enhances the ability of infused EPCs to adhere to injured vessels and promote reendothelialization. Although these data derive from in vitro cultured EPCs, which may behave differently from the rather vaguely defined "circulating EPCs" (Steinmetz et al., 2010; Rennert et al., 2012), the current findings suggest a double-edged role for CXCR4 in injury-induced restenosis through recruitment of (progenitor) cells that either stimulate or interfere with neointimal hyperplasia. Such a double-edged function was recently also observed upon tamoxifen-induced endothelial-specific deficiency of Cxcr4 (Cxcr4 EC-KO) in apolipoprotein E-deficient (Apoe−/−) mice, which significantly decreased mobilization of both circulating Sca1+Flk1+Cd31+ cells, often referred to as EPCs, and of Lin−Sca1+ cells upon wire-mediated injury of the carotid artery. Furthermore, Cxcr4 EC-KO Apoe−/− mice showed a reduced reendothelialization efficiency, which was linked with a decrease in endothelial wound healing and in vivo proliferation. As a net result, endothelial-specific Cxcr4 deficiency triggered the formation of larger neointimal lesions, displaying an increase in inflammatory macrophages but a reduced SMC content (Noels et al., 2014). Whether CXCR4 signaling also affects specific functions of VSMCs or macrophages in the context of vascular injury remains to be investigated.
Vascular progenitor cells
In contrast to conditions of MI and injury-induced restenosis, not much is known about a potential involvement of bone marrow-derived vascular progenitor cells in native atherosclerosis, as recently summarized for EPCs (Du et al., 2012) and vascular smooth muscle progenitor cells (SPCs) (Merkulova-Rainon et al., 2012). It was shown that infusion of EPCs, as well as treatment with AMD3100, which triggered EPC mobilization, enhanced plaque regression after normalization of plasma lipid levels in Reversa mice (Yao et al., 2012). In contrast, a study in Apoe−/− mice did not reveal an atheroprotective effect of systemic AMD3100 treatment, but rather found AMD3100 to abolish the beneficial effects of apoptotic body treatment on atherosclerosis. Endothelial apoptotic bodies were shown to contain miRNA126, which is transferred to neighboring ECs to induce the expression and release of CXCL12 by unleashing autoregulatory CXCR4 signaling. Injection of EC-derived apoptotic bodies into Apoe−/− mice increased CXCL12 expression in atherosclerotic lesions and promoted progenitor cell mobilization and their recruitment to the endothelial lining of the lesions. Lesions of mice treated with apoptotic bodies were generally smaller in size and exhibited a less inflammatory phenotype with reduced macrophage and apoptotic cell content (Zernecke et al., 2009). Thus, despite contradictory findings on the effect of AMD3100 treatment, both studies suggest an atheroprotective function for CXCL12/CXCR4 signaling through mobilization of protective EPCs. Interestingly, patients with CAD were revealed to show lower levels and a decreased migratory response of circulating EPCs (Vasa et al., 2001). In addition, systemic treatment of mice with CXCL12 in a partial ligation model (which induces advanced atherosclerotic lesions with an unstable phenotype) enhanced the recruitment of Lin−Sca1+ SPCs and promoted a more stable plaque phenotype (Akhtar et al., 2013). Such a plaque-stabilizing role for SPCs was also suggested earlier by Zoll et al., who showed that injection of SPCs reduced atherosclerotic lesion size and improved lesion stability (Zoll et al., 2008). Together, these studies reveal an atheroprotective function for CXCL12/CXCR4 signaling through recruitment of protective EPCs and plaque-stabilizing SPCs.
However, others reported an atheroprogressive role for vascular progenitor cells. Inducing apoptosis of rare lesional bone marrow-derived SMCs substantially decreased plaque size (Yu et al., 2011a), and George et al. found EPC transfer to increase atherosclerosis in mice (George et al., 2005). Contradictory findings on the presence or function of vascular progenitor cells in chronic atherosclerosis may be related to the atherosclerosis model and lesion stage, but also to the vague definition of such progenitor cells. Further research is clearly needed to improve the phenotypical and functional characterization of vascular progenitor cell subsets in the context of atherosclerosis before a role of the CXCL12/CXCR4 axis in their mobilization and potential functions in this pathology can be investigated in detail.
Hematopoietic progenitor cells
Interestingly, MI was recently revealed to accelerate atherosclerosis in mice. MI reduced CXCL12 expression in the bone marrow through sympathetic nervous system activity and signaling through the β3-adrenergic receptor (β3AR or ADRB3). In this way, MI enhanced the mobilization of HSPCs from bone marrow niches and their hosting in the spleen, triggering myelopoiesis and increased atherosclerosis up to 3 months after coronary ligation (Dutta et al., 2012). Although the latter study did not address the underlying mechanisms of CXCL12 upregulation upon β3AR blockade, nor examine potential effects on CXCR4 expression or function, LaRocca et al. revealed a direct (physical) interaction of β2-adrenergic receptors with CXCR4, resulting in modification of the contractile nature of cardiomyocytes (Larocca et al., 2010). It is further known that β2- and β3-adrenergic receptors cooperate during progenitor cell mobilization with partial functional redundancy under stress (Mendez-Ferrer et al., 2010). Hence, one may speculate that CXCR4 may also functionally interact with other adrenergic receptors, such as β3AR, in a direct or indirect way.
CXCR4 IN NATIVE ATHEROSCLEROSIS: CELL TYPE-SPECIFIC FUNCTIONS?
In addition to a role for CXCR4 through progenitor cell mobilization, CXCR4 may affect native atherogenesis by modifying atherosclerosis-relevant cellular functions. CXCR4 expression has been described on many cell types, including monocytes and macrophages, neutrophils (Bruhl et al., 2003), T-cells (Murphy et al., 2000), B-cells (Nie et al., 2004), mature ECs (Gupta et al., 1998), and SMCs (Nemenoff et al., 2008; Jie et al., 2010). All of these cells play distinct roles in the pathophysiology of atherosclerosis, but not much is known about the precise role of CXCR4 in individual cellular responses.
Monocytes and macrophages
Monocytes and macrophages have been proven to be of outstanding importance in the development and progression of mature atherosclerotic lesions, and their depletion was recognized as atheroprotective already 20 years ago (Ylitalo et al., 1994). As the picture grows, it becomes more and more evident that depletion of individual cell subsets does not serve as a realistic therapeutic approach; hence, understanding the details of cell-cell interactions and communication gains importance (Weber and Noels, 2011).
CXCR4 expression and potential functions: studies of human cells
Macrophages and foam cells. CXCR4 is expressed on all monocyte subsets, with highest expression on classical human monocytes. This contrasts with observations in mice, which show highest CXCR4 levels on non-classical monocytes (Ingersoll et al., 2010), as will be discussed later in more detail. Gupta et al. revealed high expression of CXCR4 on human blood monocytes, which declined as they differentiated into macrophages but was restored after 24 h, peaking at 7 days. Interestingly, CXCR4 expression on macrophages could be further upregulated by stimulation with oxidized low-density lipoprotein (oxLDL). From this the authors conclude that, although there is no direct evidence, restoration of CXCR4 expression on lesional macrophages and its further upregulation by oxLDL during foam cell formation may contribute to migration of intimal foam cells and the subsequent progression of plaque growth (Gupta et al., 1999). Furthermore, CXCL12/CXCR4 signaling was linked with enhanced macropinocytosis in leukocytes (Tanaka et al., 2012), suggesting that a lack of CXCR4 may also influence (modified) lipid accumulation in macrophages and other lesional cells. In contrast, a recent study found CXCL12 to induce phagocytosis and the uptake of acetylated LDL in THP1-derived macrophages specifically through binding CXCR7, but not CXCR4 (Ma et al., 2013). In this context, the CXCR7 agonist CCX771 was recently shown to increase the uptake of very low-density lipoprotein (VLDL) in adipocytes. Correspondingly, treatment of Apoe−/− mice with CCX771 reduced the levels of circulating VLDL and decreased atherosclerosis. Whether similar mechanisms can be identified in other cell types such as macrophages remains to be examined, as do the exact mechanisms underlying CXCR7-mediated uptake of VLDL or modified lipids.
Patients with CAD.
In patients with stable and unstable angina pectoris, CXCR4 surface expression on peripheral blood mononuclear cells (PBMCs) was decreased, and CXCL12 levels in patients with unstable angina pectoris were particularly low. However, in vitro treatment of PBMCs from these patients with high concentrations of CXCL12 reduced mRNA and protein levels of the chemokine ligands CCL2 and CXCL8, MMP9 and tissue factor, while increasing tissue inhibitor of metalloproteinases (TIMP) 1. Therefore, high (local) concentrations of CXCL12 may mediate anti-inflammatory and matrix-stabilizing effects promoting plaque stabilization, and may be beneficial in angina pectoris and ACS (Damas et al., 2002). Another study showed autocrine CXCL12 signaling to downregulate expression of runt-related transcription factor (RUNX) 3 in human monocytes/macrophages, thereby promoting a pro-angiogenic but immunosuppressive phenotype of these cells (Sanchez-Martin et al., 2011).
Compounds regulating CXCR4. Several compounds have been identified that modify CXCR4-mediated immune responses in vitro.
Glucocorticoids have been demonstrated to upregulate CXCR4 expression on human blood monocytes, and Caulfield et al. assume that increased CXCR4 expression sensitizes monocytes to tissue-resident CXCL12, guiding monocytes away from sites of inflammation with supposedly lesser local CXCL12 release (Caulfield et al., 2002). The latter seemingly contradicts another study, which revealed high expression of CXCL12 in SMCs, ECs and macrophages in human atherosclerotic plaques but not in normal vessels (Abi-Younes et al., 2000). This would rather argue for a chemotactic gradient of CXCL12 toward the site of inflammation, although this might be disease- and cell type-dependent. As a second example, monocyte CXCR4 expression has been shown to be modulated by hydrogen sulfide (H2S) donors. H2S donors have lately been recognized as vasoprotective agents, and changes in H2S may affect atherosclerosis. Interestingly, a synthetic slow H2S releaser (GYY4137) inhibited oxLDL-induced foam cell formation and cholesterol esterification in RAW264 cells and primary human monocytes, which was accompanied by decreased CXCR4 expression. In contrast, angiotensin-converting enzyme (ACE) inhibitors, widely used to treat high blood pressure by interfering with the renin-angiotensin system, did not affect CXCR4 expression on primary human monocytes and THP1 cells (Apostolakis et al., 2007). The same group also assessed whether angiotensin I and II treatment would have a direct impact on chemokine receptor expression on THP1 cells; again, CXCR4 expression was not altered (Apostolakis et al., 2010).
Conjugated linoleic acids (CLA) were shown to influence human peripheral blood monocyte function by suppressing CD18 expression, thereby reducing the number of β2-integrins expressed on the external surface and decreasing adhesion to activated ECs. In addition, CLA reduced CXCR4 expression, resulting in only minor initiation of "inside-out" signaling. As a result, the partial, incomplete activation of β2-integrins further reduces the adherence of leukocytes and their migration toward CXCL12 (De Gaetano et al., 2013).
Statins, which lower intracellular cholesterol synthesis, are the gold standard for treating hyperlipidemia-associated atherosclerosis, but have also been reported to exert numerous other pleiotropic effects. To examine whether statin withdrawal would affect human monocyte subsets in patients with stable CAD, statin treatment was withdrawn for 2 weeks. Subsequent evaluation of blood monocyte subsets did not reveal any differences in numbers, but showed downregulation of Toll-like receptor (TLR) 4 on all subsets and decreased expression of CXCR4 on classical monocytes (CD14++CD16−) (Jaipersad et al., 2013). Interestingly, high-dose statin treatment was also shown to reduce general CXCL12 plasma levels in hyperlipidemic patients (Camnitz et al., 2012). However, it remains elusive whether statin treatment, with a subsequent increase in CXCR4 expression but decreased CXCL12 titers, points at a direct pro- or anti-atherosclerotic role of the CXCL12/CXCR4 axis.
Another mechanism recently recognized to drive atherosclerotic lesion growth is hypoxia (Marsch et al., 2013). Notably, hypoxia-induced upregulation of the transcriptional activator HIF1 triggers CXCR4 mRNA and protein expression in human monocytes. In addition, these cells showed increased migration toward CXCL12 under hypoxic conditions. Based on these findings, the authors conclude that the hypoxia-HIF1-CXCR4 pathway may regulate cell trafficking and localization into hypoxic tissues, such as atherosclerotic lesions (Schioppa et al., 2003).
In contrast, CXCR4 expression on macrophages was suppressed in the presence of M-CSF. Increased M-CSF titers have been implicated in the pathogenesis of atherosclerosis, and it was shown that M-CSF delivers a pro-atherogenic signal to human macrophages by stimulating cholesterol accumulation and pro-inflammatory chemokine secretion. Thus, M-CSF-induced suppression of macrophage CXCR4 expression may point at an atheroprotective function of downstream CXCR4 signaling events (Irvine et al., 2009).
CXCR4 expression and potential functions: mouse studies.
The above-mentioned studies on human monocytes and macrophages suggest diverse potential roles for CXCR4 in atherosclerosis; still, mouse studies are equally elusive, without a clear indication of a pro- or anti-atherogenic role for CXCL12/CXCR4. In contrast to human monocytes, mouse CXCR4 is more highly expressed on non-classical monocytes, and it is not clear whether this necessarily implies functional differences between individual mouse and human monocyte subsets (Ingersoll et al., 2010).
One study reported an interesting finding using a mutant non-heparan sulfate-binding CXCL12 (HSmCXCL12). In vitro, HSmCXCL12 failed to promote transendothelial migration of PBMCs when used as chemoattractant in the bottom well of transwell plates, and inhibited the haptotactic response to wild-type CCL7, CXCL12, and CXCL8. Further, intravenous administration of HSmCXCL12 into mice also repressed the recruitment of lymphocytes and mononuclear phagocytes to air pouches injected with CXCL12. Moreover, repetitive administration of HSmCXCL12 in vivo reduced leukocyte-surface expression of CXCR4, as well as CXCL12-induced chemotaxis and adhesion. From this the authors conclude that non-heparan sulfate-binding variants of CXCL12 can mediate a powerful anti-inflammatory effect through induction of chronic CXCR4 internalization on leukocytes in vivo, subsequently leading to receptor desensitization, which putatively explains the functional deficits of these leukocytes (O'Boyle et al., 2009). Hence, it could be interesting to carefully dissect the differential functional consequences of receptor desensitization through receptor internalization compared to CXCR4 blockade with AMD3100 as reported by Zernecke et al. (2008).
A potential pro-inflammatory role for wild-type CXCL12 may also be deduced from findings from Liu et al., who revealed decreased CXCR4 expression on RAW264 cells and primary human monocytes treated with the H2S releaser GYY4137, which was introduced above. Administration of GYY4137 into Apoe−/− mice receiving a high-fat diet for 4 weeks decreased atherosclerotic plaque formation and partially restored aortic endothelium-dependent relaxation. Further, intercellular adhesion molecule (ICAM) 1, TNF-α and interleukin (IL) 6 mRNA expression, as well as superoxide generation, declined in the aorta of mice treated with GYY4137. Similarly, and again paralleling in vitro studies with human monocytes, CLA treatment of Apoe−/− mice with pre-established atherosclerosis induced lesion regression by reducing leukocyte adhesion and decreasing CD18 expression on classical monocytes (De Gaetano et al., 2013).
The above studies may point at a pro-atherogenic role of CXCR4 signaling in atherosclerosis; however, this may strongly depend on the binding partner interacting with CXCR4. As already described, CXCL12 is not the only ligand for CXCR4, and MIF, an alternative ligand for CXCR4, has a strong pro-atherogenic impact. In this context, Bernhagen et al. revealed that antibody-mediated neutralization of MIF, but not CXCL12, induced atherosclerotic lesion regression in Apoe−/− mice (Bernhagen et al., 2007). In line with this, knockout of Mif in Ldlr−/− mice also resulted in diminished atherosclerosis (Pan et al., 2004). This suggests potentially different roles of CXCR4 and its ligand CXCL12 in atherosclerosis through the interplay of CXCR4 with MIF. Similarly, interplay of CXCL12 with other signaling molecules may modify its inflammatory effects. For example, hetero-complexes of high mobility group box (HMGB) 1 and CXCL12 were reported to induce inflammatory cell recruitment to injured tissue, which was not the case for each compound alone. Further, these complexes exclusively bound to CXCR4, inducing a conformational rearrangement of CXCR4 that differed from the single binding of CXCL12 to its receptor (Schiraldi et al., 2012).
Neutrophils
Treatment of mice with the CXCR4 antagonist AMD3100 induced cell egress from the bone marrow (Schiraldi et al., 2012), which is in line with findings by Zernecke et al., who described increased leukocytosis, mostly neutrophil mobilization, and enhanced lesion formation in Apoe−/− mice receiving a cholesterol-rich diet for 12 weeks while supplemented with AMD3100. Interestingly, monocyte numbers were only moderately enhanced in these mice and, according to the authors, lesion growth was mainly attributable to increased plaque neutrophils and enhanced apoptosis.
Notably, a growing body of evidence underlines the role of neutrophils in atherogenesis (Drechsler et al., 2010; Doring et al., 2012), and it has been recognized that the CXCL12/CXCR4 axis maintains neutrophil homeostasis primarily by regulating neutrophil release from the bone marrow in a cell-autonomous fashion (Eash et al., 2009). It has further been suggested that senescent neutrophils in the periphery, expressing high levels of CXCR4, home back to the bone marrow to be cleared (Martin et al., 2003). In contrast, activated neutrophils downregulate CXCR4 expression, putatively postponing their clearance (Bruhl et al., 2003; Martin et al., 2003).
T- and B-cells
It became evident that the impact of B- and T-cells in atherosclerosis is strongly subset-dependent. While, e.g., Th1 responses are known to be pro-atherosclerotic, regulatory T-cells have been proven to be protective. Similarly, B1- and B2-cells exhibit diverse functions in lesion development (Weber and Noels, 2011). Nevertheless, studies dissecting the role of CXCR4 on T- and B-cells in the context of atherosclerosis are scarce. Two independent groups showed that lysophosphatidylcholine (LPC), a main phospholipid component of oxLDL, upregulates CXCR4 expression on Jurkat cells and human blood CD4+ T-cells. Further, the chemotactic ability of CD4+ T-cells toward CXCL12 and their production of pro-inflammatory cytokines were increased in the presence of LPC. Hence, the ill alliance of LPC and CXCL12 in atherosclerotic lesions may amplify pro-inflammatory responses by stimulation of CD4+ T-cells and subsequent plaque growth (Han et al., 2004; Hara et al., 2008). Interestingly, it was also shown that an excess of mineralocorticoids, mainly aldosterone, drives CAD through cardiac and renal fibrosis, as well as hypertension. Here, Chu et al. implicate the CXCL12/CXCR4 axis specifically in the detrimental consequences of mineralocorticoid excess and render CXCL12 explicitly responsible for the accumulation of T-cells in fibrotic tissue (Chu et al., 2011). From patients with abdominal aortic aneurysm (AAA) we further learn that T- and B-cells recruited to sites of AAA express high levels of CXCR4 and exhibit a pro-inflammatory signature. Hence, CXCR4/CXCL12 interactions may strongly impact the recruitment and retention of inflammatory lymphocytes infiltrating AAAs (Ocana et al., 2008). In contrast, acute stress induced by public speaking did not enhance the number of CXCR4-expressing T-cells, but increased the frequency of T-cells expressing CXCR2, CXCR3, and CCR5. Therefore, cardiac sympathetic activation may lead to EC and T-cell activation, subsequently driving acute flooding of atherosclerotic lesions with pro-inflammatory mediators (Bosch et al., 2003).
In addition to CXCL12-triggered effects, CXCR4 is able to mediate MIF-induced B-cell chemotaxis, as well as T-cell chemotaxis and arrest in vitro (Bernhagen et al., 2007; Klasen et al., 2014). Blockade of MIF in Apoe−/− mice on a high-fat diet resulted in the formation of smaller atherosclerotic lesions displaying a reduced macrophage and T-cell content, supporting a role for MIF in T-cell chemotaxis also in the context of atherosclerosis (Bernhagen et al., 2007).
Platelets
CXCR4 expression (mRNA, protein) was reported on platelets (Wang et al., 1998; Kowalska et al., 1999), and although platelets lack nuclei and many organelles and are mainly known for their important role in blood coagulation, their impact on immunological and inflammatory responses, in particular atherosclerosis, should not be underestimated (Lievens and Von Hundelshausen, 2011). Addition of CXCL12 to platelets from healthy donors induced platelet aggregation, which could be inhibited by blocking CXCR4. The latter implies an atherogenic, pro-thrombotic, and plaque-destabilizing role for the CXCL12/CXCR4 axis in vivo (Falk et al., 1995; Abi-Younes et al., 2000). In contrast, others report CXCL12 to be a weak platelet agonist that nevertheless amplifies platelet activation, adhesion and chemokine release triggered by low doses of primary platelet agonists, such as adenosine diphosphate (ADP) and thrombin, or by arterial flow conditions (Kowalska et al., 1999; Gear et al., 2001). Furthermore, CXCL12 gradients could induce platelet migration and transmigration in vitro involving PI3K signaling (Kraemer et al., 2010). In addition, recent work showed CXCL12 to trigger CXCR4 internalization and cyclophilin A-dependent CXCR7 externalization on (mouse and human) platelets, resulting in prolonged platelet survival. Mice lacking the cytosolic chaperone cyclophilin A showed less CXCL12-induced rescue of platelets from activation-induced apoptosis through CXCR7 engagement. Hence, differential regulation of CXCR4/CXCR7 surface expression on platelets upon CXCL12 exposure at sites of platelet activation/accumulation may orchestrate platelet survival, subsequently impacting platelet-mediated physiological mechanisms.
CXCR4 expression in arterial ECs.
Expression of CXCR4 on various types of vascular ECs has been widely reported (Hillyer et al., 2003); however, it should be emphasized that ECs are a very heterogeneous population, with ECs from different anatomic sites differing in basal gene expression, localization and function (Aird, 2007). Further, one has to carefully distinguish between expression on venous and arterial ECs (Dela Paz and D'Amore, 2009). Unfortunately, many studies extrapolate in vitro findings generated with, e.g., human umbilical vein endothelial cells (HUVECs) to arteriosclerosis, which is daring, to say the least.
In a study examining CXCR4 expression following vessel wall injury in porcine coronary arteries, CXCR4 expression could be shown 24 h to 7 days after injury, but only in lymphocytes, granulocytes and myelofibroblasts entering the injured tissue (Jabs et al., 2007). However, others investigated CXCR4 expression in human carotid artery specimens, where CXCR4 was abundantly expressed by lesional ECs and only marginally in minimally diseased endothelium (Molino et al., 2000; Melchionna et al., 2005). Similarly, Gupta et al. showed CXCR4 mRNA expression in human coronary artery ECs, although it is not clear whether these cells originated from inflamed or steady-state endothelium (Gupta et al., 1998). CXCR4 was also shown to be expressed (mRNA and protein) by cultured bovine aortic ECs (BAECs), in cryo-sections of rabbit thoracic aortas (Volin et al., 1998) and in mouse aortic endothelium (Melchionna et al., 2010). For BAECs it was further demonstrated that resting BAECs accumulate CXCR4 protein in cytoplasmic granules, while migrating BAECs display a diffuse surface expression of CXCR4 (Feil and Augustin, 1998).
In addition, in vitro studies with human aortic ECs (HAECs) revealed enhanced CXCR4 surface expression and CXCL12-induced chemotaxis in the presence of VEGF or basic fibroblast growth factor (bFGF) 48 h after stimulation. Notably, interferon (IFN)-γ, lipopolysaccharide (LPS) or CXCL12 did not elevate the surface expression of CXCR4 (Salcedo et al., 1999).
CXCR4 expression in venous and microvascular ECs.
Upregulation of CXCR4 surface expression after addition of VEGF and bFGF was also seen in human microvascular ECs and HUVECs (Salcedo et al., 1999, 2003). In contrast, Schutyser et al. did not report changes in CXCR4 mRNA expression in human microvascular ECs after treatment with VEGF, but described augmented CXCR4 expression (mRNA and protein) after serum starvation and/or hypoxic treatment of microvascular ECs (Schutyser et al., 2007). Further, hypoxia is also an important regulator of the CXCL12/CXCR4 axis in HUVECs by enhancing CXCR4 expression (Ceradini et al., 2004; Jin et al., 2012). In general, hypoxia lowers local pH, and pH changes are also known to occur upon physical exercise and hemodynamic shear stress, as well as in pathological states including cardiac ischemia. In this context, Melchionna et al. reported that acidosis decreased CXCR4 surface expression on mouse aortic ECs in vivo and on HUVECs in vitro in a HIF1α-dependent manner (Melchionna et al., 2010).
Notably, mouse microvascular ECs were also shown to augment CXCR4 expression in vitro in response to erythromycin, an anti-inflammatory antibiotic used for treatment of chronic inflammatory diseases. The authors assume that the beneficial effects of erythromycin are partly due to CXCR4-expressing ECs recruited to sites of tissue injury (Takagi et al., 2009). However, since microvascular ECs also comprise a broad variety of ECs, for example of dermal, brain, heart or pancreatic islet origin, it remains elusive how any differential regulation of CXCR4 expression described above would impact atherosclerotic plaque development.
Role of CXCR4 in ECs?
Concerning possibly relevant functions of endothelial CXCR4 signaling in the context of atherosclerosis, several putatively athero-relevant findings have been described in HUVECs, but confirmation in arterial ECs is pending. For example, laminar shear stress suppresses CXCR4 expression in HUVECs, while low shear stress favors CXCR4 expression, subsequently resulting in increased EC apoptosis and CCL2 and CXCL8 release (Melchionna et al., 2005).
Another study revealed enhanced release of CXCL12 by HUVECs after oxLDL treatment and a subsequently increased migratory and adhesive response of MSCs. It was further demonstrated that MIF facilitates leukocyte rolling on stimulated HUVECs, while siRNA-mediated knock-down of endothelial MIF resulted in decreased expression of E-selectin, ICAM-1, vascular cell adhesion molecule (VCAM) 1, CXCL8, and CCL2 (Cheng et al., 2010).
In addition, it seems that not only does VEGF regulate CXCR4 expression, but CXCL12 treatment was also shown to increase VEGF protein expression in human microvascular ECs after serum starvation (Saxena et al., 2008), underlining the role of CXCL12 in angiogenesis. Given that the CXCL12/CXCR4 axis is angiogenic in general (Ara et al., 2005; Unoki et al., 2010), in tumor development (Domanska et al., 2013) and potentially in atherosclerotic lesions (Di Stefano et al., 2009), and that angiogenesis is considered to increase plaque vulnerability, the question remains whether this reflects an important unfavorable role of CXCR4 in atherosclerosis. In a different approach, blocking of TLR2 resulted in increased angiogenic capacity of HUVECs, putatively mediated via association of TLR2 with CXCR4 and subsequently enhanced CXCR4 signaling. Consequently, knock-down of CXCR4 in the presence of TLR2-blocking antibodies revealed less angiogenesis. From these data the authors conclude that TLR2 blocking might serve as a promising therapeutic approach in, e.g., MI or peripheral artery disease by promoting revascularization. However, considering angiogenesis pro-atherogenic, TLR2 blocking might have detrimental consequences in the context of plaque stability and development, as already mentioned above (Wagner et al., 2013). Nevertheless, it was also shown that the inflammatory mediators IFN-γ and TNF-α decrease CXCR4 and CXCL12 expression in HUVECs, thereby decreasing their angiogenic capacity (Gupta et al., 1998; Salvucci et al., 2004). Interestingly, the latter contradicts findings from Salcedo et al., who showed no difference in CXCR4 expression in HAECs after IFN-γ treatment (Salcedo et al., 1999). Yet, it should be emphasized again that HUVECs and HAECs might exert totally different responses in the presence of the same stimulus, underlining again the importance of caution in generalizing findings from ECs of different origin.
Vascular smooth muscle cells
VSMCs are highly specialized cells controlling contraction and regulation of blood vessel diameter, blood pressure, and blood flow. Moreover, VSMCs play a critical role in the secretion of extracellular matrix components, which determine the mechanical properties of mature blood vessels. Differentiated VSMCs in adult blood vessels proliferate at very low rates and retain high plasticity, which enables changes in phenotype, referred to as phenotypic switching (Owens et al., 2004). Phenotypic switching of VSMCs is considered an important pathophysiological mechanism in atherosclerosis (Gomez and Owens, 2012).
CXCR4 expression in vascular SMCs.
Not much is known about the expression and function of CXCR4 in mature VSMCs. Several studies reported no CXCR4 expression on human or bovine (aortic) SMCs (Gupta et al., 1998; Volin et al., 1998). In contrast, Schecter et al. were the first to claim functional CXCR4 expression on human aortic SMCs, as the addition of CXCL12, or of HIV envelope proteins specifically binding CXCR4, induced tissue factor activity in human aortic SMCs (Schecter et al., 2001). Similarly, it was demonstrated that HIV can infect arterial (lesional) human SMCs, and blocking CXCR4 in human aortic SMCs in vitro significantly reduced their viral load. Again, direct expression of CXCR4 on aortic SMCs was not investigated. Yet, the authors still claim that HIV infection of VSMCs through CXCR4 may be one reason why HIV patients are more susceptible to developing atherosclerosis (Eugenin et al., 2008). Nevertheless, Li et al. revealed CXCR4 protein expression on human saphenous vein SMCs (Li et al., 2009), and others later reported CXCR4 expression (RNA, protein) on mouse medial SMCs (Nemenoff et al., 2008), rat aortic SMCs (Jie et al., 2010; Pan et al., 2012) and human aortic SMCs (Weber et al., unpublished data).
Role of CXCR4 in vascular SMCs?
As mentioned before, phenotypic switching of VSMCs from a contractile to a synthetic secretory phenotype, and their assumed migration from the medial to the intimal arterial wall, where they secrete pro-inflammatory mediators, is considered a pathophysiological mechanism in atherogenesis (Gomez and Owens, 2012). On the other hand, SMCs are also thought to contribute to plaque stability through fibrous cap formation. This contrasts with the pathology of injury-induced restenosis and neointimal hyperplasia, which are mainly driven by SMC proliferation. Hence, the contribution of SMCs to lesion formation is strongly context-dependent. In a rat model of diabetes, a metabolic disorder associated with a higher prevalence of atherosclerosis, high glucose levels were shown to trigger activation, proliferation, and enhanced chemotaxis of VSMCs via stimulation of the CXCL12/CXCR4 axis (Jie et al., 2010). Correspondingly, salvianolic acid B (from Salvia miltiorrhiza), used to treat cardiovascular diseases in traditional Chinese medicine, was shown to inhibit CXCL12/CXCR4-mediated proliferation and migration of VSMCs, and subsequent neointimal hyperplasia, in a balloon angioplasty-induced neointima formation model in rats. Here, salvianolic acid B directly decreased surface expression of CXCR4 on aortic rat SMCs (Pan et al., 2012).
Further, lesion reduction in Mif−/−Apoe−/− mice was attributed to a reduction in lesional SMC proliferation, cysteine protease expression, and elastinolytic and collagenolytic activities (Pan et al., 2004). Notably, oxLDL, supposed to be an important trigger of atherogenesis and plaque growth, was shown to induce rat aortic SMC proliferation. This effect could even be further enhanced by the addition of CXCL12 and was accompanied by diminished SMC apoptosis. It remains open whether this effect would be beneficial through plaque stabilization, or detrimental because of intimal hyperplasia, as discussed above. As mentioned before, vein graft failure after bypass grafting is a major problem and may be associated with neointimal hyperplasia and accelerated atherosclerosis. In this context, Zhang et al. demonstrated that CXCL12/CXCR4 signaling might be a crucial step in vein graft atherosclerosis and contribute to SMC-mediated vein graft neointimal hyperplasia in mice. Furthermore, CXCR4-mediated recruitment of inflammatory (progenitor) cells to the vein graft may add to this picture. Also, it was described that CXCL12 stimulates pro-MMP2 expression in human aortic SMCs via CXCR4 in association with the epidermal growth factor receptor in vitro. The authors conclude that CXCR4 expands its signaling repertoire by cross-talking with other receptors, pointing at an important role of ligands engaged in receptor cross-talk as critical players in CAD (Kodali et al., 2006).
GENOME-WIDE ASSOCIATION STUDIES REVEAL CXCL12 AS AN IMPORTANT CANDIDATE GENE IN CAD
Genome-wide association studies in populations of European ancestry revealed two single nucleotide polymorphisms (SNPs) at locus 10q11.21, 80 kb downstream of CXCL12, to be significantly associated with CAD and MI (Burton et al., 2007; Samani et al., 2007; Kathiresan et al., 2009; Farouk et al., 2010; Schunkert et al., 2011), although genome-wide significance was not reached in two other studies (Ripatti et al., 2010; Peden et al., 2011) (Box 4). Whether and how the risk alleles of these SNPs, rs1746048 (C/C) and rs501120 (T/T), affect the expression level and/or function of the CXCL12 protein is currently still unclear.
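For readers less familiar with GWAS statistics, the sketch below illustrates how such a SNP-disease association is quantified as an allelic odds ratio with a significance test on a 2x2 allele-count table. All counts are hypothetical and are not data from the cited studies; actual GWAS additionally require a genome-wide significance threshold (typically p < 5 x 10^-8) and correction for population structure.

# How a SNP-CAD association is quantified: allelic odds ratio plus
# Fisher's exact test on a 2x2 allele-count table.
# All counts below are hypothetical, NOT data from the cited GWAS.
from scipy.stats import fisher_exact

#            risk allele, other allele
cases    = [1200, 800]    # allele counts in CAD cases (hypothetical)
controls = [1000, 1000]   # allele counts in controls (hypothetical)

odds_ratio, p_value = fisher_exact([cases, controls])
print(f"allelic OR = {odds_ratio:.2f}, p = {p_value:.2e}")
# OR > 1 means the 'risk' allele is enriched in cases.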
A significant association was revealed between the CAD risk genotype for rs501120 (T/T) and reduced CXCL12 plasma levels (Kiechl et al., 2010). Likewise, patients with angina displayed reduced CXCL12 plasma levels compared to healthy controls, and the reduction was even greater in unstable disease (Damas et al., 2002), suggesting an atheroprotective role for CXCL12. Remarkably, patients with angina showed a significant reduction in CXCR4 surface expression on PBMCs despite increased levels of CXCR4 RNA transcripts, but the connection of this finding to disease remains unclear (Damas et al., 2002).
In contrast, a recent study revealed the risk alleles of these SNPs to be associated with higher CXCL12 plasma levels, rather suggesting a pro-atherogenic role for CXCL12 (Mehta et al., 2011). Additional large-scale studies in patients with CAD/MI investigating CXCL12 plasma levels in relation to SNPs and disease would be helpful to gain better insight into the role of CXCL12 in this pathology. Furthermore, animal studies are clearly required to unravel disease-associated functions in a cell-specific and molecular way.
Also for the alternative CXCR4 ligand MIF, SNPs have been identified that are associated with cardiovascular disease, as recently summarized (Tillmann et al., 2013). Although the effects of these SNPs on MIF expression or function remain unknown, enhanced MIF plasma levels in patients with ACS (Muller et al., 2012) and the identification of a high MIF plasma level as a risk factor for adverse coronary events in CAD patients with impaired glucose tolerance or type 2 diabetes mellitus (Makino et al., 2010) may support a pro-inflammatory role of plasma MIF in CAD.
CLINICAL PERSPECTIVES AND CONCLUSION
In conclusion, the role of CXCR4 in native atherosclerosis remains elusive, with only a few isolated studies shedding some light on the effect of CXCL12/CXCR4 signaling on cell type-specific functions involved in inflammation or atherosclerosis. In contrast, the CXCL12/CXCR4 axis has been better explored in the context of injury-induced restenosis and myocardial ischemia, in which a role for this chemokine ligand/receptor axis has mostly been linked to the mobilization and recruitment of progenitor cells and, to a lesser extent, inflammatory cells (Figure 3). The CXCR4 antagonist AMD3100, also known as Plerixafor, has been approved as a mobilizer of hematopoietic stem cells in combination with G-CSF for the treatment of patients with non-Hodgkin's lymphoma and multiple myeloma, and many other small-molecule inhibitors of CXCR4 are in clinical trials or under investigation (Debnath et al., 2013). However, a potential future application of such inhibitors in the treatment of patients with CAD is currently only speculative. Although mobilization of progenitor cells has been associated with cardioprotection in the context of myocardial ischemia, and initial clinical trials of stem cell therapy after MI are encouraging, many important aspects of such therapy (optimal cell type, dose, time and method of administration, long-term effects) remain to be investigated (Sanganalmath and Bolli, 2013). Furthermore, contrasting reports on the effect of AMD3100 treatment on cardiac outcome after MI warrant further evaluation of the underlying mechanisms, and the recently revealed double-edged role of CXCR4 in myocardial ischemia also necessitates a careful evaluation of drugs interfering with CXCL12/CXCR4 signaling. In addition, possible unwanted side effects need to be cautiously monitored. For example, one patient study examining the effects of progenitor cell mobilization on cardiac function after MI was terminated early due to enhanced in-stent restenosis (Kang et al., 2004). Also, a closer investigation of the effect of CXCR4 antagonists on progenitor cell mobilization in the context of different cardiovascular disease settings, or upon different dosage or administration methods, seems interesting, as contrasting findings were reported on the effect of continuous administration of the CXCR4 antagonist AMD3465 on the mobilization of Lin−Sca1+ cells in native atherosclerosis vs. injury-induced restenosis (Karshovska et al., 2008; Zernecke et al., 2008).

FIGURE 3 | Involvement of CXCR4 in CAD. The chemokine receptor CXCR4 plays a role in angiogenesis. Furthermore, it is an important regulator of homing, mobilization and survival of progenitor cells. This has linked CXCR4 with a role in myocardial ischemia and injury-induced restenosis, but its significance in the context of native atherosclerosis remains unclear. CXCR4 has also been reported to mediate leukocyte chemotaxis in specific inflammatory diseases. A similar role in inflammatory cell recruitment has been suggested in the context of myocardial ischemia, but the importance of CXCR4-induced leukocyte recruitment to atherosclerotic lesions in vivo remains to be further addressed. The current view mainly emphasizes the involvement of inflammatory chemokines instead of the homeostatic chemokine CXCL12 in mediating atherogenic leukocyte recruitment. However, CXCR4 can mediate both CXCL12- and MIF-induced chemotaxis of B- and T-cells in vitro, and is also expressed on a subset of monocytes, requiring further research of its function in atherogenic leukocyte recruitment in vivo. Also, it remains unclear which cell type-specific functions of CXCR4 may be important in the context of atherosclerosis, with currently only scarce information on potential cellular functions in most cell types present in atherosclerotic lesions. For more details, we refer to the text. Green arrows indicate beneficial effects, red arrows indicate detrimental effects. The interrelation between different pathologies belonging to CAD is visualized. The lower panels indicate the relevance of CXCR4-involving cell type-specific functions to atherosclerotic plaque formation. bFGF, basic fibroblast growth factor; CAD, coronary artery disease; H2S, hydrogen sulfide; M-CSF, macrophage colony stimulating factor; MMP, matrix metallopeptidase; oxLDL, oxidized low-density lipoprotein; VEGF, vascular endothelial growth factor.
Furthermore, additional studies are required to unravel in more detail the cellular processes in which CXCR4 is involved and the underlying molecular mechanisms. In this context, the complexity of CXCR4-associated biological and mechanistic aspects has increased significantly with the identification of MIF as a second chemokine ligand for CXCR4, and of CXCR7 as an alternative receptor for CXCL12. Intertwining of chemokine receptor signaling may enhance the fine-tuning and optimization of leukocyte chemotaxis under physiological conditions. In addition, it may increase the possibilities for designing therapeutics that interfere with only selective aspects of chemokine signaling, for example by targeting chemokine receptor heterodimerization (Koenen and Weber, 2010). Again, however, this necessitates a better understanding of the biological and mechanistic aspects of all involved chemokine ligand/receptor axes and their interplay.
ACKNOWLEDGMENTS
This work was supported by the German Research Foundation (DFG FOR809 to Christian Weber), the European Research Council (ERC AdG 249929 to Christian Weber), the Fondation Leducq (to Christian Weber), the START Program of the Faculty of Medicine, RWTH Aachen (49/13 to Heidi Noels), and the German Heart Foundation/German Foundation of Heart Research (F/40/12 to Heidi Noels). We thank Dr. med. E. Liehn, Prof. J. Bernhagen and Prof. N. Marx for their support, and sincerely apologize to all scientists whose important contributions to the field could not be cited due to space limitations.
|
v3-fos-license
|
2020-04-09T07:15:41.602Z
|
2019-01-01T00:00:00.000
|
88160829
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.3126/nje.v8i2.18708",
"pdf_hash": "9b902cbd173bb4b0d930356ee6100cd2af890b76",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43315",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "038437bfdbf073e40841a8dbb8934cd478a67c5f",
"year": 2018
}
|
pes2o/s2orc
|
Distribution and prevalence of oral mucosal lesions in residents of old age homes in Delhi, India
Background Little attention has been paid to the oral health of the geriatric population residing in old age homes, even though oral mucosal lesions are a matter of concern for this growing population. A study was therefore conducted with the objective of determining the prevalence and distribution of oral mucosal lesions among 65-74 year old residents of old age homes in Delhi, India. Materials and Methods A cross-sectional study was conducted among 65-74 year old residents of old age homes in Delhi. A total of 464 subjects participated in the study. The WHO Oral Health Assessment Form was used for assessing the oral mucosa. Clinical examination was performed using two mouth mirrors under natural illumination in a systematic manner. Data were processed and analyzed using SPSS version 23. Results Of the 464 subjects, 291 (62.70%) were males and 173 (37.30%) were females. The oral mucosal lesions seen in the study subjects were malignant tumours, leukoplakia, lichen planus, ulcerations, acute necrotizing ulcerative gingivitis (ANUG), abscesses and candidiasis. Leukoplakia was seen in 70 subjects (15%), most often on the buccal mucosa. A malignant tumour was seen in 7 subjects (1.5%), most commonly on the floor of the mouth. Conclusion The prevalence of oral mucosal lesions among residents of old age homes shows the need for increased preventive and diagnostic measures for the prevention and early identification of oro-mucosal lesions. Adequate care for the oro-mucosal health of elderly people residing in old age homes is necessary.
Introduction
There has been a renewed interest in the study of geriatric oral health owing to a recent increase in this population [1]. Oral mucosal lesions, along with dental caries and periodontal diseases, are a frequent subject of concern among the elderly [2]. Oral mucosal lesions (OMLs) are usually associated with various local and/or systemic conditions and have thus become an intriguing subject in the recent literature. Oral mucosal health is directly related to general health. The oral mucosa performs many functions, including protection, sensation and secretion. With increasing age it becomes thinner, and collagen synthesis by the connective tissue slows, which makes the mucosa less resistant to carcinogens and other toxic substances; in some individuals this ultimately leads to cancer [2]. Age is therefore an important contributing factor for oral mucosal lesions. A study of institutionalized elderly patients conducted in Iran observed OMLs in 98% of the individuals [3]. OMLs have been reported in 41.2% of the Indian population [4]. The parts of the oral mucosa that can be involved are the right and left buccal mucosa, followed by the labial mucosa, tongue, gingiva, hard/soft palate, and alveolar mucosa [5]. Oral mucosal lesions interfere with the eating, drinking and speaking practices of the patient, hampering routine activities, as the lesions cause pain and discomfort [6]. The WHO (2013) considers a population aged 60 years or more to be elderly [7]. The latest national census conducted in India in 2011 showed that 8.6% of the population were 60 years or older [8], underlining the importance of geriatric health. Various barriers to the maintenance of oral health in elderly people have been recognized, including the lack of trained health workers, the inability of caretakers or patients to maintain good oral hygiene, insufficient financial support, and an inefficient dental health care delivery structure [9]. Until now, little has been done for this geriatric population in the country. However, before planning any local or national oral health program, data on the extent of the problem have to be collected. The 2011 census showed that Delhi, the capital of India, has a population of 16.7 million, making it the second largest metropolis in the country [8]. Therefore, a study was planned to assess the prevalence of oral mucosal lesions among the elderly residing in old age homes of Delhi, India.
Study design and participants:
A cross-sectional study was conducted among 65-74 year olds residing in old age homes in Delhi, India. Data collection was performed during October-November 2017.
Sample size calculation:
Prior to the main study, a pilot study was conducted among 30 elders of an old age home that was not part of the study sample; these subjects were excluded from the final sample. Considering the prevalence of oro-mucosal lesions (52%) obtained from the pilot study and a non-response rate of 15%, a minimum sample size of 460 subjects was estimated (a worked sketch of this calculation is given after this section). Delhi is divided into 5 regions with a total of 38 old age homes, according to the list collected from the municipal office. From these, 20 old age homes were selected by cluster randomized sampling, with randomization done using a lottery method.

Questionnaire design and validation:

Informed consent was obtained from the subjects before starting the examination procedure. The examiner was trained and calibrated prior to the study to ensure uniform interpretation of the codes and criteria for the various diseases and conditions to be observed or recorded. The kappa value for all items was found to be > 0.85. A survey schedule for data collection was prepared; on average, 10-15 subjects were interviewed and examined per day. A survey proforma based on the WHO Oral Health Assessment Form (2013) [7] was used for assessing the oral mucosa. Demographic information was also collected, including name, age, sex, address and religion. Clinical examination was performed using two mouth mirrors under natural illumination. No biopsies or laboratory tests were done in the present study. Following WHO guidance, a thorough systematic examination was performed in the following sequence: labial mucosa and labial sulci (upper and lower), followed by the labial part of the commissures and buccal mucosa (right and left), tongue (dorsal and ventral surfaces, margins), floor of the mouth, hard and soft palate, and alveolar ridges/gingiva (upper and lower).
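The sample size estimate above can be reproduced with a standard single-proportion formula. The margin of error is not stated in the text, so the 5% value below is an assumption; a minimal sketch in Python:

import math

def sample_size(p, margin=0.05, z=1.96, non_response=0.15):
    # Cochran-type estimate n = z^2 * p * (1 - p) / margin^2,
    # inflated to allow for the anticipated non-response rate
    n = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n / (1 - non_response))

# pilot prevalence of 52% and 15% non-response, as reported above
print(sample_size(p=0.52))  # -> 452, of the same order as the 460 estimated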
Inclusion Criteria:
Those who were 65-74 years old and were residents of these old age homes were included in the study.
Exclusion criteria:
Those who were bedridden or medically compromised, or who did not consent to examination, were excluded from the study.

Ethical committee approval:

Ethical clearance (2017/20/IEC/MRDC) was obtained from the institutional ethical board. Prior to the study, permission was also obtained from the concerned authorities of the old age homes after explaining the purpose and procedure of the study.
Data management and statistical analysis:
The data obtained were compiled systematically, then processed and analysed using SPSS version 23. The chi-square test was used to test the significance of associations between variables. A p-value of less than 0.05 was considered statistically significant (95% confidence interval).
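As an illustration of the gender comparison reported in the Results, the chi-square test can be run on a 2 x 2 contingency table. The cell counts below are hypothetical (only the row totals of 291 males and 173 females come from the study, which reports p = 0.371); a minimal sketch using SciPy rather than SPSS:

from scipy.stats import chi2_contingency

# rows: males, females; columns: lesion present, lesion absent
# illustrative counts only, consistent with the reported row totals
table = [[133, 158],
         [ 71, 102]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")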
Results
Out of a total of 464 subjects, 291 (62.70%) were males and 173 (37.30%) were females. The mean age was 69 years. The major religious group was Hindu (81.20%), followed by Muslim (10.60%). Table 1 shows the demographic characteristics of the subjects. The oral mucosal lesions seen in the study subjects were malignant tumours, leukoplakia, lichen planus, ulcerations, ANUG, abscesses and candidiasis. Table 2 shows the prevalence of oral mucosal lesions in older individuals residing in old age homes. No statistically significant difference (p = 0.371) was found between male and female participants in the presence or absence of lesions. A malignant tumour was seen in 7 subjects (1.5%), most commonly on the floor of the mouth. Leukoplakia was seen in 70 subjects (15%) and was present on the buccal mucosa in the majority. Lichen planus was also present on the buccal mucosa in 19 subjects (4.09%). Ulcerations were most commonly seen on the tongue, followed by the lips, in 17 (3.6%) and 11 (2.3%) subjects respectively.
Acute necrotizing ulcerative gingivitis was seen in 13 subjects (2.8%), and candidiasis was seen mostly on the buccal mucosa. Abscesses were found mainly on the alveolar ridges/gingiva in 23 subjects (4.9%), followed by the sulci and the hard and soft palate. Other conditions were seen in 7 subjects (1.5%), including oral submucous fibrosis on the buccal mucosa. The buccal mucosa (19.7%) was the most frequently involved site in the present study (Table 3).
Discussion
The overall prevalence of oral mucosal lesions in the elderly residing in old age homes of Delhi was found to be 44%. This is lower than the 77.1% reported among the elderly population of Yemen by Al-Maweri SA [9], but much higher than the results of other epidemiological studies [10,11,12,13,14]. Because of social, cultural and demographic differences in the present study population, comparison of these results with other studies is difficult.
Distribution of lesion according to Gender:
According to gender distribution, Ferriera RC, et al [14] and Rabiei M, et al [15] reported a female preponderance in their cases. In the present study, however, OMLs were more common among men (65%) than women, in accordance with several other studies [13,16,17,18]. This finding is probably because men are more exposed to risk habits than women; in our community, men are more commonly involved in smoking and other risk habits. A higher prevalence of certain lesions among men was therefore not surprising.
Leukoplakia:
On clinical examination, leukoplakia (14.7%) was the most commonly observed oral mucosal condition in the study subjects. It was less prevalent (7.23%) in the study by Bansal V, et al [19]. The reason could be that the maximum age in the study by Bansal et al was 97 years, whereas it was limited to 74 years in the present study. The incidence of OMLs has been observed to decrease with advancing age, because older people often quit oral risk habits such as smoking and habitual quid chewing for medical reasons; consequently, the incidence of such lesions decreases significantly.
Locations of the lesions:
The results of the study by Bansal V et al (2010) support the present study with respect to the most affected site, the buccal mucosa [19]. Further, the commissures were found to be the least affected site, followed by the vermilion border and the hard/soft palate. In the study by Patil S, however, the pattern differed: the hard palate (23.1%) was the most commonly affected site and the soft palate the least involved (3.6%) [20]. The present findings are supported by the study by Mehrotra R et al, in which both the right and left buccal mucosa were involved in 121 (53%) patients [5]. The retromolar trigone was the next most commonly involved area after the buccal mucosa, while the hard and soft palate, tongue, alveolar mucosa and floor of the mouth were other infrequent sites for oro-mucosal lesions [6].

Abscess and ulceration:

Abscesses were noted on the alveolar ridges and gingiva in 23 subjects, attributable to grossly decayed teeth and root stumps in the elderly. Gonsalves WC (2008) [21] discussed that older persons are at risk of chronic diseases of the mouth and found that the most common oral conditions in their study population were oral candidiasis and xerostomia (dry mouth). In the present study, the most common oral condition was leukoplakia, followed by ulcerations of the mucosa. The greater prevalence of leukoplakia and ulceration in the present study could reflect the fact that homebound elderly people do not visit the dentist for long periods, resulting in deterioration of their oral health; their mouths thus become more susceptible to these lesions.
Lichen planus:
Lichen planus was seen in 4.4% of the study subjects. This result is consistent with the findings of the Mujica [11] and Cebeci [6] studies (3% and 0.8%, respectively). Similarly, Al-Maweri SA et al observed lichen planus in 1.6% of their study population [9]. The present results conflict, however, with the conclusions of other authors such as Patil S (2015) [20], who found that 18% of the population had lichen planus.
Malignancies:
In the present study, 7 patients (1.4%) were diagnosed with malignancies, all of them squamous cell carcinoma (SCC). Similarly, in the study by Pai A [22], thirteen patients were diagnosed with squamous cell carcinoma (1.3%). The mouth is a common and susceptible location for cancer to develop. Most malignancies, especially SCCs involving the mucosal tissue, are usually clinically evident; however, all such potentially malignant lesions should be confirmed by microscopic analysis [11].
Candidiasis:
In the study by Mujica V et al in 2008, only one individual out of 340 institutionalized elderly people had candidiasis [11]. Similarly, a study by Pai A [22] reported 5 cases of candidiasis in the geriatric population of Bangalore, India. These previous results support the present study, in which only 4 candidiasis patients were noted. Ulceration was noted in 11.6% of the present study subjects, a percentage comparable to that in the study by Pai A [22] but higher than that of the study by Cebeci et al [6]. The reason could be that elders have decreased physical mobility, dependency on help and general tiredness, which make it difficult for them to visit a dental clinic even when they have dental problems and result in poor oral health [23,24,25,26].
Conclusion
As the results of the study show, many elders are suffering from various oral mucosal conditions that worsen their suffering. They are away from their homes for various reasons, but they deserve a better quality of life. Primary care physicians present in old age homes are in more frequent contact with the residents than dentists are. They may play a crucial role in recognizing the risk of oro-mucosal lesions through a focused examination of the oral cavity during a general examination, with prompt referral to a dentist whenever required. Furthermore, patients with physical or mental conditions that limit their movements can benefit from various dental aids for oral hygiene maintenance, such as customized-handle brushes, electronic toothbrushes and specially designed floss holders that improve grip. Various NGOs and government agencies should therefore come forward to provide these devices to elderly patients in old age homes.
Limitation of the study:
A limitation of the study is that the subjects were not asked about risk habits such as smoking and alcohol use, lifestyle factors, or oral hygiene practices, although these may contribute to oral mucosal lesions in the elderly.
Future scope of the study:
This study provides epidemiological data on the prevalence of oro-mucosal lesions in the elderly residing in old age homes of Delhi, India, and may prove valuable in the planning of future oral health studies in India. Oral mucosal changes are progressive and, if not prevented or treated, can affect the general health of the elderly individual. Hence, periodic and thorough oral screening and examination are essential for the geriatric population of old age homes.
What is already known on this topic?
To the best of our knowledge, our study is the first to assess the distribution and prevalence of oral mucosal lesions in the elderly residing in old age homes in Delhi. Various studies have been done in other parts of the country, but not in Delhi, the national capital.
What this study adds:
The results of the study indicate a need for periodic examination of elders' oral health, as this also affects their quality of life. Various health promotion programs can therefore be initiated by the government authorities.

Authors' contributions: NRY, MJ, AS, RY, MP and VJ made substantial contributions to the conception and design of the study, the acquisition of data, and the analysis and interpretation of data. All authors contributed to drafting the article and revising it critically, and all approved the final version of the manuscript.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2016-03-23T00:00:00.000
|
10848343
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0151820&type=printable",
"pdf_hash": "ae78c490549ca2692c6b8eecf9459f7bf09097ff",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43318",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science",
"Medicine"
],
"sha1": "d12de639f750526610d915108fd23c685a8dcbe7",
"year": 2016
}
|
pes2o/s2orc
|
Novel Antimicrobial Peptides EeCentrocins 1, 2 and EeStrongylocin 2 from the Edible Sea Urchin Echinus esculentus Have 6-Br-Trp Post-Translational Modifications
The global problem of microbial resistance to antibiotics has resulted in an urgent need to develop new antimicrobial agents. Natural antimicrobial peptides are considered promising candidates for drug development. Echinoderms, which rely on innate immunity factors in the defence against harmful microorganisms, are sources of novel antimicrobial peptides. This study aimed to isolate and characterise antimicrobial peptides from the Edible sea urchin Echinus esculentus. Using bioassay-guided purification and cDNA cloning, three antimicrobial peptides were characterised from the haemocytes of the sea urchin; two heterodimeric peptides and a cysteine-rich peptide. The peptides were named EeCentrocin 1 and 2 and EeStrongylocin 2, respectively, due to their apparent homology to the published centrocins and strongylocins isolated from the green sea urchin Strongylocentrotus droebachiensis. The two centrocin-like peptides EeCentrocin 1 and 2 are intramolecularly connected via a disulphide bond to form a heterodimeric structure, containing a cationic heavy chain of 30 and 32 amino acids and a light chain of 13 amino acids. Additionally, the light chain of EeCentrocin 2 seems to be N-terminally blocked by a pyroglutamic acid residue. The heavy chains of EeCentrocins 1 and 2 were synthesised and shown to be responsible for the antimicrobial activity of the natural peptides. EeStrongylocin 2 contains 6 cysteines engaged in 3 disulphide bonds. A fourth peptide (Ee4635) was also discovered but not fully characterised. Using mass spectrometric and NMR analyses, EeCentrocins 1 and 2, EeStrongylocin 2 and Ee4635 were all shown to contain post-translationally brominated Trp residues in the 6 position of the indole ring.
Introduction
Increasing numbers of pathogenic bacterial strains are becoming resistant to antibiotics. More people in US hospitals now die from methicillin-resistant Staphylococcus aureus (MRSA) infections than from HIV/AIDS and tuberculosis combined [1]. There is therefore a pressing need to find and develop antimicrobial agents as alternatives to classical antibiotics. Antimicrobial peptides (AMPs) are part of the immune system in both plants and animals and are considered to constitute an evolutionarily ancient response to invading pathogenic microorganisms [2]. AMPs are evolutionarily conserved, gene-encoded peptides, usually cationic, short amino acid chains [3]. Most AMPs exhibit broad-spectrum activity towards both Gram-positive and Gram-negative bacteria. In contrast to commercial antibiotics, where the development of resistance is a problem, bacterial resistance towards AMPs is much less pronounced [1,4]. Because of their propensity to be rapidly metabolised in the gastrointestinal tract, peptides have been considered poor drug candidates. This problem has diminished somewhat in recent years with the development of new synthetic strategies to improve the bioavailability and reduce the metabolism of peptides, bolstered by the development of alternative routes of administration [5,6]. A large number of peptide-based drugs, including AMPs, are now in clinical trials or on the market [5,7]. Today, more than 2600 peptides have been registered in the Antimicrobial Peptide Database [8], mainly from terrestrial sources. Marine invertebrates, although less studied, have proven to be a promising source of AMPs with novel scaffolds [9]. Echinoderms are exposed to relatively high bacterial levels because they are often found in the photic zone, where conditions for microbial growth are optimal. The survival of these organisms relies on the production of efficient antimicrobial components to defend themselves against microbial infection and fouling. Like invertebrates in general [10], echinoderms do not have an adaptive immune system of the kind recognised in vertebrates, in which specific memory towards pathogens is developed. Their innate defence system is mediated by the coelomocytes and compounds such as complement factors, lectins, lysozymes and AMPs [11][12][13].
A number of AMPs have previously been found in echinoderms [13][14][15][16][17][18][19][20]. Examples include lysozymes, which catalyse the hydrolysis of the peptidoglycans of the bacterial cell wall of Gram-positive bacteria and act as non-specific innate immunity molecules [17,18]. A 6 kDa AMP was discovered in the coelomic fluid of the orange-footed sea cucumber, Cucumaria frondosa, but no sequence was reported [20]. Several antibacterial peptides with masses around 2 kDa have also been discovered in the coelomic fluid of the starfish Asterias rubens [19,21]. Two of these peptides were identified as fragments of the histone H2A molecule, two as fragments of actin, and one as a fragment of filamin A. A 5 kDa peptide with antistaphylococcal biofilm properties was discovered in the coelomocytes of the sea urchin Paracentrotus lividus [16]; its antibiofilm activity was suggested to be ascribed to beta-thymosin-like fragments [22]. From the coelomocytes of the sea urchin Strongylocentrotus droebachiensis, two novel AMP families were characterised: the strongylocins [14] and the centrocins [23]. The strongylocins are cysteine-rich peptides containing three disulphide bonds, with MW in the 5.6-5.8 kDa range. Homologous genes have been discovered in S. purpuratus, and their deduced peptide sequences were named SpStrongylocins. Recombinantly produced SpStrongylocin analogues were also shown to be antibacterial [24]. The centrocins are a family of heterodimeric AMPs ranging between 4.4 and 4.5 kDa in mass. The peptides consist of two peptide chains, a 30 amino acid residue heavy chain (HC) and a 12 amino acid residue light chain (LC), connected by a single disulphide bond. Bioactivity studies have shown that the cationic HC is responsible for the antimicrobial activity of these peptides [23]. The HC of centrocin 1 displays potent activity against both bacteria and fungi and has anti-inflammatory properties [25].
The Edible sea urchin, Echinus esculentus (Fig 1), has been reported to contain antimicrobial compounds [26], of which the quinone echinochrome-A has been identified [15,27]. No AMPs have yet been discovered in this species. The aim of the present study was to search for, isolate and characterise AMPs from the coelomocytes of the sea urchin E. esculentus. In this paper, we present the discovery of new AMPs belonging to the centrocin and strongylocin families of AMPs.
Ethics statement
All experiments performed in the present study were conducted in accordance with national and international guidelines and the ethical guidelines of UiT The Arctic University of Norway. For the haemolytic assay the collection of blood from a healthy donor was approved by the regional committee for medical research (REK 2014/1653).
In Norway, collecting wild E. esculentus for research purposes does not require specific permits. Our study does not involve endangered or protected species and sea urchins are not subject to any ethical animal use restrictions.
Coelomic fluid was sampled from the animals 1-4 weeks after animal collection by penetrating the peristome with a scalpel and pouring the content into Ca 2+ /Mg 2+ free anti-coagulating buffer [28] containing 70 mM EDTA and 50 mM imidazole in a 2:1 v/v ratio in 50 ml Falcon tubes (BD Biosciences, CA, USA) on ice. In total, approximately 5500 ml coelomic fluid was obtained. The mixture was subsequently centrifuged for 20 min at 4°C and 800 g. The pellet (coelomocytes) was collected by pouring off the cell-free supernatant and kept at -70°C until lyophilisation on a VirTis Genesis 35 EL freeze dryer (SP Industries, PA, USA) for 24 h. A total amount of 12.2 g (dry weight) of coelomocytes was sampled. All sample weighing was performed on a Sartorius Cubis MSA scale, Sartorius AG, Gottingen, Germany.
Extraction and purification
Lyophilised coelomocytes were extracted according to a previous protocol [29] with one modification: liquid-liquid extraction was carried out twice with 5% (w/v) lyophilised coelomocytes in 60% acetonitrile (ACN) containing 0.1% trifluoroacetic acid (TFA) (both from Sigma-Aldrich, MO, USA) for 24 h at 4°C. The combined extracts were partitioned into an aqueous phase (approximate total of 100 ml) at the bottom and an ACN-rich phase (approximate total of 150 ml) at the top by leaving it in a -20°C freezer for approximately 1 h. Both phases were dried in a ScanSpeed 40 vacuum centrifuge (Labogene ApS, Denmark) for 24 h. The ACN-rich phase was reconstituted in MQ-H 2 O (Millipore MA, USA) to a concentration of 10 mg/ml and subjected to antibacterial activity testing.
The aqueous phase (5.49 g) was reconstituted to 10 mg/ml (549 ml) in 0.05% TFA/H 2 O (v/ v) and further subjected to solid phase extraction (SPE) on a reverse phase C 18 35cc Sep-Pak cartridge (Waters, MA, USA) according to [29] as follows. The extract was loaded onto the cartridge, previously conditioned with ACN, and equilibrated with 0.05% TFA/H 2 O (v/v). After washing of the loaded extract with 0.05% TFA/H 2 O (v/v), four stepwise elutions were performed with 10%, 40%, 80% and 100% ACN containing 0.05% TFA (v/v). The different fractions collected were dried under vacuum, reconstituted to 10 mg/ml and tested for antibacterial activity.
Due to its pronounced antibacterial activity (S1 Table), the 40% SPE eluate was further fractionated by reversed-phase high-performance liquid chromatography (RP-HPLC) using a preparative XBridge C 18 (5 μm, 19 × 250 mm) column, a 717 autosampler, 600E pump system, 2996 photodiode array detector and an in-line degasser (Waters, MA, USA), all controlled by the Millennium 32, v4.00 software (Scientific Equipment Source, Ontario, Canada). The flow rate was set to 8 ml/min with an optimised HPLC protocol containing 0.05% TFA/H 2 O (v/v) and 0.05% TFA/ACN (v/v). The protocol started with 10 min of 0.05% TFA/H 2 O (v/v) followed by linear gradients increasing the 0.05% TFA/ACN (v/v) concentration from 0% to 18% over 8 minutes, 18% to 32% over 32 minutes, 32% to 50% over four minutes, and finally washing with 0.05% TFA/ACN (v/v) for five min and re-equilibration with 0.05% TFA/H 2 O (v/v) for five min. One minute (8 ml) fractions were collected automatically with a Gilson FC 204 fraction collector (Gilson, WI, USA), dried in a ScanSpeed 40 vacuum centrifuge for 24 h, and reconstituted in 500 μl MQ-H 2 O before antibacterial activity testing. Active fractions were analysed for purity, and bioactive peptides were detected using liquid chromatography UV-Vis mass spectrometry (LC-PDA-MS, see section 2.6). Impure, active fractions were fractionated again using the same experimental conditions but with fractions collected manually until pure peptides (estimated purity >90%) were obtained.
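The preparative gradient described above is piecewise linear, which makes it easy to tabulate. The helper below is only a sketch of that timetable (times in minutes, %B being 0.05% TFA/ACN), not part of any instrument software; it reads off the organic fraction at any point in the run:

SEGMENTS = [(0, 10, 0, 0),     # initial hold at 0% B
            (10, 18, 0, 18),   # 0 -> 18% B over 8 min
            (18, 50, 18, 32),  # 18 -> 32% B over 32 min
            (50, 54, 32, 50)]  # 32 -> 50% B over 4 min

def percent_b(t_min):
    # linear interpolation within the segment containing t_min
    for t0, t1, b0, b1 in SEGMENTS:
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    return None  # wash and re-equilibration steps not modelled

print(round(percent_b(31), 1))  # ~23.7% B at t = 31 min, within the 20-30% ACN window where activity was found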
To perform extended bioactivity screening and structural analyses of the individual peptides, more material of each peptide was needed. Multiple HPLC runs (23 injections in total) were therefore performed under the same conditions as described above, with fractions collected manually. Pure peptide fractions (>90% as estimated with LC-MS) were pooled, lyophilised and weighed. Test solutions for bioactivity were prepared with MQ-H2O.
Cultures stored at -80°C were smeared onto agar plates and cultured for 24 h at 35°C. One colony of each bacterial strain was transferred to 5 ml liquid Müller-Hinton (MH, Difco, Lawrence, KS, USA) medium in a glass tube and shaken overnight at room temperature at 600 rpm. The actively growing bacterial cultures (20 μl) were then inoculated into 5 ml MH medium and shaken for 2 h at room temperature. The antibacterial assays were performed as previously described [31]. Briefly, the bacterial cultures were diluted with medium to a final concentration of 1.3-1.5 × 10^4 bacteria/ml, and an aliquot of 50 μl was added to each well of 96-well Nunclon TM microtiter plates (Nagle Nunc Int., Denmark) preloaded with 50 μl of test sample solutions, i.e. extracts, eluates or peptides.
The test plates were incubated for 24 h at 35°C, with optical density (OD595) recorded every hour using an Envision 2103 multilabel reader controlled by the Wallac Envision manager (PerkinElmer, CT, USA). Antibacterial activity was defined as a sample showing >90% inhibition (as measured by optical density) compared to the negative (growth) controls, consisting of bacteria and water. Oxytetracycline (20 μM) served as a positive (inhibition) control. The minimum inhibitory concentration (MIC) was defined as the lowest concentration of a sample displaying >90% inhibition.
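The MIC read-out described above is easy to express programmatically. The sketch below is illustrative only; the dilution series and OD595 values are hypothetical, and the >90% inhibition threshold follows the definition given in the text:

import numpy as np

def mic(concs_uM, od_sample, od_growth_control, threshold=0.90):
    # growth inhibition relative to the bacteria/water growth control
    inhibition = 1.0 - np.asarray(od_sample) / od_growth_control
    passing = [c for c, i in zip(concs_uM, inhibition) if i > threshold]
    return min(passing) if passing else None  # None: no tested conc. inhibits

# hypothetical two-fold dilution series with endpoint OD595 readings
concs = [100, 50, 25, 12.5, 6.25, 3.13, 1.56, 0.78]
ods   = [0.02, 0.02, 0.03, 0.03, 0.04, 0.25, 0.60, 0.85]
print(mic(concs, ods, od_growth_control=0.90))  # -> 6.25 (μM)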
The synthetic peptides (see section 2.11) were also screened for antifungal activity against Candida albicans (ATCC 10231), Saccharomyces cerevisiae, Rhodotorula sp., Aureobasidium pullulans and Cladosporium sp. The antifungal assay was performed as described previously [32]. Briefly, fungal spores were dissolved in potato dextrose broth (Difco, Lawrence, KS, USA) to a concentration of 4 × 10 5 spores/ml. The spores (50 μl) were inoculated on 96-well Nunclon TM microtiter plates containing the synthetic peptides (50 μl) dissolved in MQ-H 2 O. The fungal growth and MIC were determined visually after incubation for 24 h at room temperature. MIC was defined as the lowest concentration of peptide giving no visible fungal growth. The negative (growth) control contained medium and fungal solution.
Haemolytic activity assay
Synthesised peptide analogues of EeCentrocin 1 and 2 were screened for eukaryotic cell toxicity in a haemolytic activity assay using human red blood cells, as described previously [32]. The assay was performed in 96-well U-shaped microtiter plates (Nagle Nunc) with 50 μl peptide sample, 40 μl phosphate-buffered saline (PBS) and 10 μl red blood cells. The final peptide concentrations ranged from 100 to 0.1 μM in two-fold serial dilutions. After one hour of incubation at 37°C in a shaker, the plate was centrifuged at 200 g for 5 min, the supernatants (60 μl) were carefully transferred to a new flat-bottomed polycarbonate microtiter plate (Nagle Nunc), and the absorbance at 550 nm was measured on a Synergy H1 multimode reader (BioTek, VT, USA). Cell suspension with 0.05% Triton X-100 (Sigma-Aldrich, MO, USA) in PBS served as the positive (100% haemolysis) control, and cell suspension with PBS served as the negative (0% haemolysis) control. The percent haemolysis was calculated using the formula [(A_sample - A_baseline)/(A_triton - A_baseline)] × 100. The experiment was performed in duplicate.
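The haemolysis formula above translates directly into code. A minimal sketch, with hypothetical A550 readings chosen purely for illustration:

def percent_haemolysis(a_sample, a_triton, a_baseline):
    # [(A_sample - A_baseline) / (A_triton - A_baseline)] x 100,
    # where A_triton is the 100% lysis control and A_baseline the PBS control
    return (a_sample - a_baseline) / (a_triton - a_baseline) * 100.0

print(round(percent_haemolysis(a_sample=0.42, a_triton=1.85, a_baseline=0.08), 1))  # 19.2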
Two-dimensional liquid chromatography-mass spectrometry analyses
Antibacterial HPLC fractions and SPE eluates were analysed by liquid chromatography UV-Vis mass spectrometry (LC-PDA-MS) to identify antibacterial compounds and to perform purity determinations. The LC-PDA-MS system consisted of a 2695 separation module, a Sunfire C 18 (5μm, 2.1 × 100 mm) column, a 2998 PDA detector reading from 190 to 500 nm in 1.2 nm increments, and a Micromass ZQ mass spectrometer controlled by Masslynx v4.1 software (all from Waters, MA, USA). Compounds were eluted by running a linear gradient of increasing ACN concentrations in water (both containing 0.05% TFA) from 5 to 50% over 16 min, using a flow rate of 0.2 ml/min. Samples (5-20 μl) were introduced to the MS and analysed in positive ESI mode. Ions were recorded in full scan mode in the 100-2000 m/z-range (See S2 Table for typical MS settings). The mobile phases were the same as when running HPLC (0.05% TFA/H 2 O (v/v) and 0.05% TFA/ACN (v/v)).
High-resolution mass spectrometry (HR-MS) was performed on a Thermo LTQ Orbitrap XL with an electrospray ion source (ION-MAX) coupled to an Accela HPLC-system (Thermo Fisher Scientific, MO, USA). A Supelco Ascentis Express (2.7 μm, 50 × 2.1 mm) C 18 reverse phase column was used. The datasets were deposited to The Mass Spectrometry Interactive Virtual Environment repository (MassIVE, http://massive.ucsd.edu/ProteoSAFe/static/ massive.jsp) database with accession number MSV000079515.
Peptide sequencing
Enzyme digestion with endoproteinase Arg-C and Edman degradation were performed by Eurosequence BV (Groningen, The Netherlands). Further sequence confirmation and elucidation of modified amino acids were achieved through trypsin digestion, or reduction/alkylation, and subsequent HR-MS. The Promega protocol (Promega, WI, USA, available from http://no.promega.com/resources/protocols/product-information-sheets/n/sequencing-grade-modified-trypsin-frozen-protocol/) was followed for the protease treatment. Briefly, peptides (700 μg) were dissolved in 6 M guanidine HCl, 50 mM Tris HCl (pH 8) and 4 mM dithiothreitol (DTT, Sigma-Aldrich, MO, USA) in a reaction volume of 100 μl. The reaction mixture was heated at 95°C for 20 min, cooled to room temperature, and 550 μl of 50 mM NH4HCO3 (pH 7.8) was added. Porcine trypsin (Promega, WI, USA) was added at a 33:1 ratio (peptide:trypsin, 700:21 μg, 15,000 u/mg) and incubated for 16 h at 37°C. High-resolution LC-MS of the digested peptide was performed as described in section 2.6.
Reduction and alkylation were performed by dissolving ~20 nmol peptide in 100 μl 0.5 M Tris HCl/1 mM EDTA/6 M guanidine HCl and adding 5 μl 2.2 M DTT (Sigma-Aldrich, MO, USA). The peptide solution was flushed with N2 to prevent oxidation and incubated for 16 h at 37°C. After incubation, 5 μl 4-vinylpyridine (Sigma-Aldrich, MO, USA) was added to the solution and incubated for 20 min at 37°C. The reaction was stopped using RP-SPE with a C18 cartridge as described in section 2.3. The alkylated peptides were eluted with 80% ACN/H2O (v/v) containing 0.05% TFA.
Characterisation of full length cDNA
Total RNA was isolated from the pooled coelomocytes of three animals using the QIAZol TM reagent in accordance with the manufacturer's instructions (QIAGEN, MD, USA). Reverse transcription polymerase chain reaction (RT-PCR) was carried out using a rapid amplification of cDNA ends (RACE) kit (Clontech, CA, USA). Total RNA (1 μg) was used as a template to synthesise 5' Ready-to-Go cDNA or 3' Ready-to-Go cDNA according to the manufacturer's instructions.
In order to obtain partial cDNA sequence (3' region), degenerate oligonucleotide primed PCR (DOP-PCR) was performed as previously described [14]. Briefly, 0.5 μg of template (3' Ready-to-Go cDNA), 10 × Optimised DyNAzyme™ Buffer, 1 μM of the forward primer EeCen 1DF (for EeCentrocin 1), EeCen 2DF (for EeCentrocin 2) or EeStrong 2DR (for EeStrongylocin 2) and the reverse nested universal primer (NUP), 0.2 mM dNTP, 0.4 units DyNAzyme™ II DNA polymerase (Finnzymes, Finland) and water were mixed to bring the reaction volume up to 25 μl. DOP-PCR was performed according to the following cycle: 94°C for 5 min; 35 cycles of 94°C for 30 sec, 55°C for 30 sec and 72°C for 2 min; followed by a final extension at 72°C for 10 min. The DOP-PCR products of the 3' region were cloned into the pGEM1-T vector and sequenced using primers Sp6 and T7. The correct sequences were confirmed by comparing the deduced amino acid sequences with the sequences obtained by Edman degradation. The 5' region of each gene was cloned using the gene-specific primer EeCen 1R, EeCen 2R or EeStrong 2R from the 3' region together with the primer NUP. The full-length nucleotide sequence was deduced from the overlap of the obtained RACE product with the existing partial cDNA sequence. Furthermore, the full-length coding sequences were verified by PCR amplification using primers (EeCen 1 AF/AR for EeCentrocin 1, EeCen 2 AF/AR for EeCentrocin 2 or EeStrong 2 AF/AR for EeStrongylocin 2) situated at the extreme ends of the open reading frame. An overview of all primers is presented in S3 Table. The sequences were submitted to GenBank with accession numbers KR494262, KR494263 and KR494264. For the NMR analyses, the acquired spectra were referenced to the residual solvent signal (δ1H = 4.79 ppm, water-d2), with 13C chemical shifts referenced indirectly via the gyromagnetic ratio γ1H:γ13C = 3.976813. Data processing and figures were made using the MestReNova v9.0.1 and NMRPipe v8.1 [33] software, and peptide assignment was made using CARA v1.8.4.2 [34].
Data analysis and interpretation
The potential presence of peptide homologues was examined using the BLAST search engine (http://blast.ncbi.nlm.nih.gov/Blast.cgi) [35] on the National Centre for Biotechnological Information (NCBI) homepage. The blastp and blastn algorithms were used, searching against non-redundant protein sequences. Additionally, the built-in BLAST search functionality of the LAMP database (http://biotechlab.fudan.edu.cn/database/lamp/) [36] was used. Predicted signal sequences were determined using the SignalP 4.1 server (http://www.cbs.dtu.dk/services/SignalP/) [37] with the default setting for D-cutoff values and no TM regions selected. The cDNA sequences and deduced amino acid sequences of the sea urchins were analysed using the BLAST program and the ExPASy Translate tool (http://web.expasy.org/translate/) with the genetic code set to standard. Alignments using ClustalW [38] and phylogenetic trees using the Neighbour-joining method [39], with evolutionary distances computed using the Poisson correction method [40], were constructed with the Mega 6.06 software (http://www.megasoftware.net/) [41]. Mass spectrum predictions were performed with the ChemCalc online prediction software (http://www.chemcalc.org/) [42] using the peptides tool with the resolution set to 0.001. Graphs were made using Graphpad Prism v6.00 for Windows (Graphpad Software, CA, USA); row means, SD and linear regression were computed by the software.
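For reference, the Poisson correction used for the evolutionary distances above is the simple transform d = -ln(1 - p), where p is the fraction of differing sites in the pairwise alignment. A minimal sketch with a toy pair of aligned fragments (hypothetical sequences, not from this study):

import math

def poisson_distance(seq1, seq2):
    # d = -ln(1 - p), with p the proportion of differing sites between
    # two gap-free aligned sequences of equal length
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -math.log(1.0 - p)

print(round(poisson_distance("GWWKRTV", "GWWRRSV"), 3))  # p = 2/7 -> 0.336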
Peptide synthesis
The non-brominated heavy chains (HC) of EeCentrocins 1 and 2 and the light chain (LC) of EeCentrocin 1 were synthesised commercially (GenicBio Ltd., Shanghai, China), as were the brominated HC of EeCentrocin 1 (HC-diBr, Isca Biochemicals, Devon, UK). The synthetic peptides were subjected to antimicrobial activity screening and haemolytic testing as previously described in sections 2.4 and 2.5.
The fragment GW(Br)W(Br)R of EeCentrocin 1 was synthesised (Isca Biochemicals) to perform mass spectrometric comparisons with the trypsinated N-terminal fragment of EeCentrocin 1. The brominated peptides were produced with both Trp residues substituted in the 6 position (i.e. 6-D/L-Trp) of the indole ring.
Isolation of AMPs
Antimicrobial compounds have previously been detected in the coelomic fluid and coelomocytes of various echinoderms [14-16, 20, 23, 43]. In the present study, four different SPE eluates obtained from an aqueous extract and one organic extract of E. esculentus coelomocytes were tested for antibacterial activity. Of the extracts, the 40% SPE eluate displayed the highest antibacterial activity (S1 Table). The antibacterial activity in this eluate ranged from 0.01 to 0.31 mg/ml, depending on test organism, and was therefore selected for further examinations. Out of the four bacterial strains tested (C. glutamicum, S. aureus, P. aeruginosa and E. coli), the Gram-positive C. glutamicum was the most sensitive overall. The 40% SPE eluate was fractionated by RP-HPLC and one-minute HPLC fractions were screened for activity against the same four bacterial strains. Growth-inhibiting properties were discovered in a series of fractions eluted with 20-30% ACN (Fig 2).
Three antibacterial fractions (eluting at 31, 32 and 38 min; Table 1) were further subjected to purification by RP-HPLC and manual fraction collection. Antibacterial activity was monitored on aliquots of the collected fractions during the purification process. Fraction 31 proved
Primary structure elucidation
Edman degradation. An aliquot of each peptide fraction was reduced, alkylated and sequenced by Edman degradation. The analysis revealed multiple signals of amino acids of similar intensity in each position of the peptide fractions Ee4835 and Ee4635/Ee5024, suggesting that these fractions contained peptides composed of more than one peptide chain. The reduced and alkylated peptides were therefore subjected to RP-HPLC purification and successive Edman degradation of each chain. The analysis revealed that the peptide Ee4835 seems to have a heterodimeric structure, containing a heavy chain (HC) composed of 30 amino acids (Seq 5, Table 1) and a light chain (LC) composed of 13 amino acids (Seq 6, Table 1), possibly connected via a single disulphide bond. Additionally, the analysis exposed uncommon or modified residues (X) in positions two and three of the HC. Analysis of Ee4635/Ee5024 returned two sequences with similarity to the HC of Ee4835, probably belonging to the two different peptides known to be in the sample. The main peptide (Ee4635 giving rise to the largest signal intensity both in MS analysis and during sequencing) consisted of 29 amino acids (Seq 1, Table 1). Minor but distinct signals were also recorded for a 32 amino acid peptide (Seq 2, Table 1). Both peptide sequences contained a single cysteine residue and had uncommon or otherwise modified amino acids in different positions: 1 and 6 for Ee4635 and 1 and 9 for Ee5024. Although the HPLC chromatogram of the alkylated peptide fraction displayed additional peaks, no other sequences were obtained. This indicates that additional peptide fragments were present, but N-terminally blocked. Edman degradation analysis of Ee5922 revealed a partial 17 amino acid N-terminal sequence (Seq 3, Table 1). Enzymatic treatment with endoproteinase Arg-C and subsequent purification and sequencing of cleavage products revealed an additional 9 amino acid sequence (Seq 4, Table 1). The obtained amino acid sequences formed the basis for cDNA library construction.
Characterisation of cDNA sequences. To elucidate the complete peptide sequences, degenerate primers were designed according to the partial primary peptide sequences. The constructed 3' RACE-Ready cDNA library was employed as a template to amplify the 3'-end of the transcripts. Three partial cDNA clones of ~320 bp, ~400 bp and ~370 bp were cloned and sequenced. These encoded the C-terminal ends of the purified peptides Ee4835, Ee5024 and Ee5922, respectively. No cDNA sequence matching Ee4635 was found. Using the 5' RACE-PCR approach, the 5' end cDNA sequences of Ee4835, Ee5024 and Ee5922 were cloned. The cDNA of Ee4835 was 660 bp in length with an open reading frame of 360 bp encoding a polypeptide of 119 amino acids (Fig 3). The theoretical pI and MW of the Ee4835 precursor were calculated to be 5.18 and 13040.0 Da, respectively. The cDNA of Ee5024 was 676 bp in length with an open reading frame of 366 bp encoding a polypeptide of 121 amino acids (Fig 3). The theoretical pI and MW of the Ee5024 precursor were calculated to be 5.69 and 13151.1 Da, respectively. The cDNA of Ee5922 was 675 bp in length with an open reading frame of 267 bp encoding a polypeptide of 89 amino acids (Fig 4). The theoretical pI and MW of the Ee5922 precursor were calculated to be 8.58 and 10297.96 Da, respectively. Based on the amino acids deduced from the cDNA analysis, the residues not identified during Edman degradation of the two peptides Ee4835 and Ee5024 were found to be Trp residues. The N-terminal amino acid of the peptide Ee5922 was also shown to be Trp. Since Trp is normally detected during Edman degradation, these Trp residues are likely modified. The precursor molecules of the peptides Ee4835 and Ee5024 have preprosequences. Analysis with SignalP 4.1 using the neural network model with the SignalP-noTM setting showed that the highest calculated cutoff value was located between positions 20 and 21 in the N-terminal sequence of both proteins. The predicted signal peptides therefore consist of the 20 N-terminal amino acids, followed by a prosequence of 30 amino acids (Fig 3). The native peptides with a proposed dimeric structure start at Gly-51 in Ee4835 and Trp-51 in Ee5024. A 24 amino acid interchain sequence separates the HCs from the LCs and is not present in the mature peptides. The two interchains (belonging to Ee4835 and Ee5024) are very similar, differing in only one amino acid residue. The LCs of both peptides seem to consist of 15 amino acids, and both peptides contain a C-terminal dipeptide (Gly-Arg) which, based on Edman degradation and MW data, is cleaved off. The theoretical pI of both deduced mature peptides was 10.04, indicating a cationic character. In silico analysis of the peptide Ee5922 suggested that the first 22 amino acids represent a signal peptide, followed by a prosequence containing 16 amino acids (Fig 4). The native, cysteine-rich peptide starts at Trp-39.
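Theoretical precursor parameters such as those quoted above can be reproduced in silico. The sketch below uses Biopython's ProtParam module on a short hypothetical stand-in sequence (the actual precursor sequences are available from GenBank under the accession numbers given earlier); it is illustrative, not a re-derivation of the reported values:

from Bio.SeqUtils.ProtParam import ProteinAnalysis

# hypothetical placeholder sequence, not an EeCentrocin precursor
seq = "GWWKRTVDKVRNAGRKVAGVAL"
pa = ProteinAnalysis(seq)
print(round(pa.molecular_weight(), 1))   # average MW in Da
print(round(pa.isoelectric_point(), 2))  # theoretical pI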
All peptides discovered in the present study have an abundance of positively charged amino acids (23-29%) and hydrophobic amino acids (29-42%). This indicates that these sequences are cationic and have the possibility to form an amphipathic structure, a feature which is common for most AMPs and considered important for their antimicrobial activity [2,[44][45][46][47][48]. Their positive charge will aid in the electrostatic attraction between the peptide and the anionic microbial membranes [49][50][51][52]. Distributing positively charged residues on one side and hydrophobic residues on the other side of the structure (i.e. amphipathic), allows the peptide to incorporate itself into and act on the bacterial membranes, as suggested by several authors [2,45,52]. All three peptides contain Trp residues which are considered especially important for peptide hydrophobicity and the interactions in the membrane-water interface [48].
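The residue-class fractions quoted above (23-29% positively charged, 29-42% hydrophobic) are straightforward to compute from a sequence. A minimal sketch on a hypothetical test peptide, counting K/R/H as cationic and A/V/L/I/M/F/W/Y as hydrophobic (one common convention; exact hydrophobicity scales vary):

def composition(seq):
    # fractions of cationic and hydrophobic residues in the sequence
    cationic = sum(seq.count(aa) for aa in "KRH") / len(seq)
    hydrophobic = sum(seq.count(aa) for aa in "AVLIMFWY") / len(seq)
    return cationic, hydrophobic

cat, hyd = composition("GWWKRTVDKVRNAGRKVAGVAL")  # hypothetical sequence
print(f"cationic: {cat:.0%}, hydrophobic: {hyd:.0%}")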
Homology searches and bioinformatics
BLAST searches performed on the deduced full-length amino acid sequences of Ee4835, Ee5024 (Fig 3) and Ee5922 (Fig 4) revealed homology to the centrocins and strongylocins isolated from S. droebachiensis [14,23]. Several putative proteins in S. purpuratus were also found to be homologues of the Ee-peptides. Due to their apparent similarity with centrocins and strongylocins (strongylocin 2 primarily), the names EeCentrocin 1 (Ee4835) and 2 (Ee5024) and EeStrongylocin 2 (Ee5922) are proposed for the peptides characterised in the present study. For clarity, the originally characterised centrocins and strongylocins (from S. droebachiensis) will be referred to as SdCentrocins and SdStrongylocins for the remainder of this paper.
Alignment of EeCentrocins, SdCentrocins and predicted homologues in S. purpuratus displays high similarity in the preprosequence region (58-67%) and in the interchain regions (58-79%), but not in the HC or LC regions (Fig 3) which display a much greater individual diversity. This suggests that the HCs and LCs are subject to a much higher mutation rate and that the signal sequences and pro sequences are more conserved regions. The partial peptide sequence of Ee4635 did not return any homologous sequences using NCBI BLAST search analysis. However, analysis performed by the built-in BLAST search engine of the LAMP database revealed low (E-value of 0.59) but significant homology to SdCentrocin 1. Based on the fact that the sequence also contains one cysteine residue, unidentified or modified residues and an abundance of positively charged and hydrophobic amino acids, it could be hypothesised that this peptide too belongs to the diverse heavy chains of the centrocins.
In silico analysis of the complete EeCentrocin sequences indicates that the first 20 amino acids in the precursors function as signal peptides. Furthermore, the data show that the precursor molecules have a first prosequence region of 30 amino acids, followed by a heavy chain sequence (30 and 32 amino acids for EeCentrocin 1 and 2 respectively), a second prosequence region of 24 amino acids (interchain), a light chain sequence of 15 amino acids and finally a prosequence consisting of two amino acids. The function of these prosequences is unknown, but they might aid in proper folding of the active peptides and/or function as a target for sitespecific proteases [23,53].
Alignment of EeStrongylocin 2, SdStrongylocins and predicted homologues in S. purpuratus (Fig 4) displays similarities in the presequence region (50-59%), the prosequence region (12-50%) and in the mature peptide (40-84%). EeStrongylocin 2 displays greatest identity with SdStrongylocin 2a and 2b (68-69%). In silico analysis defined the first 22 N-terminal amino acids as the signal peptide, leaving a 16 amino acid prosequence before the mature peptide. As centrocin-like and strongylocin-like peptide sequences have been discovered in three species of sea urchins, the possibility of them being a trait of this class of echinoderms exist, increasing the possibility of discovering other homologous bioactive peptides in other species of sea urchins. Fig 5 represents the phylogenetic trees composed of all homologues found in S. droebachiensis and S. purpuratus by BLAST searches. It appears that the EeCentrocins (Fig 5A) are phylogenetically separated from the other peptides, sharing one common ancestor with all. The SdCentrocins are more closely related to the predicted centrocin-like proteins in S. purpuratus. The SdStrongylocin 1 and 2 AMPs seem to be separated phylogenetically (Fig 5B) but share a common ancestor. EeStrongylocin 2 aligns more closely with SdStrongylocin 2. The S. purpuratus genome has been completely sequenced [54] whereas only a few proteins from S. droebachiensis and E. esculentus have been sequenced. It is therefore unknown whether the two species E. esculentus and S. droebachiensis contain additional strongylocin-like and centrocinlike peptides or not, and explains the excess of sequences from S. purpuratus in the figures.
Characterisation of post-translational modifications
The theoretical monoisotopic mass of EeCentrocin 1 (deduced from cDNA) containing an intramolecular disulphide bond was calculated to be 4675.37 Da, leaving a gap of 154.83 Da to the native isolated peptide (4830.20 Da). In order to identify the modifications of the second and third amino acids (both Trp residues according to the cDNA), the peptide was digested with trypsin and analysed by HR-MS. The mass value of the major fragment and its corresponding isotope distribution of [M+H]+ ions (Fig 6A) agreed well with the ion distribution of the synthetically produced fragment GW(Br)W(Br)R (Fig 6B) and the theoretical ion distribution of [GW(Br)W(Br)R+H]+ (Fig 6C), indicating that the peptide contains two brominated Trp residues. Seven ion peaks are clearly visible in all three figures, with similar relative abundances and minute differences in m/z values (see S7 Table for calculated and measured m/z values). The presence of two Br-Trp residues in EeCentrocin 1 leads to a theoretical monoisotopic mass of 4831.19 Da. This exceeds the measured mass (4830.20 Da) by approximately one Da, suggesting that one of the peptide chains is amidated at the C-terminus.
The deduced sequence of the EeCentrocin 1 LC holds a dipeptide, Gly-Arg, at the C-terminal end, which resembles previously published amidation signal sequences. As shown by HR-MS (S3 Fig), the LC of the isolated peptide is indeed amidated at the C-terminus. The tachyplesin precursor from the horseshoe crab (Tachypleus tridentatus) [56], the aureins (excluding aurein 5.3) from the frog (Litoria aurea) [57] and astacidin 2 from the freshwater crayfish (Pacifastacus leniusculus) [58] all contain amidation signals such as "Gly-Lys" and "Gly-Lys-Arg", which lead to C-terminal amidation. However, in the centrocins [23] of S. droebachiensis, no amidation is observed despite the presence of the cleaved-off "Gly-Arg" at the C-terminus.
No MS data fit the theoretical monoisotopic mass (4881.56 Da) calculated from the deduced amino acid sequence of EeCentrocin 2. Based on the findings from EeCentrocin 1 and the bromination occurring in the centrocins [23], we hypothesise that EeCentrocin 2 also has two post-translationally brominated Trp residues (positions 1 and 9) in the HC. This is also based on the inability of Edman degradation to identify any residue in these positions. Adding two bromines to the elemental composition leads to a theoretical mass of 5037.38 Da. This deviates from the isolated peptide by +18.00 Da, indicating the presence of additional modifications on the isolated AMP. The deduced LC sequence of EeCentrocin 2 also contains a "Gly-Arg" amidation signal, indicating that the peptide contains a C-terminally amidated His residue. Additionally, the LC contains an N-terminal Gln residue. The formation of pyroglutamic acid (Glp) is known to occur both enzymatically and spontaneously [59,60] when Gln or Glu is located at the N-terminus of a peptide sequence. HR-MS and MS/MS data of the alkylated LC of EeCentrocin 2 (S4 Fig) support the formation of both N-terminal Glp and C-terminal amidation, leading to a theoretical monoisotopic mass of 5019.37 Da, which corresponds to the measured mass. Glp formation also explains why no sequence was obtained for the LC during Edman degradation, as this method sequences α-amino groups [61]. In The Antimicrobial Peptide Database, there are 19 entries with N-terminal Glp, of which one is of marine origin [62]. The measured monoisotopic mass of EeStrongylocin 2 corresponds to the amino acid sequence deduced from cDNA. The theoretical MW of the peptide with 3 disulphide bonds is 5839.86 Da, and by replacing an indole hydrogen with a bromine, the theoretical monoisotopic mass (5917.80 Da) matches the experimental one (5917.77 Da). Additionally, a brominated ion was detected at m/z 769.19 (Fig 7 and S7 Fig). This led to the confirmation of not only 6-Br-Trp in EeCentrocin 1 and EeStrongylocin 2, but also two 6-Br-Trp residues in EeCentrocin 2, where only cDNA data (displaying Trp) existed for positions 1 and 9. One other 6-Br-Trp residue was identified, likely belonging to the 6th amino acid of Ee4635, as the chemical shifts of a 6-Br-Trp in the first position of Ee4635 would perfectly overlap with those of EeCentrocin 2 due to their identical N-termini.
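The mass-shift bookkeeping running through the last few paragraphs can be summarised in a few lines of code. The monoisotopic shifts below are standard values (bromination of an indole replaces one H with Br; amidation converts the C-terminal OH to NH2; pyroglutamate formation from Gln loses NH3), and the deduced masses are those quoted in the text; a minimal sketch:

# monoisotopic mass shifts (Da) for the PTMs discussed above
BR_TRP    = 78.9183376 - 1.0078250   # indole H -> Br, per brominated Trp
AMIDATION = -0.9840                  # C-terminal -OH -> -NH2
PYRO_GLU  = -17.0265                 # N-terminal Gln -> pyroglutamate (-NH3)

def modified_mass(deduced, n_br=0, amidated=False, pyro=False):
    m = deduced + n_br * BR_TRP
    m += AMIDATION if amidated else 0.0
    m += PYRO_GLU if pyro else 0.0
    return round(m, 2)

# EeCentrocin 1: 2x 6-Br-Trp plus C-terminal amidation of the LC
print(modified_mass(4675.37, n_br=2, amidated=True))             # 4830.21 (measured 4830.20)
# EeCentrocin 2: 2x 6-Br-Trp, Glp formation and amidation
print(modified_mass(4881.56, n_br=2, amidated=True, pyro=True))  # 5019.37 (measured 5019.37)
# EeStrongylocin 2: a single 6-Br-Trp
print(modified_mass(5839.86, n_br=1))                            # 5917.77 (measured 5917.77)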
Marine species are well known to incorporate bromine into their secondary metabolites and peptides [63,64]. Bromination of Trp at the 6 position as a post-translational modification was first reported in 1997, in toxins isolated from the cone snails Conus imperialis and C. radiatus [63]. Styelin D from the marine tunicate (Styela clava) also has a confirmed 6-bromination of Trp [65]. Several other marine organisms have brominated Trp residues, but without confirmed positioning: cathelicidins from the Atlantic hagfish (Myxine glutinosa) [49], hedistin from the marine annelid (Nereis diversicolor) [66], and strongylocin 2 and the centrocins from the green sea urchin (S. droebachiensis) [14,23]. The biological function of bromine substitution of Trp is not known, but it has been suggested to aid in protecting the peptides against proteolysis [49].
No ordered structure of the centrocin peptides was detected by NMR in MQ-H₂O or at high salt concentrations. However, several peptides only adopt an ordered structure in the presence of membranes or membrane mimics [52]. The secondary structure of EeStrongylocin 2, as dictated by its disulphide bonds, is yet to be elucidated. The proposed structure of the two dimeric peptides, EeCentrocins 1 and 2, and the primary structure of EeStrongylocin 2 can be viewed in the accompanying figure.
Bioactivities
The antimicrobial activity of the native EeCentrocin 1 was measured against four selected bacterial strains (Table 2, raw data in S9 Table). The peptide displayed potent antibacterial activity against the Gram-positive bacteria C. glutamicum and S. aureus (MIC = 0.78 μM against both) and the Gram-negative bacteria E. coli and P. aeruginosa (MIC = 0.1 and 0.78 μM, respectively). The activity was in the same range as for the SdCentrocins [23]. Because purification proved challenging, we were unable to investigate the antibacterial potency of the native EeCentrocin 2 and Ee4635; however, a mixture of the two peptides was antibacterial. Antibacterial potency varies enormously among amphipathic AMPs [67-69], but EeCentrocin 1 (and the HCs of EeCentrocins 1 and 2) are antimicrobial in ranges similar to those of the SdCentrocins [23]. According to previously published work [51], this is a typical activity range for AMPs. It has also been stated that the MIC of an AMP rarely falls below 0.5-1.0 μM [50], which is the MIC region of the most potent AMPs presented in this paper. The HC of the EeCentrocins was shown to be the antimicrobial portion of the peptides, displaying similar potency to the native peptide against some bacterial strains (Table 2). Interestingly, the activity of the HC also appears to be independent of the bromination of the Trp residues. The MICs of EeCentrocin 1 HC-diBr towards bacteria ranged from 0.78 to 6.25 μM, which is almost identical to the MICs displayed by EeCentrocin 1 HC (0.39-6.25 μM). The HC of EeCentrocin 2 displayed similar antibacterial activities (MICs ranging from 0.78 to 6.25 μM). The antifungal activities of the two EeCentrocin 1 HCs were also quite similar, differing only by one dilution step, whereas EeCentrocin 2 HC seems to be a slightly more potent antifungal agent. The LC of EeCentrocin 1 (synthesised with a C-terminal carboxyl group) makes no observable contribution to antimicrobial activity when the native peptide is compared with the dibrominated or non-brominated HCs, and it is not antimicrobial when tested alone. This supports our previous studies on the LC of the SdCentrocins [23] and suggests other tasks for the LC. Whether a C-terminally amidated peptide (increasing the charge by +1) would have displayed antimicrobial activity is uncertain.
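For context, the MIC values quoted above fall on a two-fold serial dilution series, as is typical of broth microdilution assays; the sketch below shows how a MIC is read from such a series, with entirely hypothetical growth flags.

```python
# Two-fold serial dilution series typical of broth microdilution MIC assays;
# the concentrations reported in the text (0.1, 0.39, 0.78, ... 6.25 uM) fall
# on such a series. The growth flags below are hypothetical, for illustration.
def mic(concentrations_uM, growth_observed):
    """Return the lowest concentration showing no visible growth, or None."""
    inhibited = [c for c, g in sorted(zip(concentrations_uM, growth_observed)) if not g]
    return min(inhibited) if inhibited else None

top = 100.0
series = [top / 2**i for i in range(11)]       # 100, 50, 25, ... ~0.098 uM
growth = [c < 0.78 for c in series]            # hypothetical: growth below ~0.78 uM
print(f"MIC = {mic(series, growth):.2f} uM")   # -> 0.78
```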
The MIC of native EeStrongylocin 2 was found to range from 0.78 to 3.13 μM against the tested strains. The SdStrongylocins displayed similar antibacterial activity with MICs ranging from 2.5 to 5 μM [14]. The two recombinantly produced peptides, SpStrongylocins 1 and 2 (originating from S. purpuratus) displayed MICs towards the same strains at 15 and 7.5 μM respectively [24].
The synthesised EeCentrocin 1 peptide analogues (HC, HC-diBr and LC) displayed no or minor haemolytic activity at a concentration of 100 μM (Fig 9, raw data in S10 Table), a concentration 16 times higher than the MIC against the least sensitive bacteria (staphylococci) tested in this study. No or minor haemolytic activity is a prerequisite if the peptides are ever to be exploited clinically or as food additives [70]. EeCentrocin 2 HC was more haemolytic, displaying 11.7, 18.9 and 56.3% haemolysis at concentrations of 25.0, 50.0 and 100 μM, respectively. The reason for this higher haemolytic activity of EeCentrocin 2 compared to EeCentrocin 1 is unclear. However, increases in physicochemical parameters such as hydrophobicity and hydrophobic face are known to enhance the haemolytic activity of α-helical AMPs [71].
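Percent haemolysis is conventionally computed against a buffer blank and a full-lysis control; since the exact controls are not specified in this excerpt, the sketch below uses the conventional formula with hypothetical absorbance values.

```python
# Conventional percent-haemolysis calculation against a buffer blank and a
# full-lysis control (e.g. detergent); the controls used in the study are not
# specified here, and all absorbance values below are hypothetical.
def percent_haemolysis(a_sample, a_blank, a_full_lysis):
    return 100.0 * (a_sample - a_blank) / (a_full_lysis - a_blank)

print(round(percent_haemolysis(0.62, 0.05, 1.06), 1))  # -> 56.4 with these values
```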
Conclusion
The widespread use of antibiotics and the associated development of microbial resistance to these drugs have emerged as a major global problem. The search for new antibacterial agents has therefore become an important area of natural product drug discovery. Cationic AMPs have previously been isolated from various species and phyla, and represent a novel class of antibiotics. In this study, potent AMPs were characterised for the first time from coelomocyte extracts of the edible sea urchin, E. esculentus, collected from sub-Arctic waters. Three 5-6 kDa AMPs were shown to be novel members of the centrocin and strongylocin families of AMPs. The EeCentrocins have a heterodimeric structure composed of a heavy chain and a light chain connected by a single disulphide bond. Neither the brominated amino acids in the heavy chain nor a light chain with C-terminal amidation and N-terminal pyroglutamic acid seems to be necessary for maintaining antibacterial activity, but these additional structures might aid in protecting the peptides against proteolytic degradation. The secondary structure and three-dimensional conformation of EeStrongylocin 2, as dictated by its three disulphide bonds, remain unknown and should be explored. Future research should also include mode-of-action studies, in which cellular targets are identified, and structure-activity relationship studies in which truncated analogues of the EeCentrocin HCs are constructed to pinpoint the pharmacophore. Additionally, as sea urchins in general appear to produce strongylocins and centrocins, a genomic approach to discovering homologues in other sea urchins (or echinoderms) could be a beneficial route to novel AMPs.
This study has demonstrated that marine invertebrates are a valuable resource for discovering unique bioactive peptides, providing promising leads for the development of novel antimicrobial drugs.
Design of a planar distributed three-beam electron gun with narrow beam separation for a W-band staggered double-vane TWT
A novel planar distributed three-beam electron gun with narrow beam separations is designed based on the grid-loaded sheet beam method. The dimensions of the three-beam gun in the y-O-z plane are determined using our basic theoretical design method developed for sheet beam guns. The results show that the profile of the focusing electrode in the y-O-z plane is related to the beam width in the x-O-z plane. The characteristics and parameters of three-beam array formation, together with their stability, are then analyzed thoroughly by adjusting the control grids in the x-O-z plane. Each beamlet obtained has only a small axial deviation between its two transverse waists. Based on the theoretical analysis and simulations, the planar three-beam electron gun is constructed with a beam voltage of 22 kV and a current of 3 × 0.15 A. An average radius of 0.08 mm at each beam waist is obtained with a compression factor of 4 for the 0.18 mm beam tunnel radius. The beam waist is achieved at about 4.4 mm from the cathode, with an axis separation of about 0.46 mm between adjacent beamlets. The design method can thus be used generally to construct this type of narrow-beam-separation, planar distributed multiple-beam electron gun for miniaturized and integrated vacuum electron devices in the millimeter-wave and terahertz bands.
High-power millimeter-wave and terahertz (THz) sources play a critical role in a wide spectrum of potential applications, including high-data-rate communications, biomedical diagnosis, chemical spectroscopy and threat detection [1-3]. The travelling wave tube (TWT) is a kind of vacuum electron device (VED) that can provide high output power over a broad bandwidth [4]. However, a millimeter-wave or THz TWT with a conventional circular beam has difficulty meeting these demands because of the limited current-carrying capacity of the beam. Adopting a sheet beam can significantly increase the beam's current capacity, but the attendant problems are the difficulty of sheet beam focusing and an over-moded beam tunnel [5,6], which may also result in beam self-excitation and oscillation. A planar distributed multi-beam TWT, with independent beam tunnels and convergent beamlets, can achieve high output power by sustaining a high total beam current with a small current in each beam, avoiding the over-moded beam tunnels of sheet beam devices. Moreover, the planar interaction circuit suited to a planar distributed multi-beam array is not only easier to fabricate than circular structures in the millimeter-wave and THz regime, but also meets the trend towards planarity, miniaturization and integration for the next generation of vacuum electron devices [7,8].
Focused on the numerous advantages of integrated VEDs with planar multi-beam arrays, an increasing number of groups are engaged in related research. Planar multi-beam arrays have been used successfully not only in folded waveguide TWTs [9-12] but also in staggered double vane TWTs [13,14]. To date, there are three kinds of planar multi-beam devices with distinct interaction characteristics between the beamlets and the high-frequency field. In the first, each beamlet matches an independent waveguide, identical to a single tube operating in the fundamental mode; the adjacent axial separation in this kind of device is therefore not limited and can be adjusted according to the level of fabrication. Two typical works follow. In 2010, a 220 GHz three-beam cascaded serpentine TWT was developed [10]; simulation using MAGIC-3D shows that the cascaded three-beam device can achieve a high gain of 42 dB with a very compact circuit length of only 1.5 cm. As part of the DARPA HiFIVE program, Northrop Grumman Corporation developed a 220 GHz five-beam folded waveguide TWT in 2013, based on power combination among five identical FWG circuits of finite gain [9]; short-pulse tests yielded 56 W at 214 GHz with 5.0 GHz bandwidth. However, the spacing between adjacent beam axes in both devices is large enough to construct an independent electron gun for each beamlet, so a well-converged beam array is easy to obtain by an approach similar to that of the multi-beam klystron [15]. In the second kind, two or three beamlets share a common interaction circuit using an over-moded structure [11,14]; the adjacent axial separation is about 1-2 mm in over-moded multi-beam devices for W-band high-power coherent radiation. In the third, two or three beamlets share a common interaction circuit operating in the fundamental mode [12,13]; the adjacent axial separation is generally less than 0.5 mm at 95 GHz and 0.2 mm at 220 GHz for high-output-power tubes. For the latter two cases it is therefore challenging to develop new multi-beam gun design techniques that generate the desired planar multi-beam array with such narrow beam separation. The present work is part of our project on a "W-band Multiple Beam High-power Staggered Double-vane TWT" [13]. Since the designed staggered double-vane slow wave structure (SWS) operates in the fundamental mode (TM11) at the first spatial harmonic, the spacing between adjacent beam axes is only 0.46 mm. To decrease the emission current density while keeping the narrow beam separation, we construct a novel planar three-beam gun that combines a sheet beam gun with special control grids [6].
In this paper, a novel planar distributed three-beam electron gun with narrow beam separations is theoretically designed based on the grid-loaded sheet beam method recently developed by our group [15]. This design method is universal and has been fully verified for reliability and stability in obtaining a planar electron beam by one-dimensional compression of the cathode beam. We have also analysed and verified the stability of the designed electron gun, including its sensitivity to geometric dimensions and to electrical parameters such as voltage and current; the method can therefore be widely used in millimeter-wave and terahertz planar-beam devices. Compared with the relevant literature [15,16], a synthesised new method combining theoretical analysis and numerical calculation is adopted for the first time to design a planar three-beam gun with narrow beam separation. Owing to the plane symmetry of the three-beam array, the gun is designed with the same profile as a sheet beam gun throughout, which differs from the conventional design concept of paraxial approximation for structures with multiple beamlets [16,17]. The dimensions and positions of the control grids are then adjusted, combined with the construction of a special focusing electrode, to generate the desired planar three-beam array with narrow beam separation. Good agreement has been achieved between the theoretical analysis and 3D simulation of the designed planar three-beam electron gun.
Design specifications and methodologies
The design of the W-band three-beam high-power staggered double-vane (SDV) TWT with a single-period structure is given in Fig. 1, which shows the arrangement of the planar three-beam array with its SDV SWS. The design specifications of the planar three-beam electron gun are as follows. The operating voltage U_a and the beam current I are 22 kV and 3 × 0.15 A, respectively. Each beam tunnel radius is 0.18 mm and the axis separation of adjacent electron beams is 0.46 mm. The three-beam array has such narrow beam separation, owing to the space limitation along the x-axis of the fundamental mode, that a conventional single circular-beam electron gun would require a high emission current density and suffer a shortened cathode lifetime. Thus, a novel planar distributed three-beam gun based on a grid-loaded sheet beam gun is developed, and the design guidelines are presented in this paper. Figure 2 shows the schematic of the three-beam gun in the two transverse directions. The outline of the three-beam gun is constructed identically to the sheet beam gun, with the same profile except for the focusing electrode. In the following discussion, the centres of the three beamlets are at x = -0.46 mm, y = 0; at x = 0, y = 0; and at x = 0.46 mm, y = 0, respectively. The three-beam array emits electrons from the cathode located at z = 0, and the electrons are transported along the positive z-axis. Figure 3 shows the logical diagram giving the detailed design guidelines of the novel planar three-beam gun with narrow beam separation. The design process can be divided into two steps. First, the dimensions of the three-beam gun in the y-O-z plane are determined from the theoretical guideline for designing a Pierce-type sheet beam gun, as in our work in reference 15. In this step, several key parameters of the three-beam gun can be determined, including the half-angle θ, the curvature radii of the cathode R_c and anode R_a, the cathode-to-anode spacing dK_a, and the half-height of the anode aperture r_a; simulations are then used to verify the key parameters and the beam transport in the y-O-z plane thoroughly. Second, the dimensions of the three-beam gun in the x-O-z plane are determined from a parameter-scope analysis of the formation characteristics and stability of the three-beam array, with optimisation of several parameters: the separation between adjacent grid wires d_x, the extension length of the focusing electrode l_zW, the thickness of the control grids t_z, and the separation from cathode to control grids d_z. The determination of the focusing electrode profile in the two directions is included in these two steps, respectively. Figure 4 shows the configuration of the three-beam cathode combined with the control grids. The widths (dimensions in the x direction) of the side portions of the control electrodes (a/2) and the masks (b/2) equal half those of their counterparts located in the middle of the structure, so as to construct equivalent electric field distributions for the side beams and the central beam. To obtain excellent beam transmission of the three-beam array, a small beam-to-tunnel radial fill factor is kept; the intended average radius at the beam waist is about 0.08 mm.
Thus, the half-height of the emission portion is about 0.32 mm for the selected beam compression factor of 4 in the y direction. The half-width of each emission portion is chosen as 0.1 mm, as a compromise between a lower emission current density and the focusing characteristics of the control grids. The cross-sections of each emission portion and of the complete cathode are therefore about 0.2 mm × 0.64 mm and 1.38 mm × 0.64 mm, respectively. The compression factor of each beamlet in the y direction is set to 4, which means an emission current density of 117 A/cm² is required of the cathode. Such a high emission density can be provided by a new type of impregnated dispenser cathode using an active substance with a molar ratio of 26BaO·29SrO·8Sc₂O₃·7CaO·Al₂O₃, which has experimentally achieved an electron beam emission density of over 160 A/cm² [18].
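The quoted 117 A/cm² follows directly from the beamlet current and the emission-portion area; a quick check:

```python
# Cathode loading: each beamlet carries 0.15 A from a 0.2 mm x 0.64 mm
# emission portion, so the required emission current density is I / A.
I_beamlet = 0.15                        # A
area_cm2 = (0.2 / 10) * (0.64 / 10)     # emission portion area in cm^2
print(f"J = {I_beamlet / area_cm2:.0f} A/cm^2")   # -> 117, as quoted
```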
In part III of this paper, we show that the dimensions of the three-beam gun in the y-O-z plane are determined from the theoretical guideline for designing a Pierce-type sheet beam gun, as in our work in reference 17. Each electron beam is formed from a narrow rectangular cathode, compressed only in the y direction with a ratio of 4. In part IV, however, we construct the desired electric field distribution with the focusing electrode combined with the control grids, which differs from conventional Pierce-type guns, to attain the same proper beam size in the x direction. As a result, the rectangular electron beam at the waist has the same dimension in the x and y directions. Thus, in this manuscript we use radius symbols such as R and r, which are widely used in the design of traditional round electron beams; this notation denotes only that our three electron beams have the same dimension in the x and y directions.
Dimensions of the three-beam gun in y-O-z plane
In this section, several key dimensions of the three-beam gun in the y-O-z plane are determined from the extended theoretical method used for designing the sheet beam gun with the same profile in reference 17. Simulation results show that the required length l_zW of the focusing electrode and the beam throw distance z_m differ according to the beam width in the x direction; an appropriate range of l_zW and z_m is therefore estimated for the design of the three-beam gun. Figure 5 shows the important parameters for the design of the sheet beam electron gun [17]. Here, R_c and R_a are the curvature radii of the cathode and the anode. θ is the half-angle, i.e. the angle between the cathode edge and the convergence point as the sheet beam passes through the anode aperture. θ′ is the new half-angle under the influence of the divergence of the anode aperture, and θ″ is the equivalent half-angle considering both the convergence of the initial sheet beam and the divergence of the anode aperture. r_c is the half-height of the cathode, and r_a the half-height of the anode aperture. Note that θ depends only on R_a (or R_c) when R_c/R_a is given.
Determination of half-angle θ. According to the designed three-beam gun model, the current of a beam portion of unit width, I_u, can be calculated as 0.75 A/mm following reference 17. The design curves of beam compression factor and throw distance for the sheet beam gun are shown in Fig. 6. According to Fig. 6(a), for a beam compression factor of 4, the curvature-radius ratio of cathode to anode, R_c/R_a, can be calculated as 2.464. Next, the L-B function (−β_a)² can be consulted from the chart of (−β)² ∼ R_c/R (Table III of the cited reference), and the curvature radius of the anode can be expressed in terms of θ as R_a = 11.924θ (equation (2)), with the curvature radius of the cathode R_c expressed in a corresponding form (equation (3)). Combining (2) and (3) with the value of R_c/R_a, the half-angle θ can be calculated as 5.983 degrees.

Determination of other parameters for the sheet beam gun. According to formulas (4)-(6), the values of R_c, R_a and the cathode-to-anode spacing dK_a can be calculated as 3.070 mm, 1.246 mm and 1.814 mm, respectively. The throw distance z_m can be obtained as 3.011 mm from Fig. 6(b) with the value of R_c/R_a. Following the advanced method for anode-aperture reconstruction in the sheet beam gun, the half-height of the reconstructed anode aperture r_a can be calculated as 0.245 mm. With the correction for cylindrical aberration, the key parameters of the sheet beam gun are thus determined.
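The quoted dimensions are mutually consistent under one reading of the two relations: equation (2) as recovered above, R_a = 11.924θ with θ in radians, and, as our assumption (the explicit form of equation (3) is not reproduced here), R_c = r_c/sin θ with cathode half-height r_c = 0.32 mm. The sketch below solves for θ under that assumption.

```python
import math

# Hedged consistency check of the quoted gun dimensions. Equation (2) is taken
# from the text as R_a = 11.924*theta (theta in radians); the form assumed here
# for equation (3), R_c = r_c / sin(theta), is our reading, not the source's.
r_c_half = 0.32                       # mm, cathode half-height
ratio = 2.464                         # R_c / R_a from the design curve

def f(theta):                         # zero of f gives the half-angle
    return ratio * 11.924 * theta * math.sin(theta) - r_c_half

lo, hi = 0.01, 0.5                    # rad, bracketing the root
for _ in range(60):                   # simple bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

theta = 0.5 * (lo + hi)
print(f"theta = {math.degrees(theta):.3f} deg")   # ~5.985 deg (text: 5.983)
print(f"R_a = {11.924 * theta:.3f} mm, R_c = {ratio * 11.924 * theta:.3f} mm")
# -> R_a ~ 1.246 mm and R_c ~ 3.069 mm, matching the quoted 1.246 mm and 3.070 mm
```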
Simulation and verification.
To verify the effectiveness of the theoretical method above for designing sheet beam guns and to study the characteristics of the focusing electrode, two sheet beam gun models with different cathode cross-sections are constructed in CST Particle Studio [20,21]: one with a single emission portion of the three-beam gun (Model I) and one with the complete cathode (Model II).
A focusing electrode in a sheet beam gun is constructed with the electric field distribution for wedge-shaped converging radial flow. In our sheet beam gun model, the focusing electrode profile in the y direction initially comprises two tilted metal walls, making an angle of 67.5° with the normal to the cathode, followed by two parallel metal planes. With the 3D simulation software CST PS, the proper axial dimensions of the tilted parts in the two models are both l_zF = 0.2 mm, while those of the parallel portions are l_zW = 0.13 mm for Model I and l_zW = 0.24 mm for Model II, with both models performing well.
Simulated beam trajectories of the two models are shown in Fig. 7, and Fig. 8 compares their simulated beam envelopes in the y direction. Both sheet beams achieve the intended waist of 0.08 mm. However, the throw distances z_m differ markedly: a longer throw distance, 5.5 mm, occurs in Model I (with the smaller beam width), whereas the throw distance in Model II is about 3.5 mm.
The main reason for the above differences in l_zW and z_m between the two models may be that the y component of the space-charge force in a sheet beam varies with the beam width in the x direction, owing to beam boundary distortion. The narrower sheet beam has a smaller y component of space-charge force, because boundary distortion affects it more strongly. Thus, the convergent effect of the focusing electrode needed in Model I is less than that needed in Model II. Consequently, an intermediate convergent effect of the focusing electrode is needed in our three-beam gun, 0.13 mm < l_zW < 0.24 mm, and the throw distance of each beamlet in the three-beam array should lie between 3.5 mm and 5.5 mm.
From the calculation, the beam current is about 0.155 A for Model I and 1.076 A for Model II, corresponding to a unit beam current of 0.78 A/mm, close to the expected value of 0.75 A/mm. The effectiveness of the above method for designing a sheet beam gun is thus verified, and the key parameters are used in the following three-beam gun construction.
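The per-width comparison is a one-line check of the beam currents and cathode widths just quoted:

```python
# Per-width (unit) beam current for the two verification models; the design
# scales correctly with cathode width in x if the two values agree.
models = {"Model I": (0.155, 0.20), "Model II": (1.076, 1.38)}  # (A, mm width in x)
for name, (current_a, width_mm) in models.items():
    print(f"{name}: {current_a / width_mm:.3f} A/mm")  # 0.775 and 0.780 A/mm
```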
Determination of the dimensions for three-beam gun in x-O-z plane with beam formation stability
Based on the analysis, the overall outline of the novel three-beam gun in the x-O-z plane can be built initially according to the sheet beam gun of Model II. The focusing electrode profile, however, must be unique to the three-beam gun, because the electric field distributions required for beam compression differ between the three-beam array and the sheet beam. In the novel three-beam gun, the desired electric field distribution for three convergent beamlets is constructed by the focusing electrode combined with the control grids, whose characteristics directly and effectively determine the performance of the gun. In our model, the potentials of the focusing electrode and the control grids are set at the cathode potential for simplicity and stable operation. In this section, the profiles of the focusing electrode in the two transverse directions are determined so that the three beamlets are equivalent, with good beam characteristics. Transmission of a beamlet with transverse velocity in the axial Brillouin magnetic field is verified theoretically. Finally, several key dimensions of the control grids are discussed based on an analysis of the formation characteristics of the three-beam array.
Focusing electrode profile in the x direction. Non-equivalence of beamlets at different lateral positions in the three-beam array may cause problems in the subsequent transport under the magnetic focusing structure. To generate three beamlets with similar properties, a novel focusing electrode is constructed that can adjust the electric field distribution in the two transverse directions independently. Figure 9 shows the configuration of this focusing electrode for the three-beam gun. Its profile in the y direction is identical to that in the sheet beam gun above; in the x direction, however, the two parallel metal-plane parts are absent and the inclination angle of the focusing electrode, θ_FX, is larger. Extensive numerical simulation of various three-beam gun models shows that the beam currents of the beamlets become approximately identical when θ_FX increases to a particular value, about 1.13 times 67.5°, i.e. 76.275°, which makes the three beamlets transport strictly along their symmetry axes. For smaller inclination angles, the side beam currents are less than that of the central beam and the side beamlets tilt towards the gun centre during transport (Fig. 2). The variations of the two parameters d_x and d_z are therefore discussed primarily through their effects on the throw distances in the two transverse directions. Figure 10(a) gives the variation of the throw distance in the y direction, Z_mY, with d_x and d_z. In our calculation, the points run from 0.39 to 0.44 mm for d_x and from 0.11 to 0.16 mm for d_z, at small intervals of 0.002 mm, giving enough original data to express the characteristics and tendencies in all similar figures in this paper; cubic spline interpolation in MATLAB was used in plotting to achieve smooth curves. The results show that Z_mY remains almost constant when the ratio of d_x to d_z is kept unchanged, and its linear range is mainly from 4.0 to 4.4 mm. The results also verify the earlier deduction that the throw distance of each beamlet lies between 3.5 mm and 5.5 mm. Figure 10(b) gives the variation of the throw distance in the x direction, Z_mX, with d_x and d_z. The models with Z_mX < 2.3 in the top-left corner are invalid because the trajectories cross each other under the stronger convergence in the x direction. Given the calculated cathode-to-anode spacing dK_a = 1.814 mm, the models with Z_mX < 1.8 in the bottom-right corner are invalid because the beam waist in the x direction has not emerged beyond the anode aperture. The upper limits of d_x and d_z are therefore set at 0.44 mm and 0.16 mm, respectively, to keep a wide valid data range. Moreover, Z_mX shrinks dramatically when d_z and d_x are reduced simultaneously at a constant ratio, whereas Z_mY remains almost constant according to Fig. 10(a). To facilitate the design and matching of the magnetic focusing system, smaller separations between the waists in the two transverse directions are desired, so the lower limits of d_x and d_z should not be too small.
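The smoothing described above (cubic spline interpolation over the regular (d_x, d_z) grid, done in MATLAB by the authors) can be reproduced with standard tools; the sketch below uses SciPy instead, with purely hypothetical placeholder values standing in for the simulated throw-distance map.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

d_x = np.arange(0.39, 0.4401, 0.002)   # mm, 26 grid points
d_z = np.arange(0.11, 0.1601, 0.002)   # mm, 26 grid points
# Hypothetical smooth placeholder surface; the real Z_mY values come from the
# gun simulations described in the text.
Z_mY = 4.0 + 2.0 * (d_x[:, None] - 0.39) + 3.0 * (d_z[None, :] - 0.11)

spline = RectBivariateSpline(d_x, d_z, Z_mY, kx=3, ky=3)
print(spline(0.415, 0.13)[0, 0])       # smooth value at the design point, ~4.11
```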
Consequently, the proper ranges of d_x and d_z are determined as 0.39-0.44 mm and 0.11-0.16 mm, respectively.
Determination of the extension length of the focusing electrode l_zW and of t_z. Through the analysis of the focusing electrode above, all of its dimensions in the novel three-beam electron gun are determined except the parameter l_zW, which is discussed in this subsection together with the beam transport characteristics in the x and y directions. The control grids, which serve as the 'focusing electrode' in the x direction, control the formation and transport of the beamlets in that direction; the thickness of the control grids, t_z, is also selected in this subsection based on the beam transport characteristics in both transverse directions.
From the above analysis, the three-beam gun model with the intermediate values d_x = 0.415 mm and d_z = 0.13 mm is selected to study the axial characteristics of the focusing electrode and control grids, including l_zW and t_z. Figure 11(a) shows the variation of the throw distances in the x and y directions with l_zW, and Fig. 11(b) the variation of the aspect ratio (x/y) of the beam cross-section at the waist in the y direction with l_zW. According to Fig. 11(a), the throw distances in the two transverse directions meet in the gun model with l_zW = 0.146 mm. However, according to Fig. 11(b), the aspect ratio of the beam cross-section in this model is about 0.257, close to the cathode aspect ratio of 0.313 and far from the desired circular beam. Figure 11(a) also shows that the throw distance in the y direction, Z_my, rises with the extension length l_zW of the focusing electrode, since the incidence slope of the boundary electrons is increased; the drop of Z_my at l_zW = 0.24 mm occurs because electron trajectories from different layers cross each other. The throw distance in the x direction, Z_mx, is relatively insensitive to l_zW. To facilitate the design of the following magnetic focusing system, a beam array with closer waists in the two transverse directions is desired, which means a shorter l_zW according to Fig. 11(a); at the same time, l_zW should not be too small, by Fig. 11(b), so that the cross-section at the beam waist remains close to circular. The value of l_zW is therefore compromised at 0.19 mm, with a waist aspect ratio of about 0.7. After determining l_zW, we explored the influence of t_z on the throw distances in both directions, as shown in Fig. 12. The throw distance in the x direction, Z_mx, first increases with t_z and begins to drop beyond the critical point t_z = 0.035 mm, while the throw distance in the y direction, Z_my, always decreases as t_z increases. A greater control-grid thickness, but no more than 0.035 mm, is therefore preferable for a narrower axial separation between the two beam waists. Considering the stability of the gun and the machining error, t_z = 0.03 mm is selected as the thickness of the control grid.
Determination of d z and d x with beam formation stability.
With the structure of the three-beam gun determined, the values of d_x and d_z lie in the ranges 0.39-0.44 mm and 0.11-0.16 mm, respectively. In these ranges, Fig. 13(a) shows the variation of beam current and its stability with d_z and d_x. The beam current decreases as d_z increases or d_x decreases, and remains almost unchanged and more stable when d_z and d_x are broadened simultaneously. According to the design specification, values of d_z and d_x that keep the beam current within 0.145-0.155 A can easily be selected. Figure 13(b) shows the variation of the beam aspect ratio (x/y) at the waist in the y direction with d_z and d_x. Within the range of acceptable beam current, the aspect ratio mainly increases as d_z decreases or d_x increases. Considering stable beam transport in the subsequent magnetic field, the beam aspect ratio is best limited to between 0.67 and 1.
Construction and simulation.
To guarantee good beam characteristics and satisfy the demands of the high-frequency circuit, a three-beam array with a waist aspect ratio between 0.67 and 1 and a beam current between 0.145 A and 0.155 A is targeted. The combination d_z = 0.13 mm and d_x = 0.415 mm is finally adopted in our design for the novel planar three-beam electron gun, simulated in CST [20,21]. Figure 14 shows the beam trajectories from different perspectives: the three-beam array is well generated and compressed in both transverse directions. Figure 15 shows the beam cross-section along the direction of transport. The three-beam array reaches its waist in the x direction at about 2.5 mm from the cathode and its waist in the y direction at about 4.4 mm from the cathode; the profiles at the beam waists in the x and y directions are indicated in (c) and (e), respectively. The beam aspect ratio at the waist in the y direction is about 0.83, and the simulated beam current is about 3 × 0.153 A.
In the design of our three-beam electron gun, with small distances between cathode, focusing electrode and anode, the high voltage could cause breakdown due to the strong electric field. Under high vacuum (~10⁻⁷-10⁻⁸ Pa), the empirical breakdown field strength exceeds 10⁶ V/cm. From the electric field distribution of the designed gun in CST PS, the maximum electric field in the whole structure is about 3 × 10⁵ V/cm (appearing at the four corners of the edge of the focusing electrode), which is well below the breakdown field of 10⁶ V/cm in high vacuum. Breakdown in our electron gun should therefore not occur.
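A back-of-the-envelope check consistent with these numbers: the mean cathode-anode field U/d sits well below both the simulated peak and the breakdown threshold, with the peak itself reflecting local field enhancement at the electrode edges.

```python
# Rough field check using the quoted beam voltage and cathode-anode spacing.
U = 22e3                    # V, beam voltage
d = 0.1814                  # cm, cathode-anode spacing (1.814 mm)
print(f"mean field U/d = {U / d:.2e} V/cm")         # ~1.2e5 V/cm
print(f"peak/breakdown margin = {1e6 / 3e5:.1f}x")  # simulated peak 3e5 vs 1e6 V/cm
```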
Conclusion
Only by developing in the direction of planarization, miniaturization and integration can vacuum electron devices be expected to break through the limitations of existing mechanisms and achieve leapfrog development in the THz band. In this paper, methodologies and guidelines for designing a novel planar distributed three-beam electron gun with narrow beam separations are presented, which can serve as a general method for this type of narrow-separation multi-beam electron gun. The physical model of our novel planar three-beam gun with optimal characteristics is verified by 3D simulation: a beam current of 3 × 0.15 A and an average beam radius of 0.08 mm at each beamlet waist are obtained with a beam compression ratio of 4. The designed three-beam electron gun will be used in our fundamental-mode staggered double vane TWT in W band with high output power. Moreover, such a planar distributed multi-beam arrangement, with independent beam tunnels and singly convergent beamlets, can achieve high output power by sustaining a high total beam current while avoiding the over-moded beam tunnels of sheet beam devices. This scheme for a planar multiple beam combined with a planar interaction circuit is not only suitable for planar distributed vacuum electron devices, e.g. travelling wave tubes, extended interaction klystrons and oscillators, over a wide range of the millimeter-wave and THz regime, but also meets the trend of planarity, miniaturization and integration for the next generation of vacuum electron devices, with good potential for high output power and broad bandwidth in the future.
|
Gaultheria trichophylla (Royle): a source of minerals and biologically active molecules, its antioxidant and anti-lipoxygenase activities
Background Gaultheria trichophylla (Royle) is used as food and for treating many ailments in folk medicine, especially against inflammation. The purpose of this in vitro study was to evaluate extracts of G. trichophylla as antioxidant and anti-inflammatory agents and to determine their mineral contents. Methods Powdered plant material (100 g) was extracted with 100 ml each of methanol, chloroform and n-hexane using a Soxhlet extractor. Antioxidant activity of the methanol extract was assessed by DPPH radical scavenging and FRAP assays. Enzyme inhibition was determined using a 5-LOX inhibitory assay. Total phenolic and flavonoid contents were measured by the Folin-Ciocalteu and colorimetric methods, respectively. Mineral and heavy metal contents were determined using an atomic absorption spectrophotometer. Qualitative HPLC analysis was performed using standard phenolic compounds. Results The highest phenolic (17.5 ± 2.5 mg GA equivalent/g) and flavonoid (41.3 ± 0.1 mg QE equivalent/g) concentrations were found in the methanol extract, which also showed the strongest scavenging of 1,1-diphenyl-2-picrylhydrazyl and the strongest ferric reducing power, with IC50 = 81.2 ± 0.2 and IC50 = 11.2 ± 0.1 μg/ml, respectively. The methanol and chloroform extracts showed the best inhibition of the 5-lipoxygenase enzyme, with 90.5 ± 0.7% and 66.9 ± 0.1% at 0.5 mg/ml, respectively. The G. trichophylla extract was also evaluated for mineral contents (K, Na, Ca, Mg, Fe and Cu) and for chemical profiling of heavy metals (Cr, Pb, Cd, Co, Zn, Ni and Hg). Conclusion Our findings suggest that this plant is a good source of minerals and that the concentrations of all heavy metals were within permissible limits. The results reveal that this neglected plant has great pharmaceutical and nutraceutical potential.
Background
Free radicals and reactive oxygen species (ROS) lead to the formation of harmful chemical compounds and play an important role in health and disease, a realisation that is producing a medical revolution [1]. Research suggests that free radical damage to cells leads to the pathological changes associated with aging; an increasing number of diseases and disorders, as well as the aging process itself, have therefore been linked directly or indirectly to these reactive and potentially destructive molecules [2].
Antioxidants, through their free radical scavenging properties, can delay or inhibit cellular damage. They are usually low-molecular-weight compounds that can interact with free radicals and terminate the chain reaction before important molecules are damaged. Some antioxidants are produced in the body, while others are found in the diet [3]. The intake of natural antioxidants has been reported to reduce the risk of cancer, cardiovascular diseases, diabetes and other diseases associated with aging [4]. In many cases it is concluded that antioxidants modulate, to some extent, the pathophysiology of chronic inflammation [5]. In view of the increasing risks of synthetic phenolics to human health, there has been a global trend toward the use of medicinal and dietary plant substances as therapeutic antioxidants. Many antioxidant compounds occurring naturally in plant sources have been identified as free radical or active oxygen scavengers [6]. Lipoxygenases (LOX) are enzymes correlated with inflammatory and allergic reactions because of their formation of leukotrienes (LTs) [7]. Increased levels of leukotrienes are observed in the pathological conditions of asthma, psoriasis, allergic rhinitis, rheumatoid arthritis and ulcerative colitis. The production of LTs can be prevented by inhibition of the lipoxygenase pathway [8]. Drugs able to inhibit LOX isoforms and/or their biologically active metabolites can be useful in cancer treatment [9].
Beyond their medicinal value, plants and their polyphenolic compounds have become a focus of current nutritional interest because of their health-promoting effects [10]. Given the increasing interest in traditional medicinal products, it is important to determine whether they are safe for consumption; levels of toxic elements such as As, Cu, Cd, Hg and Pb in plant samples must be determined. Common elements such as K, Na and P are essential for health, and their quantification is important for nutritional purposes [11-13].
Many plant species successfully absorb metal contaminants, including metals essential for plant growth (Fe, Mn, Zn, Cu, Mg, Mo and Ni); some metals with no known biological function (Cd, Cr, Pb, Co, Hg) can also accumulate [14]. Arsenic, mercury and lead are environmental contaminants notoriously toxic to man and other living organisms. Investigation of such heavy metals is necessary before final recommended doses of a plant are set, in order to avoid toxicity [15]. There is therefore an urgent need for quick assessment of these heavy metals in medicinal plants to control the level of contaminants in herbal raw materials [16].
Gaultheria (Ericaceae) consists of over 1700 species. Gaultheria trichophylla (Royle) is native to the Himalayas and commonly known as the Himalayan snowberry [17]. It is distinguished by its blue berries, eaten as a refreshing food by the local community, and its red to pink flowers. The small green leaves are approximately 3-7 mm in length. The plant bears setae and is found in cold, lofty situations in the mountains [18]. Work on other species of this genus indicates anti-inflammatory [19], antibacterial [20] and anti-arthritic [21] activities. Other Gaultheria species, such as G. yunnanensis, G. fragrantissima and G. procumbens, are used in traditional medicine to treat arthritis in China, India, America and Canada. Gaultherin, a natural salicylate isolated from G. yunnanensis, possesses analgesic and anti-inflammatory activity [22]. Phytochemical investigation of the species studied has reported methyl salicylate, diterpenoids, acids, dilactones, alkaloids and other glycosides [23].
The present approach was based on the fact that Gaultheria trichophylla has been described as anti-inflammatory in traditional medicine; it was therefore evaluated in vitro for antioxidant and lipoxygenase inhibitory activities, and its potential as a safe functional food and alternative medicine was assessed.
Plant material
The Gaultheria trichophylla plant was collected from the Kaghan valley, District Mansehra, KPK, Pakistan, in November 2013. After authentication by the taxonomist Professor Dr. Qazi Najum us Saqib, a voucher specimen (CTPHM-GT01, 13) was deposited in the herbarium of the Department of Pharmacy, COMSATS Institute of Information Technology, Abbottabad. After washing with water, the whole plant was dried in the shade at room temperature. The dried material was ground to a coarse powder, which was stored in an airtight, light-resistant container before extraction.
Extract preparation
The powdered material (100 g) was extracted with each of methanol, chloroform and n-hexane using a Soxhlet extractor for 20 h each. The extract was filtered through Whatman Grade-I filter paper, and the filtrate was evaporated on a vacuum rotary evaporator under reduced pressure at 40°C. The extractive yields of the methanol (Gt. MeOH), chloroform (Gt. Chlor) and n-hexane (Gt. Hex) extracts were 21.85%, 16.35% and 7%, respectively.

Determination of total phenolic contents (TPC) and qualitative HPLC analysis

TPC of the methanol extract of Gaultheria trichophylla was determined by the Folin-Ciocalteu method [24-26].
The modification made to the method was the use of gallic acid as the standard phenolic compound. The extracts were diluted with distilled water to a known concentration to obtain readings within the standard curve range of 0.0 to 600 μg of gallic acid/ml. 250 μl of diluted extract or gallic acid solution was mixed with 1 ml of distilled water in a test tube, followed by the addition of 250 μl of Folin-Ciocalteu reagent. The samples were mixed well, then allowed to stand for 5 min at room temperature to complete the reaction with the Folin-Ciocalteu reagent. Then 2.5 ml of 7% aqueous sodium carbonate solution was added and the final volume was made up to 6 ml with distilled water. The samples were then incubated for 90 min, and the absorbance of the resulting blue solution was measured at 760 nm using a spectrophotometer. The result was expressed as mg of gallic acid equivalents (GAE)/g of extract using the equation obtained from the standard gallic acid curve. All experiments were conducted in triplicate.
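The absorbance-to-GAE conversion rests on a linear gallic acid calibration; the sketch below illustrates the calculation with hypothetical standard readings and a hypothetical extract concentration in the assay.

```python
import numpy as np

# Gallic acid calibration (concentration in ug/ml vs absorbance at 760 nm).
# All standard readings, the sample absorbance and the extract concentration
# in the assay are hypothetical placeholders illustrating the calculation.
std_conc = np.array([0, 100, 200, 300, 400, 500, 600], float)
std_abs = np.array([0.02, 0.15, 0.29, 0.42, 0.55, 0.69, 0.82])
slope, intercept = np.polyfit(std_conc, std_abs, 1)

sample_abs = 0.25                          # hypothetical extract reading
gae_ug_ml = (sample_abs - intercept) / slope
extract_ug_ml = 10_000.0                   # hypothetical: 10 mg extract per ml assayed
print(f"{1000 * gae_ug_ml / extract_ug_ml:.1f} mg GAE/g extract")  # ~17.3
```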
Gradient elution with solvent A [water/acetonitrile/formic acid (95:5:0.1, v/v/v)] and solvent B [acetonitrile/formic acid (100:0.1, v/v)] had a significant effect on the resolution of compounds. Solvent B was increased to 50% in 4 min and subsequently to 80% in 10 min, at a flow rate of 1 ml per minute. Detection wavelengths were 250, 310, 280 and 360 nm. The injection volume was 10 μl for each sample and reference, and the run time was 15 min. Before injection, sample (20 mg) and reference were dissolved in 70% (v/v) aqueous methanol (10 ml) and filtered through a 0.45-μm membrane filter (Millipore, Billerica, MA) (Michel et al. 2014).
Determination of total flavonoid contents
The total flavonoid content of G. trichophylla was determined using a modified colorimetric method described previously [27]. The dried plant extract (25 mg) was ground in a mortar with 10 ml of 80% methanol. The homogeneous mixture obtained was allowed to stand for 20 min at room temperature and filtered through a G4 filter. An aliquot of 0.4 ml of filtrate was mixed with 0.6 ml of distilled water and 0.06 ml of 5% NaNO₂ solution, and the mixture was allowed to stand for 5 min at room temperature. After 6 min, 0.06 ml of 10% AlCl₃ solution was added, followed immediately by 0.4 ml of 1 N NaOH and 0.45 ml of distilled water, and the mixture was allowed to stand for another 30 min. Absorbance was determined at 510 nm, and quercetin was used as the standard compound for quantification of total flavonoid content. All values are expressed as milligrams of quercetin equivalents per 1 g dry weight. Data are recorded as mean ± SD for three replicates.
DPPH radical scavenging assay

DPPH radical scavenging activity was evaluated as described by [28] with some modification. Ten μl of test solution was added to a 96-well plate, followed by 90 μl of 100 μM methanolic DPPH solution, in a total volume of 100 μl. The contents were mixed and incubated at 37°C for 30 min, and the reduction in absorbance was measured at 517 nm using a Synergy HT BioTek® USA microplate reader. All experiments were carried out in triplicate. For the determination of IC50 values, test solutions were assayed at various dilutions, i.e. 0.5, 0.25, 0.125, 0.0625, 0.0313 and 0.015 mM. Data were processed using StatView version 5.0 and EZ-Fit software. Quercetin and vitamin C were used as standard compounds (positive controls). The percent inhibition was determined by comparison with a DMSO-treated control group, and the data were used to determine the concentration of sample required to scavenge 50% of the DPPH free radicals. A decrease in absorbance indicates increased radical scavenging activity, which was determined by the following formula: % scavenging = [(absorbance of control − absorbance of sample)/absorbance of control] × 100.
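IC50 values can be estimated by interpolating the dose-response data to 50% scavenging between bracketing concentrations (dedicated software such as EZ-Fit fits a model instead, but the idea is the same); a minimal sketch with hypothetical data follows.

```python
import numpy as np

# IC50 by linear interpolation to 50% scavenging between bracketing
# concentrations. Concentrations and responses below are hypothetical.
conc_ug_ml = np.array([15.6, 31.3, 62.5, 125.0, 250.0, 500.0])
scavenging = np.array([18.0, 29.0, 43.0, 61.0, 78.0, 90.0])   # % inhibition

ic50 = np.interp(50.0, scavenging, conc_ug_ml)  # responses must be increasing
print(f"IC50 ~ {ic50:.0f} ug/ml")               # -> ~87 with these numbers
```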
Ferric reducing antioxidant power (FRAP)
This assay uses antioxidants to reduce ferric iron in a complex with tripyridyltriazine (Fe³⁺-TPTZ) to an intensely blue ferrous (Fe²⁺) complex with an absorption maximum at 700 nm. FRAP values are obtained by comparing the absorbance change at 700 nm in test reaction mixtures with that of solutions containing ferrous ions at known concentrations. Hence, this test measures the ability of a sample to reduce a ferric ion complex [29].
The ferric reducing antioxidant power of the extracts was determined according to the method of [30] with some modifications. In the wells of a 96-well microplate, 25 μl of test sample was mixed with 25 μl of phosphate buffer (pH 7.2), and 50 μl of 1% potassium ferricyanide solution was then added. The mixture was incubated for 10 min at 50°C, after which 25 μl of 10% (w/v) trichloroacetic acid solution and 100 μl of distilled water were added, and the absorbance was measured at 540 nm on a microplate reader as the pre-read value. Finally, 25 μl of freshly prepared 0.2% ferric chloride (FeCl₃) solution was added and the absorbance was measured at 700 nm on a microplate reader. Quercetin was used as the standard. For the determination of IC50 values, sample solutions were assayed at different concentrations until 50% inhibition was reached. The reducing power was calculated using the formula: % inhibition = (absorbance of sample/absorbance of control) × 100, where the absorbance of the control is the total activity without inhibitor and the absorbance of the test is the activity in the presence of the test compound.
Determination of minerals
Minerals (Na, Ca, Mg, Fe, K, Zn, Mn and Cu) were measured using an atomic absorption spectrophotometer (Perkin Elmer AAnalyst 700, USA). Before analysis, the samples were digested in a mixture of H₂SO₄, HNO₃ and HClO₄. All determinations were done in triplicate, and the minerals are expressed as mg/100 g of fresh weight [32].
Determination of heavy metals
The heavy metal (Ni, Pb, Hg, Si, Co, Cr, As and Cd) contents of Gaultheria trichophylla were determined according to the method of [33] using an atomic absorption spectrophotometer (Perkin Elmer AAnalyst 700, USA).
Standards of Ni, Pb, Hg, Si, Co, Cr, As and Cd were used as reference analytes for quantitative estimation of the heavy metals and for accurate calibration of each analyte. Standard stock solutions (1000 ppm) were diluted to working standards of 1-10 ppm and stored at 4°C; an acidity of 0.1% nitric acid was maintained in all solutions. A calibration curve was plotted between measured absorbance and concentration (ppm), and all samples were analysed in triplicate. Samples were digested by the wet digestion method in 20 ml of a mixture of concentrated acids (nitric and perchloric acid, 9:1) for 3 h in a water bath maintained at 70°C, until a clear brownish solution was obtained. After cooling, the solutions were reconstituted to 20 ml with deionised autoclaved water. Each sample was filtered using Whatman filter paper (pore size 0.45 μm, Axiva) and stored in closed acid-washed glass vials.
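Converting an AAS reading on the digest back to metal content per gram of plant material requires the digest volume and the digested sample mass; the 20 ml volume is given above, while the reading and sample mass below are hypothetical placeholders.

```python
# Back-calculation from an AAS reading on the digest to metal content in the
# plant material. The 20 ml digest volume is from the text; the reading and
# the digested sample mass are hypothetical placeholders.
reading_ug_ml = 0.50         # hypothetical AAS reading on the digest (ppm = ug/ml)
digest_volume_ml = 20.0      # final digest volume, as described above
sample_mass_g = 1.0          # hypothetical mass of powdered plant digested

content_ug_g = reading_ug_ml * digest_volume_ml / sample_mass_g
print(f"{content_ug_g:.1f} ug metal per g plant material")  # 10.0 with these values
```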
Determination of phenolic and flavonoid contents
The Gaultheria trichophylla extracts were evaluated for their phenolic contents by the Folin-Ciocalteu assay.
The methanol extract showed the highest phenolic content, 17.58 ± 2.51 mg GAE/g of dried extract; the chloroform extract showed 5.01 ± 0.90 mg GAE/g, and the hexane extract the lowest value, 3.214 ± 0.35 mg GAE/g. The flavonoid contents of the G. trichophylla extracts, expressed as quercetin equivalents (mg QE/g of dry weight), were 41.345 ± 0.19, 9.828 ± 0.78 and 26.793 ± 1.45 mg QE/g for the methanol, chloroform and hexane extracts, respectively (Table 1).
Determination of antioxidant activities
The DPPH assay was carried out to investigate the antioxidant potential of G. trichophylla. The methanol extract showed the greatest inhibition, 90.55 ± 0.89%, with IC50 = 81.2 ± 0.26 μg/ml; the chloroform extract showed 66.83 ± 0.64% inhibition with IC50 = 119.2 ± 0.7 μg/ml, and the hexane extract 58.97 ± 0.23% inhibition with IC50 = 99.03 ± 0.5 μg/ml. All results were compared with quercetin and vitamin C as standard antioxidants. The other antioxidant assay conducted was the FRAP assay. Here the methanol extract again showed the greatest inhibition, 98.28 ± 1.71%, with the lowest IC50 = 11.2 ± 0.16 μg/ml; the chloroform extract showed 77.09 ± 2.7% inhibition with IC50 = 27.7 ± 0.6 μg/ml, and the hexane extract 74.28 ± 1.4% inhibition with IC50 = 13.5 ± 0.7 μg/ml (Table 1).

All results are tabulated in Table 1 and compared in Fig. 2.
Minerals and heavy metal contents
The mineral contents of Gaultheria trichophylla are presented in Table 2. The most abundant minerals were magnesium (4.115 mg/100 g) and potassium (1.935 mg/100 g), and the least abundant was copper (0.00097 mg/100 g). The concentrations of Ni and Co in the G. trichophylla sample were 0.162 and 0.043 ppm, respectively, which are very low. The concentration of mercury (0.76 ppm) was also below the permissible limit (0.8 ppm) (Table 2).
Discussion
Plants are used traditionally for various conditions in many systems of medicine, and claims of their use for different ailments need to be justified. The phenolic and flavonoid contents of plants are medicinally very important; they are mostly considered the compounds responsible for antioxidant potential and are therefore commonly evaluated for this purpose. It is now well established that the antioxidant activities of phenolic constituents are mainly due to their ability to scavenge radicals, donate hydrogen and quench singlet oxygen [34]. There are many reports that the antioxidant activities of plants are due to the presence of phenolic compounds [35].
These phenolic compounds can be used as possible quality control indicators in herbal products. Our previous HPLC analysis also confirmed the presence of phenolic compounds in extracts of G. trichophylla [17].
The extracts of G. trichophylla contained phenolics and flavonoids, most of which were concentrated in the methanol extract. This may have contributed to its greatest radical scavenging activity in both the DPPH and FRAP assays. The DPPH method is based on the reduction of the purple DPPH solution by hydrogen-donating antioxidants, with formation of yellow diphenylpicrylhydrazine; it is a comparatively indirect assay [36]. The DPPH activity was significant for the methanol extract of G. trichophylla. The FRAP assay proved comparatively more sensitive and showed greater inhibition for the extracts tested.
The extraction of compounds with antioxidant potential from plants is often carried out using organic solvents such as methanol, chloroform and hexane. The antioxidant activities of plant samples extracted with different organic solvents always vary, which is attributed to the presence of a variety of secondary metabolites [37]. This can be observed in the case of G. trichophylla, where extracts obtained with solvents of different polarities showed different antioxidant activities. The methanol extract showed the maximum inhibition in the DPPH and FRAP assays, indicating that most of the phenolic compounds are concentrated in the most polar extraction solvent, as is evident from the phenolic content and flavonoid assays.
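This phenolics-activity relationship can be summarized with a Pearson correlation coefficient, and the next paragraph reports this dependence as significant. A minimal sketch using the extract totals reported in the Results; with only three extracts the coefficient is illustrative rather than statistically robust:

```python
# Pearson correlation between total phenolic content and DPPH inhibition
# for the three extracts (methanol, chloroform, hexane), values from Results.
import numpy as np

tpc = np.array([17.58, 5.01, 3.214])     # mg GAE/g of dried extract
dpph = np.array([90.55, 66.83, 58.97])   # % inhibition, same extract order

r = np.corrcoef(tpc, dpph)[0, 1]
print(f"Pearson r (phenolics vs DPPH inhibition) = {r:.3f}")
```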
The analysis of correlation between the total phenolic/flavonoid contents and the antioxidant and anti-inflammatory (LOX inhibition) activities showed a significant dependence, as revealed in Fig. 2. The methanol extract was the most active against the enzyme; the chloroform and hexane extracts also showed good to moderate activity.

Each mineral analyzed has its own importance in the human body. Magnesium has a very important role in the structure and function of the human body [38], and Gaultheria trichophylla presented high magnesium contents. High potassium content is reported to enhance the utilization of iron and to benefit patients who lose excess potassium in body fluids while using diuretics to control hypertension [39]. Sodium plays a very important part in maintaining the electrolyte balance of the body. Calcium is an important mineral with beneficial effects on the development of teeth and bones in children and in pregnant and lactating women. Iron is required in the body for the formation of hemoglobin, and a person can become anemic in case of deficiency. Manganese (Mn) is an essential trace element that acts as a cofactor for many enzymes. Zinc (Zn) is an essential component of thousands of proteins in plants, although it is toxic in excess quantities [32]. Other minerals similarly play their roles in the functioning of the human body. It is clear from the analysis that G. trichophylla, besides its medicinal value, is also important from a nutritional point of view.
Besides the wholesome minerals, the contents of heavy metals (As, Cd, Ni, Cr, Hg and Pb) are an important standard for judging the quality of a plant. The use of medicinal plants may have serious consequences if heavy metals accumulate beyond permissible limits, so it is necessary to check the levels of these pollutants in extracts of medicinal plants before use. This practice of standardization helps in selecting proper collection sites for medicinal plants and in excluding environmentally polluted sites. The permissible contents of heavy metals are defined by limits that differ between regions of the world; in Canada, for example, the permissible limits for arsenic, lead, cadmium, mercury and chromium are 5.0, 10.0, 0.3, 0.2 and 2.0 ppm, respectively [40]. The results showed that the concentrations of heavy metals in G. trichophylla were either low or within the defined limits [40]. Chromium (Cr) is considered one of the most toxic pollutant elements; the permissible limit for Cr in raw herbal materials is 2.0 ppm and that for finished products is 0.02 mg/day [40]. Lead (Pb) is highly toxic to plants, animals and microorganisms, and its pollution is escalating owing to increased fertilizer consumption, fuel combustion and sewage sludge. The G. trichophylla sample contained a low concentration of Pb (0.649 ppm) compared with the permissible limit of 10 ppm [40]. Cadmium (Cd), which occurs widely in medicinal plants, is a hazardous heavy metal, and phosphate fertilizers are the major source of its accumulation in soil and plants. The G. trichophylla sample analyzed in this study had a Cd concentration (0.177 ppm) within the acceptable limit of 0.3 ppm recommended in [40]. Ni is considered an allergen that interacts directly with proteins, while Co is mostly known as a component of the vitamin cyanocobalamin [41]; however, no limit has been set for nickel in foodstuffs [42]. The concentration of Co, another major cause of contact dermatitis after nickel and chromium, was also found to be within the permissible limit [43].
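The screening logic described here, comparing each measured concentration against its permissible limit, can be sketched as follows. The limits are the Canadian figures cited from [40] and the measured values are the Pb and Cd concentrations reported above; the comparison logic is the point of the example rather than the exact numbers:

```python
# Compare measured heavy metal concentrations against permissible limits.
PERMISSIBLE_PPM = {"As": 5.0, "Pb": 10.0, "Cd": 0.3, "Hg": 0.2, "Cr": 2.0}  # Canada [40]
measured_ppm = {"Pb": 0.649, "Cd": 0.177}  # G. trichophylla results above

for metal, value in measured_ppm.items():
    limit = PERMISSIBLE_PPM[metal]
    status = "within limit" if value <= limit else "EXCEEDS limit"
    print(f"{metal}: {value} ppm vs {limit} ppm -> {status}")
```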
Conclusion
In our experiments we evaluated the G. trichophylla extracts for antioxidant and lipoxygenase inhibitory activities. The results revealed that the methanol extract of G. trichophylla is the richest in phenolic and flavonoid contents and therefore shows the greatest DPPH and FRAP reducing activities. The results clearly indicate that G. trichophylla has strong antioxidant and anti-inflammatory potential and can be a source of important alternative therapeutic agents. Moreover, the investigation revealed that Gaultheria trichophylla is rich in important minerals and is safe as a food and drug. As far as environmental contaminants are concerned, it grows in hilly areas with very little influence from human populations.
Highlights
Gaultheria trichophylla is an important medicinal plant.
Phenolic compounds are related to antioxidant activities.
Inhibition of 5-LOX showed its anti-inflammatory potential.
Investigation showed its nutritional importance.
In crude form it is a safe drug to use.
Digital literacy of modern higher education teachers
The expansion of digital technologies into all spheres of human life is driving the digital transformation of education. This transformation brings the educational system into line with the requirements of the technological revolution and allows students to obtain the relevant digital competencies needed to become qualified personnel for the economy. With the digitalization of education and the active use of information technologies in the educational process, the requirements placed on higher education faculty are also changing. This became especially noticeable over the past year, when universities worked in a remote format. The authors of the article therefore turn to the issue of digital literacy of modern higher education teachers. Based on the structural-functional theory of T. Parsons, the modernist concept of E. Giddens and A. Maslow's theory of the determination of human behavior, the article shows what competencies a modern higher education teacher needs in the context of the digital transformation of education. The analysis indicates that digital competencies include competencies related to information literacy; skills for interaction in an electronic environment; the ability to generate digital content; ensuring the safety of the physical and psychological health of users; and the ability to identify and eliminate technical problems when working with digital devices. The digital literacy of teachers is the mastery of this set of competencies. In general, the digital literacy index of university teachers is quite high, but unlike young people, teachers trust technological innovations to a lesser extent, which reduces their willingness to work actively with the digital educational environment. In our opinion, modern teachers need to transform their attitude to digitalization with the help of high-quality training in working with a digital educational environment.
Introduction
Currently, we are witnessing the expansion of digital technologies into all areas of human life. Each new product of technological progress plunges people into a completely new world, the world of high technologies, which radically transforms all spheres of human life.

The forecast of the socio-economic development of Russia until 2036 indicates that digitalization, technological modernization of the economy and the use of new technologies ensure higher labor productivity. An important role here is played by adjusting the education system, above all the digital transformation of higher education.
Thus, education does not stand aside from digital transformations, since it is impossible to build a digital environment without qualified personnel with high digital competencies. In addition, the digital transformation of education brings the educational system itself in line with the requirements of the technological (digital) revolution.
The spread of digital technologies expands opportunities for each person, providing unlimited access to various digital tools, materials and services. The digitalization of higher education is aimed at creating student interest in learning and expands the opportunities for teachers and students for interactive work and for monitoring the mastery of educational material. The digital transformation of education not only expands the capabilities of teachers and students but also requires the actors of the educational process to acquire new knowledge, skills and abilities to work effectively with digital tools, materials and services. With the use of information and communication technologies, the traditional educational process is changing, and the role of the teacher in the new educational environment is being transformed. The activities of a modern higher school teacher are aimed at spreading new models of organizing educational work with students. In this regard, along with the effectiveness of digitalization of education [1] and the risks of this process [2] described in the works of modern researchers, it is important to study the readiness of teachers for digital transformations and the digital competencies they need to work in the new educational space.
As part of this research work, we turn specifically to the digital literacy of modern higher education teachers, which will help to look at the problem of digitalization from a different angle.
Methodology
The modernization of education concerns many contemporary researchers. Issues of reforming the educational structure are considered from different angles and affect different aspects of education [3; 4]. Researchers pay special attention to the Bologna education system [5]. The digitalization of education, both the positive aspects of this process and its risks, also arouses great interest [6].
Among the aspects that concern scientists involved in the digital transformation of education are the following: the use of information and communication technologies in the educational process [7], the development of online courses and the introduction of distance learning into the education system [8; 9], the creation of a unified digital educational space [10]. Researchers also address the issue of studying the competencies of scientific and pedagogical workers related to digital literacy, which will make it possible to fully use digital technologies in the educational process [11].
In 2020, when the world was faced with a new infection that complicated the epidemiological situation in many countries, many processes began to move into a remote mode. The process of teaching students was no exception, and universities around the world became actively involved in remote work with students. It is worth noting that many Russian universities did not actively use distance learning as part of the educational process until 2020; most often, universities included elective online courses in the curriculum. But by the beginning of March 2020, all Russian universities had to master distance work and transform the educational environment, and teachers had to adapt their disciplines to the online format using various distance technologies and methods. Analysis of the data of many empirical studies concerning the views of students and teachers of Russian universities on distance learning shows that both students and teachers note a lack of digital competencies for effective distance learning. Many universities promptly developed and began to implement teacher-training programs aimed at mastering modern technologies for working in a distance format, which, in turn, speaks to the relevance and importance of this issue for the entire education system.
In our opinion, it is very important to address the problem of digital literacy of modern higher education teachers; this will help to understand the readiness of teachers for the digitalization of education in general and to determine their role in the educational space using distance technologies.
In connection with our goal, we turn to education as one of the important social institutions (M. Weber). Considered as a social institution, education is a set of institutions and organizations that perform the function of enlightenment and upbringing. The structural-functional approach (T. Parsons) allows all the functions of this institution to be defined. Moreover, since education is a system, modernist changes (E. Giddens) transform the structure of the educational system and its functions. This is reflected in the role of teachers within the transforming structure, which can be traced using A. Maslow's theory of the determination of human behavior.

Thus, the analyzed scientific literature on the selected problem field, together with the urgent transition to distance work in the educational process of higher school, shows that it is very important to turn to the digital literacy of the teaching staff of higher education, which will help us assess the readiness of teachers for the digitalization of education.
Results
In order to determine the essence of digital literacy of modern higher education teachers, we turned to the definition of digital culture, to ways of expressing the digital transformation of education, and defined what digital competencies are.
Digital culture is the understanding by the population of modern information transfer technologies, their functions, as well as their correct and effective use both in work and in everyday life [12; 13].
The digital transformation of education is associated with the introduction of a digital educational environment using information and communication technologies in the educational process, with the formation of models for organizing distance learning based on online courses, the active development of digital teaching materials and tools, which can include communication of teachers and students with the help of digital platforms and digital assessment of students through special electronic resources [14; 15].
The digital revolution is now at its peak, so the introduction of new technologies into the educational process and the formation of new learning models require special competencies from the teaching staff. Competencies are the range of areas in which a person is knowledgeable and has practical knowledge and experience. The competence of a teacher is a set of knowledge, abilities and skills that are internalized by the individual and manifested as the ability and readiness to design actions when solving various work problems.
For the effective fulfillment of their work, modern teachers must have the following set of competencies: cognitive; socio-psychological; managerial; informational; communicative; digital; and competence in health preservation [16; 17]. These competencies allow teachers to be successful in their professional activities. However, in light of the active digital transformation of education, the digital competencies of teachers, their digital literacy, are of particular importance.
Digital competencies include competencies related to information literacy, that is, the ability of teachers to find and critically evaluate information in the digital environment, on various information resources; they also include skills for cooperation and interaction in an electronic environment: knowledge of the rules and norms of behavior in the process of digital communication; the ability to form digital content, the ability to protect personal data; ensuring the safety of the physical and psychological health of users; as well as competencies related to the ability to identify and eliminate technical problems when working with digital devices [18; 19].
Digital competencies constitute the digital literacy of a higher education teacher. Digital literacy is a system of knowledge, skills and attitudes that are necessary for a modern person to live in a digital society, and for a teacher to work successfully in a digital educational space. As defined by the United Nations: "digital literacy is the ability to safely and appropriately manage, understand, integrate, share, evaluate, create and access information through digital devices and networked technologies to participate in economic and social life" [20].
Do higher education teachers have sufficient digital literacy to actively engage in distance learning? Are they ready for the digitalization of education and can they effectively use digital technologies in the educational process?
We tried to find answers to the questions posed in the secondary analysis of empirical research data. The results of a study conducted by the analytical center "NAFI" show that the practice of using digital technologies by university teachers is as follows: 1/3 of respondents believe that about 40% of their colleagues either do not use digital technologies at all, or use them very rarely. However, it is also noted that about 85% of university teachers are active users of the Internet, 2/3 of teachers are interested in new applications, and about 60% are actively using social networks.
Thus, we see that the majority of university teachers keep up with the times and demonstrate high rates of digital literacy (Fig. 1).
Fig. 1. Educators' opinion on the effectiveness of simulations
When measuring digital literacy, the analytical center "NAFI" used the digital literacy index as an indicator. The digital literacy index of university teachers is determined from the combination of indicators such as information literacy, computer literacy, communication literacy, media literacy and attitude to innovation. According to NAFI estimates, the digital literacy index of university teachers is 88 points out of 100, although, in comparison with young people, the indicator for attitude to technological innovation is reduced. Here the younger generation is ahead of teachers, since they take up various innovations with greater ease. Teachers, in turn, tend to be traditionalists: their professional duties force them to work with new technologies, but they sometimes do not trust them. Thus, although the level of digital literacy of modern higher education teachers is high and they have sufficient knowledge and skills, their readiness to actively use information technologies in educational activities and online learning is not as great; the digital competencies of higher education teachers therefore require additional development.

In our opinion, high-quality training in working with a digital educational environment will help increase this readiness. As part of such training, it is necessary to familiarize teachers with the opportunities that the distance format offers for digital communication with students and the scientific community. Training should be aimed at developing teachers' skills in creating electronic materials and exchanging them with colleagues in the cloud. In addition, university staff must learn how to protect information and how to use digital technologies creatively to solve various educational and extracurricular tasks.
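NAFI's exact aggregation procedure is not given in the text, but the index can be pictured as a composite of the five sub-indicators. A minimal sketch assuming an equally weighted mean on a 0-100 scale, with hypothetical sub-scores chosen only to land near the reported overall value of 88:

```python
# Composite digital literacy index (equal weights are an assumption,
# not NAFI's published methodology; sub-scores are hypothetical).
SUB_INDICATORS = ("information", "computer", "communication",
                  "media", "attitude_to_innovation")

def digital_literacy_index(scores: dict[str, float]) -> float:
    """Equally weighted mean of the five sub-indicator scores."""
    return sum(scores[k] for k in SUB_INDICATORS) / len(SUB_INDICATORS)

teacher = {"information": 93, "computer": 91, "communication": 90,
           "media": 89, "attitude_to_innovation": 77}  # lower trust in innovation
print(digital_literacy_index(teacher))  # 88.0
```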
Currently, the Russian state pays special attention to the development of higher education associated with training qualified personnel for priority sectors of the digital economy. Innopolis University has been chosen as the supporting educational center that will train personnel for the digital economy. Innopolis has now developed and begun to implement a program of additional professional education in which students, including higher education teachers, will be taught the competencies that the digital economy demands in the non-digital areas of the real sector of the economy. In practical classes, held on a special platform, teachers will be able to analyze higher education programs. In addition, they will receive feedback from experts on the updated work programs of disciplines and on the main professional educational program, in order to introduce into these documents the digital competencies a student must master during university training.
Conclusions
Currently, the digital transformation of education is actively taking place, which is associated with the formation of a digital educational environment, where actors of the educational process use information and communication technologies. Digitalization of education is the formation of models for organizing distance learning through the development and implementation of online courses, the development of digital teaching materials and communication of teachers and students using digital platforms.
The quality of the educational process in higher education depends largely on university teachers' proficiency with information and communication technologies.
The analysis of scientific literature and empirical data has shown that, to fulfill their work effectively, successful teachers must have the following competencies: cognitive, socio-psychological, socio-organizational, information and computer, creative, digital and communicative. In light of the active digital transformation of education, the digital competencies of teachers and their digital literacy are of particular importance.
The secondary analysis of empirical data shows that, in general, modern higher education teachers have a high digital literacy rate, but the willingness to use information and communication technologies in the educational process is not high, because modern teachers do not accept innovations very quickly.
In our opinion, high-quality training in working with a digital educational environment will help increase the willingness of higher education teachers to work in a digital educational space. Currently, Innopolis has developed and started implementing a program of additional professional education, where students will be taught the competencies that are in demand in the digital economy in the non-digital areas of the real sector of the economy * .
* Research was financially supported by Southern Federal University, 2021 (Ministry of Science and Higher Education of the Russian Federation).
The Cardiovascular Effects of Cocoa Polyphenols—An Overview
Cocoa is a rich source of high-quality antioxidant polyphenols. They comprise mainly catechins (29%–38% of total polyphenols), anthocyanins (4% of total polyphenols) and proanthocyanidins (58%–65% of total polyphenols). A growing body of experimental and epidemiological evidence highlights that the intake of cocoa polyphenols may reduce the risk of cardiovascular events. Beyond antioxidant properties, cocoa polyphenols exert blood pressure lowering activity, antiplatelet, anti-inflammatory, metabolic and anti-atherosclerotic effects, and also improve endothelial function. This paper reviews the role of cocoa polyphenols in cardiovascular protection, with a special focus on mechanisms of action, clinical relevance and correlation between antioxidant activity and cardiovascular health.
Introduction
Cardiovascular diseases (CVDs) are the leading cause of mortality and morbidity worldwide. Every year, about 17 million people die from CVDs, representing almost one third of all deaths in the world [1]. By 2030, it is estimated that about 23.3 million people will die due to CVDs [2]. The major heart health issues are high blood pressure (hypertension), ischemic heart disease and cerebrovascular disease. Hypertension is the main risk factor for CVDs. It is a silent killer that affects one third of the worldwide adult population and contributes to nearly 45% of ischemic heart disease and 51% of stroke cases around the globe [3]. It is considered that every 20-mmHg drop in systolic blood pressure (SBP) would reduce the relative risk of ischemic heart disease mortality by 50% [2]. Other significant risk factors for CVDs include unhealthy diet, physical inactivity, smoking, alcohol, exposure to continuous stress, diabetes, obesity and hyperlipidemia. Changes in lifestyle and diet are considered one of the main tools for the prevention of CVDs, particularly ischemic heart disease, atherosclerosis and arterial thrombosis. Epidemiological studies have shown that a diet rich in fruits and vegetables, with a high intake of polyphenols, has beneficial effects on human health; it can protect against and limit the incidence of CVDs, cancer and age-related degenerative diseases. Cocoa is one of the most polyphenol-rich foods widely consumed in the world. In recent years it has received much attention due to its antioxidant, cardioprotective, neuroprotective and chemopreventive properties [4].
In this review, we aim to describe the current evidence on the cardioprotective effects of cocoa polyphenols and cocoa derived products with a particular focus on the main mechanisms of action, clinical relevance, and the link between antioxidant activity and cardiovascular health. The limitations of these studies and future perspectives on cocoa polyphenols research are also presented.
Cardioprotective Effects of Cocoa Polyphenols
Numerous epidemiological studies, short-term human intervention studies and small-cohort studies have shown that the intake of cocoa and polyphenol-rich cocoa products is correlated with cardiovascular benefits in humans [16]. Cocoa flavanols and proanthocyanidins are pleiotropic molecules that can influence various biomedical markers and clinical endpoints of cardiovascular health. They improve cardiovascular function and also facilitate endogenous repair mechanisms [18]. Experimental research and clinical trials have investigated mainly the effects of cocoa products and cocoa polyphenols on oxidative stress and plasma antioxidant capacity, endothelium-dependent vasomotor function and arterial flow-mediated dilatation (FMD), nitric oxide (NO) metabolism and activity, blood pressure, platelet function, lipid profile, and vascular inflammation [5,10]. An overview of the most recent clinical trials (2014-2016) with cocoa flavanols is shown in Table 1. The potential cardioprotective properties of cocoa include mainly antihypertensive, anti-atherogenic, and anti-inflammatory activities as well as inhibition of platelet activation and aggregation, and attenuation of endothelial dysfunction (Figure 5).
Figure 5. Main cardioprotective properties of cocoa polyphenols. Abbreviations: ACE, angiotensin-converting enzyme; EPCs, endothelial progenitor cells; ET-1, endothelin 1; FMD, flow-mediated dilation; HDLc, high-density lipoprotein-cholesterol; ICAM, intercellular adhesion molecule; IL, interleukin; LDLc, low-density lipoprotein-cholesterol; LOX, lipoxygenase; LTs, leukotrienes; MMP-2, matrix metalloproteinase 2; MPO, myeloperoxidase; NADPH, reduced nicotinamide adenine dinucleotide phosphate; NF-κB, nuclear factor kappa-light-chain-enhancer of activated B cells; NO, nitric oxide; Nrf2, nuclear factor erythroid-related factor 2; PGI2, prostaglandin I2; PLC, phospholipase C; TG, triglycerides; TNF, tumor necrosis factor; VCAM, vascular cell adhesion molecule; XO, xanthine oxidase.
Antioxidant Activity
Oxidative stress is actively involved in CVD pathogenesis. The increased production of reactive oxygen species (ROS), mainly of the superoxide anion radical, affects NO metabolism and facilitates endothelial dysfunction [45]. Besides, lipid peroxidation contributes essentially to the initiation and progression of atherosclerosis. Cocoa is a rich source of antioxidants. An assessment of the antioxidant capacity of 1113 different foods showed that of the 50 products exhibiting the highest activity, five are cocoa-based [12]. Cocoa powder has a higher antioxidant activity than green tea (1000 ORAC units compared to 800 ORAC units), and the content of total polyphenols is positively correlated with the ORAC value [14]. Cocoa polyphenols, mainly flavanols and oligomeric proanthocyanidins, have shown significant in vitro antioxidant properties. They act as scavengers of various radicals (DPPH, ABTS, superoxide anion, peroxynitrite, hypochlorite), inhibit lipid peroxidation and chelate pro-oxidant metal ions (Fe2+) [4,46]. The main structural features of polyphenols responsible for their ROS-quenching activity are catechol groups, phenolic quinoid tautomerism and delocalization of electrons [14]. Epicatechin enhances plasma antioxidant capacity and protects the erythrocyte membrane from lipid peroxidation. Also, flavanol monomers and oligomeric proanthocyanidins protect against in vitro erythrocyte hemolysis induced by 2,2'-azobis(2-amidinopropane) dihydrochloride (AAPH) [14].
In animal experimental models, long-term feeding studies with flavanol-rich cocoa products showed an increase in total plasma antioxidant capacity [14]. In rat brain homogenates, cocoa polyphenolic extracts inhibit lipid peroxidation, acting as chain-breaking antioxidants [11]. A suppressive effect of cocoa flavanols on lipid peroxidation has been suggested by some studies, which showed a decrease in the plasma level of F2-isoprostanes; however, many in vivo investigations did not obtain similar results [17]. Also, some studies in healthy human subjects did not reveal changes in oxidative stress biomarkers following cocoa consumption. It is possible that the antioxidant effects of cocoa products are better expressed in pathological conditions [4]. At the same time, it is difficult to extrapolate the antioxidant activity of cocoa polyphenols to the in vivo setting. The poor bioavailability and low plasma concentrations of cocoa polyphenols do not support a direct antioxidant activity. It is more likely that at the low physiological levels reached by flavanols, they act indirectly as antioxidants through specific interactions with lipids and enzymes associated with oxidant metabolism and cardiovascular diseases (NADPH oxidases, lipoxygenases, myeloperoxidase) [6,17,47]. Besides, cocoa polyphenols upregulate antioxidant defense responses, such as the nuclear factor erythroid 2-related factor 2 (Nrf2) signaling pathway, a master regulator of cellular resistance to oxidants [46]. Therefore, they can attenuate the rise of intracellular oxidants via NF-κB activation [48].
Modulation of Endothelium-Dependent Vasomotor Function
The endothelium is critical for maintaining vascular homeostasis. It plays a pivotal role in the regulation of vascular tone, vascular permeability, and platelet activity and aggregation. Its effects are mediated by a complex network of molecules, such as NO, endothelin-1 (ET-1), prostacyclin, cell adhesion molecules [49], leukotrienes, prostaglandins, catecholamines and vasoactive peptides (angiotensin II) [48]. The impairment of the functional properties of the endothelium is involved in the initiation and progression of atherosclerosis and other CVDs [9,47]. Undesirable phenotypic changes of the endothelium, such as vasoconstriction, proliferation of the arterial wall, inflammation and thrombosis, occur with endothelial dysfunction [49]. The quantification of endothelial function in humans is accomplished using the brachial artery flow-mediated dilation (FMD) technique. The FMD value largely reflects NO-mediated arterial function (including that of the coronary arteries), and low levels are associated with cardiovascular events and elevated risk factors for CVDs [47,49]. Most studies have shown that acute or chronic (≥1 week) intake of cocoa/chocolate increases the FMD value in healthy young and elderly subjects, smokers, obese individuals, patients with coronary artery disease or arterial hypertension, patients with end-stage renal disease on chronic hemodialysis, and diabetics [21,49]. Acute ingestion of medium and high doses of flavanols (321 mg of flavanols/dose, three doses/day) in elderly type 2 diabetic patients improves the FMD value by over 40%, but the vascular response to oral treatment with nitroglycerin was not affected by the dietary intervention [50]. Intake for seven days of cocoa containing 74 mg of flavanols and 232 mg of procyanidins (three doses/day) significantly improved FMD in smokers, but after the cessation of cocoa consumption and a seven-day wash-out period, the FMD level returned to preintervention values [16].
In addition, metabolites of epicatechin may have relevant physiological activity. The increase in FMD level was observed to correlate with plasma levels of epicatechin metabolites after cocoa ingestion [49]. Thus, the FMD response follows a temporal pattern similar to the appearance of (−)-epicatechin metabolites in plasma, with peak effects observed two hours after cocoa ingestion [17,49]. This short-term effect can be explained by the reduction of superoxide anion-mediated loss of NO and of oxidative stress via inhibition of NADPH oxidase activity [17]. The O-methylated metabolites of (−)-epicatechin (3'-O-methyl epicatechin, 4'-O-methyl epicatechin) that occur in plasma in free or glucuronidated form significantly inhibit endothelial NADPH oxidase activity [13]. The increase of the circulating bioactive NO pool (RXNO) plays an important role in attenuating the vascular NO deficit in patients with cardiovascular risk factors and in restoring endothelium-dependent vasodilation. In smokers, the consumption of a flavanol-rich cocoa beverage (176-185 mg/mL flavanols with 20-22 mg/mL epicatechin and 106-111 mg/mL proanthocyanidins) at 12 h after cessation of smoking and during abstinence dose-dependently increases the FMD value by almost 50% and the RXNO level by more than a third [51]. An improvement of endothelial function in healthy subjects was also observed after combined intake of low amounts of cocoa flavanols and nitrate-rich foods [32].
Repeated administration of high-flavanol cocoa produces a longer-term effect characterized by an increase in the baseline level of FMD. This effect can be mediated by changes in gene expression and protein synthesis (endothelial nitric oxide synthase, eNOS) [17]. Besides the improvement of NO bioavailability and bioactivity, the positive endothelial effects of cocoa can be correlated with other mechanisms, such as modulation of PGI2 and leukotrienes, reduction of xanthine oxidase and myeloperoxidase activities, suppression of the production of the proinflammatory cytokines IL-1β, IL-2 and IL-8, inhibition of ET-1 release [52], decrease of biomarkers associated with vascular damage such as monocyte CD62L expression and the formation of elevated endothelial microparticles [30], and mobilization of functionally unaltered circulating angiogenic cells (EPCs) [18]. Also known as endothelial progenitor cells, EPCs originate from bone marrow cells and are capable of differentiating into mature endothelial cells with unaltered functional properties [53]. EPCs participate in and significantly contribute to endothelial reparative processes, neoangiogenesis, tissue regeneration, and platelet function regulation, via direct effects and indirectly through the production of paracrine/juxtacrine signals (proangiogenic cytokines, angiogenic growth factors) [53][54][55]. EPCs are currently considered potential biomarkers of cardiovascular risk. The number and functionality of EPCs are adversely affected in conditions such as systemic hypertension, heart failure, coronary artery disease, stroke, atherosclerosis, diabetes mellitus, chronic kidney disease, chronic venous insufficiency, and chronic inflammatory diseases (rheumatoid arthritis, systemic lupus erythematosus, systemic sclerosis, Kawasaki disease). An increase in EPC mobilization has been associated with acute ischemic events (acute myocardial infarction, unstable angina) [53,54,56].
Although the improvement of FMD induced by cocoa is supported by the preponderance of data, the minimal dose of cocoa/chocolate that must be ingested to exert positive vascular effects is less clear. Monahan [49] specifies that the minimal amount of cocoa that causes an increase in FMD value in healthy elderly individuals appears to be greater than 2 g and at most 5 g. It therefore seems that a pronounced and consistent increase in endothelial function occurs at large doses of about 900 mg of flavanols/day, whereas low doses of 80 mg of flavanols/day do not produce a significant effect [34].
The European Food Safety Authority (EFSA) [57] recommends 200 mg of cocoa polyphenols daily (provided by 2.5 g of polyphenol-rich cocoa powder or 10 g of polyphenol-rich dark chocolate) in order to obtain endothelium-dependent vasodilation in the general population in the context of a balanced diet.
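The EFSA figures imply simple dose arithmetic: 200 mg of polyphenols from 2.5 g of powder or 10 g of chocolate corresponds to roughly 80 and 20 mg/g. A small sketch of that conversion; note that applying the same densities to the 900 mg flavanol dose from the preceding paragraph is a simplification, since flavanols are only a fraction of total polyphenols:

```python
# Dose arithmetic implied by the EFSA recommendation quoted above.
POLYPHENOL_MG_PER_G = {"cocoa powder": 200 / 2.5,     # 80 mg/g
                       "dark chocolate": 200 / 10}    # 20 mg/g

def grams_for_target(product: str, target_mg: float = 200.0) -> float:
    """Grams of product needed to supply the target polyphenol dose."""
    return target_mg / POLYPHENOL_MG_PER_G[product]

for product in POLYPHENOL_MG_PER_G:
    print(f"{product}: {grams_for_target(product):.1f} g/day for 200 mg polyphenols")
# Simplified extrapolation to the 900 mg/day dose discussed above:
print(f"cocoa powder: {grams_for_target('cocoa powder', 900):.2f} g/day for 900 mg")
```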
Effects on Blood Pressure
Research into the effects of cocoa on blood pressure (BP) was initiated by observations of the Kuna Indians living on islands off the coast of Panama. This population has very low levels of BP and a low incidence of hypertension and CVDs, but migration to the urban areas of mainland Panama has a negative impact on their cardiovascular health. Subsequent investigations showed that the differences between the two populations are due mainly to dietary habits: the intake of flavanol-rich home-prepared cocoa in the island-dwelling Kuna is up to 10 times higher than in the mainland-dwelling Kuna and appears to confer relevant beneficial effects [2,6,50]. Different epidemiological studies (Zutphen Elderly Study, Stockholm Heart Epidemiology Program) have reported a significant inverse relationship between the intake of cocoa/dark chocolate and cardiac mortality [58]. Many experimental studies and randomized intervention trials have shown BP-lowering effects of cocoa polyphenols and cocoa products. Single oral administration of flavonoid-enriched cocoa powder (128.9 mg/g of total procyanidins, 19.36 mg/g of epicatechin) at different doses (50, 100, 300, 600 mg/kg) clearly decreases BP in spontaneously hypertensive rats. A dose of 300 mg/kg cocoa powder produces the maximum effect, which is similar to that induced by captopril (50 mg/kg), a well-known antihypertensive drug [59]. Also, the presence of epicatechin in the diet of Sprague-Dawley rats (0.4 g epicatechin/100 g diet) for eight days prevents the increase of BP induced by the NO deficiency associated with pretreatment with the NO-synthase inhibitor nitro-L-arginine methyl ester (L-NAME) [6]. It seems that epicatechin exerts its antihypertensive effects only in a pathological state. Besides, epicatechin treatment limits infarct size in animal models of myocardial ischemia-reperfusion injury and after permanent coronary occlusion [60]. In humans, the BP-lowering properties of cocoa and cocoa polyphenols were assessed in studies of diverse design: various groups of subjects (young, elderly, overweight, obese, hypercholesterolemic, pre-hypertensive and hypertensive adults), different cocoa products and flavanol doses, and various durations of administration. A meta-analysis of 20 randomized controlled trials that evaluated the effects of cocoa polyphenols on BP reported a small but significant reduction in both systolic blood pressure (SBP; −2.77 mmHg) and diastolic blood pressure (DBP; −2.20 mmHg) [2,61]. The intake of cocoa containing 1052 mg of flavanols significantly decreased BP (−5.3 mmHg SBP; −3 mmHg DBP) in patients with untreated moderate hypertension [58].
The meta-analysis by Ried [61] revealed that the BP-lowering effects of cocoa were more pronounced in younger subjects, hypertensive patients, and in the studies with flavanol-free controls. Also, the change of BP was more significant in studies lasting more than two weeks. It seems that the high-flavanol cocoa produces a sustained reduction in patients with hypertension when treatment lasts for at least seven days, which suggests a time-dependent effect [60]. On the other hand, Hooper et al. [66] mentioned that the BP lowering effects of cocoa and chocolate appeared greater in studies with higher doses and shorter duration. These discrepancies could be due to differences in the backgrounds of cohorts and cocoa type products or absence of suitable placebos [52]. The amount of sugar associated with cocoa products may influence the BP lowering effects. Cocoa products with more than 10 g of sugar cause smaller reduction in SBP and DBP (−1.32 mmHg) in comparison with products containing less than 10 g of sugar (−2.52 mmHg SBP; −2.35 mmHg DBP) [2]. Vlachopoulos et al. [67] have found that high habitual cocoa consumption (≥4.63 g/day) decreases aortic stiffness and wave reflections that have a causative role in the pathogenesis of systolic hypertension.
The positive effects of cocoa on BP are supported mainly by the increase of endothelial bioavailability and bioactivity of NO and the inhibition of endothelial vasoconstriction. Up-regulation of NO is achieved by several mechanisms, such as: (i) augmentation of the activity of eNOS, the enzyme that facilitates the production of NO from L-arginine. Cocoa flavanols such as epicatechin acutely enhance NO concentrations in rabbit aortic rings and in cultured human coronary arteries [2]; (ii) down-regulation of NADPH oxidase and the reduction of superoxide anion and cellular oxidant levels. Superoxide anion is primarily generated from the electron transport chain but also through NADPH oxidase, xanthine oxidase and cytochrome P450 activities. It modulates NO availability in smooth muscle cells. Superoxide anion reacts directly with NO, producing peroxynitrite, which enhances local oxidative stress. Cocoa flavanols prevent NO loss via the superoxide anion reaction. In addition, they protect against the oxidative loss of tetrahydrobiopterin and prevent dysfunctional production of superoxide anion via eNOS uncoupling [6]; (iii) inhibition of arginase activity and the preservation of the intracellular arginine pool, which is the substrate for NO synthesis [6]. Epicatechin inhibits arginase-2 mRNA expression and activity in human umbilical vein endothelial cells [8], and flavanol-rich cocoa decreases arginase activity in rat kidney and human erythrocytes [6]. Other antihypertensive mechanisms of cocoa polyphenols include: (i) inhibition of endothelin-1 (ET-1), an endothelium-derived peptide with vasoconstrictive effect. At low micromolar concentrations, cocoa flavanols inhibit transcription of the ET-1 gene in endothelial cells and prevent ET-1 ROS production [2]; (ii) modulation of the renin-angiotensin-aldosterone system. Cocoa flavanols and proanthocyanidins inhibit renal angiotensin-converting enzyme (ACE) activity, which participates in BP regulation by transforming angiotensin I into angiotensin II, a potent vasoactive and inflammatory peptide. In addition, cocoa polyphenols reduce the prooxidant effects of angiotensin II [58]; (iii) improvement of the sympathovagal balance [34].
Antiplatelet Effects
Platelet activation and aggregation are of crucial importance in the etiology and pathogenesis of CVDs and cerebrovascular diseases. Several studies have reported that acute or chronic intake of cocoa and dark chocolate in moderate amounts determines a significant inhibition of platelet aggregation and adhesion in healthy volunteers, smokers and heart transplant recipients [9]. The consumption of flavanol-rich cocoa beverages (600 and 900 mg of flavanols) causes a significant inhibition of platelet activation and aggregation, and inhibits platelet-monocyte and platelet-neutrophil conjugate formation [68]. The administration of high (897 mg) and moderate (220 mg) doses of cocoa flavanols reduces platelet aggregation induced by ADP + collagen, as well as by epinephrine + collagen [14]. Catechin and epicatechin, as well as their methylated metabolites, inhibit the formation of thromboxane A2, a potent vasoconstrictor and platelet aggregator, and they also inhibit ADP-induced aggregation, but only at supra-physiological doses [52]. Cocoa attenuates platelet aggregation induced by ADP and thrombin-receptor activation (thrombin receptor activating peptide SFLLRN-amide, TRAP6), but does not influence aggregation induced by collagen or the thromboxane analogue U46619 [34]. Theobromine, the major methylxanthine in cocoa, also inhibits platelet aggregation induced by ADP and TRAP6, the effects being mediated by PDE inhibition and an increase of cAMP [34]. The significant antiplatelet properties of theobromine explain the more complex effects obtained with cocoa products compared with flavanols alone. It is estimated that the intake of 100 g of dark chocolate with 70% cocoa solids produces an effect on platelets comparable to that induced by a standard dose of aspirin (80 mg), a classic anti-aggregant agent [47].
In contrast to the aforementioned data, some studies on the chronic intake of cocoa polyphenols did not find a significant effect on platelet function [68,69]. Ottaviani et al. [31] showed that daily consumption of up to 2000 mg of cocoa flavanols for 12 weeks did not affect platelet function in humans. Also, a daily intake of up to 1000 mg of cocoa flavanols for two weeks was not associated with changes in platelet function. The authors state that the investigations were performed in healthy subjects and that the usual approach to defining the health status of participants does not allow a differentiated assessment of individual health. The divergent or inconsistent results regarding the antiplatelet efficacy of cocoa polyphenols could be explained by the following variables: (i) Health status/gender of participants. Different basal levels of platelet function may result in different responses after exposure to compounds with antiplatelet potential. In a randomized controlled human intervention trial, Ostertag et al. [70] showed gender-dependent antiplatelet effects of dark chocolate. Acute intake of flavan-3-ol-enriched dark chocolate (907.4 mg flavanols/60 g chocolate) significantly reduces ADP-induced platelet aggregation and P-selectin expression in men and decreases thrombin receptor-activating peptide (TRAP)-induced platelet aggregation in women. The platelets of men are more sensitive to activation via adrenergic and serotoninergic pathways and show more intense thromboxane A2 receptor-related aggregation responses. Women's platelets have a lower amount of thromboxane A2 receptors, which would explain the inhibitory effect of flavanol-enriched dark chocolate on TRAP-induced aggregation. The use of oral contraceptives and the menstrual phase may also influence the effects on platelet function. In a critical review of the antiplatelet effects of dietary polyphenols, Ostertag et al. [69] noted that the most significant changes in platelet function have been reported for subjects with single or multiple cardiovascular risk factors. Also, acute intake of dark chocolate (40 g, cocoa > 85%) reduces platelet activation via antioxidant mechanisms only in smokers, who have higher baseline generation of oxidative stress compared to healthy subjects [71]; (ii) Acute or chronic intake. The antiplatelet effects of cocoa flavanols appear to be more intense and meaningful in the case of acute intake. Even a modest amount of flavanols may modulate platelet reactivity in acute studies. The metabolism of cocoa flavanols, the bioactivity of their metabolites and their persistence could contribute to these findings. The existence of possibly different mechanisms for acute and chronic administration does not allow a direct comparison of results from such studies; a better assessment of the antiplatelet activity of cocoa flavanols should measure both acute and chronic effects in the same study [69]; (iii) Methodology. The antiplatelet effects of cocoa polyphenols have been assessed by various experimental approaches that differ in principle, sensitivity, and the markers or functions evaluated. Besides, the correlations between methods are low. Bleeding time (BT) assesses primary hemostasis by in vivo measurement of the arrest of bleeding. Although BT is a simple and quick method, it has the disadvantage that it is poorly standardized and is influenced by many variables (skin thickness, temperature) that vary among patients.
Light transmission aggregometry on platelet-rich plasma (PRP-LTA) is a standard test that evaluates various platelet functions, such as platelet activation under the action of different agonists (ADP, AA, collagen, epinephrine, TRAP, the thromboxane A2 mimetic U46619) and platelet-to-platelet clump formation in a glycoprotein (GP) IIb/IIIa-dependent manner. Preanalytical conditions (type of anticoagulant, plasma lipids, hemolysis, or low platelet count) as well as procedural conditions (manual sample processing, PRP preparation, use of different concentrations of agonists) may alter the final outcomes [72]. Besides, LTA is a relatively non-physiological method, and the platelets are not subjected to intense shear conditions [69]. The Platelet Function Analyzer (PFA-100) assesses platelet function in whole blood at the point of care under shear stress, using collagen plus ADP or collagen plus epinephrine as stimulators of hemostasis. The method has some limitations, such as dependence on platelet count and hematocrit and insensitivity to platelet secretion defects [72]. Platelet analysis based on flow cytometry provides information on platelet functional status in vivo and includes different methods, such as the assessment of platelet activation biomarkers, leukocyte-platelet aggregates or platelet-derived microparticles. However, the preanalytical phase may introduce errors, and the measurement of circulating monocyte-platelet aggregates is performed under low shear conditions that do not accurately reproduce in vivo processes [69,72]; (iv) Small sample sizes and different populations [73].
Considering the heterogeneity of the studies and the aforementioned aspects, the antiplatelet activity of cocoa polyphenols remains a topic of debate. Several mechanisms have been suggested, but they should be considered in light of the previous comments. Also, theobromine may be an important modulator of the antiplatelet effects of cocoa products. Suggested antiplatelet mechanisms of cocoa polyphenols are: (i) modulation of eicosanoid metabolism involved in the regulation of vascular homeostasis (stimulation of PGI2 synthesis, inhibition of leukotriene production) [74,75]; (ii) increase of NO bioavailability, decrease of platelet reactivity [9], down-regulation of NADPH oxidase and inhibition of platelet isoprostanes [71]; (iii) changes in membrane fluidity [9]; (iv) reduction of the ADP-induced expression of the activated conformation of glycoprotein IIb/IIIa surface proteins and modulation of the platelet integrins necessary for platelet-platelet and platelet-endothelial leukocyte interactions [14,76]; (v) inhibition of platelet lipoxygenase; (vi) antagonism of platelet TxA2 receptors [48]; (vii) decrease of the activity of phospholipase C, an enzyme involved in thrombin-induced platelet activation; (viii) regulation of genes involved in cell adhesion and trans-endothelial migration by acting on the NF-κB and MAPK signaling pathways. These effects are attributed to metabolites of epicatechin and occur at low, physiological concentrations [52].
Well-controlled intervention studies with large sample sizes and populations with cardiovascular risk factors, along with a uniform methodology, are needed to better assess the effects of cocoa flavanols on platelet function.
Modulation of Lipid Profile
Cocoa polyphenols favorably influence the lipid profile and promote antiatherogenic effects.
In vitro and cell culture studies showed an inhibition of the oxidation of low-density lipoproteins (LDL) and a reduction in LDL oxidative susceptibility [8]. A diet with various concentrations of cocoa powder (0.5%-10%) or cocoa extract (600 mg/kg per day) for four weeks triggers a reduction in the levels of LDL and triglycerides (TG), a reduction of LDL oxidability, and an increase of high-density lipoproteins (HDL) and of plasma antioxidant capacity in normal rats and hypercholesterolemic rabbits [8]. Furthermore, in rabbits on a hypercholesterolemic diet, chronic administration of cocoa proanthocyanidins reduces the levels of plasma lipid hydroperoxides and increases plasma antioxidant capacity [74]. Numerous clinical trials of varied design in normocholesterolemic, healthy and mildly hypercholesterolemic subjects, and in subjects with glucose intolerance or hypertension, have demonstrated positive effects on the lipid profile: reduction of the plasma levels of LDL and TG and of LDL oxidation, increase of the plasma level of HDL and of the plasma antioxidant status, and reduction of several lipid peroxidation markers (TBARS, F2-isoprostanes) [8] and of apolipoprotein B (Table 1) [15]. Also, cocoa supplementation may offer protection against postprandial dyslipidemia [23]. Randomized double-blind studies highlighted the HDL-enhancing effect of cocoa through an increase of the apolipoprotein A level [52]. The intensity of the effects, as well as their profile, varies among studies. Some short-term randomized controlled investigations reported modest changes in total cholesterol and LDL, without influencing HDL, in patients with cardiovascular risk, and the effects do not seem to be dose-dependent [52]. Other studies indicated that the effects of cocoa and dark chocolate are greater in patients with cardiovascular risk, in short-term studies, and at daily doses of 500 mg polyphenols [77]. The differences between studies could be explained by their extremely varied designs in terms of the pathophysiological status of the subjects, the initial lipid profile, the type of cocoa products, the length of the study (2-12 weeks), and the concentration of polyphenols (187-657 mg proanthocyanidins/day; 46-377 mg epicatechin/day) [15]. At the same time, it must be taken into account that other components of cocoa products can also modulate lipid metabolism.
The positive effects of cocoa polyphenols on the lipid profile could involve several mechanisms: (i) inhibition of cholesterol absorption in the digestive tract; (ii) inhibition of hepatic cholesterol biosynthesis; (iii) reduction of the susceptibility of LDL to oxidation through changes at their surface [48]; (iv) increased expression of scavenger receptor B type I, sterol regulatory element binding proteins, ATP-binding cassette transporter A1, and apolipoprotein A1 [36].
Anti-Inflammatory Activity
Cardiovascular diseases are currently considered inflammatory diseases; the damage to the various cells that participate in cardiovascular function involves inflammatory and immune responses [12]. Cocoa polyphenols act on several inflammatory mediators and signaling pathways in patients with an increased risk of cardiovascular disease [74]. In cell cultures, cocoa extracts with 5-100 µg/mL total polyphenols, as well as monomers and oligomeric proanthocyanidins (25 µg/mL), inhibit the production of inflammatory cytokines (IL-1β, IL-2, IL-6, TNF-α) and the gene expression of iNOS through the NF-κB and AP1 pathways. In blood mononuclear cells, cocoa polyphenols (monomers to decamers) inhibit the expression of IL-1β and dose-dependently decrease the activity of 15-LOX, 12-LOX and 5-LOX [8]. In mice on a high-fat diet, cocoa supplementation (8%) for 10 weeks reduced plasma levels of IL-6 and the expression of TNF-α, IL-6, iNOS and NF-κB in adipose tissue. Cocoa flavanols can also influence other markers of vascular inflammation, such as soluble adhesion molecules. Thus, the anti-inflammatory effects of cocoa polyphenols contribute not only to the alleviation of endothelial dysfunction, the improvement of arterial function, and the prevention of atherosclerosis, but also provide benefits in atherothrombotic clinical syndromes. A reduction in the mRNA expression of IL-1β, IL-6, E-selectin and vascular cell adhesion molecule 1 (VCAM-1) was also observed in an experimental model of myocarditis in mice [78]. Proanthocyanidin-rich fractions (1-3 µg/mL) and the B2 dimer (1.3 µM) reduce inflammatory phenomena in vascular smooth muscle cells by inhibiting proMMP-2 expression and MMP-2 activity, an enzyme involved in the degradation of the extracellular matrix and the occurrence of atherothrombotic syndromes [78,79]. The intake of flavanol-rich cocoa products (446 mg of flavanols/day) significantly reduces the expression of VCAM-1 in women with postmenopausal hypercholesterolemia. Also, in patients at high cardiovascular risk, the intake of polyphenol-rich chocolate (495 mg of total polyphenols) decreases the levels of intercellular adhesion molecule 1 (ICAM-1). By contrast, a controlled diet providing 300-900 mg of flavanols/day did not alter ICAM-1 and VCAM-1 levels in obese adults at risk of insulin resistance or in normo-/hypercholesterolemic patients. These contradictory data could be explained by the different polyphenol intakes and the extremely diverse pathophysiological status of the patients, including their inflammatory picture. Epicatechin, type B proanthocyanidin dimers, and their metabolites could be responsible for the effects on soluble adhesion molecules via inhibition of the NF-κB pathway [78].
Is (−)-Epicatechin the Main Compound Responsible for the Cardioprotective Properties of Cocoa Products?
A wide range of data from in vitro and animal studies supports the cardioprotective potential of (−)-epicatechin. As mentioned in the previous paragraphs, epicatechin has antioxidant properties, alleviates oxidative stress, diminishes ROS-mediated NO inactivation, up-regulates eNOS, and increases the bioavailability of NO, inducing endothelium-dependent relaxation in animals. It also stimulates the Nrf2/ARE pathway and inhibits the vascular expression of some proinflammatory and proatherogenic markers (IL-1β, ICAM-1, and TNF-α). Epicatechin improves some markers of endothelial function in ApoE knockout mice, a model of atherosclerosis. It reduces plasma ET-1 levels in the apolipoprotein E (ApoE) (−/−) knockout mouse and in deoxycorticosterone acetate (DOCA)-salt hypertensive rats, most probably via Akt regulation of the ET-1 promoter. Chronic administration of epicatechin prevents the progressive increase in SBP, the proteinuria, and the endothelial dysfunction in uninephrectomized rats chronically exposed to DOCA-salt [80]. In rats after permanent coronary occlusion, chronic treatment with epicatechin protects against myocardial ischemic injury and preserves left ventricular structure and function [60]. Despite these promising data, studies with pure epicatechin in humans are scarce, and the results on endothelial function are conflicting and inconclusive (Table 2). Schroeter et al. [87] showed that oral administration of pure (−)-epicatechin (1 or 2 mg/kg body weight) to healthy subjects increases FMD and mimics some of the acute vascular effects of cocoa, accounting, at least in part, for cocoa's beneficial activity. Loke et al. [88] demonstrated that acute oral treatment with epicatechin (200 mg) modulates some important endothelial markers in healthy subjects: it significantly reduces the plasma ET-1 concentration and increases circulating concentrations of vasoactive NO products, most probably via eNOS activation and inhibition of NADPH oxidase. On the contrary, Dower et al. [84] did not identify a significant change in FMD or a reduction of BP following acute or chronic epicatechin administration (100 mg/day, four weeks) in apparently healthy older adults, although the authors do not exclude a contribution of epicatechin to the cardioprotective effects of cocoa. They found that epicatechin favorably modulates fasting plasma glucose and insulin resistance, which are closely related to endothelial dysfunction. In another interventional study, in healthy (pre)hypertensive men and women, they showed that treatment with epicatechin (100 mg/day, four weeks) causes a seven percent decrease in sE-selectin, a marker of endothelial dysfunction that is inversely associated with FMD [83]. Barnett et al. [82] showed that multi-dose intake of epicatechin (50 mg × 2/day, five days) in healthy adults improves mitochondrial enzyme function and increases plasma levels of follistatin, an indicator of muscle growth. A careful analysis of these studies reveals several limitations, some of them acknowledged by the authors themselves. The main limitations are: (i) small numbers of subjects and statistically underpowered trials [87]; (ii) large heterogeneity of the study population (40-80 years) and biological variation among subjects [84]; (iii) the dose of epicatechin. As already noted, EFSA recommends 200 mg of cocoa polyphenols daily for a beneficial effect on endothelial function. Although Dower et al.
[83,84] chose the epicatechin dosage in line with the amount of epicatechin present in previous cocoa/chocolate intervention studies (46-107 mg/day), in those studies the level of total polyphenols was significantly higher (more than 200 mg and even above 800 mg). In cocoa products, the effects of epicatechin can be boosted by pharmacokinetic and pharmacological interactions with other cocoa flavonoids and compounds. The activity of a compound within a natural phytocomplex may differ in intensity, or even in direction, from that of the pure compound. The interactions between compounds in the phytocomplex affect their solubility, bioavailability, and bioactivity and lead to a nuanced expression of the biological response. In fact, the authors themselves mention that a dose of 100 mg epicatechin is likely too low to exert an effect on NO metabolism, the main target of the vasodilatory mechanisms. Moreover, using a nonlinear meta-regression model with a Bayesian approach, Ellinger et al. (2012) [88] showed that the dose of ingested epicatechin influences the mean treatment effect; they found that a daily intake of 25 mg epicatechin via cocoa consumption (but not as the pure compound) can reduce BP through an increased availability of NO. In the study of Schroeter et al. [87], the epicatechin dose could be lower or higher than 100 mg depending on the body weight of the subjects (see the worked example below). In this context, the occurrence of effects at low doses is difficult to explain; possibly the greater age homogeneity of the subjects (25-32 years) exerted a positive influence; (iv) the health status of the subjects. Baseline cardiovascular and metabolic values differ between young adults (25-32 years) and older adults (over 50 years), the latter being included in the studies of Dower et al. [83,84]. The type of epicatechin treatment (acute vs. acute-on-chronic effect) may also influence the final outcome. Dower et al. [84] showed that a plateau in the effect on endothelial function may be reached after 4 weeks of epicatechin administration, so that one additional acute dose could not elicit a further effect on FMD. The present data do not allow the cardiovascular profile of epicatechin in humans to be defined. The topic remains under discussion, and further long-term, well-designed studies, with larger numbers of subjects and appropriate methodology, are needed to gain more insight into the cardioprotective potential of epicatechin.
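As a worked illustration of the body-weight dependence of per-kilogram dosing (the body weights here are assumed for illustration, not data from the trial), the ingested amount in the Schroeter et al. protocol is simply

$$\text{dose} = d \times m: \qquad 1~\text{mg/kg} \times 60~\text{kg} = 60~\text{mg}, \qquad 2~\text{mg/kg} \times 80~\text{kg} = 160~\text{mg},$$

so individual intakes can bracket the fixed 100 mg/day used in the Dower et al. studies from either side.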
Safety of Cocoa Polyphenols
Daily intake of cocoa flavanols in amounts up to 2000 mg was well tolerated in healthy adults; only mild gastrointestinal effects have been reported [31]. Cocoa polyphenols may reduce iron absorption. A cocoa beverage with 100-400 mg of polyphenols per serving could decrease iron absorption by about 70% [89]. Also, the administration of cocoa products with other caffeine-containing foods and drinks may enhance the side effects of caffeine.
Limitations of Cocoa Studies
Many of the current clinical studies using cocoa products show shortcomings that should be corrected to allow a better assessment of cocoa's cardioprotective effects. The major critical points are: (i) a large heterogeneity in terms of cocoa product type (cocoa powder, dark chocolate, milk chocolate, cocoa beverages, semi-sweet chocolate baking bits), dose of flavanols, control, and pathophysiological status and age of subjects; (ii) human intervention trials with a small number of subjects; (iii) few cross-over designed studies; (iv) lack of data on the polyphenolic profile of cocoa products (type and proportions of polyphenols as monomers, oligomers and polymers); (v) short-term studies (less than two weeks) and poor controls [8,58]; (vi) lack of data regarding plasma polyphenol concentrations [8]; (vii) numerous studies including only healthy subjects; (viii) lack of randomized controlled trials concerning the effects of cocoa on pivotal cardiovascular outcomes such as cardiovascular mortality, myocardial infarction or stroke [49]. Therefore, the design of future clinical studies should comply with the following guidelines: (i) randomized, controlled, cross-over, multi-dose trials [10]; (ii) long-term trials in larger cohorts [34]; (iii) analytically well-characterized and standardized cocoa products in terms of flavanol and procyanidin content and profile, with specification of the concentrations of fats, sugars, and milk proteins [2,10,16]; (iv) flavanol-free controls [2]; (v) well-characterized participant populations (life-style and diet habits, medical status) and the inclusion of subjects with elevated cardiovascular risk factors [16]; (vi) quantification of circulating flavanol levels and assessment of characteristic flavanol metabolites [16]; (vii) use of a dose of cocoa product that could be included in the daily diet [10].
Future Perspectives
Future research on the cardiovascular effects of cocoa polyphenols should consider the following topics: (i) clinical trials on individual cocoa polyphenols: flavanol monomers and proanthocyanidins [52]; (ii) assessment of the cardiovascular activity of conjugated metabolites of cocoa polyphenols (mainly epicatechin metabolites) [2]; (iii) identification of the minimal doses of cocoa/chocolate that need to be ingested to exert cardioprotective effects [49]; (iv) investigation of the molecular mechanisms of cocoa flavanols [52]; (v) investigation of polyphenol bioavailability from different cocoa-containing matrices [10].
Conclusions
Cocoa and cocoa polyphenols appear to exert promising cardioprotective effects in humans. Their clinical use depends largely on the clarification of the issues related to the pharmacokinetic and pharmacological properties as well as the interactions between polyphenols and other compounds of cocoa in well-designed studies.
Conflicts of Interest:
The authors declare no conflict of interest.
Novel Molecular Therapies and Genetic Landscape in Selected Rare Diseases with Hematologic Manifestations: A Review of the Literature
Rare diseases affect fewer than 1 in 2000 people and are characterized by a serious, chronic, and progressive course. Among the diseases described here, mutations in a single gene cause mastocytosis, thrombotic thrombocytopenic purpura, Gaucher disease, and paroxysmal nocturnal hemoglobinuria (the KIT, ADAMTS13, GBA1, and PIG-A genes, respectively). In Castleman disease, alterations in the ETS1, PTPN6, TGFBR2, DNMT3A, and PDGFRB genes contribute to the appearance of symptoms. In histiocytosis, several mutation variants are described: BRAF, MAP2K1, MAP3K1, ARAF, ERBB3, NRAS, KRAS, PICK1, PIK3R2, and PIK3CA. Genes such as HPLH1, PRF1, UNC13D, STX11, STXBP2, SH2D1A, BIRC4, ITK, CD27, MAGT1, LYST, AP3B1, and RAB27A are possible causes of hemophagocytic lymphohistiocytosis. Among novel molecular medicines, tyrosine kinase inhibitors, mTOR inhibitors, BRAF inhibitors, interleukin 1 or 6 receptor antagonists, monoclonal antibodies, and JAK inhibitors are examples of drugs expanding therapeutic possibilities. Elucidating the molecular basis of rare diseases may lead to a better understanding of their pathogenesis and prognosis and may allow the development of new molecularly targeted therapies.
Introduction
Rare diseases affect a very small number of people compared to the general population (fewer than 1 in 2000). They are characterized by a serious, often chronic, and progressive course. Because of their low frequency, delays in diagnosis and lack of treatment are among the biggest problems patients face [1]. The first symptoms of rare diseases often appear in childhood but may also occur in adulthood. Particular attention should be paid to the genetic basis of rare diseases, which allows an accurate diagnosis and proper treatment. Nowadays, treatment of many rare diseases is possible thanks to novel targeted therapies, the development of medicine, and the increased emphasis placed on an individual approach to the patient. The genetic causes and therapeutic methods of the following selected diseases with hematologic manifestations were analyzed: mastocytosis, Castleman disease, histiocytosis, thrombotic thrombocytopenic purpura, Gaucher disease, hemophagocytic lymphohistiocytosis, and paroxysmal nocturnal hemoglobinuria. The study aims to summarize diagnostic methods for pediatric rare diseases in which hematologic abnormalities are part of the clinical picture.
Mastocytosis
Mastocytosis is an orphan disease, defined as a heterogeneous group of disorders with many variants. In mastocytosis, clonal mast cells accumulate in various tissues and organs, such as the skin, bone marrow, spleen, liver, and lymph nodes [2][3][4]. Mediators released from mast cells and the anatomical distribution of the cells are responsible for the symptoms of the disease [3]. The frequency of systemic mastocytosis is estimated at 1-5/10,000 [5]. In children, there is no difference in the prevalence of mastocytosis between males and females, and no race predominates; however, precise data are limited [4].
For cutaneous mastocytosis (CM), the main symptoms are skin lesions. For systemic mastocytosis (SM), symptoms include cytopenia due to massive infiltration of the bone marrow, signs of anemia, hemorrhagic diathesis, susceptibility to infections, hepatomegaly, splenomegaly, impaired liver function with ascites and/or portal hypertension, malabsorption, weight loss, osteolytic bone lesions and/or pathological fractures, and even severe anaphylaxis [6,7]. Among children, the cutaneous manifestation of mastocytosis is seen most frequently, in about 85% of cases, and has a good prognosis [2,8,9]. Pediatric-onset mastocytosis is often diagnosed before 2 years of age, usually as urticaria pigmentosa [4]. Skin lesions are described as red to brown to yellow macules, plaques, or nodules, 1-2 cm in diameter, mainly on the trunk and extremities. Stroking or rubbing can induce erythema, swelling, and blister formation, accompanied by pruritus and dermatographism [4].
The diagnostic criteria for mastocytosis in children are based on studies in adults [8]. According to the 2016 World Health Organization diagnostic criteria for systemic mastocytosis, the diagnosis can be established when at least one major and one minor criterion, or three minor criteria, are met. Multifocal dense infiltrates of mast cells (≥15 mast cells in aggregates) in bone marrow biopsies and/or in sections of other extracutaneous organ(s) is the major criterion, whereas the minor criteria are as follows: the presence of atypical (type I or type II) or spindle-shaped morphology in >25% of all mast cells; detection of a KIT point mutation at codon 816 in the bone marrow or another extracutaneous organ; expression of CD2 and/or CD25 by the mast cells in bone marrow, blood, or another extracutaneous organ; and a baseline serum tryptase level above 20 ng/mL (in the case of an unrelated myeloid neoplasm, this last minor criterion is not valid as a systemic mastocytosis criterion) [6][7][8][9][10]. According to a recent study in adults, E-selectin, adrenomedullin, T-cell immunoglobulin and mucin domain 1, and CUB domain-containing protein 1 (CDCP1)/CD138 were other proteins elevated in mastocytosis. Allergin-1 and pregnancy-associated plasma protein-A (PAPP-A) were decreased in patients with anaphylaxis, whereas galectin-3 was increased [6].
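The major/minor rule above amounts to a simple predicate. The following is a minimal illustrative sketch, not a clinical tool; the function and argument names are our own, and the criterion names are paraphrased from the text:

```python
def meets_who_sm_criteria(major_met: bool, minor_met: int) -> bool:
    """WHO 2016 rule for systemic mastocytosis: 1 major + >=1 minor, or >=3 minor.

    major_met: multifocal dense mast cell infiltrates (>=15 cells per aggregate).
    minor_met: count (0-4) of minor criteria met: atypical/spindle morphology in
    >25% of mast cells; KIT codon 816 mutation; CD2 and/or CD25 expression;
    baseline serum tryptase > 20 ng/mL (not valid with an unrelated myeloid
    neoplasm).
    """
    return (major_met and minor_met >= 1) or minor_met >= 3

# The major criterion plus a single minor criterion is sufficient,
# while two minor criteria alone are not.
assert meets_who_sm_criteria(major_met=True, minor_met=1)
assert not meets_who_sm_criteria(major_met=False, minor_met=2)
```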
Inheritance is not seen in most cases of mastocytosis. In familial cases, an autosomal dominant inheritance pattern with incomplete penetrance was noted, and affected monozygotic twins and triplets have been reported [11]. A point mutation in codon 816 of the KIT gene (KIT D816V) is typically found in adults with systemic mastocytosis. The prognostic influence of the KIT codon 816 mutation in pediatric mastocytosis is unknown [8]. KIT codes for c-kit, a membrane receptor for stem cell factor that is expressed in the surface membrane of mast cells. However, familial cases are rarely seen, and in the familial and childhood variants no single causative gene has been identified [4,12]. The single nucleotide polymorphism causing a Met-541-Leu c-kit mutation might predispose to mastocytosis in children [12]. Sporadic mutations in c-kit at codons 816 and 820 and inactivating mutations at codon 839 were described in 43% of pediatric patients with cutaneous mastocytosis (skin biopsies). Patients with the Asp816Phe mutation develop the disease earlier than patients with Asp816Val mutations. The missense activating mutations Asp816Val and Asp816Phe were noted in those with mastocytomas, urticaria pigmentosa, and diffuse cutaneous mastocytosis [4]. In children, the mutational pattern is distinct and more commonly involves the extracellular domains of KIT (exons 8 and 9) [10]. In the literature, the following mutations were reported: at codon 816 of KIT within the tyrosine kinase domain (D816Y, D816F, D816I, and D816G) and at nearby codons (L799F, I817V, N819Y, D820G, N822I, N822L, InsVI815-816, E839K, S840N, and S849I) [10,14]. Genetic profiling of KIT and characterization of associated gene mutations by next-generation sequencing (NGS) panels enable the division of patients into three prognostic subgroups: patients with multilineage KIT D816V involvement, patients with mast cell-restricted KIT D816V, and patients with "multi-mutated disease" [7].
Therapy of mastocytosis is aimed at the alleviation of symptoms [3]. Avoidance of triggering factors, leukotriene antagonists, H1 and H2 antihistamines, cromolyn sodium, corticosteroids, and psoralen plus ultraviolet A radiation (PUVA) therapy can be mentioned as methods of treatment [4]. As c-kit mutations play a role in the etiology of the disease, targeted therapies using KIT inhibitors might be promising treatment options [3]. A KIT tyrosine kinase inhibitor, midostaurin, which inhibits KIT D816V, has recently been introduced. Gotlib et al. evaluated, in an open-label study, the effectiveness of oral midostaurin at a dose of 100 mg twice per day in 116 patients with advanced systemic mastocytosis. Complete resolution of at least one type of mastocytosis-related organ damage was noted in 45% of patients, and the overall response rate was 60% (95% confidence interval, 49 to 70) [15,16]. Imatinib, nilotinib, dasatinib, and masitinib exert favorable effects on mediator-related symptoms of mastocytosis. Novel inhibitors, such as avapritinib and ripretinib, are in clinical development [7]. Avapritinib, a KIT and PDGFRα (platelet-derived growth factor receptor A) inhibitor, was specifically designed to inhibit KIT D816V [16]. mTOR inhibitors such as rapamycin may be an option for patients with aggressive systemic mastocytosis expressing D816V-mutated KIT [7]. Cladribine, which targets nucleoside metabolism, IFN-α, and allogeneic hematopoietic stem cell transplantation (HSCT) might be considered for patients with advanced mastocytosis [9]. Barete et al. assessed the efficacy and safety of cladribine in 68 adults with indolent or advanced mastocytosis, given at 0.14 mg/kg by infusion or subcutaneously on days 1-5 and repeated every 4-12 weeks for 1 to 9 courses. Cladribine (2-chlorodeoxyadenosine), a synthetic purine analog cytoreductive drug, achieved a 72% overall response rate (complete/major/partial response: 0%/47%/25%) [17].
Castleman Disease
Castleman disease (CD) is a heterogeneous non-malignant lymphoproliferative disease with an estimated incidence of 5 per million person-years [18]. The term describes a group of disorders that share a spectrum of characteristic histopathological features, including atrophic or hyperplastic germinal centers, prominent follicular dendritic cells (FDCs), hypervascularization, polyclonal lymphoproliferation, and/or polytypic plasmacytosis.
The pathogenesis of unicentric Castleman disease (UCD) is most likely driven by a neoplastic follicular dendritic cell population. The pathogenesis of HHV-8-associated multicentric Castleman disease (MCD) is viral, whereas POEMS-associated MCD is attributed to a monoclonal plasma cell population. Idiopathic MCD (iMCD) is poorly understood, although clinical data suggest a pathogenic role for interleukin-6 [25].
Histopathology is the key to the diagnosis of Castleman disease, which is usually classified into three types: the hyaline vascular type (HV type), the plasma cell type (PC type), and the mixed type. The HV type is more common in patients with UCD and manifests mainly as follicular dysplasia, regression of germinal centers, widening of the mantle zone, atrophy of the lymphatic sinuses, and fibrosis. Another feature is the growth of hyalinized blood vessels into the follicle; such a vessel can penetrate the germinal center, giving the appearance of a "lollipop". The PC type is more common in patients with MCD. Its pathology is characterized mainly by follicular hyperplasia and infiltration of plasma cells; there are few hyalinized vessels and little onion-skin layering in the interfollicular area, and the lymphatic sinuses are preserved. Mixed-type cases are less common and show features of both [19]. Assessment of Castleman disease should include, in addition to histological evaluation with immunostaining, a series of laboratory and radiological examinations and PET imaging, which can provide information on the metabolic activity of the affected lymph nodes and help determine the severity of the disease. Recommended laboratory tests include screening for anemia, elevated CRP and/or erythrocyte sedimentation rate (ESR), hypoalbuminemia, hypergammaglobulinemia, and other markers of cytokine-induced inflammation. It should be noted that most cases of UCD are asymptomatic, and often there are no laboratory abnormalities [26,27]. High levels of IL-6 that cannot be otherwise explained may be one of the potential diagnostic criteria for MCD. Most clinical features and laboratory abnormalities in patients with MCD are associated with IL-6 overexpression, and basic information for the differential diagnosis of MCD can be obtained from patients' clinical and pathological characteristics. Although international evidence-based consensus diagnostic criteria for HHV-8-negative/idiopathic MCD were published in 2017, more studies are needed to refine the diagnostic criteria for MCD because of the lack of extensive epidemiological data [28].
Castleman disease is not considered to be inherited, and it occurs sporadically in people without a family history. Nagy et al. analyzed 15 cases of UCD and 3 cases of iMCD using targeted next-generation sequencing (NGS; 405 genes), as well as 3 cases of follicular dendritic cell sarcoma (FDCS) associated with the hyaline vascular variant of UCD (UCD-HVV) using whole exome sequencing. Amplification of ETS1, PTPN6, and TGFBR2 was observed in one case of iMCD and one case of UCD. The iMCD case also had a somatic DNMT3A L295Q mutation, and this patient also exhibited clinicopathological features corresponding to a specific subtype known as Castleman-Kojima disease (thrombocytopenia, anasarca, fever, reticulin fibrosis, and organomegaly; the TAFRO clinical subtype). In addition, one case of UCD-HVV showed amplification of a histone gene cluster on chromosome 6p. UCD-HVV-associated FDCS demonstrated mutations and copy number alterations in known oncogenes, tumor suppressors, and chromatin remodeling proteins [29]. In another study, recurrent PDGFRB mutations encoding p.Asn666Ser were detected in patients with UCD, which strongly suggests that PDGFRB mutations in stromal cells may play a key role in the pathogenesis of UCD [30]. In vivo functional studies are needed to determine how particular genetic changes affect the phenotypic manifestations of UCD and iMCD and to fully study all genes, intronic regions, and translocations.
The advent of effective antiretroviral therapy and the use of rituximab improved the results in the treatment of HHV8-MCD. Therapies targeting interleukin 6 (like tocilizumab) are highly effective in many iMCD patients, but other therapies (such as corticosteroids, rituximab, thalidomide, lenalidomide, bortezomib, cyclosporine, sirolimus, or interferon) are required in refractory cases [18]. The subtypes of Castleman's disease are presented in Figure 1.
Langerhans-cell Histiocytosis
Langerhans-cell histiocytosis (LCH), the most common histiocytic disorder, encompasses conditions characterized by aberrant function and differentiation or proliferation of cells of the mononuclear phagocyte system. It is diagnosed in approximately 1-9/10,000 people [5]. The annual incidence of childhood LCH ranges from 3.5 to 7 cases per 1,000,000 children [31]. LCH has a widely variable clinical presentation, ranging from single indolent lesions to explosive multisystem disease. Bone, skin, pituitary gland, lung, central nervous system, and lymphoid organs are the main organs involved, whereas liver and intestinal tract localizations are less frequently encountered. Children with lesions in the liver, spleen, or bone marrow are classified as having high-risk LCH, as they are at the highest risk of death [32,33].
LCH is caused by the clonal expansion of myeloid precursors that differentiate into CD1a+/CD207+ cells in lesions, which leads to a spectrum of organ involvement and dysfunction. Studies have shown that LCH cells originate from myeloid dendritic cells rather than skin Langerhans cells. The pathogenic cells are defined by constitutive activation of the MAPK signaling pathway [34,35].
LCH is generally considered a non-hereditary, sporadic disease. Since LCH may affect any organ or system of the body, the condition should be considered whenever suggestive clinical manifestations occur in the skin, bone, lung, liver, or central nervous system (CNS). A definitive diagnosis of LCH requires a combination of clinical presentation, histology, and immunohistochemistry. The inflammatory infiltrate contains various proportions of LCH cells, the disease hallmark, which are round and have characteristic "coffee-bean" cleaved nuclei and eosinophilic cytoplasm. Positive immunohistochemical staining for CD1a and CD207 (langerin) is required for a definitive diagnosis [36,37]. A gain-of-function mutation in BRAF (V600E) was identified in more than half of LCH patient samples in a 2010 study [35]. A somatic BRAF mutation that alters the RAS-RAF-MEK-ERK signaling pathway is the most common genetic abnormality associated with LCH and is a poor prognostic marker [38]. Mutations of MAP2K1, MAP3K1, ARAF, ERBB3, NRAS, KRAS, PICK1, PIK3R2, and PIK3CA have also been described in the literature as causes of the condition [32,34,35]. Smoking is the sole known risk factor (for the pulmonary form), but a significant effect of smoking cessation on the course of the disease could not be confirmed [39]. A high index of suspicion is needed for the diagnosis of LCH due to frequent misdiagnosis. In addition to survival data and the analysis of prognostic factors, the prospective collection of data on diverse presentations is essential [40].
Treatment of LCH is risk-adapted; patients with single lesions may respond well to local treatment, whereas patients with multi-system disease and risk-organ involvement require more intensive therapy. Treatment with BRAF inhibitors, such as vemurafenib and dabrafenib, has been shown to induce complete and durable responses, and the role of BRAF and MEK inhibitors is currently being investigated [35,41]. Optimal therapy for patients with single-system bone LCH has not been established. Less toxic therapeutic approaches should be considered for these patients [42]. Among targeted therapies, imatinib, a tyrosine kinase inhibitor that targets the receptors expressed in LCH, has shown efficacy in patients with refractory multisystem LCH [35].
Thrombotic Thrombocytopenic Purpura
Immune thrombotic thrombocytopenic purpura (TTP) is a life-threatening blood disorder whose clinical features include severe thrombocytopenia, microangiopathic hemolytic anemia, fever, and renal and neurologic dysfunction. Ischemic stroke, renal insufficiency, and myocardial ischemia can be consequences of end-organ damage. Analysis of peripheral blood shows low hemoglobin and hematocrit, low haptoglobin, elevated serum lactate dehydrogenase, and the presence of schistocytes [43,[45][46][47]. In 10% of all immune TTP cases, symptoms present in childhood [43]. Congenital TTP (cTTP) presents as episodic microangiopathic hemolytic anemia, thrombocytopenia, and damage to internal organs. The disease may be diagnosed in neonates, but it can also present for the first time in adults [48]. Toret et al. described a case of cTTP in a 12-year-old boy who presented with jaundice and a skin rash. Blood analysis revealed nonimmune hemolytic anemia, severe thrombocytopenia, 8% schistocytes, polychromasia, and anisocytosis [49].
Immune-mediated TTP (iTTP) results from anti-ADAMTS13 (a disintegrin and metalloprotease with thrombospondin type 1 repeats, member 13) autoantibodies and a severe deficiency of ADAMTS13. Congenital TTP is a consequence of biallelic mutations in the ADAMTS13 gene [43,50]. Modifying factors such as sex, ethnicity, and obesity, as well as genetic risk factors for autoimmunity at the human leukocyte antigen class II locus (the DRB1*11 and DQB1*03 alleles and the protective allele DRB1*04), are involved in the loss of tolerance towards ADAMTS13 [51].
Congenital TTP is inherited in an autosomal recessive manner. Nonaka et al. described a family with cTTP in which the patient's parents were heterozygous carriers of ADAMTS13 mutations (p.R193W, c.577C>T, exon 6 in the father; p.H1141Tfs*85, c.3421del, exon 25 in the mother; no ADAMTS13 mutation in the patient's brother). The patient was therefore a compound heterozygote for p.R193W (c.577C>T, exon 6) and p.H1141Tfs*85 (c.3421del, exon 25) [44] (the mapping between c. and p. coordinates is illustrated in the sketch at the end of this section). In the case report by Toret et al., DNA sequence analysis showed compound heterozygosity consisting of c.291_391del in exon 3 and c.4143dupA in exon 29 in a 12-year-old boy with cTTP [49]. Wang and Zhao described a neonate with a novel missense compound heterozygous mutation in ADAMTS13, c.1187G>A/c.1595G>T; high-throughput sequencing, polymerase chain reaction, and Sanger sequencing were used for genetic screening. It was reported that ADAMTS13 mutation analysis had been performed in only 8 of the 12 cases of congenital TTP in neonates reported globally [46]. Therapeutic plasma exchange with fresh frozen plasma replacement is the front-line therapy for TTP. Immunosuppressive therapy with glucocorticoids, cyclosporine A, or mycophenolate mofetil has shown efficacy [43]. Measurements of ADAMTS13 activity have become, in clinical practice, not only diagnostic markers but also indicators of recurrence and response to therapy [52]. Knowledge of the molecular cause of the disease allowed the off-label use of rituximab in iTTP. Rituximab is an anti-CD20 monoclonal antibody that suppresses anti-ADAMTS13 autoantibodies [50]. The safety and effectiveness of rituximab were evaluated in 22 adults in an open-label prospective study by Froissart et al.; patients with severe acquired TTP who responded poorly to therapeutic plasma exchange and who were treated with add-on rituximab (four infusions over 15 days) had a shorter overall treatment duration and fewer relapses at 1 year than controls [53]. Bortezomib, a proteasome inhibitor targeting plasma cells, appears to be effective as an alternative to rituximab. Caplacizumab, a humanized immunoglobulin that targets the A1 domain of von Willebrand factor, prevents its interaction with platelets, blocks platelet aggregation, and reduces the time to platelet count normalization [43]. Peyvandi et al., in a phase 2 randomized controlled study, observed the effectiveness of caplacizumab in 75 patients with acquired TTP (36 received caplacizumab and 39 received placebo). The time to response was significantly reduced (39% reduction, p = 0.005) with caplacizumab as compared with placebo [54].
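The c. (coding DNA) and p. (protein) coordinates quoted in these reports are linked by simple arithmetic, since codon n covers coding nucleotides 3n-2 to 3n. The following small Python sketch illustrates the mapping; the helper function is hypothetical, written here only for illustration:

```python
def codon_of(cdna_position: int) -> int:
    """Return the 1-based codon number containing a coding-sequence position."""
    return (cdna_position + 2) // 3  # integer ceiling of position / 3

# c.577C>T falls in codon 193, consistent with p.R193W (paternal allele above):
assert codon_of(577) == 193
# c.3421del falls in codon 1141, consistent with the p.H1141Tfs*85 frameshift:
assert codon_of(3421) == 1141
```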
Gaucher Disease
Gaucher disease (GD) is a rare genetic disease caused by a deficiency of the lysosomal enzyme glucocerebrosidase that leads to the accumulation of its substrate, glucosylceramide, in macrophages. In the general population, its incidence varies between 0.4 and 5.8/100,000 inhabitants [55].
Type 1 Gaucher disease affects most patients and is characterized by its huge heterogeneity, including asymptomatic forms and more severe presentations. The most frequent symptoms are anemia, thrombocytopenia, splenomegaly, and/or hepatomegaly, as well as potentially severe bone involvement with avascular osteonecrosis (AVN), osteoporosis, fractures, and lytic lesions. This type is associated with a higher risk of some solid cancers, Parkinson disease, and hematologic diseases, particularly multiple myeloma. Type 2 and type 3 Gaucher diseases are associated with neurological involvement, either severe in type 2 or variable in type 3 [55][56][57].
GD may come to light during investigations for visceromegaly or pancytopenia. Gaucher cells may therefore be identified on tissue biopsy specimens, principally those of the bone marrow (during investigations for splenomegaly or cytopenias) or liver (during investigations for hepatomegaly or abnormal liver-related biochemical tests). However, the specific diagnosis is made by measuring acid β-glucosidase activity in fresh peripheral blood leukocytes or occasionally by enzymatic analysis of fibroblasts cultured from skin biopsy specimens. Confirmation and better characterization of the condition may subsequently be afforded by the identification of biallelic pathogenic variants in the glucocerebrosidase gene (GBA1), which encodes lysosomal GBA [57][58][59]. MRI is useful for monitoring skeletal involvement because it provides a semi-quantitative assessment of marrow infiltration and the degree of bone infarction [60].
Gaucher disease is inherited in an autosomal recessive manner. Newly available molecular biology techniques enabled the characterization of GBA1. The gene was localized to chromosome 1q21 by in-situ hybridization analysis, and the GBA1 cDNA served as a probe to identify and isolate clones from controls and patients. The gene was found to encompass 11 exons spanning around 7000 base pairs. Almost immediately, it was recognized that a highly homologous pseudogene was present near GBA1. The elucidation of the full sequence of GBA1 ultimately enabled the production of recombinant proteins for therapeutic use. The first mutation identified in GBA1 was a C to T substitution in exon 10, resulting in the substitution of proline for leucine at amino acid position 444 (L444P, designated L483P in the newer numbering). The common N370S [N409S] mutation was later identified in a patient with type 1 Gaucher disease. To date, more than 300 different GBA1 mutations have been described. The mutation nomenclature is at times confusing, as the numbering of the affected amino acids was eventually changed to include the 39-amino-acid leader sequence (hence 444 + 39 = 483 and 370 + 39 = 409).
Specific treatment, such as enzyme replacement therapy (ERT) with one of the currently available molecules (imiglucerase, velaglucerase, or taliglucerase) or substrate reduction therapy, is indicated in symptomatic type 1 Gaucher disease. Only ERT is indicated in type 3 Gaucher disease. The approval of ERT for GD in the pediatric age group has significantly altered the course of the disease, especially for the non-neuronopathic and chronic neuronopathic forms, since ERT does not cross the blood-brain barrier. Treatment improves quality of life and prognosis. The rarity of Gaucher disease and the wide variability of its clinical presentations lead to diagnostic delays [55,56,62]. Miglustat and eliglustat are inhibitors of glucosylceramide biosynthesis that can be used in Gaucher disease [55].
Hemophagocytic Lymphohistiocytosis
Hemophagocytic lymphohistiocytosis (HLH), also known as hemophagocytic syndrome, is caused by overactivated macrophages and histiocytes, resulting in excessive cytokine release, destruction of hematopoietic cells, and multiorgan dysfunction [63,64]. HLH is a rare disease affecting mainly children but also adults, and its course is life-threatening unless effective treatment is instituted [65]. The incidence is 1.2 per million per year in children; in adults, the disease is diagnosed less frequently [66].
The etiology of HLH is different in the adult and pediatric populations. Although there is no single specific and sensitive diagnostic test for HLH, various clinical and laboratory findings should be taken into consideration. Patients with an HLH-associated gene defect and/or at least five of the following eight criteria can be diagnosed with HLH: fever, low or absent natural killer cell function, cytopenias, splenomegaly, increased triglycerides or low fibrinogen, high ferritin, hemophagocytosis, and elevated soluble CD25 (interleukin 2 receptor alpha (IL2Rα)) [67].
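The "gene defect and/or 5-of-8" rule lends itself to a compact check. The following is a minimal illustrative sketch, not a clinical tool, with criterion names paraphrased from the list above:

```python
# The eight clinical/laboratory criteria named in the text; "increased
# triglycerides or low fibrinogen" counts as a single criterion.
HLH_CRITERIA = (
    "fever",
    "splenomegaly",
    "cytopenias",
    "increased_triglycerides_or_low_fibrinogen",
    "hemophagocytosis",
    "low_or_absent_NK_cell_function",
    "high_ferritin",
    "elevated_soluble_CD25",
)

def hlh_diagnosis(gene_defect: bool, findings: set) -> bool:
    """HLH can be diagnosed with an HLH-associated gene defect,
    or with at least five of the eight criteria met."""
    met = sum(1 for criterion in HLH_CRITERIA if criterion in findings)
    return gene_defect or met >= 5

# Five criteria without a known gene defect are sufficient:
assert hlh_diagnosis(False, {"fever", "splenomegaly", "cytopenias",
                             "high_ferritin", "hemophagocytosis"})
```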
HLH was primarily considered to be only a genetic disorder; however, secondary HLH can be triggered by infections, malignancies, and autoinflammatory and rheumatologic disorders. Familial HLH is caused by mutations at specific gene loci (HPLH1, PRF1, UNC13D, STX11, and STXBP2), which code for proteins with a fundamental role in lymphocyte cytotoxicity [67,68]. Mutations in the HPLH1 gene are responsible for familial hemophagocytic lymphohistiocytosis type 1 (FHL-1), mutations in the PRF1 gene cause FHL-2, mutations in the UNC13D gene cause FHL-3, mutations in the STX11 gene cause FHL-4, and mutations in the STXBP2 (UNC18B) gene cause FHL-5 [69]. HLH and lymphoproliferative disease can also be caused by mutations in the SH2D1A, BIRC4, ITK, CD27, and MAGT1 genes, which encode signaling proteins that play a role in the activation, survival, differentiation, and migration of NK and T cells [70]. Chediak-Higashi syndrome (mutations in LYST), Hermansky-Pudlak syndrome type 2 (mutations in AP3B1), and Griscelli syndrome (mutations in RAB27A) are immunodeficiencies with high rates of developing HLH [61]. Familial hemophagocytic lymphohistiocytosis results from a distinct set of autosomal recessive gene mutations affecting lymphocyte cytotoxicity [64]. Shabrish et al., in their study of 101 Indian patients, found that patients harboring homozygous mutations (53%) presented at a median age of 10 months, whereas patients with compound heterozygous mutations had disease onset at a median age of three years. Twelve patients with a monoallelic mutation in FHL genes had their first symptoms at a median age of 10 months [68]. GATA2 deficiency has been described in the literature in patients with acute secondary HLH [64]. Lam et al. found a de novo CDC42 mutation (Chr1:22417990C>T, p.R186C) in four unrelated patients with NOCARH syndrome (neonatal-onset cytopenia with dyshematopoiesis, rash, autoinflammation, and HLH) [63].
Treatment of HLH includes immunosuppressive drugs such as corticosteroids, etoposide, and cyclosporin [65,67]. Recently, novel molecularly targeted drugs have emerged. Emapalumab, a human anti-IFN-γ monoclonal antibody, was registered for the treatment of patients with refractory HLH [67]. Locatelli et al., in an open-label, single-group, phase 2-3 study, assessed the efficacy and safety of emapalumab administered with dexamethasone in 34 HLH patients aged 18 years or younger (27 who had received conventional therapy before enrollment and 7 who had not). A response was noted in 63% of the previously treated patients and in 65% of the patients who received an emapalumab infusion [71]. Anakinra (an interleukin 1 receptor antagonist) and tocilizumab (an interleukin 6 receptor antagonist), which block cytokines, and ruxolitinib, tofacitinib, baricitinib, and itacitinib, which are JAK inhibitors, can be mentioned as examples of molecular drugs expanding therapeutic possibilities [67]. Treatment with ruxolitinib as monotherapy or combination therapy (in upfront and salvage settings) showed fast, sustained improvement in clinical status, hematological cell counts, and inflammatory markers, followed by persistent remission, in 4 patients with profound secondary HLH [72].
Paroxysmal Nocturnal Hemoglobinuria
Paroxysmal nocturnal hemoglobinuria (PNH) is an infrequent intravascular hemolytic anemia in which hemolysis is mediated by the complement system [73]. It is a chronic, progressive, multi-systemic, and life-threatening disease that results from the expansion of a clone of hematopoietic cells [74]. Its prevalence is 1-9/100,000 [5].
Symptoms include the classic triad of hemolytic anemia, thrombosis, and bone marrow failure. PNH is a rare condition in children (5-10% of cases); however, it should be considered in the differential diagnosis, particularly in children with acute kidney injury. Common symptoms in children include pallor, fatigue, weakness, hemorrhage, thrombosis, and isolated hemoglobinuria [75].
The gold standard test to confirm PNH is flow cytometry performed on peripheral blood, which can detect very small PNH clones (<1% of a patient's hematopoiesis) [74,76,77]. PNH is caused by an acquired mutation in the phosphatidylinositol N-acetylglucosaminyltransferase subunit A gene (PIG-A) that leads to a deficiency of the glycosylphosphatidylinositol anchors for the complement-inhibitor proteins CD55 (decay accelerating factor, DAF, which accelerates the decay of the C3 and C5 convertases) and CD59 (membrane inhibitor of reactive lysis, MIRL, which inhibits membrane attack complex formation) [70,75]. PIG-A is located on the X chromosome (Xp22.1) [5,75]. CD55 and CD59 inhibit complement activation and prevent healthy cells from undergoing complement-mediated lysis [75]; their absence leads to suboptimal complement inhibition and complement-mediated hemolysis of erythrocytes [76]. Jeong et al. found a strong positive correlation between PNH clone size measured by flow cytometry and the variant allele frequency of PIG-A mutations [76].
Treatment of PNH includes anti-thrombosis prophylaxis, blood transfusions, and allogeneic bone marrow transplantation. Recently, the complement inhibitor eculizumab, a monoclonal antibody targeting the complement protein C5, has been introduced; it significantly reduces hemolysis, anemia, the occurrence of thrombosis, and morbidity and mortality [74,76]. A meta-analysis of six studies by Zhou et al. included a total of 235 patients treated with eculizumab; the drug was safe and effective at decreasing lactate dehydrogenase (LDH) levels and transfusion rates while increasing hemoglobin levels [78]. In 2021, pegcetacoplan, a pegylated pentadecapeptide targeting complement C3 to control both intravascular and extravascular hemolysis, was approved for adults with PNH. The study by Hillmen et al. indicated that pegcetacoplan was superior to eculizumab in clinical and hematologic outcomes in PNH patients [79]. Another well-tolerated drug is ravulizumab. Outcomes from a phase 3 randomized trial of ravulizumab in adults with PNH showed durable efficacy over 52 weeks in patients on stable eculizumab therapy who were switched to ravulizumab, and further efficacy was noted in adults who received eculizumab during the primary evaluation period and then changed treatment to ravulizumab [80].
A summary of the genetic landscape and methods of modern treatment for the described diseases are shown in Tables 1 and 2; Figure 2.
Conclusions
All the above-mentioned rare hematological diseases have a genetic cause. Some of them are described in the literature as diseases caused by mutations in a single gene, like mastocytosis, thrombotic thrombocytopenic purpura, Gaucher disease, and paroxysmal nocturnal hemoglobinuria. In others, like histiocytosis, hemophagocytic lymphohistiocytosis, and Castleman disease, several mutation variants are possible. Performing genetic testing is not obligatory to make a diagnosis, and it serves more often as confirmation of a diagnosis. Unfortunately, genetic diagnosis of rare diseases sometimes takes place at the end of the diagnostic process. Delays in the diagnostic process might translate into unfavorable treatment results.
An explanation of the molecular basis of rare diseases in hematology leads to a better understanding of the pathogenesis and prognosis of the disease and may allow for the development of new molecularly targeted therapies. There is a need for further molecular investigations to discover other possible defects in genes that are responsible for rare diseases.
Genetic Diversity of Quantitative Traits of Sugarcane Genotypes in Ethiopia
Information about the amount and distribution of genetic variation in germplasm collections is important for their efficient management and effective utilization in plant breeding. This study was therefore conducted to assess the genetic diversity of sugarcane germplasm in Ethiopia. An experiment comprising 400 sugarcane genotypes (174 local and 226 introduced) was conducted between March 2012 and October 2013 at the Wonji and Metehara Sugar Estates using a partial balanced lattice design with two replications. Data were recorded on 21 quantitative characters, which included cane yield and its components, sugar yield, and sugar quality traits. ANOVA revealed highly significant differences (P < 0.01) among the genotypes for the 21 quantitative traits. Cluster analysis revealed intra-cluster D² values ranging from 2.16 to 10.60 and inter-cluster values from 7.24 to 5864. Six principal components accounted for 79.26% of the total variation in the tested materials. Millable stalk count, single cane weight, stalk diameter, cane yield, sugar yield, and sugar quality traits showed high positive loadings on the first two PCs and accounted for most of the variation observed among the genotypes. This study therefore suggests that the characters responsible for diversity in the sugarcane genotypes can be grouped into two principal components, namely "Yield" and "Quality", with "Yield" traits being comparatively more important than "Quality". Genotypes clustered for high mean values of various traits could be exploited for further improvement of the crop either through selection or through hybridization. The clusters having high mean values for yield could be selected for yield per se as well.
Introduction
Saccharum is a complex genus characterized by high ploidy levels and composed of at least six distinct species: S. officinarum, S. barberi, S. sinense, S. spontaneum, S. robustum, and S. edule. Accurate assessment of genetic diversity is very important in crop breeding, as it helps in selecting desirable genotypes, identifying diverse parental combinations for further improvement through selection in segregating populations, and introgressing desirable genes from diverse germplasm into the available genetic base. Therefore, genetically diverse germplasm is needed in breeding programs to enhance the productivity and diversity of cultivars. Utilization of introduced germplasm and knowledge of the genetic remoteness among accessions are vital for their manipulation in a crop improvement program [1]. In any breeding program, collection of germplasm is always the first step, as it provides plant breeders with sources of useful traits. Collecting local germplasm is especially crucial, as it provides locally adapted genes for better crop improvement. Towards this effort, an exploration and collection of local sugarcane germplasm in different geographic regions of Ethiopia was conducted, and more than 300 materials were collected [2]. As documented in the history of a monastery in Northern Ethiopia, it was learnt during this survey that sugarcane had been grown in the country since around the 16th century [2]. It is presumed that sugarcane was introduced into Ethiopia in the 16th century by the Portuguese, together with other food crops such as rice, banana, lime, mandarin, and ginger [3].
Sugarcane has been grown commercially in Ethiopia for the manufacture of white sugar in the Upper Awash River Basin at Wonji on 5000 ha since 1951, when cultivation was started by the Dutch Handels-Vereeniging Amsterdam (HVA) company [4]. The second sugar estate, at Metahara, started production in 1969/70 and the third, at Fincha, in 1998. At present, sugarcane is cultivated on 37,000 ha, and the four sugar mills in different parts of the country produce about 300,000 tons of sugar per annum. Data from the last 10 years (2004-2013) indicate that the average cane yield at Wonji and Metahara ranged from 1300-1500 qt/ha and 1700-1800 qt/ha, respectively. Similarly, the average sugar percentage obtained from the sugar mills was 11.5%-12.5% at Wonji and 10%-11% at Metahara. Accordingly, the sugar yield ranged from 162.5-187.5 qt/ha at Wonji and from 187-198 qt/ha at Metahara.
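The reported sugar-yield ranges follow directly from multiplying cane yield by the sugar fraction; the figures above are reproduced by applying the upper Wonji and lower Metahara sugar percentages:

$$Y_{\text{sugar}} = Y_{\text{cane}} \times \frac{\text{sugar}\,\%}{100}, \qquad 1300 \times 0.125 = 162.5, \quad 1500 \times 0.125 = 187.5~\text{qt/ha at Wonji},$$

$$1700 \times 0.11 = 187, \qquad 1800 \times 0.11 = 198~\text{qt/ha at Metahara}.$$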
As it has never had its own breeding program, the sugar industry of Ethiopia has relied on imported varieties to satisfy the varietal requirements of the sugarcane plantations. So far, more than 300 varieties have been imported. Currently, only 6 to 7 varieties are grown widely and commercially across the Ethiopian sugar estates, because most of the imported varieties were not adapted to the local agro-ecological conditions of the country. Even the varieties under cultivation now are of old generations, beset with many problems, and consequently low yielding. In light of this, the Sugar Corporation of Ethiopia is currently establishing a sugarcane breeding program. Therefore, establishing good sources of sugarcane germplasm, of both exotic and local origin, and characterizing them are of great importance in providing a diverse genetic base and efficient management of the germplasm for the sugarcane improvement program of Ethiopia.
Information about the amount and distribution of genetic variation in germplasm collections is important for their efficient management and effective utilization in a breeding program. Multivariate statistical techniques such as Principal Component Analysis (PCA) and cluster analysis can be used to evaluate genetic diversity among sugarcane genotypes. In studies on genetic divergence using cluster analysis, Mahalanobis' generalized distance (D²) is commonly used as a measure of proximity [5], because characteristics with different measurement units that are typically correlated are being considered; the optimization method of Tocher is also frequently used as a clustering algorithm, as described by [6].
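A minimal sketch of this workflow in Python is given below. The data are random placeholders, and the library choices are assumptions for illustration; in particular, hierarchical clustering stands in for Tocher's optimization method, which has no standard library implementation:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

# Placeholder trait matrix: 400 genotypes x 21 quantitative traits
X = np.random.default_rng(0).normal(size=(400, 21))

# Mahalanobis generalized distance between genotypes (D^2 = squared distance)
VI = np.linalg.inv(np.cov(X, rowvar=False))   # inverse trait covariance matrix
D = pdist(X, metric="mahalanobis", VI=VI)     # condensed pairwise distances
D2 = squareform(D) ** 2                       # square matrix of D^2 values

# Group genotypes into clusters based on the distance matrix
clusters = fcluster(linkage(D, method="average"), t=10, criterion="maxclust")

# PCA on standardized traits; cf. the six PCs explaining 79.26% in this study
Z = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=6).fit(Z)
print(pca.explained_variance_ratio_.cumsum())
```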
These analyses have been used successfully to study genetic diversity. Reference [7] studied 30 hybrid clones involving Saccharum barberi, S. officinarum and co-hybrids, together with their seven parents, to find out the nature and pattern of genetic divergence. The clones were grouped in 15 clusters, and the grouping of progeny clones was independent of parent cross combination. They concluded that hybridization among clones from diverse clusters may help in isolating progenies with higher sugar yield and related traits. Reference [8] evaluated sugarcane germplasm from field plots of four Saccharum species and four commercial cultivars by means of analysis of sugar composition. Cluster analysis indicated heterogeneity within and among these species. They concluded that information on sugar composition should assist breeders in selecting superior clones for the relevant breeding programs. Ninety-four genotypes of S. spontaneum were studied by [9] using principal component and cluster analysis based on seven quantitative traits. The three principal components obtained provided 82.47% cumulative variance. Based on these seven traits, the 94 S. spontaneum genotypes were grouped into 4 clusters.
The present study was conducted to quantify the genetic diversity of quantitative traits using multivariate methods for locally collected and introduced germplasm in Ethiopia.
Description of the Study Sites and Plant Material
The experiment was conducted at Wonji and Metehara sugar estates during 2012/2013.
Wonji
Wonji Sugar Factory is located in Oromia Regional Government State, Eastern Shewa Zone, Adama Woreda, about 110 km from Addis Ababa and about 10 km south of Adama Town, at latitude 8°31'N and longitude 39°12'E and an elevation of 1550 masl. The average annual rainfall is 800 mm, with maximum and minimum temperatures of 26.9°C and 15.3°C, respectively [10].
Metehara
Metehara Sugar Factory is located in Oromia Regional Government State, Eastern Shewa Zone, about 200 km from Addis Ababa and about 8 km south of Metehara Town, at latitude 8°51'N and longitude 39°52'E and an elevation of 950 masl. Annual rainfall is 554 mm, with maximum and minimum temperatures of 32.6°C and 17.5°C, respectively [10].
Plant Materials
The plant materials for this study consisted of a total of 400 accessions, of which 174 were local sugarcane germplasm collected from different regional states of Ethiopia and 226 were introduced sugarcane germplasm collections maintained at the conservation garden of Research and Training, Sugar Corporation, at Wonji (see Appendix in Supplementary Material available online at http://dx.doi.org/10.4236/ajps.2016.710139). Selection among the local genotypes was made based on the geographical regions where the materials were collected and the morphological variations noted during the collection work and while the varieties were quarantined in their collection areas for one year. For exotic/introduced genotypes, selection was made taking into consideration the variation in place of origin (i.e., source country) and the different periods of introduction to the country.
Experimental Design and Field Layout
The experiment was laid out in a 20 × 20 partially balanced lattice design with two replications. Canes were cut into three-budded setts and planted in single-row plots of 5 m × 1.45 m, with 20 cm between plants within a row. Uniform crop management practices such as irrigation, cultivation and fertilization were applied to all entries in the trial, as recommended for the areas. Urea was applied 2.5 months after planting at a rate of 200 kg·ha⁻¹ at Wonji and 400 kg·ha⁻¹ at Metehara. The crop was harvested 20 months after planting, as plant cane takes 18-20 months to mature at the two sugar estates.
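The per-hectare yields reported later are scaled up from this plot area. The following minimal sketch (not from the paper; the plot weight is a hypothetical value) illustrates that conversion, assuming the 5 m × 1.45 m single-row plot is the harvested unit:

```python
# A minimal sketch of scaling plot cane weight to per-hectare yield.
# The 110 kg plot weight is hypothetical; qt = quintal = 100 kg.

PLOT_LENGTH_M = 5.0     # single-row plot length
PLOT_WIDTH_M = 1.45     # row spacing
PLOT_AREA_M2 = PLOT_LENGTH_M * PLOT_WIDTH_M  # 7.25 m^2
M2_PER_HECTARE = 10_000.0

def cane_yield_qt_per_ha(plot_weight_kg: float) -> float:
    """Scale the cane weight of one plot up to quintals per hectare."""
    kg_per_ha = plot_weight_kg / PLOT_AREA_M2 * M2_PER_HECTARE
    return kg_per_ha / 100.0  # 1 quintal = 100 kg

# Example: a 110 kg plot corresponds to ~1517 qt/ha, in the range the
# paper reports for Wonji (1300-1500 qt/ha).
print(round(cane_yield_qt_per_ha(110.0), 1))
```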
Data Collected
Data on quantitative stalk characters (Table 1) were recorded, viz. sprout counts 1 and 2 months after planting (SPC1MAP and SPC2MAP), tiller counts 4 and 5 months after planting (TC4MAP and TC5MAP), stalk count 10 months after planting (STC10MAP), hand refractometer brix reading 10 months after planting (HRBrix10MAP), millable stalk count per hectare (MSCHA), single cane weight (SCW), number of internodes (NOI), internode length (IL), stalk height (SH), stalk diameter (SD), leaf length (LL), leaf width (LW), leaf area (LA), cane yield per hectare (CYHA) and sugar yield in quintals per hectare (SY). Data on juice quality parameters, i.e., brix percent (brix%), pol percent (pol%), purity percent (purity%) and sugar percent (SR%), were also recorded. For every accession, ten plants were used for recording data on quantitative characters, which were recorded on a plot basis. Count data and cane yield were recorded considering all cane stalks from the whole plot. For quantitative leaf characteristic measurements, a procedure developed by [12] was used.
Statistical Analysis
ANOVA
All the quantitative agro-morphological characters and sugar juice quality parameters considered (Table 1) in the study were statistically analyzed as a partially balanced lattice design using the statistical procedures described by [13]. Characters with count data were log-transformed before analysis [13]. ANOVA was first done separately for the two locations. Combined ANOVA was done over locations after the homogeneity of error variance was tested using the F-max method of [14], which is based on the ratio of the larger mean square of error (MSE) from the separate analyses of variance to the smaller mean square of error as:
F-ratio = Larger MSE / Smaller MSE

If the larger error mean square is not three-fold larger than the smaller error mean square, the error variance was considered homogeneous [13].
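As a minimal illustration of this homogeneity check (not part of the original analysis, which used SAS; the MSE values below are hypothetical), the three-fold rule can be expressed as:

```python
# A minimal sketch of the F-max homogeneity check described above,
# assuming the two error mean squares (MSEs) come from the separate
# per-location ANOVAs; the numbers here are hypothetical.

def error_variance_homogeneous(mse_a: float, mse_b: float,
                               threshold: float = 3.0) -> bool:
    """Return True if the larger MSE is less than `threshold` times the
    smaller MSE, i.e. the error variances are treated as homogeneous
    and a combined ANOVA over locations is justified."""
    larger, smaller = max(mse_a, mse_b), min(mse_a, mse_b)
    return (larger / smaller) < threshold

print(error_variance_homogeneous(2.4, 1.1))  # True  -> combine locations
print(error_variance_homogeneous(5.2, 1.1))  # False -> analyze separately
```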
For characters having significant mean differences, the differences between treatment means were compared using Tukey's Studentized Range (HSD) test at the 5% probability level. All statistical analyses and data processing were performed using SAS software V9.
Cluster Analysis
Cluster analysis was carried out using the average linkage method with the appropriate procedure of SAS software V9. Means of each quantitative character were standardised prior to clustering, as suggested by [15], to avoid effects due to differences in scale. The genotypes were grouped into different clusters using Tocher's method as described by [16]. The resulting clusters were subjected to Mahalanobis' D² statistics to assess inter- and intra-cluster divergence.
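A minimal sketch of this workflow in Python is given below (the authors used SAS; the data here are simulated, average linkage stands in for the full Tocher procedure, and the pooled covariance is estimated directly from the trait matrix):

```python
# A minimal NumPy/SciPy sketch: standardize trait means, cluster by
# average linkage, then compute Mahalanobis D^2 between cluster
# centroids. Data are simulated, not the study's.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 21))               # 400 genotypes x 21 traits
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize each trait

labels = fcluster(linkage(X, method="average"), t=20, criterion="maxclust")

cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # inverse pooled covariance

def d2(i: int, j: int) -> float:
    """Mahalanobis D^2 between the centroids of clusters i and j."""
    diff = X[labels == i].mean(axis=0) - X[labels == j].mean(axis=0)
    return float(diff @ cov_inv @ diff)

print(d2(1, 2))
```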
Principal Component Analysis
Principal component analysis (PCA) was used as a data reduction tool to summarise the information from the phenotypic data so that the influence of noise and outliers on the clustering results is reduced. PCA was performed on the traits using SAS software V9 in order to study the relationships among the genotypes and to complement and confirm the grouping obtained through cluster analysis [17] [18].
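A minimal sketch of this PCA step (again with simulated data in place of the SAS analysis) is shown below; it also applies the eigenvalue > 1 rule and the ±0.3 loading threshold used later in the Results section:

```python
# A minimal PCA sketch on a standardized 400 x 21 trait matrix.
# Data are simulated; components are kept by the Kaiser (eigenvalue > 1)
# rule cited in the Results section.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 21))
X = (X - X.mean(axis=0)) / X.std(axis=0)

corr = np.corrcoef(X, rowvar=False)          # PCA on the correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum() * 100
keep = eigvals > 1.0                         # Kaiser criterion
print(f"{keep.sum()} components retained, "
      f"{explained[keep].sum():.1f}% cumulative variance")
# Loadings with |value| > 0.3 on a retained component are read as
# meaningful contributions of that trait.
```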
Analysis of Variance
Analysis of variance results for the 400 genotypes indicated significant differences for all the characters under study (Table 2). All phenotypic traits, including sugar quality traits, showed highly significant variation, revealing a high level of genetic diversity among the genotypes. The existence of such genetic variability among the studied clones demonstrates a favorable situation for practicing the breeding program. This result indicates that there was a significant amount of phenotypic variability and that the genotypes differed from each other with regard to these characters, which opens a way for further improvement through simple selection. Genetic variability in germplasm resources is a prerequisite to practice selection [19] [20]. The relatively large genotypic mean squares indicated that clones differed in their potential for the traits. Significant genotype × location interactions for most of the traits revealed that mean performances of the genotypes were influenced by the locations. This interaction was largely due to changes in the relative ranking of the genotypes across the locations, which suggests that at this stage evaluating sugarcane genotypes in several locations rather than one is advisable.
Comparison of character means of the 5% best selected accessions (Appendix 1) showed that, for most of the agronomic traits, local varieties collected from different geographic regions of the country were superior to the standard varieties B52298 and NCO334, to the mean of commercial cane cultivars (MCV) (Table 3) and to the introduced varieties among the 5% best selected. Though the sucrose recovery percent was relatively higher for the introduced varieties among the best 5% selected, the higher cane yield per plot recorded for the local varieties compensated, giving them superior sugar yield over the standard varieties and the mean of the MCV. The local variety Nech Ageda, collected from Amhara Region, Debub Welo Zone, Borena Wereda, showed the highest sugar yield, with comparative sugar yield advantages of 60.66%, 38.13% and 127.85% over B52298, NCO334 and the MCV, respectively.
This variety had the highest stalk count per plot recorded 10 months after planting, a stage (9-10 months after planting) at which the stalk population stabilizes and the potential number of millable stalks becomes known. The highest cane yield was also recorded for this variety.
Relatively higher tiller counts per plot four and five months after planting were recorded for the local varieties Ye Beskula Shenkora, Nech Kechacha Shenkora/Getr, Moris and Engda, while among introduced varieties CO810, CO991, CP72/2083 and DB386/60 showed higher tiller counts (Appendix 1). The highest millable stalk counts at harvest were recorded for B4425, B45154, CO842, B4906, CO957, Ye Beskula Shenkora, Nech Ageda, Aladi and Moris. With regard to cane yield, among the 5% best selected (20 clones), 18 were local varieties and only two, namely B4425 and N55/805, were introduced varieties. This was also true for single cane weight, where 17 of the 20 selected were local varieties. Relatively higher internode counts were recorded for the local clones, whereas greater internode length was observed in the introduced varieties. Among the 20 best selected (5%) for stalk diameter, 16 were local varieties, for which medium-thick stalk diameters ranging from 3 to 3.5 cm [21] were recorded. The highest and lowest stalk diameters were recorded for the local varieties Kay Sidancho and Nech Ye Abesha Shenkora, respectively. In terms of leaf area, the standard variety NCO334 scored the highest value, followed by the local varieties Ye Kenya Ageda and Nech Shenkora (code 35, as in Appendix 1). Among the best 5% selected, higher values of brix%, pol%, purity% and sugar% were recorded mostly for the introduced varieties. This information helps to determine the genetic variability and the contribution of some morphological traits to cane yield and sucrose recovery, and can largely facilitate the formulation of appropriate selection strategies to develop clones of the best commercial merit suitable for cultivation in different climate zones.
Cluster Analysis
Cluster (segmentation) analysis of the phenotypic traits showed a clear demarcation between sugarcane accessions (Table 4). Furthermore, Table 5 shows differences among clusters by summarizing cluster means for the 21 quantitative traits. Based on these traits, the accessions were grouped into different clusters. The dendrogram divided the accessions into nineteen main clusters and a singleton. The first cluster included 136 genotypes, of which 62 were introduced while the remaining 74 were local clones. This indicates that these local genotypes have close similarity with the group of exotic sugarcane accessions belonging to this cluster. This cluster is characterized by accessions having HRBrix10MAP, number of internodes and leaf length values close to the grand mean. Furthermore, it has brix, pol, purity and sugar percent greater than the grand mean averaged over all clusters. Cluster two consisted of 120 accessions, of which 67 were introduced and 53 were local accessions. The genotypes in this cluster demonstrated values greater than the grand mean for most of the traits, including millable stalk number, cane yield, single cane weight, stalk height, stalk diameter, leaf area, brix, pol, purity and sugar percent, and sugar yield. Genotypes in this cluster could contribute to the future breeding program with regard to these traits. Cluster three contained 80 accessions, of which only six were local accessions collected from different geographic regions of the country. This indicates that these accessions have genetic similarity with the exotic accessions within the cluster. TC5MAP, STC10MAP, HRBrix10MAP, MSCHA, SH, brix%, pol%, purity% and SR% had values greater than the grand mean in this cluster.
Cluster four comprised seventeen accessions, all of which were local. These accessions, though collected from different geographic regions of the country, tended to cluster together, indicating that source of origin is not the criterion for clustering. Notably, these genotypes had 18 of the 23 quantitative traits with means greater than the grand mean averaged over all 20 clusters. Of these traits, TC5MAP, SCW, SH, SD, LL, CYHA and SY had the second largest means of all the clusters. These genotypes would reliably be major contributors to improving these traits in crossing programs. Cluster five consisted of seven accessions, four of which were foreign varieties, namely CP 1/441, M112/34, M377/5 and Mex53/142, introduced from three source countries, i.e., Canal Point, Mauritius and Mexico, respectively. The other three accessions, America, Nech Shenkora/Shenkora Adi and Nech Shenkora, were local collections from three different regions of the country: SNNP, Oromia and Amhara. This cluster had accessions whose stalk height, stalk diameter, leaf width, brix, pol, purity and sugar percent had values greater than the grand mean, and a mean leaf area comparable to the grand mean. Cluster six had eight genotypes, all local except one exotic accession, CO945, from Coimbatore, India. This variety should have close similarity with the local accessions with which it clustered. Accessions in this cluster had mean values greater than the grand mean for number of internodes, stalk diameter, leaf length, leaf width and leaf area. Other traits had means lower than the grand mean. Cluster seven contained four exotic accessions: B45154 and B58230 from Barbados, and CO842 and CO957 from Coimbatore, India. These varieties might share the same parents in their genealogical history, which could be the reason for their clustering together. The genotypes in this cluster showed mean performance greater than the grand mean for tiller counts 4 and 5 months after planting, stalk count 10 months after planting, millable stalk number, cane yield, internode length, stalk height and leaf length. However, they had low single cane weight. Furthermore, they had means lower than the grand mean for all sugar quality parameters.
Cluster eight consisted of five local accessions collected from different parts of the country. No exotic variety clustered with these clones. These accessions demonstrated the shortest internode length, the shortest stalk height, the narrowest stalk diameter, the lowest single cane weight, the narrowest leaf width and the lowest leaf area of all the clusters. They also showed means lower than the grand mean for all the traits, including sugar quality parameters. In terms of sugar yield, they outperformed only the accessions in clusters 15 and 19. These low means caused the separate clustering of these clones, apart from the exotic varieties and the other local clones. They must also have peculiar characters, different from those of the introduced varieties and the remaining locals, which require further study. Results from further studies could reveal important traits that could make them good candidates for the future sugarcane breeding program of the country, to develop improved varieties that fit the different agro-ecologies of the country. Cluster nine consisted of two foreign clones, CO911 and PR980, from Coimbatore, India, and Puerto Rico, respectively, and one local clone, Nech Shenkora, collected from SNNP Region, Amaro Special Wereda/Jijola kebele/Kore village/Cheffa district. This local clone should have characters similar to the exotic genotypes. It might even be identical to one of them, as sugarcane germplasm may have been taken from the germplasm conservation gardens at the Wonji and Metehara sugar estates and transported by local seasonal labourers, who mostly come from SNNP. The accessions in this cluster had the greatest leaf length and the highest leaf area next to NCO334 in cluster 17, the standard commercial variety. They also exhibited means greater than the grand mean for number of internodes and stalk diameter, and for the sugar quality characters brix and pol. Cluster ten included three local varieties, namely Moris, Kay Ageda and Kay Shenkora, collected from SNNP, Semen Mierab Tigray and Debub Tigray, respectively. These local clones are suspected to be the same variety called by different names in different places, as there is human-mediated movement of genotypes. This cluster grouped the accessions with the highest sprout count one month after planting, the tallest stalk height, the widest stalk diameter, the highest cane yield and the highest sugar yield. The mean performance for other characters was also higher than the grand mean. This group of accessions will be very important for improving the most important yield components and cane and sugar yield in the future breeding program. They could also be selected as candidate varieties to be further evaluated and released for commercial purposes.
Cluster eleven consisted of two introduced genotypes, CO961 and M53/263, from Coimbatore, India, and Mauritius. The accessions in this cluster had the widest leaf width, the second longest internode length next to the accessions in cluster 20, the third widest stalk diameter next to the accessions in clusters 10 and 4, and the third largest leaf area next to the clones in clusters 17 and 9. These two accessions also had mean values of cane yield, single cane weight, stalk height, sugar yield and all sugar quality characters greater than the grand mean. Cluster twelve comprised one exotic accession, TDRJAN from Australia, and two local accessions: Nech Ageda, collected from Amhara Region, North Shewa Zone, Kewet Wereda, and Guracha Shenkora, collected from Oromia Region, East Hararghe Zone, Babile Wereda. The shortest internode and leaf lengths characterize the accessions in this cluster. However, they had number of internodes and all sugar quality characters greater than the grand mean, while other characters had values lower than the grand mean. Cluster thirteen consisted of two foreign clones, CO434 and PR1059, introduced from Coimbatore, India, and Puerto Rico, respectively. Two other clones from the same countries also clustered together, in cluster 9. This might indicate that some clones from Coimbatore and Puerto Rico could have the same parentage in their genealogical history. These accessions showed values lower than the grand mean for all the characters evaluated. They also scored the second lowest cane yield.
Cluster fourteen included two accessions from Barbados, B4425 and B4906, and one local collection, Ye Beskula Shenkora, collected from Amhara Region, South Welo Zone, Legambo Wereda. The grouping of the two Barbados varieties with this local clone reveals that there are shared characteristics among them. The accessions in this group were characterized by the highest tiller count 5 months after planting, the highest stalk count 10 months after planting, the largest millable stalk count and the lowest number of internodes. These accessions also gave the third largest cane yield, next to the accessions in clusters 10 and 4. The fourth highest sugar yield was also recorded for this group of accessions, next to those in clusters 10, 4 and 17. They also demonstrated mean values higher than the grand mean for most of the other traits. In the future breeding program of the country, this group of accessions could contribute greatly to improving traits such as tillering ability, millable stalk number and cane yield.
Cluster fifteen had two foreign clones, CO678 from Coimbatore, India, and 93-V1 from Natal, South Africa. These accessions exhibited the lowest sprout counts 1 and 2 months after planting and the lowest tiller count 4 months after planting. They also had the third lowest single cane weight, after the accessions in clusters 8 and 7; the lowest number of internodes, similar to the accessions in cluster 13; and the fourth lowest stalk height. In terms of sugar quality characters, they showed the third lowest brix, pol and sugar percent, next to the accessions in clusters 19 and 7, and the second lowest sugar yield, next to the local clone Burabure Shenkora in cluster 19. Furthermore, for the remaining characters their performance was below the grand mean. In the remaining clusters, 16-20, the accessions were not grouped with any other accessions; each stood individually as a singleton cluster, indicating that they were phenotypically dissimilar from the other accessions.
Cluster sixteen had the single accession CO475, introduced from Coimbatore, India. This variety showed the lowest millable stalk number, the highest single cane weight, the second longest leaf length next to the accessions in cluster 9, the second highest brix and pol next to the accessions in cluster 5, and the third highest sugar percent next to the accessions in clusters 5 and 17. This accession also had mean values greater than the grand mean for sprout count 2 months after planting, stalk diameter, leaf width and leaf area. However, for other characters, mean values lower than the grand mean were recorded. The accession demonstrated good values for sugar quality characters, which could be harnessed for breeding programs. Cluster seventeen consisted of only one accession, the standard commercial variety NCO334, which was introduced from Natal, South Africa. This variety had the largest stalk count 10 months after planting and the highest leaf area. It also had the third largest millable stalk number, next to the accessions in clusters 14 and 7; the fourth highest cane yield; the third widest stalk diameter; and the second widest leaf width, next to the accessions in cluster 11. For the sugar quality parameters, the third highest brix and pol percents, next to those in clusters 5 and 16; the second highest sugar percent, next to the accessions in cluster 5; and the third highest sugar yield, next to the clones in clusters 10 and 4, were recorded. These values for the observed characters must have caused it to stand as a single cluster.
Cluster eighteen consisted of a single local accession, Gende Lega, collected from Oromia Region, West Hararghe Zone, Gubakoricha Wereda, 05 Kebele, Nanofaro district. This accession was characterized by the highest tiller count 4 months after planting. However, this tiller number was greatly reduced when counted 5 months after planting. The highest hand refractometer brix reading 10 months after planting was also recorded for this clone. The highest values recorded for these traits might be the reason this accession clustered as a singleton. For the sugar quality parameters, mean values higher than the grand means were recorded. Other traits showed performance lower than the grand mean averaged over all accessions in the different clusters. Cluster nineteen had the single local accession Burabure Shenkora, collected from Benshangul-Gumz Region, Asosa Zone, Megele 32, Sefera Tabia. This local clone was found in many parts of the country during the collection period. It was characterized by the lowest tiller count 5 months after planting; the lowest cane yield; the lowest brix, pol, purity and sugar percent; and the lowest sugar yield per hectare. It also had the second lowest mean values for sprout counts 1 and 2 months after planting, tiller count 4 months after planting and millable stalk number. Mean values higher than the grand mean were recorded only for single cane weight, stalk diameter, stalk height, leaf length and width, and leaf area.
The low mean values recorded for the important characters mentioned above should be responsible for this clone standing alone as a single cluster. However, this local variety is known for its tolerance to biotic and abiotic stresses and its ability to grow in marginal and drought-prone areas. Therefore, it could be exploited for these traits in the future breeding program.
The last grouping, cluster twenty, consisted of a single exotic accession, B41227. This accession showed the highest sprout count 2 months after planting and the longest internode length. It had mean values greater than the grand mean for most of the characters, including sugar quality parameters and sugar yield. The highest values for sprout count and internode length seemed to be the reason for its clustering as a singleton.
Though the cluster analyses grouped genotypes with greater similarity for agronomic traits, the clusters did not necessarily include genotypes from the same source or origin. In most germplasm resources, a lack of association between agronomic traits and origin has been reported [22] [23]. This information will be helpful in crop breeding for the identification of parents.
As discussed above, based on the cluster means for the different characters given in Table 5, important characters that differentiate each cluster were identified. Crosses involving parents from these genetically divergent clusters are expected to manifest maximum heterosis and generate wide variability in genetic architecture. They are also likely to produce potential recombinants with desired traits [24]. The characters contributing most to the divergence should be given more emphasis for the purpose of further selection and the choice of parents for hybridization. There was high genetic diversity for the quantitative characters in the populations studied. The genetic distances, as measured by the pairwise generalized D² statistics between each pair of clusters, are shown in Table 6. The standardized Mahalanobis D² statistics showed the existence of large genetic distances among clusters. The first exceptionally divergent D² values were obtained between cluster 20 and the rest of the clusters, with D² values ranging from 480 to 5864. The second exceptionally divergent D² values were between cluster 11 and the remaining clusters, with D² values ranging from 480 to 5102. The uniquely high distance values in this case may stem from the presence of highly contrasting character values, which resulted in disproportionately high D² values among the clusters. The maximum genetic distance was found between clusters C15 and C20, with D² = 5864. The second most divergent clusters were C8 and C20, with D² = 5708, and the third were C9 and C20, with D² = 5706.
The fourth and fifth most divergent clusters were C5 and C20, with D² = 5679, and C4 and C20, with D² = 5667, respectively. The highest intra-cluster average D² value (10.60) was for clusters 11, 13 and 15, while the lowest intra-cluster average D² value (2.16) was for cluster 1 (Table 6).
Generally, the results of the cluster and D² analyses showed that local genotypes from the same collection site were often in different clusters and, likewise, accessions from different collection sites often clustered together (Table 4 and Table 6), indicating the possibility of exchange of materials between sites and regions within Ethiopia. Similarly, regardless of their origin, foreign sugarcane cultivars from different countries tended to cluster together and, likewise, accessions from the same foreign country were often in different clusters. Local and exotic genotypes also grouped together in many clusters, which shows there should be some similarity among them. The same phenomenon was reported in sugarcane elsewhere [25] [26] and in sorghum by [27]. This suggests that the genotypes of different locations have genetic similarity and could have been derived from the same breeding material. Similar results were obtained by [7], wherein the progenies of a cross clustered independently of their parents. However, three of the clusters, C4, C6 and C8, contained only local materials, except for the single foreign clone CO945 in C6. These local clones should have unique properties of their own that separate them from the exotic accessions and the other local clones.
Based on the average intra- and inter-cluster distances, one can predict the genetic diversity that exists within and between clusters. Since little information is available on sugarcane in Ethiopia, this information could be used for further planning of experiments using these large genetic resources. It helps to determine the genetic variability and the contribution of some morphological traits to cane yield and sucrose recovery, and could largely facilitate the formulation of appropriate selection strategies to develop clones of the best commercial merit suitable for cultivation in different climate zones. References [28]-[30] derived information on genetic variability, heritability and genetic advance in sugarcane to develop selection strategies. Genetic divergence investigated in germplasm material would be helpful for the selection of important yield-influencing characters [31]-[33].
Principal Component Analysis
In the present study, the PCA grouped the 21 phenotypic characters into 21 components, which accounted for the entire (100%) variability among the studied accessions (Table 7). As [34] stated, components with an eigenvalue of less than 1 should be eliminated so that fewer components are dealt with. Furthermore, [35] suggested that eigenvalues greater than one are considered significant and that component loadings greater than ±0.3 are considered meaningful. Hence, from this study, only the first six components, which had eigenvalues greater than one and cumulatively explained about 79.26% of the total variation among the accessions, are discussed (Table 7).
The first principal component (PC) alone explained 32.39% of the total variation, mainly due to variation in millable stalk count, cane yield, sugar yield and stalk count 10 months after planting. The characters contributing most to the second PC accounted for 16.06% of the total variation and were dominated by traits such as single cane weight, stalk diameter, leaf width, leaf area, brix, pol, purity and sugar percent.
The third PC, with 12.96% of the variation, was composed of leaf length, leaf width, leaf area, brix%, pol% and sugar%. Leaf area showed the most variation among the characters in this PC, with a high positive loading. The fourth PC, with 7.80% of the variance, comprised sprout counts one and two months after planting, number of internodes and internode length. Number of internodes contributed most to the variation in this PC, with a high positive loading.
The eigenvectors of PC5 showed large positive loadings for sprout count two months after planting, followed by internode length. A high negative loading of number of internodes was observed for this PC. Leaf length and leaf area contributed most to the 4.69% of variation explained by PC6.
Single cane weight showed a high negative loading for this PC. The existence of wide phenotypic diversity among the sugarcane accessions studied was further illustrated by the PCA biplot (Figure 1). PCA biplots provide an overview of the similarities and differences between the quantitative traits of the different accessions and of the interrelationships between the measured variables. The biplot demarcated the accessions whose characteristics are most explained by the first two dimensions.
The first and second PCs explained most of the variation among the accessions, revealing a high degree of association among the characters studied. Millable stalk count, single cane weight, stalk diameter, cane yield, sugar yield and the sugar quality parameters brix%, pol% and sugar% showed high positive loadings on these two PCs. Based on the characters loading on the principal components, they could be named the "Yield" and "Quality" components. Reference [36] found four principal components giving rise to 76% of the variation in the data, with the first component comprising juice quality, yield and stalk diameter traits. Reference [37] also found two principal components explaining 88% of the variation, with high loading of yield on Component 1 and quality characters like sugar recovery, pol% and purity% loading well on Component 2.
In studies conducted on the same genotypes, the characters responsible for the high variation in the first two PCs in the present study were also shown to have higher heritability and genetic advance, which made them suitable criteria for simple selection [38]. The same report also indicated that these characters showed significant genotypic correlations.
Conclusion
All quantitative phenotypic traits, including sugar quality traits, showed highly significant variation, revealing a high level of genetic diversity among the genotypes and opening a way for further improvement through simple selection. This study suggests that the important characters responsible for diversity in the sugarcane genotypes can be grouped into two principal components, namely "Yield" and "Quality", with "Yield" traits being comparatively more important than "Quality". Similarly, the 400 genotypes, clustered by high mean values of various traits, could be exploited for improvement in yield and quality characteristics either through selection or through hybridization. Clusters having high mean values for yield could be selected for yield per se as well.
Table 1. List of quantitative characters recorded in the study.

Table 2. Analysis of variance for morphological and juice quality traits of sugarcane tested over two locations (Wonji and Metehara, 2012/2013).

Table 3. Mean of 21 quantitative characters* for 10 commercial varieties in Ethiopian sugar estates.

Table 4. Clustering of 400 sugarcane genotypes into twenty clusters using means of 21 quantitative characters (numbers refer to genotype codes; see Appendix in Supplementary Material available online at http://dx.doi.org/10.4236/ajps.2016.710139).

Table 7. Principal component analysis of 21 quantitative characters in 400 sugarcane genotypes showing eigenvectors, eigenvalues, and individual and cumulative percentages of variation explained by the first six PC axes.
Severe Acute Respiratory Syndrome Coronavirus 2 Viral RNA Load Status and Antibody Distribution Among Patients and Asymptomatic Carriers in Central China
This study aimed to monitor severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) viral loads and specific serum antibodies (immunoglobulin [Ig] G and M) among confirmed patients and asymptomatic carriers identified among returning healthy travelers. Throat swab, sputum, and stool samples from 57 hospitalized coronavirus disease (COVID-19) patients and 8 asymptomatic carriers, among 170 returning healthy travelers, were tested using reverse-transcription real-time polymerase chain reaction. SARS-CoV-2 IgM/IgG antibodies were detected via serum chemiluminescence assay. Sequential results showed higher viral RNA loads in the throat, sputum, and stool samples at 3–12 and 6–21 days after symptom onset among severely ill COVID-19 patients. Shorter viral habitation time (1–8 days) was observed in the oropharyngeal site and intestinal tract of asymptomatic carriers. The IgG and IgM response rates were 19/37 (51.4%) and 23/37 (62.2%) among the 29 confirmed patients and 8 asymptomatic carriers, respectively, within 66 days from symptom or detection onset. The median duration between symptom onset and positive IgG and IgM results was 30 (n=23; interquartile range [IQR]=20–66) and 23 (n=19; IQR=12–28) days, respectively. Of the 170 travelers returning to China, 4.7% (8/170) were asymptomatic carriers within 2 weeks, and the IgG and IgM positivity rate was 12.8% (12/94). IgM/IgG positivity confirmed 3 suspected SARS-CoV-2 cases despite negative results for SARS-CoV-2 RNA. Compared with other respiratory viral infectious diseases, COVID-19 has fewer asymptomatic carriers, lower antibody response rates, and a longer antibody production duration in recovered patients and the contacted healthy population. This is an indication of the complexity of COVID-19 transmission.
INTRODUCTION
Coronavirus disease 2019 (COVID-19) was officially declared a pandemic and public health emergency of international concern by the World Health Organization, indicating that it may result in substantial morbidity and mortality (Jones, 2020). As of February 16, 2021, more than 109,820,928 confirmed cases of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections have been reported from 209 countries, including 101,569 cases in China. In total, 2,417,402 patients have died because of COVID-19 (Peeri et al., 2020). However, data on viral load kinetics in confirmed cases and carriers and the production time and distribution of antibodies among patients and the contacted healthy population remain scarce to date. Currently, there are no data on the proportion of asymptomatic carriers and antibody distribution among the returning healthy travelers from endemic areas. Thus, this study aimed to monitor SARS-CoV-2 viral RNA loads and specific serum antibodies (immunoglobulin [Ig] G and M) in confirmed patients and asymptomatic carriers among returning healthy travelers.
Study Subjects and Design
This was a retrospective case-control study on the viral RNA load and antibodies in confirmed COVID-19 patients and asymptomatic carriers. We evaluated all hospitalized COVID-19 patients (n=57) and asymptomatic carriers (n=8) who were admitted to Henan Provincial People's Hospital between January 2, 2020, and April 29, 2020, and were diagnosed with SARS-CoV-2 infection by positive nucleic acid and antibody tests. In addition, 170 healthy travelers who were returning to China were also evaluated. Patients with negative real-time reverse-transcription polymerase chain reaction (rRT-PCR) results or no SARS-CoV-2-specific IgM and IgG, with a definite diagnosis of other diseases, and with a negative SARS-CoV-2 nucleic acid or antibody test were excluded from this study. Case definitions of confirmed human infection and asymptomatic carriers of SARS-CoV-2 were in accordance with the latest diagnostic criteria (version 7) issued by the Chinese National Health Committee (5). These included patients who had a contact history with confirmed or suspected COVID-19 patients and who tested positive for SARS-CoV-2 nucleic acid in throat swab, sputum, or stool samples, or tested positive for IgM/IgG in serum. Patients who additionally had abnormal chest computed tomography (CT) findings together with fever and respiratory symptoms were defined as confirmed cases. Among the patients with a contact history who tested positive for SARS-CoV-2 RNA, those who had normal chest CT findings and no fever or other respiratory symptoms within 14 days of isolation were defined as asymptomatic carriers.
In total, 65 individuals were evaluated in this study, including 57 confirmed COVID-19 patients and 8 asymptomatic carriers. Among the 57 confirmed COVID-19 patients, 54 patients were local residents and 3 patients who had returned to China from Iran and France were confirmed to have COVID-19 based on IgM or IgG positivity despite negative nucleic acid test results. Notably, 3 patients and 8 asymptomatic carriers were among the 170 returning travelers.
This study was approved by the Institutional Ethics Board of Henan Provincial People's Hospital (20190050) and was conducted in accordance with the Declaration of Helsinki. The ethics committee waived the requirement for written informed consent for the patients' participation in this study.
Diagnosis and Data Collection
Throat swab, sputum, and stool samples from suspected cases were collected for SARS-CoV-2 testing using rRT-PCR assays (Shanghai Zhijiang Biotechnology Ltd., China). Cycle threshold (Ct) values of ≤44 from the rRT-PCR were considered positive. The duration of continuously positive Ct values was taken as the number of days of viral habitation time in each patient (Padoan et al., 2020). Viral RNA load was presented as the RNA copy number of SARS-CoV-2. Ct values from rRT-PCR were converted into RNA copy numbers of SARS-CoV-2 using a reference method as follows: Ct values are inversely related to the viral RNA copy number, with Ct values of 30.86, 28.68, 24.56, and 21.48 corresponding to 1.5×10⁴, 1.5×10⁵, 1.5×10⁶, and 1.5×10⁸ copies/mL, respectively (Zou et al., 2020). Negative samples were denoted with a Ct value of 45, which was below the limit of detection. Additionally, blood specimens for IgM and IgG detection using a chemiluminescence assay (Beijing Beier Biotechnology Ltd., China) were considered positive at a cut-off of ≥8 U/mL (Zou et al., 2020). Among the 65 patients, only 37 underwent IgM/IgG testing (23 of whom tested positive) because the antibody test was not available in our hospital until February 24, 2020. Information regarding the dates of illness onset, visits to clinical facilities, and hospital admissions was collected from clinical records. The incubation period was defined as the time from exposure to illness onset and was estimated among patients who could provide the exact dates of close contact with individuals who had confirmed or suspected SARS-CoV-2 infection. Throat swab, sputum, and stool samples were collected and tested as part of the standard point-of-care diagnostic workup. However, these samples were not collected and tested at the same time. According to the latest Chinese COVID-19 diagnosis and treatment guidelines (version 7, released on March 4, 2020), in suspected patients whose diagnosis is confirmed by nucleic acid testing, a throat swab is the first sample to be collected, followed by sputum and stool. Additionally, during the patient's treatment, physicians generally decide on the number and interval of sample collections based on the patient's response to treatment and recovery status. However, patients are required to have two consecutive negative nucleic acid test results with an interval of >24 h before hospital discharge. Available sequential Ct values from the rRT-PCR assays and IgM and IgG test results for those included were obtained from the Henan Provincial People's Hospital laboratory information system. Dates of disease onset, hospitalization, and classification of COVID-19 severity were recorded. The date of illness onset was defined as the day when any symptoms were noticed by the patient and later confirmed by a physician. Severity classification was defined using the diagnostic and treatment guidelines for SARS-CoV-2 issued by the Chinese National Health Committee (version 7). Patients were defined as having severe COVID-19 if they met one of the following criteria: 1) respiratory distress with respiratory frequency ≥30/min; 2) pulse oximeter oxygen saturation ≤93% at rest; and 3) oxygenation index (arterial partial pressure of oxygen/inspired oxygen fraction) ≤300 mmHg.
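As an illustration of this conversion (a sketch only, assuming the usual log-linear relation between Ct and template copies rather than the assay's exact calibration curve), the reference points above can be interpolated as follows:

```python
# A minimal sketch of converting a Ct value to an approximate RNA copy
# number by log-linear interpolation between the reference points
# quoted above (Zou et al., 2020). Illustration only, not the exact
# calibration used by the assay.
import numpy as np

ct_ref = np.array([30.86, 28.68, 24.56, 21.48])       # reference Ct values
copies_ref = np.array([1.5e4, 1.5e5, 1.5e6, 1.5e8])   # copies/mL

# Fit log10(copies) as a linear function of Ct.
slope, intercept = np.polyfit(ct_ref, np.log10(copies_ref), 1)

def ct_to_copies(ct: float) -> float:
    """Approximate SARS-CoV-2 RNA copies/mL for a given Ct value."""
    return 10 ** (slope * ct + intercept)

print(f"Ct 25 ~ {ct_to_copies(25.0):.2e} copies/mL")
```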
Statistical Analyses
Continuous variables are summarized as means and standard deviations if they were normally distributed or as medians with interquartile ranges (IQRs) if they were non-normally distributed. Categorical variables are presented as the percentages of patients in each category. Categorical variables were compared using chi-square or Fisher's exact tests, and continuous variables were compared using Student's t-tests or Mann-Whitney U tests, according to their distribution. The Pearson correlation coefficient (r value) was used to describe the correlation between continuous variables, including Ct values and the IgG/M responses of patients and asymptomatic carriers. All statistical analyses and graphs were generated and plotted using GraphPad Prism version 8.00 software (GraphPad Software Inc.) or SPSS software version 20.0 (IBM Corp., Armonk, NY, USA). P-values <0.05 were considered statistically significant.
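The following minimal sketch reproduces the flavor of these comparisons on simulated data (SciPy stands in for the SPSS/GraphPad analyses the authors used; none of the numbers are the study's):

```python
# A minimal sketch of the comparisons described above, on simulated data:
# Mann-Whitney U for a non-normally distributed continuous variable,
# chi-square for a 2x2 categorical table, and Pearson's r for correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
severe = rng.exponential(scale=12, size=30)   # e.g. days of viral positivity
mild = rng.exponential(scale=4, size=23)
u, p_mw = stats.mannwhitneyu(severe, mild)
print(f"Mann-Whitney U={u:.1f}, p={p_mw:.4f}")

table = np.array([[20, 8], [5, 5]])           # positive/negative by group
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square={chi2:.2f}, p={p_chi:.3f}")

x = rng.normal(size=40)
y = 0.7 * x + rng.normal(scale=0.5, size=40)
r, p_r = stats.pearsonr(x, y)
print(f"Pearson r={r:.2f}, p={p_r:.4f}")
```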
Characteristics of the Study Subjects
Of the 57 assessed patients and 8 asymptomatic carriers among the 170 healthy traveling returnees, 7 patients required intensive care unit (ICU) admission. Of them, 6 died of severe disease. The remaining 58 patients had mild-to-severe illness or were asymptomatic carriers. Among the 36 severe patients, the mean (standard deviation) age was 60.4 years (16.5), 55.4% were men, and 54 (83.1%) patients had at least one coexisting condition. The mean time from symptom onset to admission in severe patients was 11.6 (8.4) days. The mean length of hospitalization of severe patients was 11 (6.6) days, and the average time taken to obtain a positive PCR result was 13.0 (8.3) days. Furthermore, 29 patients had a history of travel to Hubei Province or a contact history with confirmed patients, while 3 patients and 8 asymptomatic carriers were among the 170 overseas returnees during the COVID-19 outbreak. Table 1 shows this information in more detail.
Viral RNA Loads in Throat Swab, Sputum, and Stool Samples
Ct values were analyzed in a total of 297 samples, including 185 throat swabs, 56 sputum, and 9 stool samples, collected from the 65 patients. The viral RNA loads (inversely related to Ct values) detected 3-12 days after symptom onset were higher than those detected more than 12 days after symptom onset. The viral RNA loads detected in the sputum, throat, and stool were higher among severely ill patients than among those with mild disease and asymptomatic carriers. The viral RNA loads in the throat swab and sputum samples peaked approximately 3-12 days after symptom onset. The Ct values ranged from 34 to 36 (10⁵-10⁸ copies/mL; Figures 1A, B) (Zou et al., 2020). Notably, viral RNA was detected in stool samples from 4 symptomatic patients and 1 asymptomatic carrier (Figure 1C). The viral RNA loads in stool samples peaked at approximately 6-21 days after symptom onset, with Ct values ranging from 23 to 33 (10⁵-10⁸ copies/mL). Among the severely ill patients, the viral RNA loads were higher in the throat swab and sputum samples, and this lasted 33-66 days (Figure 1D). Unexpectedly, we found that 8 asymptomatic carriers, on different return flights from Iran, France, Cambodia, and Thailand between March 8, 2020, and April 29, 2020, had positive nucleic acid results in the oropharyngeal site and intestinal tract lasting 1-8 days, with Ct values ranging from 33 to 34 (10⁵-10⁸ copies/mL) (Figure 1A).
Correlation and Comparison of Ct Values Among Sputum, Throat Swab, and Stool Samples
Available sequential Ct values at 1-day intervals were used to determine the correlation of the viral RNA loads among the throat swab, sputum, and stool samples from the 65 confirmed patients and carriers. The viral RNA loads were significantly correlated between the throat swab and sputum samples (n=28 pairs, R=0.8018, p=0.0088; Figure 2A). Similarly, the viral RNA loads were also significantly correlated between the stool and sputum samples (n=8 pairs, R=0.9621, p=0.0389; Figure 2B) and between the stool and throat swab samples (R=0.98, p=0.0156; Figure 2C). Meanwhile, the viral RNA loads and positive PCR duration differed among the sputum, throat swab, and stool samples. Stool samples had higher viral RNA loads and shorter positive PCR duration than did sputum and throat swab samples (Figure 1D). However, there were no significant differences in the viral RNA loads or positive PCR duration between the sputum and throat swab samples from the 27 patients who had paired sputum and throat swab samples (Figures 1D and 2D).
Correlation and Comparison of Viral Duration for Patients by Ward and Severity Classification
To examine viral presence, we compared the duration in days of positive Ct values according to the type of ward and severity classification of the patients. The duration of available positive Ct values (47 patients and 8 asymptomatic carriers) in the sputum, throat swab, and stool samples was longer for patients in the ICU (n=7) than for those in the general ward (n=46) (median: 13 days [IQR=9.5-25.25 days] vs. median: 6.5 days, p=0.0396, Figure 3A). In addition, the duration of positive Ct values was longer among severe cases (n=30) than among mild cases and asymptomatic carriers (n=23) (median=12 [IQR=8-18.85 days] vs. median=2 [IQR=1-4.25 days], p<0.001, Figure 3B). There was a mutual positive linear relationship among the number of days with a positive Ct value, hospitalization days, and days from symptom onset to SARS-CoV-2 detection (Figures 3C, D).
Characteristics of Asymptomatic Carriers and Confirmed COVID-19 Cases Based on Antibodies
Among the 170 returning healthy travelers, 12 individuals had serum samples that tested positive for SARS-CoV-2-specific IgM or IgG antibodies. In addition, throat swab samples from eight individuals were positive for SARS-CoV-2 on rRT-PCR; the positive antibody and nucleic acid results were from different individuals among the 170 returning cases. The eight individuals with positive rRT-PCR results had no typical signs or symptoms of SARS-CoV-2 infection, such as fever or cough, within the 14-day mandated quarantine period. Their chest CT scans showed normal imaging features (Figure 4A), and their laboratory test results were within normal ranges (Table 2). These 8 (4.7%) individuals were diagnosed as asymptomatic carriers. The IgM and IgG positivity rate was 12.8% (12/94). Among the 56 patients, 3 suspected patients with both positive IgM and IgG results but negative rRT-PCR results were diagnosed with COVID-19 according to the latest diagnostic criteria (version 7). These 3 individuals had signs of infection, abnormal laboratory test results (Table 1), and abnormal findings on CT (Figure 4B).
Distribution of IgM and IgG Antibodies Among Confirmed Patients and Carriers
Among the 29 antibody-positive patients, only 55.2% (16/29) and 58.6% (17/29) were IgG and IgM positive, respectively, within 66 days from symptom onset. Meanwhile, 3 of the 8 asymptomatic carriers had positive serum IgM and IgG results. The sensitivity of rRT-PCR for COVID-19 diagnosis in the 66 days following symptom onset was 94.6% (53/56), which was significantly greater than the 62.2% (23/37) sensitivity of the antibody test (χ² = 16.95, p<0.001). Of the 29 confirmed patients and 8 asymptomatic carriers among the 170 healthy traveling returnees, 20 patients had positive antibody test results during the 66 days from symptom onset and 3 asymptomatic carriers showed an IgM antibody response within 14 days. The median duration from symptom onset to IgG response was 30 days (IQR=20-66 days), while it was only 23 days (IQR=12-28 days) for IgM-positive patients and asymptomatic carriers.
Comparison of the Duration to Positive IgG and IgM Results and Positive Ct Value
We compared the duration until positive IgM and IgG results were detected among the available 29 patients and 8 asymptomatic carriers and found a significant difference in this duration (p=0.004, Figure 5A). However, there was no significant difference between the duration until a positive IgM result and the duration of a positive Ct value (Figures 5B, C). Moreover, the duration of a positive Ct value was correlated with the duration to IgG response onset (R=0.6495, p=0.0163; Figure 5D). The IgM or IgG positivity rate was higher among severely ill patients than among mild cases and asymptomatic carriers: 71.4% (20/28) vs. 50% (5/10) (χ² = 8.828, p=0.002).
DISCUSSION
There are currently limited data on the proportion of asymptomatic carriers and the antibody distribution among healthy travelers returning from endemic areas. In this study, asymptomatic infections accounted for 4.7% (8/170) of the returning cases among the healthy population, and the antibody production rate was 12.8% (12/94). This study presents the viral RNA load kinetics and the distribution of antibodies in patients and asymptomatic COVID-19 carriers in central China. We speculated that during the 2 weeks following disease onset, disease transmissibility was higher. With recovery onset, the production rate of the antibody response increased to 62.2% (23/37) at 66 days after symptom onset. These data indicate that, compared with other known respiratory viral infectious diseases such as hand-foot-and-mouth disease, mumps, and rubella, this emerging infectious disease caused by SARS-CoV-2 has fewer asymptomatic carriers and generates a lower antibody response rate among healthy contacts. Moreover, a longer duration is required for antibody production among both recovered patients and healthy contacts. Further, it is unclear whether the IgG antibody is protective, indicating the complexity of COVID-19 transmission. Therefore, we believe that this new virus may have just started spreading from animals to humans and may persist in humans for a long time. To the best of our knowledge, this is the first report of 3 confirmed COVID-19 cases detected using specific IgM or IgG antibody tests on serum specimens despite negative rRT-PCR results from throat swab samples among Chinese nationals who returned from endemic countries. Moreover, this is the first report of 8 (4.7%) asymptomatic carriers diagnosed within 2 weeks after returning to China among the same population. A positive IgM result indicates a recent infection with SARS-CoV-2, while a positive IgG result indicates that the body has begun to establish an immune defense. By testing patients and carriers for IgM and IgG antibodies and identifying the time points at which they start producing antibodies, it is possible to monitor the extent to which COVID-19 spreads and the infection duration (Erensoy, 2020). A previous report showed that rRT-PCR assays could be used to test throat swab samples to detect asymptomatic carriers with a travel history who later transmitted the infection to their contacts (Kannan et al., 2020; Lai et al., 2020). To better manage asymptomatic infections, the Chinese government has stipulated that from April 1, 2020, health authorities should report the daily number of new cases and outcomes of asymptomatic carriers nationwide. On February 18, 2021, there were 338 cases of asymptomatic carriers in China; among them, 282 were returning asymptomatic carriers. However, the proportion of asymptomatic carriers and the rate of antibody production among healthy contacts are unknown. Our findings reinforce that, for the identification and contact screening of individuals traveling from epidemic countries, it is important to conduct joint detection of SARS-CoV-2 using respiratory samples for nucleic acid testing and blood samples for IgM or IgG antibody testing (Bai et al., 2020).
Consistent with the findings of previous studies, the virus was detected in stool specimens in this study in addition to samples from the upper respiratory tract (Won et al., 2020). Diagnostic and treatment guidelines recommend detection of SARS-CoV-2 via throat swabs using nucleic acid testing. A stool sample for nucleic acid testing or a blood sample for specific IgM or IgG antibody detection should be obtained from patients highly suspected of COVID-19 but with continuously negative nucleic acid test results from throat swabs.
Unlike SARS-CoV and Middle East respiratory syndrome coronavirus infections (Chafekar and Fielding, 2018; Xu K. et al., 2020), the SARS-CoV-2 viral RNA load is highest during the early phase of the illness and then continues to decrease until the end of the second week. In severe cases, a high viral RNA load can last up to 2 months. The duration of the virus infection is positively correlated with the disease severity and symptom duration, suggesting that we should detect, diagnose, and isolate patients as early as possible to prevent community transmission and mortality. The days of positive Ct values correlated linearly with hospitalization days and with days from symptom onset to nucleic acid detection, respectively. There was a significant difference in the duration of positive Ct values between severe and mild patients and asymptomatic carriers; similarly, a significant difference in this duration was observed between patients in the ICU and those in the general ward. The duration of positive Ct values was related to the hospitalization duration and the duration of symptoms. This study has some limitations. First, the patients may not be representative of the general population of COVID-19 patients in China (Wu and McGoogan, 2020). Second, we cannot estimate the time point at which these patients were exposed to the virus and when viral shedding via respiratory secretions and stool started. Third, the virus was not cultured from respiratory secretions and stool specimens because we do not have a Biosafety Level 3 laboratory in our hospital. Finally, there were no sequential IgM or IgG antibody distribution results available for the SARS-CoV-2-infected patients throughout the duration of their illness. The IgG and IgM positivity rates only accounted for 51.4% and 62.2% of the 29 confirmed patients and 8 carriers within 66 days following symptom onset, respectively. The IgM antibody is known to be produced in the early stages of an infectious disease, whereas the IgG antibody is produced during the recovery period (Kam et al., 2020). We found that the median duration from symptom onset to IgG and IgM positivity was 30 and 23 days, respectively. This indicates that in COVID-19, IgG antibody production begins later than IgM production (a median of 30 days versus 23 days). More samples should be observed to confirm this phenomenon (Amanat et al., 2020).
In conclusion, we found relatively few asymptomatic COVID-19 carriers among returning healthy travelers, and relatively low rates of antibody production among recovered patients and healthy contacts. These findings indicate the complexity of COVID-19 transmission and suggest that this infectious disease is likely to persist in humans for a long time.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Ethics Board of Henan Provincial People's Hospital (20190050). The ethics committee waived the requirement of written informed consent for participation.
AUTHOR CONTRIBUTIONS
YY and HW designed the study, analyzed the data and prepared the manuscript. JinZ and WL contributed to the collection and interpretation of the laboratory and clinical data. NJ and JX analyzed the antibody data of patients and carriers. GL, YL, SW, YW, and LL were involved in the project management and organizational work. BM and JiaZ collected data and EF reviewed the manuscript. All authors contributed to the article and approved the submitted version.
|
v3-fos-license
|
2021-10-14T05:21:13.355Z
|
2021-10-01T00:00:00.000
|
238743394
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/18/19/10408/pdf",
"pdf_hash": "a224a36c081fd672a0f8da2a13475729829d4dd9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43331",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"sha1": "a224a36c081fd672a0f8da2a13475729829d4dd9",
"year": 2021
}
|
pes2o/s2orc
|
Prevalence of and Factors Associated with Club Drug Use among Secondary Vocational Students in China
To understand the prevalence of and factors associated with club drug use among Chinese secondary vocational students, a nationally representative survey was conducted. A multistage cluster sampling strategy was employed to select participants. A total of 9469 students from eleven secondary vocational schools in five cities completed self-reported questionnaires, which included information on club drug use, sociodemographic variables, individual factors, as well as peer- and family-related factors. The data were analyzed separately for female and male students with Poisson regression models. The overall lifetime prevalence of club drug use was 2.7% (258/9469), and male students had a higher prevalence than female students (3.5% vs. 1.9%, p < 0.001). Female and male students shared four risk factors (i.e., having ever smoked, perceiving social benefit expectancies, peer drug use and perceiving peer’s approval of drug use) and one protective factor (i.e., having medium or high levels of refusal skills) for club drug use. Moreover, family drug use and having a part-time job were two additional independent risk factors for club drug use among male students. These findings indicate that the problem of club drug use among Chinese secondary vocational students is worthy of attention. The prevention of club drug use should address multiple risk and protective factors at the individual, peer and family levels.
Introduction
Drug use is a serious public health problem worldwide. As shown in the World Drug Report 2020 [1], approximately 269 million people, accounting for 5.4% of the global population aged 15-64 years, had used drugs (including opioids, cannabis, ecstasy, methamphetamine, etc.) in 2018. Globally, drug use resulted in 585,000 deaths and 42 million years of "healthy" life lost in 2017 [2]. Moreover, the global burden of disease attributable to drug use was 1.8%, and drug use ranked sixth in terms of disease burden among young people aged 10-24 years in 2019 [3]. Furthermore, according to a report on the Chinese drug situation in 2019, the cumulative number of registered drug users reached 2.9 million, accounting for 0.16% of the total Chinese population; 49% of them were under 35 years old and 0.3% were under 18 [4].
Adolescence is an important period of physical, cognitive and emotional development, with robust behavioral, morphological, hormonal, and neurochemical changes [5]. It is also a period of vulnerability to substance use, with an especially high risk of initiation [6]. As shown in the European School Survey Project on Alcohol and Other Drugs (ESPAD), the lifetime prevalence of illicit drug use was 17.0% among European students aged 15-16 in 2019, with 16.0%, 2.3%, 1.7% and 0.7% for cannabis, ecstasy, methamphetamine and gamma-hydroxybutyrate (GHB), respectively [7]. According to the results of the Global School-Based Student Health Survey (GSHS) from different regions around the world, the prevalence of past-month cannabis use ranged from 3.1% to 15.5%, and the prevalence of lifetime amphetamine use ranged from 1.0% to 14.5% among young people aged 13 to 17 years [8][9][10][11][12]. Jia et al. reviewed 72 studies and reported that the pooled prevalence of illicit drug use among students in mainland China was 2.1%, ranging from 0.4% to 4.2% across provinces [13].
Club drugs are a diverse group of recreational drugs that are used primarily by teenagers and young adults at raves, dance parties, nightclubs, and concerts [14]. Club drug use has become increasingly common among young Chinese people [15,16]. In China, five popular club drugs are Ketamine, methamphetamine (MA), Ecstasy (MDMA), 'Magu' pills (capsules which usually mix MA with caffeine) and GHB [17,18]. Club drugs can cause substantial physical and mental damage, including vomiting, amnesia, delirium, aggression, anxiety, depression, suicidal ideation, and psychotic episodes [19][20][21]. Their use is also closely correlated with high-risk sexual behavior and HIV transmission [17], as well as sexual assaults [22]. Moreover, their use may elevate the risk of violence, injuries and aberrant driving [23]. Most seriously, an overdose can be fatal [22,24].
To address these adverse consequences, it is imperative to take measures to prevent club drug use. Identifying the risk and protective factors associated with club drug use is crucial for the development of prevention programs [25,26]. Previous studies [6,16,27,28] have established relationships between a series of correlates and drug use behavior. These correlates include sociodemographic factors (e.g., gender, age, socioeconomic status), lack of knowledge about drugs, personality traits (e.g., impulsivity, sensation-seeking), peer drug use, family factors (e.g., family drug use, parental monitoring), school factors (e.g., neglect of drug prevention education, academic pressure), social environmental factors (e.g., availability of drugs, subculture) and circadian rhythm.
Vocational education is an important part of upper secondary education in China. Chinese secondary vocational students receive a three-year vocational/technical curriculum after graduating from junior high school [29]. The Ministry of Education of the People's Republic of China announced that there were ten thousand secondary vocational schools with 15.8 million students in 2019, accounting for 39.5% of senior high school students [30]. Though most students in secondary vocational schools were generally 15-18 years old, there were also a few older students owing to suspension and return to school [31]. Previous studies [31][32][33][34] have shown that secondary vocational students had a higher prevalence of illicit drug use than students in other types of schools. Nevertheless, studies of club drug use among Chinese secondary vocational students are limited. To our knowledge, only a limited number of studies have reported on the prevalence and associated factors of club drug use among Chinese secondary vocational students [34][35][36], and their samples were all regional. Hence, the results from this prior research do not completely represent the current status of club drug use among secondary vocational students in China. The present study is positioned to fill this knowledge gap, with the goal of understanding the lifetime prevalence of club drug use among Chinese secondary vocational students and determining the risk and protective factors using a nationally representative sample.
Participants
Data were collected from September 2013 to December 2014 among Chinese secondary vocational students. A multistage cluster sampling strategy (see Figure 1) was utilized to select a nationally representative sample. In stage 1, five metropolises (i.e., Ningbo, Chongqing, Shenzhen, Taiyuan and Wuhan) were purposively selected from the eastern, western, southern, northern and central areas of China, respectively. In stage 2, eleven secondary vocational schools were purposively selected from the five cities, including three schools from Wuhan and two schools in each of the other four cities. In stage 3, students from all or randomly selected classes in each school were recruited to participate in the survey. Only students in the 10th and 11th grade were recruited because students in the 12th grade were not in school due to internships. Overall, a total of 10,803 students were recruited from all 302 selected classes. A total of 523 students (4.8%) were excluded: 180 (1.7%) refused and 343 (3.2%) were not in school. This resulted in 10,280 students participating in the survey, yielding a response rate of 95.2%. A total of 9469 respondents (92.1%) provided valid data, consisting of 4562 female students and 4907 male students, with an average age of 17 years.
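The recruitment arithmetic above can be checked directly; the short sketch below reproduces the reported response and validity rates from the counts in the text.

    # Counts reported in the text
    recruited, refused, absent, valid = 10803, 180, 343, 9469

    participated = recruited - refused - absent               # 10,280 students
    print(f"response rate: {participated / recruited:.1%}")   # 95.2%
    print(f"valid-data rate: {valid / participated:.1%}")     # 92.1%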
Measures
In the present study, a battery of questions, including self-reported club drug use, sociodemographic variables, individual factors, and peer- and family-related factors, was completed by students. Club drug use was measured by the following question with five options (1 = never, 2 = tried them, but do not use them now, 3 = a few times monthly, 4 = a few times weekly, 5 = daily): "How often (if ever) do you use the drugs listed below?" These drugs included Ketamine, MA, MDMA, 'Magu' pills and GHB [37]. Lifetime club drug use was defined as having ever used any of these five drugs. Sociodemographic variables included gender (female/male), age (dichotomized into <18 or ≥18 years), ethnicity (coded into Han Chinese or minorities), grade (10th or 11th), residence (coded into rural or urban) and living with parents (coded into yes or no). Socioeconomic status (SES) was evaluated only by the occupations of the parents. Two questions separately asked about paternal and maternal occupation, each with twelve options, which were then categorized into five levels (1 to 5), with a higher value indicating higher SES [38]. We took the higher value from the two responses to represent SES, which was finally coded into low (level 1), medium (levels 2-4) and high (level 5).
Part-time job experience was measured by "Have you ever done a part-time job for more than one month?" with a yes/no response. Expenses per month were categorized into <1000 or ≥1000 Yuan. Academic achievement was measured by "What were your average grades last semester?", with responses coded into three levels (<60, 60-79, ≥80). Smoking behavior was measured by "How often (if ever) do you smoke a cigarette?", with the same five options as club drug use; lifetime smoking was coded as yes/no. Social benefit expectancies assessed positive beliefs about club drug use with seven items [39,40], e.g., "Adolescents who use club drugs have more friends". Responses were: 1 = strongly disagree, 2 = disagree, 3 = neither disagree nor agree, 4 = agree, 5 = strongly agree. An item was scored 0 if students chose response 1 or 2, and scored 1 for responses 3 to 5. Perceiving social benefit expectancies towards club drugs was then dichotomized into yes (total score ≥ 1) or no (total score = 0). Refusal skills were assessed by "How likely would you be to use the following skills when someone offers you a club drug?". Five kinds of skills were included, i.e., directly saying 'no', telling them you do not want to use it, changing the subject, suggesting other activities, and making up an excuse and leaving [41]. Each skill was rated on a 5-point scale from 1 (definitely would) to 5 (definitely would not). An item was scored 1 if students chose "definitely would" and 0 otherwise. The variable refusal skills was then coded into none (total score = 0), medium (total score = 1-4) and high (total score = 5).
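The dichotomization rules just described translate directly into code. The sketch below implements them in Python; the function names and example responses are ours, not from the study's codebook.

    def perceives_social_benefit(responses):
        """Seven items, each rated 1-5; responses 3-5 score 1, and a total >= 1 means 'yes'."""
        total = sum(1 for r in responses if r >= 3)
        return "yes" if total >= 1 else "no"

    def refusal_skill_level(responses):
        """Five skills, each rated 1-5; only 1 ('definitely would') scores a point."""
        total = sum(1 for r in responses if r == 1)
        if total == 0:
            return "none"
        return "high" if total == 5 else "medium"

    print(perceives_social_benefit([1, 2, 4, 1, 1, 1, 1]))  # "yes" (one item rated >= 3)
    print(refusal_skill_level([1, 1, 2, 1, 1]))             # "medium" (4 of 5 skills)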
Based on a previous study [42], peer drug use was measured by "How many of your friends use club drugs?", with five options ranging from 1 (none) to 5 (almost all). Perceiving peer's approval of drug use was measured by "What do you think your friends' attitude would be if you tried club drugs?" with five options ranging from 1 (strongly disapprove) to 5 (strongly approve). Family drug use was measured by "How many of your family members use club drugs?" with the same options as peer drug use. The responses to these three variables were finally coded into yes or no.
Ethics Statement and Data Collection
The study was reviewed and approved by the Medical Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology, and we obtained Institutional Review Board (IRB) approval for the conduct of this study. Furthermore, at each of the eleven participating schools, we obtained agreement from the principals before conducting the survey, given that all of the participating students were young and most of them lived away from their parents or guardians. Moreover, before paper-and-pencil questionnaires were distributed, all students in the selected classes were told that they could quit the survey whenever they wanted. One well-trained investigator collected the data in each class within one class period (40 min). Confidentiality and anonymity were stressed before students began to answer the questionnaires; students were told that none of their parents, teachers, friends or classmates would know any related information. The class teachers were absent from the classroom during the survey. All materials were anonymous and in Chinese.
Statistical Analysis
We used a double-entry strategy to enter all data into Epidata Version 3.1 (The EpiData Association, Odense, Denmark). Questionnaires were checked with two quality control strategies before data entry. The first was to assess false reports through a question at the end of the questionnaire asking whether students had answered the items honestly; data were eliminated if participants reported not responding honestly. The second was to assess the attitude of students: if students answered the questions by following a pattern, such as choosing the same option for most items, the data were discarded. After data entry, questionnaires with incomplete responses were also excluded from the analyses. Overall, a total of 811 respondents (7.9%) were deleted according to these strategies.
Data were analyzed using SPSS version 20.0 (SPSS Inc., Chicago, IL, USA) and STATA 16. The characteristics of each variable and the prevalence of club drug use were described. To examine whether the relationships between associated factors and club drug use differed by gender, data were analyzed separately for female and male students. Univariate and multivariable Poisson regression models with robust variance were fitted to explore the factors associated with club drug use. All statistically significant variables from the univariate analyses were adjusted for in the multivariable analyses. Unadjusted and adjusted prevalence ratios (PRs) and 95% confidence intervals (CIs) were obtained from the regression models [43,44]. All hypothesis tests were 2-tailed, and the significance level was set at α = 0.05.
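For readers unfamiliar with the modified Poisson approach, the sketch below shows one way to obtain prevalence ratios with robust variance in Python using statsmodels; the synthetic data frame and column names are hypothetical stand-ins for the study's variables, not the study's actual code.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Synthetic stand-in data; the real analysis used the survey data frame
    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "club_drug_use": rng.binomial(1, 0.03, n),
        "lifetime_smoking": rng.binomial(1, 0.35, n),
        "peer_drug_use": rng.binomial(1, 0.08, n),
    })

    model = smf.glm("club_drug_use ~ lifetime_smoking + peer_drug_use",
                    data=df, family=sm.families.Poisson())
    result = model.fit(cov_type="HC0")   # robust (sandwich) variance

    print(np.exp(result.params))         # exponentiated coefficients are prevalence ratios
    print(np.exp(result.conf_int()))     # 95% CIs on the PR scale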
Characteristics of Total Respondents
As shown in Table 1, male students and 10th grade students accounted for 51.8% and 57.9% of the sample, respectively. About 6.9% of the included students were 18 years old or above. Han Chinese accounted for 97.4% of the total respondents. Most of the students were from urban areas (67.5%), lived with parents (76.9%) and spent less than 1000 Yuan per month (87.5%). Slightly more than half of the students had academic achievement between 60 and 79 (53.7%), had medium SES (55.1%) and had high levels of refusal skills for club drugs (51.5%). Approximately one-third of students had ever smoked (34.7%), and a similar proportion had part-time job experience (35.8%). Nearly a quarter of students (22.3%) perceived social benefit expectancies towards club drugs. A minority of students reported that their friends (7.6%) or family members (3.6%) used club drugs. Moreover, 3.6% of students reported that their friends would approve if they used club drugs. Additionally, the overall lifetime prevalence of club drug use was 2.7%, and male students had a higher prevalence than female students (3.5% vs. 1.9%, χ2 = 21.03, p < 0.001).
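The gender comparison reported above can be roughly reproduced from the published percentages. In the sketch below the cell counts are reconstructed (172 male and 86 female users give the reported total of 258), so the resulting statistic only approximates the published χ2 = 21.03.

    from scipy.stats import chi2_contingency

    # Rows: male, female; columns: users, non-users (counts reconstructed from the text)
    table = [[172, 4907 - 172],
             [86, 4562 - 86]]
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")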
Univariate Poisson Regression Analyses
The univariate Poisson regression results are shown in Table 1. Without adjusting for other variables, ethnicity, academic achievement, expenses per month, lifetime smoking, perceiving social benefit expectancies, refusal skills, peer drug use, perceiving peer's approval of drug use and family drug use were all associated with club drug use among both female and male students. However, age, SES and part-time job experience were associated with club drug use only among male students.
Multivariate Poisson Regression Analyses
The multivariable Poisson regression results are presented in Table 2. After controlling for the statistically significant variables from the univariate analyses, five common variables were retained in the final models for both female and male students. Students who had ever smoked and those perceiving social benefit expectancies were more likely to use club drugs. Moreover, peer drug use and perceiving peer's approval of drug use increased the risk of club drug use. Conversely, students with medium or high levels of refusal skills were less likely to use club drugs. Furthermore, family drug use and part-time job experience were two additional independent risk factors for club drug use among male students.
Discussion
The current study is a nationally representative cross-sectional survey designed to understand the prevalence of and factors associated with club drug use among Chinese secondary vocational students. The data were analyzed separately for female and male students. As shown in the results, the overall lifetime prevalence of five club drugs (i.e., Ketamine, MA, MDMA, 'Magu' pills and GHB) was 2.7% among Chinese secondary vocational students, and several independent associated factors (five for female students and seven for male students) were identified from 15 indicators.
Even though many other studies reported the overall prevalence of a wider range of illicit drugs (including traditionally used drugs such as marijuana, heroin and cocaine), we found that the lifetime prevalence of club drug use among secondary vocational students in our study was higher than that from previous national [45,46] and regional [33,34,36] surveys in China. Two reasons might explain these differences. One is that illicit drugs, especially club drugs, have become increasingly prevalent among young Chinese people [15]. The other is that previous studies [33,34] involved broader student samples that also included elementary, junior or senior high school students. A series of Chinese studies [33,34,46] has shown that secondary vocational students have a higher prevalence of illicit drug use than other student populations in school.
Peer influence is usually a leading risk factor for adolescent substance use [36,[47][48][49]. We found a similar result: peer drug use had the strongest relationship with club drug use among female (aPR = 3.00) and male (aPR = 4.98) students. Meanwhile, perceiving peer's approval of drug use was consistently found to be a risk factor for students' club drug use (aPR = 2.01 and 1.94 for female and male students, respectively). Individuals whose friends use drugs, or who approve of them using drugs, might have more opportunities to obtain drugs and learn the same behavior from these peers [31]. Furthermore, they share similar beliefs, attitudes, values, and rationales for drug use, which could also prompt them to use drugs [47].
The significant influence of positive outcome expectancies on promoting drug use has been theoretically and empirically corroborated [29,36,50]. In the current study, our results were in accordance with these findings: perceiving social benefit expectancies consistently increased the risk of club drug use for female (aPR = 1.96) and male (aPR = 2.40) students. This finding might be explained by a general expectancy-based model of substance use development [50]. The model elucidates that positive expectancies can motivate initial substance use; the experience of substance use may then reinforce expectancies in memory and further promote drug-taking behaviors. Meanwhile, we also found that refusal skills acted as a protective factor against club drug use among female and male students, in agreement with previous studies [29,36]. High levels of refusal skills can help students resist peer and social pressure to use drugs [48]. Therefore, refusal skills have typically been considered necessary social and cognitive skills in drug use prevention programs [48,51,52]. Furthermore, smoking is usually considered a gateway behavior for illicit drug use [53]. Consistent with prior studies [11,12,31,54], lifetime smoking was an independent risk factor for secondary vocational students' club drug use in our study (aPR = 2.27 and 2.51 for female and male students, respectively).
Additionally, some gender differences were found in the current study. First, consistent with previous studies [11,13,31,33], male students had a higher lifetime prevalence of club drug use than female students. In China, use of licit drugs (e.g., tobacco and alcohol) is socially accepted and often regarded as a symbol of independence and social status among males [55,56], and males might exhibit lower levels of self-control than females [57], all of which might increase the risk of illicit drug use among males. Second, the associated factors differed somewhat between genders. As shown in the results, in addition to the five common associated factors, family drug use (aPR = 1.39) and part-time job experience (aPR = 1.53) had additional independent risk effects on club drug use among male students. Observing a family member using drugs not only offers an example for students to model [9,50], but also promotes the formation of positive expectancies towards club drug use, thereby increasing the use of club drugs [51]. Part-time jobs are usually located outside schools, where people are more likely to communicate an accepting attitude towards drug use. This could expose students to other drug users and increase their opportunities to use licit or illicit drugs [58].
This study has some limitations. First, the cross-sectional design precludes causal inference; longitudinal studies should be conducted to verify the current findings. Second, it is possible that some students misreported their drug use status. However, confidentiality and anonymity were stressed to promote cooperation, and other quality control strategies, such as careful checking of questionnaires, were implemented. Third, the prevalence of each individual club drug was not reported, and some variables such as sensation seeking [31] and circadian rhythm [28], which might have a far-reaching influence on club drug use, were not included in the present study. Consequently, further studies targeting specific drugs and involving a wider range of associated factors should be considered. Finally, according to a previous theoretical framework [59], some mediation or moderation effects might exist among the determinants in our study, but these are beyond the scope of this article and should be explored in future work.
Conclusions
In summary, the prevalence of and factors associated with club drug use among Chinese secondary vocational students were assessed using a nationally representative sample. Several independent risk and protective factors at the individual, peer and family levels were determined separately for female and male students. These findings provide important implications for club drug use prevention among Chinese secondary vocational students. Peer education might be an excellent approach given the salient influence of peer factors. Moreover, it is very important to educate family members. Correcting inaccurate beliefs and promoting refusal skills might help reduce club drug use. Smoking, as a gateway behavior to club drug use, should be addressed ahead of, or in parallel with, club drug use prevention. The experience of a part-time job should also be considered. Furthermore, gender differences are worthy of attention as well. Many prevention programs involving these associated factors have demonstrated short- and long-term effects on adolescent drug use in other countries (especially western countries) [48,60]. Nonetheless, only a limited number of prevention programs have been established in China [51,52]. Therefore, more programs to prevent illicit drug use urgently need to be developed for Chinese students, and the present study provides important information for this work.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality of participants.
|
v3-fos-license
|
2018-04-03T02:37:29.324Z
|
2009-02-20T00:00:00.000
|
34536343
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/284/8/4978.full.pdf",
"pdf_hash": "dab9d747022431a01d4ca6f0108e7a320b8c6462",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43332",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "c249abf18e4d01ea3e1cd5ee84aca38bea096626",
"year": 2009
}
|
pes2o/s2orc
|
Context-dependent Function of Regulatory Elements and a Switch in Chromatin Occupancy between GATA3 and GATA2 Regulate Gata2 Transcription during Trophoblast Differentiation*
GATA transcription factors are important regulators of tissue-specific gene expression during development. GATA2 and GATA3 have been implicated in the regulation of trophoblast-specific genes. However, the regulatory mechanisms of GATA2 expression in trophoblast cells are poorly understood. In this study, we demonstrate that Gata2 is transcriptionally induced during trophoblast giant cell-specific differentiation. Transcriptional induction is associated with displacement of GATA3-dependent nucleoprotein complexes by GATA2-dependent nucleoprotein complexes at two regulatory regions, the –3.9- and +9.5-kb regions, of the mouse Gata2 locus. Analyses with reporter genes showed that, in trophoblast cells, –3.9- and +9.5-kb regions function as transcriptional enhancers in GATA motif independent and dependent fashions, respectively. We also found that knockdown of GATA3 by RNA interference induces GATA2 in undifferentiated trophoblast cells. Interestingly, three other known GATA motif-dependent Gata2 regulatory elements, the –1.8-, –2.8-, and –77-kb regions, which are important to regulate Gata2 in hematopoietic cells are not occupied by GATA factors in trophoblast cells. These elements do not show any enhancer activity and also possess inaccessible chromatin structure in trophoblast cells indicating a context-dependent function. Our results indicate that GATA3 directly represses Gata2 in undifferentiated trophoblast cells, and a switch in chromatin occupancy between GATA3 and GATA2 (GATA3/GATA2 switch) induces transcription during trophoblast differentiation. We predict that this GATA3/GATA2 switch is an important mechanism for the transcriptional regulation of other trophoblast-specific genes.
In the early mouse embryo, the trophectoderm overlying the inner cell mass contains trophoblast stem (TS) cells (1). During development, TS cells give rise to distinct highly differentiated trophoblast subtypes, which build the functional units of the organ, the placenta (2). Trophoblast cells are important for anchorage of the embryo to the mother, for establishing a vascular connection for nutrient and gas transport to the embryo, and for expression of hormones that are required for the successful progression of pregnancy (3). In rodents, multiple differentiated cell types can be derived from TS cells: trophoblast giant cells, spongiotrophoblast, syncytiotrophoblast, glycogen trophoblast cells, and invasive trophoblasts (2,4). Trophoblast giant cells are characterized by endoreduplication and expression of members of the prolactin gene family. During pregnancy, these cells invade the uterus and promote local and systemic adaptations in the mother that are necessary for embryonic growth and survival (2,3). Differentiation of trophoblast giant cells occurs in a spatially and temporally highly organized manner, and multiple transcription factors, including GATA2 and GATA3, have been implicated in the transcriptional regulation of trophoblast giant cell-specific gene expression (5)(6)(7)(8).
The GATA family of transcription factors, GATA1-GATA6, controls multiple developmental processes by regulating tissue-specific gene expression by binding to W(A/T)GATAR(A/G) motifs (GATA motifs) of regulatory elements (9,10). GATA family members have been subdivided into two subfamilies based on their expression and functional analysis. GATA1, GATA2, and GATA3 regulate the development of different hematopoietic lineages: erythroid, hematopoietic progenitor, and T-lymphoid, respectively (11)(12)(13). Similarly, GATA4, GATA5, and GATA6 have been shown to be involved in cardiac, genitourinary, and multiple endodermal developmental events (14-16).
GATA2 was initially cloned from chicken reticulocytes as a GATA motif-binding factor and was shown to be present at all developmental stages of erythroid cells (17). Targeted deletion of Gata2 resulted in embryonic lethality at embryonic day 10.5-11.5 due to ablation of blood cell development (12). However, GATA2 is also expressed in other hematopoietic cells, neurons, and cells of the developing heart, liver, and pituitary, and in trophoblasts (7, 18-22).
GATA3 was first cloned as a T cell-specific transcript (23,24). Germ line deletion of Gata3 results in embryonic lethality due to a multitude of phenotypic abnormalities, including growth retardation, severe deformities of the brain and spinal cord, and gross aberrations in fetal liver hematopoiesis (13). Interestingly, expression analysis during early mouse development showed that GATA3 is most abundantly expressed in trophoblast cells prior to embryonic day 10.5 (25).
Although GATA2 and GATA3 are expressed in trophoblast cells, very little is known about GATA factor function and regulation in this context. Studies in a choriocarcinoma-derived rat trophoblast stem cell line (Rcho-1 trophoblast cells) showed that both GATA2 and GATA3 regulate trophoblast-specific expression of the placental lactogen I (PL-I; also known as Prl3d1) gene (7). Studies with knock-out mice showed that placentas develop in Gata2- and Gata3-null embryos (8). However, placentas lacking Gata2 or Gata3 exhibited reduced PL-I and proliferin (also known as Prl2c2) gene expression, with Gata2-null placentas having greater reductions in proliferin (8). In addition, placentation sites lacking GATA2 show significantly less neovascularization compared with wild-type placentas in the same uterus (8).
Important mechanistic information regarding Gata2 transcriptional regulation has come from analysis of the native nucleoprotein structure of the endogenous Gata2 locus in hematopoietic precursor cells (26-29). These studies indicated that during erythroid differentiation GATA1 and GATA2 directly regulate Gata2 transcription in a reciprocal fashion (26). Analysis of the mouse Gata2 locus in erythroid progenitors showed that in the transcriptionally active state, GATA2 occupies four conserved upstream elements (−77, −3.9, −2.8, and −1.8 kb) along with an intronic (+9.5 kb) conserved element (26,28,29). GATA1-mediated repression of Gata2 transcription was tightly coupled with displacement of GATA2 by GATA1 (GATA2/GATA1 switch) from those regulatory elements. Studies with cells null for the GATA factor cofactor friend of GATA1 (FOG1) showed that FOG1 plays a unique role in this regulatory mechanism, in which it facilitates the chromatin occupancy of GATA1, displacing GATA2 from the Gata2 locus (27). These findings support a model in which GATA2 positively autoregulates transcription by binding to its own locus. In erythroid progenitors, this autoregulation is abrogated by a FOG1-dependent GATA2/GATA1 switch that triggers formation of regulatory complexes leading to repression of transcription.
To begin to understand the role of GATA factors in trophoblast function, we studied Gata2 transcriptional regulation in Rcho-1 trophoblast stem cells (30,31) and mouse TS cells (32), and during their differentiation to the trophoblast giant cell lineage. Herein, we demonstrate that in trophoblast stem cells GATA3 directly represses Gata2 by occupying the −3.9- and +9.5-kb regulatory elements at the Gata2 locus. During trophoblast differentiation, GATA2 displaces GATA3, thereby forming a transcriptionally favorable nucleoprotein complex at the Gata2 locus. This GATA2-mediated displacement of GATA3 (GATA3/GATA2 switch) is associated with displacement of the cofactor FOG1 and recruitment of the cofactor Mediator1/TRAP220 (MED1/TRAP220) at the Gata2 locus. These studies define an important mechanism of Gata2 regulation in trophoblast cells and implicate a GATA3/GATA2 switch as an important molecular determinant of gene regulation during trophoblast differentiation.
EXPERIMENTAL PROCEDURES
Cell Culture and Reagents-Rcho-1 trophoblast cells were cultured as described previously (31). Cells were maintained in a proliferative state by culturing under subconfluent conditions with RPMI 1640 medium (Invitrogen) supplemented with 20% fetal bovine serum (Atlas Biologicals, Fort Collins, CO), 50 μM 2-mercaptoethanol (2-ME) (Sigma), 1 mM sodium pyruvate, and 1% penicillin/streptomycin (Invitrogen). Differentiation was induced by replacing the culture medium with NCTC 135 culture medium (Sigma) supplemented with 1% horse serum, 50 μM 2-mercaptoethanol, 1 mM sodium pyruvate, 2.3 g/liter HEPES, 2.2 g/liter sodium bicarbonate, and 1% penicillin/streptomycin. Differentiation was continued for a period of 8 days, at which point most of the cells appeared to be giant cells. Mouse TS cells were initially cultured on a feeder layer of primary mouse embryonic fibroblasts (MEF) in the presence of 25 ng/ml fibroblast growth factor 4 (FGF4; Sigma) and heparin (1 μg/ml) in TS cell medium (RPMI 1640 supplemented with 20% fetal bovine serum, 2-mercaptoethanol (100 μM), sodium pyruvate (1 mM), L-glutamine (2 mM), and 1% penicillin/streptomycin). For experiments, mouse TS cells were expanded in a proliferative state without MEF feeders by culturing in the presence of 70% MEF-conditioned medium and 30% TS cell medium containing 20% fetal bovine serum, 25 ng/ml FGF4 (Sigma), and 1 μg/ml heparin (Sigma). MEF-conditioned medium was produced by addition of 10.5 ml of TS cell medium to 100-mm culture plates containing 2 × 10⁶ mitomycin-C (10 μg/ml; Sigma)-treated MEFs. Differentiation of TS cells was induced by culturing them in medium devoid of FGF4, heparin, and MEF-conditioned medium. Human embryonic kidney-293T cells were cultured in Dulbecco's modified Eagle's medium (Invitrogen) supplemented with 10% fetal bovine serum.
Quantitative RT-PCR-RNA was extracted from different cell samples with TRIzol reagent (Invitrogen). cDNA was prepared by annealing RNA (1 μg) with 250 ng of a 5:1 mixture of random and oligo(dT) primers heated at 68°C for 10 min. This was followed by incubation with Moloney murine leukemia virus reverse transcriptase (50 units) (Invitrogen) combined with 10 mM dithiothreitol, RNasin (Promega, Madison, WI), and 0.5 mM dNTPs at 42°C for 1 h. Reactions were diluted to a final volume of 100 μl and heat-inactivated at 97°C for 5 min. A 20-μl PCR contained 2 μl of cDNA, 10 μl of SYBR Green Master Mix (Applied Biosystems, Foster City, CA), and the corresponding primer sets. Control reactions lacking reverse transcriptase (RT) yielded very low signals. Relative expression levels were determined from a standard curve of serial dilutions of the proliferative Rcho-1 trophoblast cell and undifferentiated TS cell cDNA samples and were normalized to the expression of 18 S ribosomal RNA (18S rRNA) and glyceraldehyde-3-phosphate dehydrogenase, respectively.
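The standard-curve quantification described above amounts to fitting Ct against log10 of the dilution and interpolating the unknowns. The Python sketch below illustrates the calculation; all Ct values are invented, and a real analysis would fit a separate curve per primer set.

    import numpy as np

    def relative_amount(ct, ct_standards, log10_dilutions):
        """Interpolate a sample's relative amount from a serial-dilution standard curve."""
        slope, intercept = np.polyfit(log10_dilutions, ct_standards, 1)
        return 10 ** ((ct - intercept) / slope)

    log10_dil = np.log10([1, 0.1, 0.01, 0.001])   # serial dilutions of reference cDNA
    ct_std = np.array([18.0, 21.3, 24.7, 28.1])   # hypothetical Ct values (~ -3.3 slope)

    gata2 = relative_amount(25.0, ct_std, log10_dil)      # target gene
    rrna_18s = relative_amount(19.0, ct_std, log10_dil)   # normalizer
    print(f"normalized Gata2 expression: {gata2 / rrna_18s:.3g}")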
Northern Blot Analysis-For Northern blot analysis, total RNA was extracted from undifferentiated and day 8-differentiated Rcho-1 trophoblast cells using TRIzol reagent. Total RNA (20 μg/lane) was resolved in 1% formaldehyde-agarose gels, transferred to nitrocellulose membranes (Schleicher & Schuell Bioscience, Keene, NH), and cross-linked. Blots were probed with ³²P-labeled cDNAs (PerkinElmer Life Sciences) for Gata2 (GenBank NM_033442) and PL-II (GenBank NM_012535). Glyceraldehyde-3-phosphate dehydrogenase (Gapdh) cDNA was used to evaluate the integrity and equal loading of RNA samples. Probes were generated using Prime-it II random primer labeling kits (Stratagene). Probes were incubated with the blots at 42°C overnight and washed twice with 2× SSPE, 0.1× SDS at 42°C for 25 min and 1× SSPE, 0.1× SDS at 50°C for 35 min. Blots were then exposed to x-ray film at −80°C. At least three different samples from three different experiments were analyzed with each probe.
RNA Interference-Lentiviral vectors containing short hairpin RNAs (shRNAs) targeting rat Gata3 mRNA were cloned in pLKO1 (Open Biosystems, Huntsville, AL). Lentiviral supernatants were produced in human embryonic kidney-293T cells by transfection with calcium phosphate as described earlier (33). Lentiviral supernatants were collected after 24 and 48 h of incubation. Undifferentiated Rcho-1 trophoblast cells grown to 70% confluence were incubated with medium containing 8 μg/ml Polybrene (Sigma) for 30 min, followed by infection with lentiviral supernatants. Infected Rcho-1 trophoblast cells were selected by addition of 3 μg/ml puromycin (Sigma) 48 h after infection. After 3 days, samples were prepared for mRNA and protein analysis. The Gata3 target sequence 5′-GCCTGCGGACTCTACCATAAA-3′ successfully knocked down expression of the target gene. For control experiments, cells were infected either with empty viral vector or with vectors expressing shRNAs against the Gata3 target sequence 5′-CGGATGTAAGTCGAGGCCCAA-3′, which did not knock down GATA3 expression.
Quantitative ChIP Assay-Real-time PCR-based quantitative ChIP analysis was performed according to a previously described protocol (34). Undifferentiated and differentiated Rcho-1 trophoblast and mouse TS cells were trypsinized, washed, and resuspended in phosphate-buffered saline, and protein-DNA cross-linking was conducted by treating cells with formaldehyde at a final concentration of 1% for 10 min at room temperature with gentle agitation. Glycine (0.125 M) was added to quench the reaction. Antibodies against GATA2, GATA3, FOG1, MED1/TRAP220 (M-255; Santa Cruz), CBP/P300 (A-22; Santa Cruz), diacetylated histone 3 (acH3; Millipore, Billerica, MA), tetra-acetylated histone 4 (acH4; Millipore), and RNA polymerase II (Pol II; N20, Santa Cruz) were used to immunoprecipitate protein-DNA cross-linked fragments. Immunoprecipitated DNA was analyzed by real-time PCR (ABI 7500, Applied Biosystems, Foster City, CA). Primers were designed to amplify 60- to 100-bp amplicons and were based on sequences in the Ensembl database for the mouse and rat Gata2 loci. Samples from three or more immunoprecipitations were analyzed. Products were measured by SYBR Green fluorescence in 25-μl reactions. The amount of product was determined relative to a standard curve of input chromatin. Dissociation curves showed that PCRs yielded single products. Primer sequences are available on request.
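The same standard-curve logic applies to the ChIP quantitation: immunoprecipitated DNA is read against serial dilutions of input chromatin, yielding the signal as a fraction of input. A brief sketch with invented Ct values follows.

    import numpy as np

    def fraction_of_input(ct_ip, ct_inputs, log10_input_fractions):
        """Convert an IP Ct value to a fraction of input via the input standard curve."""
        slope, intercept = np.polyfit(log10_input_fractions, ct_inputs, 1)
        return 10 ** ((ct_ip - intercept) / slope)

    log10_frac = np.log10([0.1, 0.01, 0.001])   # 10%, 1%, and 0.1% input dilutions
    ct_inputs = np.array([22.1, 25.4, 28.8])    # hypothetical Ct values

    enrichment = fraction_of_input(27.0, ct_inputs, log10_frac)
    print(f"IP signal = {enrichment:.3%} of input")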
Transient Transfection Assay-Plasmid constructs (in pGL3-Basic vector; Promega) containing a luciferase reporter gene fused to the hematopoietic cell-specific (1S) promoter (35) of the mouse Gata2 gene, alone or in combination with Gata2 regulatory elements, have been described earlier (28,29) and were kind gifts from Dr. Emery H. Bresnick (University of Wisconsin-Madison, Madison, WI). For transient transfection analysis, undifferentiated or day 4-differentiated Rcho-1 trophoblast cells were transfected with an equal amount of each plasmid (3 μg). Plasmids were added to 150 μl of Opti-MEM (Invitrogen) reduced-serum medium, incubated with Lipofectamine reagent (Invitrogen) for 20 min at room temperature, and then added to the cells. After 3 h of incubation, the transfection mixture was replaced with culture medium. Cell lysates were harvested 48 h post-transfection, and luciferase activity was measured in a Veritas Microplate Luminometer using luciferase assay buffer (Promega). The luciferase activity for each sample was normalized to the protein concentration of the lysate. At least three independent preparations of each plasmid were analyzed.
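As a worked example of the normalization just described, the sketch below divides raw luciferase counts by lysate protein concentration and expresses each construct as fold activity over the promoter-only plasmid. Both the construct labels and the numbers are invented, chosen only to echo the approximate fold activations reported under "Results."

    # Hypothetical raw luciferase readings (RLU) and protein concentrations (mg/ml)
    luc = {"1Sluc": 5.0e4, "(+9.5)1Sluc": 3.1e6, "(-3.9)1Sluc": 3.4e5}
    protein = {"1Sluc": 1.10, "(+9.5)1Sluc": 1.00, "(-3.9)1Sluc": 1.05}

    normalized = {name: luc[name] / protein[name] for name in luc}
    baseline = normalized["1Sluc"]
    for name, value in normalized.items():
        print(f"{name}: {value / baseline:.1f}-fold")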
DNase I Hypersensitive Site Mapping-DNase I hypersensitive sites (DHSs) were mapped according to the procedure described by Follows et al. (36) with a few modifications. Briefly, to generate whole-genome DHS libraries from undifferentiated and day 8-differentiated Rcho-1 trophoblast cells, nuclei were prepared by lysing cells (3 × 10⁶ cells for each condition) in cell lysis buffer (300 mM sucrose, 10 mM Tris, pH 7.4, 15 mM NaCl, 5 mM MgCl₂, 0.1 mM EDTA, 60 mM KCl, 0.2% Nonidet P-40, 0.5 mM phenylmethylsulfonyl fluoride, 20 μg/ml leupeptin, 5 mM dithiothreitol). Nuclei were gently resuspended in reaction buffer containing different units of DNase I (New England Biolabs, Beverly, MA) and incubated at 4°C. After 1 h, 700 μl of nuclear lysis buffer (100 mM Tris-HCl, pH 8, 5 mM EDTA, 200 mM NaCl, 0.2% SDS) and 50 μg of proteinase K were added to each set and incubated at 55°C for 1 h, followed by a 30-min incubation at 37°C with 10 μg of RNase A. Digested DNA fragments were extracted with phenol/chloroform, blunt-ended with T4 polymerase (New England Biolabs), and ligated to an asymmetric double-stranded linker (LP21, GAATTCAGATCTCCCGGGTCA; LP25, GCGGTGACCCGGGAGATCTGAATTC). The precipitated ligated DNA was amplified using Vent exo⁻ polymerase (New England Biolabs) and a biotinylated LP25 primer. Following amplification, products were extracted using Dynal streptavidin beads (Dynabeads M-270, Dynal Biotech) and suspended in TE buffer. From the library, DHSs at the Gata2 locus were determined by measuring the relative enrichment of DNase I-treated versus untreated samples by real-time PCR using region-specific primers (the same primers used for the ChIP analysis). Samples were quantified using SYBR Green (Applied Biosystems), with standard curves generated from known amounts of genomic DNA.
RESULTS

Induction of Gata2 Expression during Trophoblast Giant Cell-specific Differentiation-To determine whether GATA2 expression is dynamically regulated during trophoblast differentiation, we used Rcho-1 trophoblast cells as a model system. Rcho-1 trophoblast cells represent a faithful model for studying rat trophoblast cells in undifferentiated and differentiated states (31,37,38). These cells can be maintained in a proliferative stem cell state and can be induced to undergo endoreduplication and differentiation along the trophoblast giant cell lineage.
As mentioned under "Experimental Procedures," Rcho-1 trophoblast cells can be differentiated to trophoblast giant cells over a period of 8 days by changing the culture conditions. We performed a time course analysis with undifferentiated and differentiating Rcho-1 trophoblast cells to determine Gata2 mRNA expression. Quantitative RT-PCR analysis showed that the Gata2 mRNA level was significantly induced after 4 days of differentiation (Fig. 1A), and an 8-fold induction in the Gata2 mRNA level was observed after day 6 of differentiation. The maximum Gata2 mRNA level was maintained during the later period of differentiation. To validate this transcriptional induction, we also performed Northern blot analysis with undifferentiated and day 8-differentiated cells, which confirmed the RT-PCR results (Fig. 1B). The transcriptional induction of PL-II, a prolactin family member that is expressed only in trophoblast giant cells (39), confirmed differentiation toward the trophoblast giant cell lineage. We also performed Western blot analysis to determine GATA2 protein levels, which validated that the induced Gata2 mRNAs were translated to produce GATA2 protein in differentiated Rcho-1 trophoblast cells (Fig. 1C).
To further validate GATA2 induction during trophoblast differentiation, we used mouse TS cells. In the absence of FGF4, and within 6 days, mouse TS cells differentiate to generate polyploid trophoblast giant cells (32). Therefore, we determined Gata2 mRNA and protein expression in undifferentiated TS cells and after 6 days of differentiation. Similar to Rcho-1 trophoblast cells, differentiation of mouse TS cells significantly induced Gata2 mRNA and protein expression (Fig. 1, D and E). Low-level GATA2 expression was also detected in mouse TS cells grown in the presence of FGF4, consistent with the observation that a small percentage of TS cells undergoes differentiation to generate giant cells even in the presence of FGF4 (32).
As Gata2 is induced during trophoblast differentiation (Fig. 1), we wanted to test whether transcriptional activation is associated with GATA2 binding to the regulatory cis elements of the Gata2 locus. We therefore performed quantitative ChIP analysis across a ~100-kb region of the Gata2 locus (Fig. 2A) in undifferentiated and differentiated mouse TS cells to determine GATA2 chromatin occupancy. Quantitative ChIP analysis showed that, in differentiated mouse TS cells, GATA2 occupies only the −3.9- and +9.5-kb regions (Fig. 2C). As a small population of mouse TS cells spontaneously differentiates into GATA2-expressing trophoblast giant cells, we also found very low amounts of GATA2 binding at those regions in ChIP assays with undifferentiated TS cells. GATA2 occupancy was highly induced with differentiation. Interestingly, we were not able to detect GATA2 binding at the other Gata2 regulatory regions (Fig. 2C) or at other conserved WGATAR motifs within the ~100-kb Gata2 locus (data not shown).
Genetic complementation studies in GATA1-null erythroid progenitor cells showed that during erythroid differentiation repression of Gata2 transcription is associated with recruitment of GATA1 and displacement of GATA2 from the Gata2 locus (26-29). These results indicate that GATA1 directly represses Gata2 transcription in erythroid precursors. However, the mechanism of Gata2 repression in other cell types that do not express GATA1 is poorly understood. As Gata2 is repressed in undifferentiated trophoblast stem cells, we hypothesized that Gata2 repression in those cells is directly mediated by other GATA factor(s). Earlier studies indicated that at least one other GATA factor, GATA3, is expressed in Rcho-1 trophoblast cells and in the trophoblast giant cells of the mouse placenta (7, 8, 25). However, we took an unbiased approach and determined the expression of all other GATA factors in undifferentiated Rcho-1 trophoblast cells. As shown in Fig. 3, mRNA analysis revealed that only Gata3 mRNA is highly expressed in undifferentiated Rcho-1 trophoblast cells (Fig. 3A). Analysis of mouse TS cells showed a similar GATA factor expression pattern (data not shown).
The Gata3 mRNA levels did not change significantly in Rcho-1 trophoblast cells or mouse TS cells with differentiation (Fig. 3B). In mouse TS cells, Western blot analysis validated the Gata3 mRNA expression pattern (Fig. 3C, right panel). However, Western blot analysis showed that GATA3 protein levels are reduced in differentiated Rcho-1 trophoblast cells (Fig. 3C, left panel), indicating the involvement of post-transcriptional mechanisms in the regulation of GATA3 in Rcho-1 trophoblast cells.
To test the hypothesis that GATA3 directly represses Gata2 transcription in undifferentiated trophoblast stem cells, we tested GATA3 chromatin occupancy at the Gata2 locus in Rcho-1 trophoblast cells. As shown in Fig. 3D (top panel), we found that GATA3 occupies the −3.9- and +9.5-kb regions of the repressed Gata2 locus in undifferentiated Rcho-1 trophoblast cells. Furthermore, analysis of mouse TS cells also demonstrated GATA3 occupancy at the −3.9- and +9.5-kb regions in undifferentiated cells (Fig. 3D, bottom panel). Interestingly, although GATA3 protein is present in differentiated Rcho-1 and TS cells, GATA3 occupancy was not detected at the transcriptionally active Gata2 locus.
To further validate the functional role of GATA3 in Gata2 repression, we utilized an RNA interference approach to knock down GATA3 in trophoblast stem cells. As a small population of TS cells undergoes spontaneous differentiation in our culture conditions, we used undifferentiated Rcho-1 trophoblast cells for the knockdown study. We found that when GATA3 was knocked down by ~65% in undifferentiated Rcho-1 trophoblast cells, Gata2 mRNA was induced ~2.5-fold (Fig. 4A). Western blot analysis also validated the knockdown of GATA3 and the induction of GATA2 protein (Fig. 4B). These results, in combination with the ChIP analyses, indicate that, in trophoblast stem cells, GATA3 directly represses Gata2 transcription, and that Gata2 transcriptional induction during trophoblast giant cell differentiation is associated with a switch in chromatin occupancy between GATA3 and GATA2, which probably establishes positive autoregulation.
We also tested whether depletion of GATA3 affects differentiation of Rcho-1 trophoblast stem cells into different trophoblast cell types. Under conditions in which GATA3 was knocked down in undifferentiated Rcho-1 trophoblast cells, we evaluated the expression of different trophoblast lineage marker genes: Cdx2 (trophoblast stem cell), PL-I and PL-II (trophoblast giant cell), Tpbpa (spongiotrophoblast), PLP-N (invasive trophoblast), Tfeb (syncytiotrophoblast), and Cx31 (glycogen trophoblast). Quantitative RT-PCR analysis showed reduced Cdx2 expression and increased PL-I and PL-II expression in GATA3 knockdown cells compared with control cells (cells with empty vector, data not shown, and cells expressing an shRNA that does not knock down GATA3, Fig. 4C). Expression of Tfeb and Cx31 did not change with GATA3 knockdown, and Tpbpa and PLP-N expression were not detected in Rcho-1 trophoblast cells. These results indicate that the altered expression levels of GATA3 and GATA2 might be an important determinant of trophoblast giant cell-specific differentiation.
Context-dependent Function of Gata2 Regulatory Elements in Trophoblast Cells-Interestingly, GATA factor occupancy was not detected at the −77-, −2.8-, and −1.8-kb regions in mouse TS cells or at their corresponding regions in Rcho-1 trophoblast cells. In different cell types, individual regulatory elements of a locus can function distinctly if the regulatory complexes are assembled by distinct cellular signals. In that context, distinct cell type-specific regulatory complexes containing unique transcription factors and cofactors might assemble at individual elements to regulate tissue-specific expression of the locus. Thus, in trophoblast cells, the lack of GATA factor occupancy at these elements suggests two possibilities: (i) the functions of these elements are not dependent on GATA factor binding, or (ii) these elements do not regulate Gata2. To further compare the functional properties of distinct Gata2 regulatory elements in trophoblast cells, we measured their activities in transient transfection assays in undifferentiated and differentiated Rcho-1 trophoblast cells, which express high levels of GATA3 and GATA2, respectively.
Previously, it was demonstrated that each of the five regulatory elements activated the mouse Gata2 1S promoter when fused to a luciferase reporter gene in transient transfection assays in hematopoietic cells (28,29). By contrast, we found that constructs containing the −77-, −2.8-, and −1.8-kb regions linked to the Gata2 1S promoter had no significant enhancer activity in Rcho-1 trophoblast cells (Fig. 5). However, the +9.5-kb region possessed strong (~60-fold) enhancer activity and the −3.9-kb region possessed relatively weak (~7-fold) enhancer activity in both undifferentiated and differentiated Rcho-1 trophoblast cells. These results suggest that the −77-, −2.8-, and −1.8-kb regions do not play an active role in the transcriptional regulation of Gata2 in trophoblast cells. Mutation of the conserved GATA motifs within the +9.5-kb region abolished the enhancer activity in both undifferentiated and differentiated Rcho-1 trophoblast cells. Intriguingly, mutation of the conserved GATA motifs significantly induced the enhancer activity of the −3.9-kb region in undifferentiated Rcho-1 trophoblast cells, indicating that the conserved GATA motifs within the −3.9-kb region confer negative regulation in GATA3-expressing trophoblast stem cells. Mutation of the GATA motifs did not alter the enhancer activity of the −3.9-kb region in differentiated Rcho-1 trophoblast cells. Although transient transfection analysis does not always recapitulate endogenous transcriptional mechanisms, these results indicate context-dependent functions of the regulatory elements in Gata2 transcriptional regulation.
To further validate the context-dependent function of Gata2 regulatory elements, we determined the chromatin accessibility of those regions by conducting DHS mapping in undifferentiated and differentiated Rcho-1 trophoblast cells. In accordance with the results of transient transfection analysis, DHSs were detected at the −3.9- and +9.5-kb regions in both undifferentiated and differentiated Rcho-1 trophoblast cells (Fig. 6, A and B). However, in contrast to the findings in hematopoietic cells (28, 29), DHSs were not detected at the −1.8-, −2.8-, and −77-kb regions of the active Gata2 locus in differentiated Rcho-1 trophoblast cells.
We also found that, in Rcho-1 trophoblast cells, the acH3 and acH4 levels are enriched at the +9.5- and −3.9-kb regions but not at the −1.8-, −2.8-, and −77-kb regions of both the repressed and active Gata2 locus (Fig. 6C). As increased histone acetylation facilitates factor access to nucleosomes, these results, along with the lack of DHSs, indicate that although the −1.8-, −2.8-, and −77-kb regions are important functional Gata2 regulatory elements in hematopoietic cells, in trophoblast cells these elements are probably inaccessible to trans-acting factors.

[FIGURE 5 caption: Context-dependent enhancer activity of Gata2 regulatory regions in trophoblast cells. Undifferentiated and day 4-differentiated Rcho-1 trophoblast cells were transiently transfected with plasmids in which Gata2 regulatory regions were fused to the mouse 1S promoter in front of a luciferase (Luc) reporter gene. In the (−3.9mt)1SLuc and (+9.5mt)1SLuc constructs, conserved WGATAR motifs (mentioned in Fig. 2B) were mutated. The plots depict luciferase activity of the cell lysates normalized to the protein concentration of the lysates (mean ± S.E., four and three independent experiments for undifferentiated and differentiated cells, respectively). In each independent experiment, transfections were performed in triplicate.]
Dynamic Recruitment of Cofactors at the Gata2 Locus during Trophoblast Differentiation-Cofactor FOG1 has been implicated in transcriptional regulation of Gata2 (27, 40). FOG1 has been shown to occupy the regulatory regions of the Gata2 locus in both transcriptionally active and inactive states, and it plays a crucial role in GATA2 repression by facilitating GATA1 chromatin occupancy to displace GATA2 during erythroid differentiation. FOG1 also interacts with GATA3, and GATA3-FOG1 complexes repress transcription of several genes in T lymphocytes (41, 42). As GATA3 represses Gata2 expression in trophoblast stem cells, we wanted to determine whether this repression is mediated via a GATA3-FOG complex. We therefore tested FOG1 and FOG2 expression in Rcho-1 trophoblast cells. As shown in Fig. 7A, quantitative RT-PCR analysis showed that Fog1 is expressed in Rcho-1 trophoblast cells and that Fog1 mRNA expression was induced ~2-fold in differentiated Rcho-1 trophoblast cells. Protein analysis also validated FOG1 expression in Rcho-1 trophoblast cells (Fig. 7B). However, we found that Fog2 is not expressed in Rcho-1 trophoblast cells (data not shown).
As FOG1 is expressed in both undifferentiated and differentiated trophoblast cells and has both coactivator and corepressor activity, it could function in different ways in Gata2 regulation during trophoblast differentiation: (i) as a corepressor of GATA3, (ii) as a coactivator of GATA2, or (iii) as a chromatin occupancy facilitator that promotes GATA2 binding and displaces GATA3 from the Gata2 locus. We therefore performed ChIP analysis to ask whether FOG1 co-occupies the Gata2 locus with GATA factors in undifferentiated and differentiated Rcho-1 trophoblast cells. We found that FOG1 occupies the −3.9- and +9.5-kb regions in undifferentiated Rcho-1 trophoblast cells (Fig. 7C, top panel). However, despite the fact that GATA2-FOG1 complexes form at the transcriptionally active Gata2 locus in hematopoietic precursors, FOG1 occupancy at the active Gata2 locus was not detected in differentiated Rcho-1 trophoblast cells.
Studies in hematopoietic cells showed that the cofactor CBP/P300 can function as a coactivator of GATA factors (43) and co-localizes with GATA2 at the regulatory regions of the activated Gata2 locus (28, 29). In our analysis, however, we detected CBP recruitment at the +9.5-kb region of both the repressed and activated Gata2 locus in Rcho-1 trophoblast cells (Fig. 7C, middle panel). This is in line with the fact that we did not observe any significant differences in histone H3 and H4 acetylation (Fig. 6C) between the repressed and activated Gata2 locus in trophoblast cells. Thus, CBP recruitment and changes in histone modifications do not correlate with Gata2 activation in trophoblast cells.
During transcriptional activation, the Mediator complex serves as an interface between regulatory factors and Pol II (44). Targeted knock-out of the Mediator subunit MED1/TRAP220 revealed its critical role in placental development (45, 46). Previous studies have suggested that MED1/TRAP220 functions as a coactivator of GATA factors (47). Furthermore, it has been shown that MED1/TRAP220 physically interacts with GATA2 and functions as a coactivator of GATA2 to regulate certain genes (48, 49). Thus, we wanted to determine whether MED1/TRAP220 functions as a coactivator to regulate GATA2 transcription in trophoblast cells. We performed quantitative ChIP analysis to determine MED1/TRAP220 binding at the Gata2 locus in undifferentiated and differentiated Rcho-1 trophoblast cells and found that MED1/TRAP220 is recruited only to the −3.9- and +9.5-kb regions of the activated Gata2 locus in differentiated Rcho-1 trophoblast cells (Fig. 7C, bottom panel).
These results indicate the possibility that during trophoblast giant cell-specific differentiation a GATA2-Mediator complex forms at the Gata2 locus that positively regulates Gata2 transcription.
Our analysis showed that, along with a switch in chromatin occupancy between GATA3 and GATA2, dynamic recruitment of FOG1 and MED1/TRAP220 at the Gata2 locus is associated with transcriptional activation of Gata2 during trophoblast giant cell-specific differentiation. Interestingly, we found significant Pol II occupancy at the −3.9- and +9.5-kb regions of the repressed Gata2 locus, and transcriptional activation of Gata2 is associated with Pol II recruitment at the promoter region (Fig. 7D).
As we detected Pol II binding at the repressed Gata2 locus, we further tested whether Pol II at the repressed Gata2 locus is transcriptionally competent. To that end, we measured whether Pol II occupying the −3.9- and +9.5-kb regions is phosphorylated at serine 5 (Ser(P)-5) of the carboxyl-terminal domain of the Pol II large subunit. Ser(P)-5 at the Pol II COOH-terminal domain is a key modification for the transition from preinitiation to transcriptional initiation and elongation. ChIP analysis utilizing a monoclonal antibody (H14; Covance) specific for Ser(P)-5-Pol II detected a strong Ser(P)-5 signal at the −3.9-kb region (Fig. 7E) but not at the +9.5-kb region, indicating the presence of functional Pol II at the −3.9-kb region of the repressed Gata2 locus. However, quantitative RT-PCR analysis revealed that only very low levels of transcripts arise from both the −3.9- and +9.5-kb regions in undifferentiated cells (data not shown).
The presence of Pol II, enriched histone acetylation, and DHSs at the −3.9- and +9.5-kb regions of the repressed Gata2 locus indicates that Pol II-containing complexes pre-assemble at those regions in trophoblast stem cells (Fig. 8) and that a GATA3/GATA2 switch relocates Pol II to the promoter region. Thus, based on our findings, it is attractive to propose a mechanism in which GATA3-FOG1 complexes, formed at the −3.9- and +9.5-kb regions, repress Gata2 transcription in trophoblast stem cells, and displacement of the GATA3-FOG1 complexes by GATA2-MED1/TRAP220 activator complexes recruits Pol II to the promoter region, leading to transcriptional activation during trophoblast giant cell-specific differentiation.
DISCUSSION
Although evidence is emerging regarding the important functional roles of GATA factors in trophoblast cells (7, 8, 50-52), the molecular mechanisms of their regulation in trophoblast cells are poorly understood. The results described in this study establish a molecular mechanism that regulates Gata2 transcription during trophoblast giant cell-specific differentiation. We have provided evidence for three different aspects of Gata2 regulation in trophoblast cells: (i) we have delineated the regulatory regions of the Gata2 locus that confer enhancer activity in trophoblast cells, (ii) we have shown that GATA3 directly represses GATA2 expression in trophoblast stem cells, and (iii) we have shown that the transcriptional regulation is associated with dynamic recruitment of the cofactors FOG1 and MED1/TRAP220 at the Gata2 locus.
Multiple experimental approaches, such as analyses of reporter gene expression, identification of regulatory factors by chromatin immunoprecipitation, and functional analysis in vivo, can be used to define regulatory regions of tissue-specific gene expression. Based on the findings that the −3.9- and +9.5-kb regions showed enhancer activity in trophoblast cells, it is likely that these regions regulate endogenous Gata2 expression in trophoblast cells during placental development. However, deletion of those regions in the context of the endogenous Gata2 locus will provide definitive information.
Intriguingly, transient transfection analysis (Fig. 5) and DHS mapping (Fig. 6) indicated that the −1.8-, −2.8-, and −77-kb regions lack enhancer activity and do not contain accessible chromatin structures in trophoblast cells. Furthermore, the enhancer activity of the −3.9-kb region is not dependent on conserved GATA motifs; rather, GATA motifs in that region are probably required for GATA3-mediated transcriptional repression. Analysis in transgenic mice indicated that a 3.1-kb fragment of the Gata2 locus that contains the −1.8- and −2.8-kb regions can drive expression of a Gata2 promoter-green fluorescent protein transgene in multipotent hematopoietic progenitors (53). Other studies showed that the +9.5-kb region, but not the −3.9- and −77-kb regions, can function as an autonomous enhancer to drive Gata2 expression in endothelial cells in vitro and during embryonic development (54-56). These studies, along with our findings, strongly indicate intrinsic differences among the Gata2 regulatory regions. These intrinsic differences are contributed, at least in part, by other tissue-specific transcription factors, which function in combination with GATA factors from the regulatory regions (54, 55). Therefore, identifying those factors in trophoblast cells is an area of further research.
Our results, together with studies in erythroid progenitors and TS cells, revealed that GATA1 and GATA3 directly repress Gata2 transcription in erythroid progenitors and trophoblast stem cells, respectively. Furthermore, GATA2 might positively autoregulate its own transcription. These findings indicate a general mechanism of GATA2 regulation in which multiple GATA factors, depending on their expression pattern and cellular signaling, could regulate GATA2 expression in diverse tissues by directly modulating the nucleoprotein structure of the locus. The question then arises: how do different GATA factors function differently from the same GATA motifs? One possible explanation is differential interaction with cofactors. We have shown here that at the repressed locus FOG1 co-localizes with GATA3, whereas at the activated locus MED1/TRAP220 co-localizes with GATA2. Although we have not been able to correlate CBP/P300 binding with Gata2 activation in trophoblast cells, such a correlation has been demonstrated for Gata2 regulation in hematopoietic cells; in erythroid progenitors, CBP/P300 binding correlates with DNase I hypersensitivity at Gata2 regulatory regions (28). All these observations provide evidence that the formation of different GATA factor-cofactor complexes in response to diverse cellular signaling contributes to the functional outcome of the regulatory GATA motifs at the Gata2 locus.
Although the expression of several placental genes is reduced in Gata2−/− and Gata3−/− mice, the lack of an overt placental phenotype led to the prediction that these two factors might function redundantly during placental development. However, experiments have not been done in a context where both GATA2 and GATA3 are limiting. Many placentally expressed genes are expressed in a spatial and temporal pattern during the course of gestation. Thus, placental functions beyond embryonic day 11.5 (Gata2-null mice die at E10.5 and Gata3-null mice die at E11.5), specifically the differential expression of placental hormones during late gestation (39, 57), might be regulated by specific GATA factors. In that context, we predict that the GATA3/GATA2 switch is an important mechanism for regulating the expression levels of multiple genes in trophoblast cells. The presence of conserved GATA motifs in the regulatory regions of multiple prolactin family genes (58) further supports this prediction. However, as both GATA3 and GATA2 proteins are present in differentiated trophoblast giant cells, they might have both unique and shared target genes in those cells. Thus, two important experimental approaches would greatly expand our understanding of GATA factor function in the placenta: (i) determining trophoblast function in the absence of both GATA2 and GATA3, and (ii) identifying GATA target genes in trophoblast cells. Our analysis herein indicated that a GATA3-dependent nucleoprotein complex that contains FOG1 and Pol II preassembles at the −3.9- and +9.5-kb regions of the repressed (low levels of transcription) Gata2 locus in trophoblast stem cells. The coactivator CBP is also recruited at the +9.5-kb region of the repressed locus. During differentiation toward trophoblast giant cells, signals yield sufficient concentrations of GATA2 to competitively displace GATA3 from the −3.9- and +9.5-kb regions. This GATA3/GATA2 switch is accompanied by loss of FOG1, recruitment of MED1/TRAP220-containing Mediator complexes, and recruitment of Pol II to the promoter region, thereby establishing an active (high levels of transcription) Gata2 locus. In differentiated trophoblast giant cells, the GATA2-dependent nucleoprotein complex maintains the active Gata2 locus.
|
v3-fos-license
|
2020-03-19T10:26:55.980Z
|
2020-03-13T00:00:00.000
|
221378405
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://f1000research.com/articles/9-187/v2/pdf",
"pdf_hash": "341262c764c012ab94df95f3a2c6be1ff92f0efa",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43336",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "56fac943c296bc48e82ed286325bc873c526f00f",
"year": 2020
}
|
pes2o/s2orc
|
Evaluation of antioxidant, antibacterial and wound healing activities of Vitex pinnata
Background: Vitex pinnata is a popular ethnomedicinal plant, but scientific studies to validate its pharmacological properties are lacking. This study aims to determine the antioxidant, antimicrobial and wound healing properties of the methanolic extract of the leaves and of the hexane, chloroform and ethyl acetate fractions. Methods: The leaves of Vitex pinnata underwent methanol extraction, and the methanol extract was fractionated with hexane, chloroform and ethyl acetate solvents. The antioxidant activity was determined using the DPPH radical scavenging assay. The antimicrobial activity was assessed by disc diffusion assay against Staphylococcus aureus, Bacillus subtilis, Escherichia coli and Pseudomonas aeruginosa. For the wound healing studies, the methanolic extracts of V. pinnata were used to prepare ointments with compositions of 10% (w/w) and 50% (w/w), which were evaluated for wound healing activity in an excision wound model in Wistar rats. Results: All the extracts showed antioxidant activities, with the ethyl acetate extract having the highest DPPH radical scavenging activity, followed by the methanol, chloroform and hexane extracts. Their quercetin equivalent concentrations were 33.1, 31, 20.3 and 4.5 mg/mL, respectively. Except for the methanol extract, the disc diffusion assay showed that the extracts demonstrated species-specific antibacterial activities, with the ethyl acetate extract showing antibacterial activity against all four tested strains. The wound healing activity of the high dose treated group (50% [w/w]) showed a significant increase in wound contraction compared with the control group. Conclusion: In the current study, the ethyl acetate extract showed activity against all tested bacteria and also had the highest DPPH activity. The methanolic extracts of V. pinnata leaves show modest wound healing activity in an excision wound model.
Introduction
Vitex pinnata is a common ethnomedicinal plant locally known as 'kulimpapa' in Brunei Darussalam and East Malaysia. According to Goh et al. (2017), it is traditionally used as a treatment for stomach aches, fever, body aches and lesions. The bark may be used for abdominal pain, while the young shoots are utilized as sanitizing agents and deodorants. The roots and bark of this plant have been used in herbal baths and also taken orally in the form of a decoction or herbal tea, while the leaves may be eaten raw. Traditionally, the leaves were prepared into a poultice and applied to wounds to induce fast healing (Burkill et al., 1966; Goh et al., 2017; Sahu & Barik, 2007; Suksamrarn & Sommechai, 1993). Although V. pinnata is a popular ethnomedicinal plant, scientific studies are lacking to validate its pharmacological properties. It is not clear whether this plant has antioxidant, antimicrobial and wound healing properties.
Plants have been known to be a potent source of medicinal compounds and antioxidants. Oxidative stress, produced through chain reactions by free radicals, can be a contributing factor to the pathophysiology of various conditions including cardiovascular dysfunction, atherosclerosis, inflammation, carcinogenesis, reperfusion injury and neurodegenerative diseases (Aruoma, 1998). Due to increasing safety concerns with the consumption of synthetic antioxidants, alternative sources of antioxidants of natural origin, especially from plants, are currently in demand (Stankovic et al., 2016). This study therefore sought to determine whether V. pinnata could be a good source of antioxidants.
Although various antibacterial agents of synthetic origin are available, bacterial resistance to current antibacterial agents is growing (Andersson & Hughes, 2010). Furthermore, available antibiotics can also cause side effects. For this purpose, plants can be good sources of antimicrobial agents due to their secondary metabolites (Nascimento et al., 2000). Previous studies have reported the antifungal activity of V. pinnata (Ata et al., 2009); however, its antibacterial and antioxidant activities have not yet been reported. Therefore, this study also aims to screen its antibacterial activity against several bacterial strains.
In order to understand the possible effects of Vitex pinnata on wound healing, we used animal models to reflect human wound healing conditions. Wistar rats were selected in our study because of their availability. In addition, the use of Wistar rats provides a mainstream model for understanding the wound healing process upon treatment with the extract. The use of this species and the excision wound model also allows us to compare the wound healing process with other available literature involving a similar approach to ours. The scientific objective of this study is to determine the stage-by-stage process of wound healing, such as general wound morphology, wound closure and the presence of inflammatory cells, which can only be examined in an animal model.
In this study, we report the antibacterial, antioxidant and wound healing activities of V. pinnata from Brunei Darussalam.
Plant materials and preparation of extracts
The leaves of V. pinnata were collected at the Universiti Brunei Darussalam Botanical Research Centre (UBD BRC) in Brunei Darussalam. Species identification was kindly carried out by the botanist at the UBD BRC. A voucher specimen of V. pinnata (catalog ID S00059) is available in the UBD Herbarium (http://ubdherbarium.fos.ubd.edu.bn/). The plant samples were shade dried for a few weeks and ground prior to solvent extraction.
A mass of 300 g of the ground leaves was exhaustively extracted with 1.5 L of methanol using Soxhlet extraction. The resulting methanol extract was then vacuum filtered and evaporated using a rotary evaporator. The methanol extract was further partitioned with different solvents in increasing order of polarity i.e. hexane, chloroform and ethyl acetate. A total of 10 g of the methanol extract was redissolved in approximately 1 L of methanol and 10 mL of distilled water, followed by the addition of 500 mL of hexane. After shaking vigorously, the mixture was left to stand to allow layers to form between the solvents. Once formed, the hexane layer was collected and the procedure was repeated with the other solvents (chloroform and ethyl acetate). Each solvent was evaporated with a rotary evaporator and subsequently air dried in a fume hood.
The percentage yield of extract was determined using the formula: Extraction yield (%) = [Dry weight of extract obtained / Dry weight of material used for extraction] × 100%. Prior to the antioxidant and antibacterial tests, each extract was re-dissolved in methanol at 1000 μg/mL and 500 mg/mL, respectively.
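The yield calculation above is simple arithmetic; as a minimal illustration in Python (the 24 g extract mass is implied by the 8.0% yield from 300 g of leaves reported in the Results, not stated directly in the paper):

```python
def extraction_yield(dry_extract_g: float, dry_material_g: float) -> float:
    """Extraction yield (%) = dry extract mass / dry starting material mass * 100."""
    return dry_extract_g / dry_material_g * 100.0

# 24 g of methanol extract from 300 g of ground leaves gives the reported 8.0%
print(f"{extraction_yield(24.0, 300.0):.1f}%")
```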
DPPH radical scavenging assay

The antioxidant or radical scavenging activity (RSA) was determined by DPPH radical scavenging assay according to Awang-Jamil et al. (2019) with minor modifications. A volume of 0.2 mL of each extract was mixed with 1 mL of 40 μg/mL DPPH methanolic solution. For the control, 0.2 mL of methanol was used instead of the extract. After about 30 minutes, the RSA was determined by measuring the absorbance at 517 nm using a UV spectrophotometer. The assay was carried out in triplicate. The RSA (measured as the percentage of DPPH scavenged) was calculated using the formula: RSA (%) = [(A_control − A_sample) / A_control] × 100%, where A_control refers to the absorbance of the control and A_sample refers to the absorbance of the extract.
To measure the quercetin equivalent (QE) concentration of each extract, the RSA of quercetin at six different concentrations (1, 5, 10, 20, 25 and 35 μg/mL) was also measured in the same way. A standard calibration curve was prepared by plotting RSA (y-axis) against quercetin concentration (x-axis). Subsequently, the linear regression of the standard calibration curve (y = 2.74x + 2.77; R² = 0.997) was employed for the estimation of QE concentration.
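Both the RSA formula and the inversion of the quercetin calibration curve are easily scripted; the following is a minimal Python sketch (the absorbance values are illustrative placeholders, not measurements from the study, while the calibration coefficients are those of the reported curve y = 2.74x + 2.77):

```python
def rsa_percent(a_control: float, a_sample: float) -> float:
    """DPPH radical scavenging activity: RSA (%) = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

def quercetin_equivalent(rsa: float) -> float:
    """Invert the reported calibration curve RSA = 2.74 * c + 2.77 to get c (ug/mL)."""
    return (rsa - 2.77) / 2.74

# Illustrative absorbances for one extract measured in triplicate
a_control = 0.82
a_samples = [0.31, 0.33, 0.30]
rsa_values = [rsa_percent(a_control, a) for a in a_samples]
mean_rsa = sum(rsa_values) / len(rsa_values)
print(f"mean RSA = {mean_rsa:.1f}%, QE = {quercetin_equivalent(mean_rsa):.1f} ug/mL")
```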
Antibacterial assay
The microorganisms used in this study consisted of four different strains of bacteria: two Gram-positive bacteria, Staphylococcus aureus (ATCC-29213) and Bacillus subtilis (ATCC-11774); and two Gram-negative bacteria, Escherichia coli (ATCC-11775) and Pseudomonas aeruginosa (ATCC-27853). Each bacterial strain was grown in nutrient broth, and prior to the test, overnight bacterial culture was diluted to an absorbance of 0.08 to 0.1 (equivalent to 0.5 McFarland standard), measured with a UV spectrophotometer at 600 nm.
Antibacterial activities of the extracts were determined by the disc diffusion method according to Abdullah et al. (2019) with minor modifications. A volume of 10 μL of each extract was applied onto a filter paper disc (6 mm in diameter) and left to dry. Methanol was used as the negative control and 20 mg/mL streptomycin sulphate (antibiotic) as the positive control. Each standardised bacterial culture was then swabbed onto Mueller-Hinton agar (MHA) plates. The impregnated discs were then loaded onto the swabbed MHA plates. The zone of inhibition was recorded after incubating the plates at 37°C for 24 hours. Each test was carried out with at least three replicates.
Wound healing activity
Ethical statement. All work involving animals was approved by the Universiti Brunei Darussalam University Research Ethics Committee [approval reference: UBD/FOS/E2(g)]. All animal procedures were performed in accordance with Universiti Brunei Darussalam guidelines on the care and use of animals for research and with internationally accepted guidelines. All investigators declare that every effort was taken to ameliorate harm to the animals via close monitoring for signs of pain and distress; animals were attended to if signs of pain and distress were observed, in accordance with the Universiti Brunei Darussalam guidelines on the care and use of animals for research.
Animals. Male Wistar rats aged 8-10 weeks and weighing 150-200 g were used for the wound healing study. The animals were obtained from and housed in the Universiti Brunei Darussalam animal facility. The rats were fed a standard maintenance pellet diet (Cat # 1324, Altromin, Germany), and access to food and water was provided ad libitum. Animals were housed in cages with wood shavings as bedding and maintained under standard laboratory conditions (12/12 h light-dark cycle; 25-30°C). Prior to carrying out the experimental procedures, animals were assessed for their general health, determined by their behavioral responses to the handler and other general parameters such as a hands-on physical examination, the condition of the animal's body, and physical and observable abnormalities. The welfare of the animals was attended to throughout the experiments, for example by the provision of food and water and the avoidance by the handler of conditions that cause suffering, pain and disease. Animals with obvious signs of pain and distress during the experiment would be excluded from the study.
Control animals were housed three to four animals per cage; following the wound excision procedure, animals were placed in individual cages for the duration of the experiment. Upon completion of the experiments, animals were euthanized using CO2 asphyxiation followed by cervical dislocation prior to tissue collection for histological analysis.
Formulation of ointment.
The methanolic leaf extracts of V. pinnata were formulated into ointments with pure petrolatum jelly as the base. Herbal ointments were prepared in two doses: a low dose ointment containing 10% (w/w) extract and a high dose ointment containing 50% (w/w) extract. The maximum concentration that was stable in the base was 50% (w/w).
Excision wound.
All procedures were refined in order to minimize any negative impact and pain to the animals resulting from the excision wound procedure. All procedures were carried out in the Universiti Brunei Darussalam animal facility at the same time of day, between 10.00 and 13.00 hrs. Prior to making the excision wound, all treated animals were anaesthetized with diethyl ether by inhalation anesthesia. Diethyl ether was chosen as the anesthetic because its action is also accompanied by analgesic properties. Animals were exposed to one drop of diethyl ether placed on a cotton ball at one end of a conical tube; there was no close or direct contact between the cotton ball and the nose of the animal. Depth of anesthesia following exposure to diethyl ether was determined by responses to reflexes, such as voluntary movements in response to stimuli (e.g., extension of the legs). Once deep anesthesia had been established, the excision wounds were made as previously described (Umachigi et al., 2008). The dorsal fur of the animals was shaved and an area of about 100 mm² (10 × 10 mm) was excised using a sharp scalpel with disposable steel blades. Rats were then individually housed, and the ointment was applied topically to the wound area at the same time of day in the respective groups for a period of 21 days.
Animal grouping and dosing. Animals were randomly divided into three groups of six rats each. The number of animals (n) used in these experiments was based on previous studies in our laboratory and experiments of this nature (Nagar et al., 2016) and is also the minimum required to detect any significant difference between the treatment groups. Simple randomization was carried out when assigning the animals to groups in order to avoid any bias; allocation of animals to treated and control groups was carried out randomly during the early stages of the experiment, which reduces or eliminates the introduction of bias. Wound area determination was carried out with a validated instrument (ImageJ software, National Institutes of Health, USA), which minimizes observer subjectivity during the assessment of wound areas.
Following this, all animals within each group received the same treatment throughout the duration of the experiment. The rats of Group 1 were treated with pure petrolatum jelly only (negative control). Group 2 and Group 3 were treated with ointments containing 10% (w/w) leaf extract (low dose) and 50% (w/w) leaf extract (high dose), respectively. The ointment was topically applied to the wounds on alternate days for a period of three weeks.
Wound area determination. Wound area was measured using ImageJ software (National Institutes of Health, USA) (Figure 1). The rate of wound contraction was measured as the percentage reduction of wound size. The percentage of wound contraction was calculated as previously described (Shi et al., 2013): Wound contraction (%) = 100 × [(Original wound area − Wound area on the measured post-wounding day) / Original wound area].

Histological analysis. Skin tissue samples from the wound and its vicinity were taken for histopathological analysis at post-wounding days 3, 7, 14 and 21. Sections of 10 μm thickness were prepared, stained with hematoxylin and visualized.
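A minimal Python sketch of the wound-contraction calculation described above (the areas and days are illustrative placeholders, not data from the study):

```python
def wound_contraction(original_area: float, measured_area: float) -> float:
    """Wound contraction (%) = 100 * (original - measured) / original."""
    return 100.0 * (original_area - measured_area) / original_area

# Illustrative wound areas in mm^2 on post-wounding days 0, 7, 14 and 21
areas = {0: 100.0, 7: 62.0, 14: 18.0, 21: 0.0}
for day, area in areas.items():
    print(f"day {day}: {wound_contraction(areas[0], area):.1f}% contraction")
```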
Statistical analysis
Statistical analyses were conducted with one-way ANOVA and a post hoc Tukey's HSD test. These were performed using R 3.3.3 software. Values of p < 0.05 indicated statistical significance.
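The analysis was performed in R, but an equivalent one-way ANOVA with a post hoc Tukey HSD test can be sketched in Python with SciPy and statsmodels (the group values below are illustrative placeholders, not data from the study):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative wound-contraction percentages at one time point (three groups, n = 6)
control = np.array([55.2, 60.1, 58.4, 57.0, 59.3, 56.8])
low_dose = np.array([61.5, 63.2, 60.9, 62.7, 64.0, 61.1])
high_dose = np.array([70.3, 72.8, 69.5, 71.9, 73.1, 70.0])

# One-way ANOVA across the three groups
f_stat, p_value = f_oneway(control, low_dose, high_dose)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc Tukey HSD for all pairwise comparisons
values = np.concatenate([control, low_dose, high_dose])
labels = ["control"] * 6 + ["low"] * 6 + ["high"] * 6
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```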
Results and discussion
Extraction yields
Soxhlet extraction yielded 8.0% of methanol extract from 300 g of ground leaves. For solvent partitioning, 20.3% hexane, 62.1% chloroform and 8.9% ethyl acetate extracts were obtained from 10 g of methanol extract.
Antioxidant activities
Statistical analysis indicated that the RSA values of the different extracts were significantly different from each other. The ethyl acetate extract had the highest RSA and QE concentration, indicating that it was the most potent in inhibiting free radicals compared with the other extracts (Table 1). This was followed by the methanol extract, the chloroform extract and finally the hexane extract.
Different solvents have been reported to extract different compounds (Basri et al., 2017). It has previously been reported that phenolic content increases with the increasing polarity of the solvent (Barchan et al., 2014; Belyagoubi et al., 2016). The extracts from V. pinnata in the present study possibly contained different types of phenolic compounds with different antioxidant capacities. Zhang (2015) studied the effects of solvents on phytochemical contents and found that the extraction of total flavonoids significantly increased in polar solvents, which might contribute to the antioxidant activity in V. pinnata. However, in this study, ethyl acetate had higher antioxidant activity compared to the more polar methanol. This could mean that certain non-polar compounds, which probably dissolved better in ethyl acetate, might also contribute to the higher antioxidant activity.
In the present study, the ethyl acetate and hexane extract showed the highest and lowest radical scavenging activities amongst the extracts, respectively. Similarly, a previous study conducted by Vats (2012) on cowpea (Vigna unguiculata) also showed that ethyl acetate extract had the best DPPH radical scavenging activity. Moreover, the highest phenolic content was also found in the ethyl acetate extract, implying that most phenols were soluble in ethyl acetate. The lowest scavenging activity was in the hexane extract, which could be attributed to its low amount of phenolic content.
Closely related plant species have been shown to have similar phytochemical constituents (Diris et al., 2017). The antioxidant activities of V. pinnata's close relatives, V. agnus-castus, V. negundo and V. trifolia, have also been determined using various solvents, and the fruits and leaves of these Vitex plants were found to be significantly capable of scavenging free radicals (Rashed, 2013; Saklani et al., 2017), signifying that Vitex plants are potential sources of promising antioxidants.

Antibacterial activity
Detected zones of inhibition, as shown in Table 2, indicated the presence of antibacterial activities in the extracts of V. pinnata leaves at 500 mg/mL.
The results generally showed antibacterial activities in the hexane, chloroform and ethyl acetate extracts, with no antibacterial activity detected in the methanol extract. As shown in Table 2, the ethyl acetate extract could inhibit all four bacterial strains, compared with the chloroform extract, which inhibited three strains, and the hexane extract, which inhibited two.
[Table 2. Zones of inhibition of the methanol, hexane, chloroform and ethyl acetate extracts against each bacterial strain. The values are shown as average ± standard deviation of at least three replicates. The negative control did not show any inhibition zone, as expected, whereas the positive control (streptomycin sulphate) showed inhibition zones ranging from ~20 to 30 mm.]

Wound healing
All animals were continuously observed and assessed for their general health status throughout the duration of the experiment. We did not observe any adverse effects in these animals, such as infection of the excised region or behavioral changes, following the excision wounding or during the recovery period. The difference in wound area was observed from post-wounding day 7 in all experimental groups (Figure 2). The wound contraction at post-wounding day 7 was slightly higher for animals in the negative control group compared with the extract-treated groups (Figure 3). At post-wounding day 14, wound contraction in the extract-treated groups was considerably higher than in the control group, particularly in animals receiving the higher concentration of extract (Table 3). Complete wound closure was observed in both the low dose 10% (w/w) and high dose 50% (w/w) extract-treated groups at post-wounding day 21, indicating a faster rate of epithelialization compared with normal wound healing in the control group.
Our study demonstrated a dose dependent effect of the methanolic extract on wound healing, as evident from the wound closure observed for the 10% (w/w) vs 50% (w/w) extract treated animals. General wound morphology at post-wounding day 3 showed well-developed scab formation in all groups (Figure 2). We did not observe any adverse effects in either control or extract treated animals. At day 14, there was almost complete closure of the wound in the high dose group (Figure 2).
We observed a high density of inflammatory cells in the low dose and high dose treated tissues (Figure 4). The low dose and high dose treated animals showed epithelialization at day 7, whereas the control group showed epithelialization at day 14. Inflammatory cells become prominent during healing; these may include neutrophils, followed by macrophages and mast cells that emigrate from nearby tissues and play a crucial role in combating infection (Martin & Leibovich, 2005).
Based on histological observations, granulation tissue from all experimental groups displayed increased infiltration of fibroblast cells, indicative of active proliferation (Figure 4). Among all experimental groups, the high-dose treated group showed the greatest wound contraction at post-wounding day 14 (Table 3). At post-wounding day 21, the wound in the high-dose treated group showed a more densely packed organization of collagen and connective tissue compared with the low-dose treated and control groups.
Based on the wound contraction evaluation and histological analysis of the wound, our study has demonstrated a modest wound healing effect in the V. pinnata high-dose treated group, followed by the low-dose treated group, as compared with the control group (Figure 2). Phytochemical analysis reports have revealed that the petroleum ether, ethyl acetate, methanol and aqueous leaf extracts of V. pinnata contain varying amounts of alkaloids, anthocyanidins, aucubins, coumarins, flavonoids, flavanols, gallic tannins, iridoids, proteins, reducing compounds, steroids, triterpenoids and glycoside compounds, of which flavonoids appeared to be present in the highest quantity in all four extracts (Ramesh et al., 2013). The chemical constituents of its bark have also been reported to include flavonoids (Ata et al., 2009). These phytochemicals are known to possess antioxidant, anti-inflammatory and antimicrobial properties, which are responsible for the enhanced rate of wound healing observed (Shah & Amini-Nik, 2017).

[Table 3 note: Values are mean ± SEM; **p < 0.01 compared with control animals on the same treatment days. The experiment started with n = 6 animals per group; one animal was sacrificed for histological analysis at each of days 3, 7 and 14, leaving n = 3 animals at day 21, the final day of the experiment.]
Conclusion
The extracts from V. pinnata leaves showed the ability to scavenge free radicals, with the ethyl acetate extract being the most potent. All extracts except the methanol extract also exhibited species-specific antibacterial activities, with the ethyl acetate extract again being the most potent. Modest wound healing capability was also observed in the V. pinnata treated excision wounds. Study limitations include our inability to determine the relative absorption of the plant extract into the excision wound tissue site, which may have an effect on wound closure. Further optimization of the plant extract also needs to be carried out to elucidate factors such as toxicity and effective doses for human use.
We have focused on the effects of V. pinnata extract on an excision wound; the study could be extended to include other wound models, such as burns and incision wounds, in order to further ascertain the plant extract's wound healing capabilities.
Open Peer Review
So why was the ethyl acetate extract not tested for wound healing in this case?
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Natural products, Complementary and alternative medicine, vascular biology, cancer biology, non-communicable diseases.
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Version 1
Reviewer Report 20 July 2020 https://doi.org/10.5256/f1000research.23467.r67298 © 2020 Fitmawati F. This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Fitmawati Fitmawati
Department of Biology, Faculty of Mathematics and Natural Sciences, Riau University, Riau, Indonesia
Generally, this article provides novel information in the field of natural antioxidant sources from the tropics. The method used is good and detailed. The results have been explained well, as has the discussion. However, the abstract did not convey a compelling reason underlying this study, so the impression of the study is flat. I suggest writing three sentences that show the reasons for the importance of this research in the abstract and in the introduction. Then, explain fluently the strong relationship between the elements of the title, background, objectives, methods, results, and conclusions.
For example, in the abstract:
Vitex pinnata is one of the many plants known for its ethnomedicinal properties (choose the strong sentence for the first to give a strong reason in choosing this research).
In the same case in the introduction, make sure the word chosen is a strong word to underlie this research and is described logically and fluently.
Yoke-Keong Yong
Department of Human Anatomy, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Serdang, Malaysia
Section: Introduction
Problem statement not clear.
Section: Methods
Suggest to add another section on "chemicals & reagents", the authors should list all the chemicals and reagents used in this study by clearly stating the brand, company, and country.
Section: Results and discussion -Antioxidant activities
Authors stated and cited that phenolic content increased with the increasing polarity of the solvent and that extraction of total flavonoids significantly increased in polar solvents, which might contribute to the antioxidant activity (Barchan et al., 2014 and Zhang, 2015). However, in the current study ethyl acetate was shown to have higher antioxidant activities compared to methanol, although methanol's polarity is higher than that of ethyl acetate. Kindly justify this.
Section: Results and discussion -Antibacterial activity
○ The authors should include the results for a positive control (reference drug), as it enables a comparison with the other treatment groups.
○ In addition, there is no statistical analysis provided.
○ The concentration of the reference drug should be provided.
Section: Results and discussion -Wound healing
○ Kindly provide details of the figures and tables, such as SEM or SD.
○ Kindly re-run the statistical analysis for the data of Figure 3 (especially Day 14).
○ Caption for all the results (Table and Figure ...

Author response

Section: Results and discussion - Antioxidant activities
Authors stated and cited that phenolic content increased with the increasing polarity of the solvent and that extraction of total flavonoids significantly increased in polar solvents, which might contribute to the antioxidant activity (Barchan et al., 2014 and Zhang, 2015). However, in the current study ethyl acetate was shown to have higher antioxidant activities compared to methanol, although methanol's polarity is higher than that of ethyl acetate. Kindly justify this.
○ Response: We have revised the manuscript by adding: "However, in this study, ethyl acetate had higher antioxidant activity compared to the more polar methanol. This could probably mean that certain non-polar compounds, which probably dissolved better in ethyl acetate, might also contribute to the higher antioxidant activity."
Section: Results and discussion -Antibacterial activity
The authors should include the results for a positive control (reference drug), as it enables a comparison with the other treatment group.
Response: The results for the positive control are included, but one has to click Table 2; underneath the table, one will see "positive control (streptomycin sulphate) showed inhibition zones ranging from ~20 to 30 mm".

○ In addition, there is no statistical analysis provided.
○ Response: We prefer not to provide statistical analysis, as this is only meant as a screening test. A further method, such as the broth microdilution method, has to be carried out to determine which extract is more potent in its antibacterial activity.
The concentration of the reference drug should be provided.
○ Response: We have added "20 mg/ml" in the methods.
Section: Results and discussion - Wound healing
Kindly provide details of the figures and tables, such as SEM or SD.
Response
The ± SEM values have been included in the respective legends according to the data set.
Kindly re-run the statistical analysis for the data of Figure 3 (especially Day 14) as the Control group showed big SEM or SD, however, there was a significant difference comparing Control and High dose at P<0.01.
Response
We have rectified the error on our part in the initial figure, which occurred during the drawing of the error bars. According to Table 3, the ± SEM was 0.3. This has been rectified in the latest version of Figure 3.
Sample size of animal=3, which I doubt is enough for statistical analysis, minimum should have been at least 6 for each group.
Response
We have not conducted a power analysis to determine the minimal number of animals for the study and used a number of animals based on previously reported studies of a similar nature. We agree that having more animals per group would produce more sound statistical outcomes and that this is a major shortcoming of our study. However, we do feel that the results of our findings are noteworthy.
Kindly justify where is the positive group.
Response
In this study we have not included a positive control group. The high dose treated animals showed a higher percentage of wound contraction at Day 14 compared with the normal untreated group, and this difference was significant.
The authors stated the study demonstrated dose dependent effect; however, I can't see any dose dependent data. Kindly justify.
Response
The dose dependent effect in the context of our work refers to the effects observed for the high dose (50% (w/w)) and low dose (10% (w/w)) leaf extract groups, where there is a difference in the percentage contraction of the wound.
Lack of labeling in Figure 4.
Other comments:
○ Tukey's test is to compare all groups; however, the way the authors presented it was more like Dunnett's test, which compares against the control only.

Response
The analysis was conducted to compare the wound contractions of the treated groups vs the control.
|
v3-fos-license
|
2019-03-25T13:46:54.966Z
|
2019-03-01T00:00:00.000
|
85516837
|
{
"extfieldsofstudy": [
"Mathematics",
"Medicine",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/19/6/1413/pdf",
"pdf_hash": "f8dab8bd52cf6df8ef1860a0dabc3f2ee43abff7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43337",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "f8dab8bd52cf6df8ef1860a0dabc3f2ee43abff7",
"year": 2019
}
|
pes2o/s2orc
|
A Novel Underdetermined Blind Source Separation Method and Its Application to Source Contribution Quantitative Estimation
To identify the major vibration and radiation noise, a source contribution quantitative estimation method is proposed based on underdetermined blind source separation. First, the single source points (SSPs) are identified by directly searching the identical normalized time-frequency vectors of mixed signals, which can improve the efficiency and accuracy in identifying SSPs. Then, the mixing matrix is obtained by hierarchical clustering, and source signals can also be recovered by the least square method. Second, the optimal combination coefficients between source signals and mixed signals can be calculated based on minimum redundant error energy. Therefore, mixed signals can be optimally linearly combined by source signals via the coefficients. Third, the energy elimination method is used to quantitatively estimate source contributions. Finally, the effectiveness of the proposed method is verified via numerical case studies and experiments with a cylindrical structure, and the results show that source signals can be effectively recovered, and source contributions can be quantitatively estimated by the proposed method.
Introduction
Vibration and radiation noise have a significant effect on the safety and stability of some mechanical systems [1,2]; for example, excessive noise from an underwater vehicle will interfere with its own detection accuracy. Independently acquiring information from each source of a mechanical system can help to quickly judge its running state. However, in practice, the information measured by sensors is a superposition of several sources, because different components of the mechanical system interfere with each other, which makes it difficult to measure the source information directly [3]. Therefore, supplementary signal processing methods are needed to further process the collected information to obtain the expected source signals [1]. Among post-processing approaches, blind source separation (BSS) has demonstrated its usefulness in separating sources from mixed signals, due to its simplicity and effectiveness. More importantly, BSS can be utilized without the structural models and the transmission paths that are difficult to obtain, and therefore BSS has been widely used in practice [4-9]. However, most of these methods are mainly designed for (over)determined BSS, where the number of sensors is no smaller than that of sources, and thus they may fail when dealing with underdetermined cases. Therefore, we mainly address the problem of underdetermined BSS (UBSS), where the number of mixed signals is smaller than that of sources. In addition, reducing the vibration of the major sources rather than all sources can achieve satisfactory results at smaller cost [1,10]. Therefore, how to evaluate source contributions quantitatively is another problem to be addressed in our study.
Basic Theory
The linear instantaneous mixed model of UBSS can be expressed as

x(t) = A s(t),   (1)

where x(t) = [x_1(t), x_2(t), ..., x_N(t)]^T and s(t) = [s_1(t), s_2(t), ..., s_M(t)]^T are the mixed vector and the source vector in the time domain, respectively, and (·)^T represents the transpose operation; N and M (N < M) are the numbers of mixed signals and source signals, respectively; A = [a_1, a_2, ..., a_M] is the mixing matrix with a_i as its i-th column. The aim of UBSS is to estimate source signals without any prior information of s(t) or A, except that N < M.
To increase the sparsity of source signals, the above linear instantaneous mixed model can be transformed into the time-frequency (TF) domain by the short-time Fourier transform (STFT) as

X(t, f) = A S(t, f),   (2)

or, equivalently,

X(t, f) = sum_{i=1..M} a_i S_i(t, f),   (3)

where X(t, f) and S(t, f) are the STFT coefficient vectors of the mixed signals and the source signals at TF point (t, f), respectively.
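As a rough illustration of the model and its TF-domain form, the following Python sketch builds an underdetermined mixture (N = 2, M = 3) and applies an off-the-shelf STFT (scipy.signal.stft); the sampling rate, sources and mixing matrix are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.signal import stft

fs = 1024                      # illustrative sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)

# Illustrative case: M = 3 sources, N = 2 mixed signals (underdetermined)
S = np.vstack([np.sin(2 * np.pi * 50 * t),
               np.sign(np.sin(2 * np.pi * 120 * t)),
               0.1 * np.random.randn(t.size)])
A = np.array([[1.0, 0.6, 0.3],
              [0.4, 0.9, 0.8]])   # 2 x 3 mixing matrix
X = A @ S                          # mixed signals, Equation (1)

# STFT of each mixed signal gives the TF-domain model of Equations (2)-(3)
f_bins, t_frames, X_tf = stft(X, fs=fs, nperseg=256)
print(X_tf.shape)                  # (N, n_freq_bins, n_frames)
```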
Proposed Mixing Matrix Estimation Method
The ideal goal of UBSS is to estimate source signals without any prior information, except that N < M. Actually, it seems almost impossible to obtain an effective estimation of source signals if we know nothing about s(t) or A. Therefore, some assumptions are given first.
Assumption 1. For each source signal s_i(t), there are some TF points (t, f) where only S_i(t, f) is dominant, i.e., |S_i(t, f)| >> |S_j(t, f)|, for all j != i.

Assumption 2. Source signals are mutually independent.

Assumption 3. Any N × N submatrix of the mixing matrix A is of full rank.
These three assumptions could hold in many practical cases and have been widely used in recent UBSS methods. Assumption 1 is used to guarantee the existence of SSPs in the process of mixing matrix estimation. However, Assumption 1 is not necessary for recovering source signals, that is, if the mixing matrix is known or can be estimated by other methods, Assumption 1 can be removed. Assumption 2 and Assumption 3 are used to increase the stability of the SSPs identification method. Assumption 2 is also used to guarantee that there is no cross energy among source signals, which will be used in the source contribution estimation. Besides, Assumption 3 is also used to guarantee that all source signals could be correctly recovered. Now, at any TF point, say (u, v), if only one source is active, say S i (u, v), i.e., (u, v) is an SSP corresponding to s i (t), then Equation (3) can be rewritten as Equation (4) shows that the TF vector of mixed signals at TF point (u, v) is collinear with the i-th column of the mixing matrix. It can be also obtained that all TF vectors of mixed signals at SSPs corresponding to s i (t) will be collinear with a i , that is, all SSPs corresponding to the same active source could be linearly represented by each other. Assume that S i (ψ, ω) is also an SSP corresponding to s i (t), then we will obtain where r is a real coefficient. Now, the problem is how to identify the TF vectors that satisfy Equation (5) from all TF vectors of mixed signals.
From Equation (4), X(u, v) can be normalized as

X~(u, v) = X(u, v) / ||X(u, v)||_2 = a~_i,   (6)

where (~) represents the normalized vector and ||·||_2 is the 2-norm. Similarly, the normalized vector of X(ψ, ω) can also be written as

X~(ψ, ω) = a~_i.   (7)

As shown in Equations (6) and (7), all the normalized TF vectors at the SSPs corresponding to the i-th dominant source will be equal to the normalized vector of a_i. Therefore, SSPs can be identified by searching for identical normalized TF vectors of the mixed signals, i.e.,

X~(u, v) = X~(ψ, ω).   (8)

Equations (5) and (8) are equivalent. Therefore, SSPs can be identified by checking whether normalized TF vectors are identical or not. As all vectors have been normalized, they will be identical if the directions of the vectors are the same. The cosine of the angle between X(u, v) and X(ψ, ω) can be calculated by

cos θ = ⟨X(u, v), X(ψ, ω)⟩ / (||X(u, v)||_2 ||X(ψ, ω)||_2),   (9)

where ⟨X(u, v), X(ψ, ω)⟩ is the scalar product of X(u, v) and X(ψ, ω). Therefore, Equation (8) will hold if

|cos θ| = 1.   (10)

Noise effects were not considered in the above derivation. In noisy environments, we cannot find SSPs that exactly satisfy Equation (10). Instead, we can obtain SSPs from the following criterion:

1 − |cos θ| < δ_1,   (11)

where |·| is the absolute value and δ_1 is an SSP threshold close to zero. Therefore, both X(u, v) and X(ψ, ω) are regarded as SSPs if they satisfy Equation (11). As stated in [20], most of the signal energy will be concentrated in nearly 10% of the frequency bins. Therefore, in our study, the frequency bins are sorted in descending order according to their variances, and only the N_f frequency bins with larger variance are selected for identifying SSPs. Moreover, we recommend that the data be segmented when the sampling length is very large; the results obtained in different segments can be combined by the similarity of the signal itself.
The SSP threshold δ_1 has a large effect on the accuracy of SSPs, and we now discuss how to choose it. If δ_1 is too small, the accuracy of SSPs will increase; however, the number of SSPs identified each time will decrease, so the efficiency of identifying enough SSPs will decline. Too small a threshold can even lead to insufficient SSPs. Conversely, if δ_1 is too large, the criterion becomes loose and too many outliers will be misjudged as SSPs, which will reduce the accuracy of mixing matrix estimation and source recovery. Since δ_1 is related to the properties of the source signals, it is hard to give a unified range for all kinds of signals. A feasible approach that considers both efficiency and accuracy is to set a smaller threshold δ_1 and a minimum number N_min-SSPs of identified SSPs. If the number N_SSPs of extracted SSPs is smaller than N_min-SSPs, the threshold is doubled. When source signals contain large noise or are not very sparse in the TF domain, the threshold will gradually increase, which reduces the effect of choosing an unsuitable initial threshold.
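A minimal NumPy sketch of the SSP search of Equations (9)-(11), including the adaptive threshold doubling described above and the low-energy pre-filtering discussed in the next paragraph (all default threshold values are illustrative; the pairwise comparison is workable because only the N_f high-variance frequency bins are searched):

```python
import numpy as np

def find_ssps(X_tf, delta1=0.01, delta2=0.1, n_min_ssps=200):
    """Identify single source points among TF vectors (columns of X_tf, shape N x K)."""
    # Drop low-energy TF vectors, which are easily dominated by noise
    norms = np.linalg.norm(X_tf, axis=0)
    keep = norms > delta2 * norms.mean()
    V = X_tf[:, keep] / norms[keep]              # normalized TF vectors, Eq. (6)

    while True:
        # |cos| of the angle between every pair of normalized vectors, Eq. (9)
        cos = np.abs(np.conj(V).T @ V)
        np.fill_diagonal(cos, 0.0)
        # Keep a vector if some other vector shares its direction, Eq. (11)
        is_ssp = (1.0 - cos.max(axis=1)) < delta1
        if is_ssp.sum() >= n_min_ssps or delta1 > 0.5:
            return V[:, is_ssp]
        delta1 *= 2.0                            # adaptive threshold doubling
```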
In general, TF vectors with negligible energy are greatly influenced by noise, which can easily lead to the misjudgment of SSPs. To obtain effective SSPs, these vectors should be removed before identifying SSPs if they satisfy

||X(t, f)||_2 < δ_2 X̄_2,   (12)

where δ_2 is a threshold close to zero and X̄_2 represents the average 2-norm of all TF vectors. After identifying SSPs, the next stage is to estimate the mixing matrix. It can be seen from Equations (6) and (7) that the identified SSPs form the set of normalized column vectors of the mixing matrix. Therefore, the mixing matrix can be estimated by clustering these TF vectors, and the hierarchical clustering technique [21, 22] is used here. It should be noted that this clustering technique may not be the best algorithm to cluster SSPs, as other algorithms can also be used [23]. More details on adjusting the cluster number can be found in [17]. As studied in [17], the mixing matrix estimation error can be further reduced by removing points that lie away from the mean direction of their cluster. This strategy is also used in our study, and the outlier detection rule is the same as in [17]. By re-clustering SSPs after elimination of the outliers, each column of Â can be obtained by calculating the center of each cluster.
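A sketch of the clustering step using SciPy's hierarchical clustering, assuming real-valued SSP direction vectors stacked as columns and a known number of sources M (the outlier-removal refinement of [17] is omitted for brevity):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def estimate_mixing_matrix(ssp_vectors, n_sources):
    """Estimate A-hat (N x M) by hierarchical clustering of normalized SSP vectors."""
    V = ssp_vectors.copy()
    # Resolve the +/- sign ambiguity of normalized directions by flipping
    # each column so that its largest-magnitude entry is positive
    flip = V[np.abs(V).argmax(axis=0), np.arange(V.shape[1])] < 0
    V[:, flip] *= -1.0

    # Agglomerative clustering on cosine distance between SSP directions
    Z = linkage(V.T, method="average", metric="cosine")
    labels = fcluster(Z, t=n_sources, criterion="maxclust")

    # Each column of A-hat is the (renormalized) center of one cluster
    A_hat = np.column_stack([
        V[:, labels == k].mean(axis=1) for k in range(1, n_sources + 1)
    ])
    return A_hat / np.linalg.norm(A_hat, axis=0)
```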
Source Recovery
Assumption 4. At each TF point, the number of source signals is smaller than that of mixed signals.
Even though $\hat{A}$ is known, the solution of the system in Equation (1) is not unique. Under Assumption 3 and Assumption 4, the source signals can be recovered by solving a series of least-squares problems [19], which minimize the reconstruction error by selecting the optimal N × (N − 1) submatrix of $\hat{A}$. Let $\mathcal{A}$ be the set of all N × (N − 1) submatrices of $\hat{A}$, that is,

$$\mathcal{A} = \left\{ \left[\hat{a}_{\phi_1}, \hat{a}_{\phi_2}, \ldots, \hat{a}_{\phi_{N-1}}\right] \;\middle|\; 1 \le \phi_1 < \phi_2 < \cdots < \phi_{N-1} \le M \right\}.$$

Then, for each TF point (t, k), there exists $A_* = \left[\hat{a}_{\phi_1}, \hat{a}_{\phi_2}, \ldots, \hat{a}_{\phi_{N-1}}\right]$ that minimizes the residual

$$\left\| X(t, k) - A_* A_*^{\dagger} X(t, k) \right\|_2,$$

where † denotes the pseudo-inverse of a matrix. The source signals are then estimated by

$$\hat{s}_{\phi_j}(t, k) = e_j, \;\; j = 1, \ldots, N-1, \qquad \hat{s}_i(t, k) = 0 \;\text{ otherwise},$$

where $e = [e_1, e_2, \ldots, e_{N-1}]^T = A_*^{\dagger} X(t, k)$, and $A_*$ is obtained by

$$A_* = \arg\min_{A \in \mathcal{A}} \left\| X(t, k) - A A^{\dagger} X(t, k) \right\|_2.$$

Finally, the time-domain estimates of the source signals $\hat{S}(t)$ are easily obtained by the inverse STFT.
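A brute-force sketch of this per-TF-point recovery, enumerating all N × (N − 1) submatrices (feasible when M is small); the function name is our own:

```python
import itertools
import numpy as np

def recover_sources_at_tf(x_tk, A_hat):
    """Recover sources at one TF point via the best N x (N-1) submatrix.

    x_tk: complex mixture vector of length N; A_hat: (N, M) mixing matrix."""
    N, M = A_hat.shape
    best_residual, best_cols, best_e = np.inf, None, None
    for cols in itertools.combinations(range(M), N - 1):
        A_sub = A_hat[:, cols]
        e = np.linalg.pinv(A_sub) @ x_tk              # least-squares coefficients
        residual = np.linalg.norm(x_tk - A_sub @ e)   # reconstruction error
        if residual < best_residual:
            best_residual, best_cols, best_e = residual, cols, e
    s_hat = np.zeros(M, dtype=complex)
    s_hat[list(best_cols)] = best_e                   # inactive sources stay zero
    return s_hat
```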
Proposed Source Contribution Estimation Method
Unlike determined BSS, in UBSS the mixed signals usually cannot be linearly represented by the estimated source signals, due to noise and separation errors; i.e., there exists a residual between the mixed signals and the estimated source signals. Therefore, the i-th mixed signal $x_i$ can be expressed as

$$x_i = w_i^T \hat{S} + z_i, \qquad (17)$$

where $w_i = [w_{i1}, w_{i2}, \ldots, w_{iM}]^T$ represents the combination coefficients and $z_i$ represents the residual. It should be noted that $x_i \in \mathbb{R}^{1 \times T}$ is the whole discrete sequence of the i-th mixed signal, $\hat{S} = \left[\hat{s}_1^T, \hat{s}_2^T, \ldots, \hat{s}_M^T\right]^T$ collects the whole discrete sequences of the M estimated source signals, and $z_i \in \mathbb{R}^{1 \times T}$ is the i-th residual signal, with the same dimension as $x_i$. The question then arises of how much of $\hat{S}$ is contained in $x_i$. This problem can be addressed based on minimum residual error energy, i.e.,

$$\min_{w_i} \|z_i\|_2^2 = \min_{w_i} \left\| x_i - w_i^T \hat{S} \right\|_2^2. \qquad (18)$$

Therefore, the problem becomes finding the optimal coefficients in Equation (18).
Let $f(w_i) = \|z_i\|_2^2$; it is easy to see that $f(w_i)$ is a continuously differentiable function. From Equation (17), we obtain

$$f(w_i) = \left( x_i - w_i^T \hat{S} \right) \left( x_i - w_i^T \hat{S} \right)^T = x_i x_i^T - 2\, w_i^T \hat{S} x_i^T + w_i^T \hat{S} \hat{S}^T w_i.$$

The derivative of $f(w_i)$ with respect to $w_i$ is

$$\nabla_{w_i} f(w_i) = -2\, \hat{S} x_i^T + 2\, \hat{S} \hat{S}^T w_i.$$

Setting $\nabla_{w_i} f(w_i) = 0$ yields

$$w_i^* = \left( \hat{S} \hat{S}^T \right)^{-1} \hat{S} x_i^T. \qquad (21)$$

From Assumption 2, the source signals are mutually independent; therefore, the estimated source signals are also approximately mutually independent, i.e., the rank of $\hat{S}$ is M. Hence the rank of $\hat{S}\hat{S}^T$ is also M, so $\hat{S}\hat{S}^T$ has full rank and $f(w_i)$ has only one stationary point. The Hessian matrix of $f(w_i)$,

$$\nabla^2_{w_i} f(w_i) = 2\, \hat{S} \hat{S}^T, \qquad (22)$$

is nonnegative definite. Therefore, from Equations (21) and (22), the minimum of $f(w_i)$ is attained at $w_i^* = (\hat{S}\hat{S}^T)^{-1} \hat{S} x_i^T$. Based on the above analysis, the optimal combination coefficients of $\hat{S}$ for all mixed signals can be obtained, and the source contributions can then be quantitatively estimated using $w^*$.
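In code, the closed-form solution of Equation (21) is a single linear solve (a sketch, not the paper's implementation):

```python
import numpy as np

def optimal_coefficients(x_i, S_hat):
    """Closed-form minimiser of ||x_i - w^T S_hat||^2, Eq. (21).

    x_i: (T,) mixed signal; S_hat: (M, T) estimated sources."""
    # Solving (S S^T) w = S x_i is cheaper and better conditioned than
    # forming the matrix inverse explicitly.
    return np.linalg.solve(S_hat @ S_hat.T, S_hat @ x_i)
```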
Subtracting the contribution of the j-th estimated source from the i-th mixed signal gives

$$x_{i-j} = x_i - w_{i,j}^* \hat{s}_j, \quad i = 1, 2, \ldots, N, \;\; j = 1, 2, \ldots, M,$$

where $x_{i-j}$ represents the vector $x_i$ with the contribution of $\hat{s}_j$ subtracted, and $w_{i,j}^*$ is the j-th element of $w_i^*$. From Assumption 2, there is no cross energy among the source signals, so the contribution $C_{ij}$ of the j-th estimated source signal $\hat{s}_j$ to the i-th mixed signal $x_i$ is calculated as the relative decrease in energy,

$$C_{ij} = \frac{\|x_i\|_2^2 - \|x_{i-j}\|_2^2}{\|x_i\|_2^2}.$$

Generally, due to noise and estimation error, the sum of the contributions of all estimated source signals to a mixed signal is not equal to 1, which differs from complete BSS. A negative $C_{ij}$, i.e., $\|x_{i-j}\|_2^2 > \|x_i\|_2^2$, implies that the j-th estimated source signal decreases the overall vibration energy.
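A sketch of the contribution computation under the energy-ratio form above (our reading of the garbled formula, consistent with the experimental definition of contribution used later in the paper):

```python
import numpy as np

def source_contributions(x_i, S_hat, w_star):
    """Energy-ratio contributions C_ij of each estimated source to x_i."""
    total_energy = np.sum(np.abs(x_i) ** 2)
    C = np.empty(S_hat.shape[0])
    for j in range(S_hat.shape[0]):
        x_minus_j = x_i - w_star[j] * S_hat[j]   # remove the j-th contribution
        C[j] = (total_energy - np.sum(np.abs(x_minus_j) ** 2)) / total_energy
    return C
```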
The flowchart of the proposed UBSS-based source contribution estimation method is shown in Figure 1.
Performance of the Proposed UBSS Method
In this section, we evaluate the separation performance of the proposed UBSS method with different sample sizes and different numbers of mixed signals. Numerical case studies are conducted using five artificial source signals: s₁(t) is a low-frequency sinusoidal wave; s₂(t) is a high-frequency sinusoidal wave; s₃(t) is a periodic wave with amplitude modulation; s₄(t) is a shock attenuation wave; and s₅(t) is also a periodic wave with amplitude modulation. The generating functions of the source signals are listed in Equation (25). Two, three and four mixed signals are generated from these five source signals. In each case, the averages of 50 Monte Carlo simulations are used to evaluate the performance of the proposed method, and in each simulation, Gaussian white noise with SNR = 10 dB is independently added to each source signal. The sampling frequency is 10 kHz. In the proposed method, the window length is 1024 and the window overlap is 256, the number of selected frequency bins is N_f = 80, the initial SSP threshold is δ₁ = 0.0001, the minimum number of SSPs is N_min-SSPs = 300 and the energy threshold is δ₂ = 0.1.
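For reproducibility, adding white noise at a prescribed SNR can be done as in the following sketch (our own helper, not from the paper):

```python
import numpy as np

def add_white_noise(signal, snr_db, rng=None):
    """Add Gaussian white noise so the result has the requested SNR in dB."""
    rng = rng or np.random.default_rng()
    noise_power = np.mean(signal ** 2) / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
```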
To quantitatively verify the performance of the proposed method, the SNRs of $\hat{A}$ and $\hat{s}(t)$ are calculated by Equations (26) and (27), respectively:

$$\text{SNR}(\hat{a}_i) = 10 \log_{10} \frac{\|a_i\|_2^2}{\|a_i - \hat{a}_i\|_2^2}, \qquad (26)$$

where $a_i$ and $\hat{a}_i$ are the i-th columns of A and $\hat{A}$, respectively, and

$$\text{SNR}(\hat{s}_i) = 10 \log_{10} \frac{\|s_i(t)\|_2^2}{\|s_i(t) - \varsigma\, \hat{s}_i(t)\|_2^2}, \qquad (27)$$

where ς is a scalar that accounts for the scale indeterminacy. The average SNRs of the estimated mixing matrix and the estimated source signals are shown in Figure 2a,b, respectively. From Figure 2a, the average SNRs of the estimated mixing matrix increase with the sample size. However, from Figure 2b, the average SNRs of the estimated source signals remain nearly unchanged as the sample size increases. This is because the average SNRs of the estimated mixing matrix already exceed 40 dB for a sample size of 10,000, meaning the estimated mixing matrix is nearly identical to the true one. Figure 2 also shows that the separation performance improves with the number of mixed signals. Although the average SNRs of the estimated mixing matrix with two mixtures are nearly the same as with three mixtures, the average SNRs of the estimated source signals differ considerably between these two cases. This is because, by Assumption 4, the number of active sources at each TF point must be smaller than the number of mixed signals, so with two mixtures at most one source may be active at each TF point. This restriction is too strict, leading to worse separation of the source signals.
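The two metrics can be computed as below; resolving the scale indeterminacy by a least-squares fit of ŝ to s is our assumption about how ς is chosen:

```python
import numpy as np

def snr_column_db(a, a_hat):
    """Eq. (26): SNR of one estimated mixing-matrix column."""
    return 10 * np.log10(np.sum(a ** 2) / np.sum((a - a_hat) ** 2))

def snr_source_db(s, s_hat):
    """Eq. (27): SNR of one estimated source after removing the scale
    indeterminacy; the optimal scalar is the least-squares fit of s_hat to s."""
    sigma = np.dot(s, s_hat) / np.dot(s_hat, s_hat)
    return 10 * np.log10(np.sum(s ** 2) / np.sum((s - sigma * s_hat) ** 2))
```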
Performance of the Proposed Source Contribution Estimation Method
In order to validate the effectiveness of the proposed source contribution quantitative estimation method, the following simulations are conducted. The source signals are the first four signals in Equation (25), mixed by a fixed mixing matrix A. The sampling frequency and sampling length are 10 kHz and 1 s, respectively. One hundred Monte Carlo simulations are conducted to evaluate the performance of the proposed method. In each simulation, Gaussian white noise is independently added to each source signal and each mixed signal, with SNR = 10 dB and SNR = 15 dB, respectively.
The performance of the proposed UBSS method is compared with Reju's method [17] and Zhen's method [19]. Since Reju's method is designed only for mixing matrix estimation, it cannot recover source signals; therefore, the mixing matrix estimated by Reju's method is fed into Zhen's method to estimate the source signals. The parameters of the different methods are as follows. In all methods, the Hanning window is used in the STFT, with a window length of 1024 and a window overlap of 256. In Zhen's method, the regularization parameter is λ = 0.001 and the energy threshold is δ₂ = 0.1. In Reju's method, the parameter ∆θ is set to 1.5° and the number of selected frequency bins is N_f = 80. The parameters of the proposed method are the same as those in Section 4.1.
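With SciPy, the shared STFT configuration looks as follows; the test tone is a hypothetical stand-in for one mixture channel:

```python
import numpy as np
from scipy.signal import stft

fs = 10_000                                 # sampling frequency (Hz)
t = np.arange(fs) / fs                      # 1 s of data
x = np.sin(2 * np.pi * 23 * t)              # stand-in for one mixture channel
# Hanning window, length 1024, overlap 256 -- the settings used by all methods.
freqs, frames, X = stft(x, fs=fs, window="hann", nperseg=1024, noverlap=256)
print(X.shape)                              # (frequency bins, time frames)
```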
One example of the separation results is as follows. Waveforms and Fourier spectra of the source signals are displayed in Figure 3; the major frequencies of the source signals can be read directly from Figure 3b. The major frequencies of s₁(t), s₂(t) and s₄(t) are 23 Hz, 281 Hz and 43 Hz, respectively, while the major frequencies of s₃(t) are 95 Hz and 115 Hz. Waveforms and Fourier spectra of the mixed signals are shown in Figure 4. From Figure 4a, the mixed signals are superpositions of the source signals, so the waveforms of the source signals cannot be obtained directly. From Figure 4b, the major frequencies of the source signals can be found in each Fourier spectrum of the mixed signals, but the frequencies of s₄(t) are overwhelmed by those of the other source signals. Therefore, signal processing methods are needed to estimate all source signals.
The absolute differences between A and $\hat{A}$ are given in Equation (30), illustrating that the mixing matrix has been well estimated, because each of the absolute differences is very small. Source signals estimated by the proposed method, Zhen's method and Reju's method are displayed in Figures 5-7, respectively. The order of $\hat{s}(t)$ has been adjusted to match $s(t)$. Comparing Figure 5a with Figure 3a, the waveforms of $\hat{s}(t)$ are quite similar to those of $s(t)$. From the Fourier spectra of $\hat{s}(t)$, the major frequencies of $s(t)$ have been well recovered, which validates the effectiveness of the proposed UBSS method. As revealed by Figure 6a, the waveforms of $s(t)$ also appear to be well recovered by Zhen's method. However, as shown in Figure 6b, there is an interference frequency of 23 Hz in the Fourier spectrum of $\hat{s}_3(t)$, and interference frequencies of 95 Hz, 115 Hz and 281 Hz in the Fourier spectrum of $\hat{s}_4(t)$, indicating that $\hat{s}_3(t)$ and $\hat{s}_4(t)$ were not well estimated. Figure 7 shows that s₄(t) is not estimated at all by Reju's method.
Average SNRs of $\hat{A}$ over 100 Monte Carlo simulations for the different methods are listed in Table 1, from which we can see that the SNRs of $\hat{A}$ estimated by the proposed method are larger than those estimated by Zhen's method and Reju's method. The average SNRs over all columns of the mixing matrix estimated by Zhen's method, Reju's method and the proposed method are 18.12 dB, 32.41 dB and 40.65 dB, respectively, implying that the proposed method estimates the mixing matrix more accurately. Table 2 shows the average SNRs of $\hat{s}(t)$ over 100 Monte Carlo simulations for the different methods. As can be seen in Table 2, all SNRs of $\hat{s}(t)$ estimated by the proposed method are also larger than those estimated by Zhen's method and Reju's method. The average SNRs over all sources for Zhen's method, Reju's method and the proposed method are 8.41 dB, 9.17 dB and 11.66 dB; that is, the average SNR increments of the proposed method are 38.72% and 27.18% relative to Zhen's method and Reju's method, respectively. These results tend to validate that the proposed UBSS method performs more effectively than Zhen's method and Reju's method.
Table 2. Average SNRs (dB) of the estimated source signals ŝ₁(t)-ŝ₄(t) and over all sources, for each method.

The running time is used to evaluate the efficiency of the methods. The CPU of the computer is an Intel Core i5-4590 at 3.30 GHz and the RAM is 16 GB of 1333 MHz DDR3. The average time costs of the proposed method, Zhen's method and Reju's method are 1.79 s, 14.22 s and 0.17 s, respectively. The main difference between these three methods is the SSP identification process, which is the main cause of the significant difference in time cost. Reju's method identifies SSPs from single TF points, and only TF vectors in frequency bins with larger variance are considered, so its time cost is the lowest. In Zhen's method and the proposed method, SSPs must be identified between pairs of TF vectors, which requires more computation. However, in the proposed method SSPs are likewise identified only in frequency bins with larger variance, and they are identified directly by searching for identical normalized TF vectors instead of finding the sparsest coefficients. Therefore, the time cost of the proposed method is lower than that of Zhen's method.

Table 3 shows the average results of the quantitative source contribution estimation for the different methods, together with the real contributions. The source contributions of the proposed method are clearly closer to the real source contributions than those of Zhen's method and Reju's method. The average absolute errors of the source contributions are also calculated and listed in Table 4. As revealed by the data in Table 4, most of the average contribution errors of the proposed method are the smallest among the three methods, implying that the proposed method has higher accuracy in source contribution estimation. All contribution errors of the proposed method are below 1.80%, whereas three contribution errors exceed 10% for Zhen's method and three exceed 4% for Reju's method. Accurate source estimation is the premise for correct contribution estimation. It can therefore be concluded that the proposed method performs more effectively in recovering source signals and quantitatively estimating source contributions.

Table 3. Average contribution comparison of estimated source signals.

Table 4. Comparison of average contribution errors of estimated source signals.
Experimental Study with Cylindrical Structure
Some practical mechanical systems, or sections of them, have the shape of cylindrical shells, such as underwater vehicles. Generally, the sound radiation of underwater vehicles strongly interferes with their performance and safety. Therefore, it is quite important for underwater vehicles to reduce their radiated noise in order to accomplish tasks successfully. To this end, the sound sources must first be estimated. When the number of sensors is smaller than that of sources, UBSS is an excellent method for estimating the sources. Therefore, a test bed with a cylindrical shell structure is used to examine the effectiveness of the proposed method.
In the experiments, an adjustable-speed motor is used as a vibration source, and an eccentric mass disc driven by the motor simulates the unbalanced vibration. Two loudspeakers simulate two radiated noise sources, and two arbitrary waveform generators produce the two different source signals that serve as the inputs of these loudspeakers. The mixed signals are collected by four sound pressure sensors and recorded by a GEN2i high-speed data recorder. A schematic diagram and photos of the test site are displayed in Figures 8 and 9, respectively.
The motor runs at 1740 r/min, corresponding to a shaft frequency of 29 Hz (1740/60). The inputs of the two loudspeakers, denoted loudspeaker 1 and loudspeaker 2, are sine waves of 713 Hz and 917 Hz, respectively. The sampling length and sampling frequency are 10 s and 5000 Hz, respectively. The second and fourth mixed signals are selected to estimate the three source signals, and only the section of data from 4 s to 6 s is used. Waveforms and Fourier spectra of the mixed signals are displayed in Figure 10. From Figure 10a, the mixed signals are superpositions of the source signals, so the waveforms of the source signals cannot be obtained directly from them. From Figure 10b, the major frequencies of the source signals can be found in each Fourier spectrum of the mixed signals. Therefore, the mixed signals need to be further processed to obtain the source signals.

Source signals (displayed only from 4.5 s to 5 s) estimated by the proposed method, Zhen's method and Reju's method are illustrated in Figures 11-13, respectively. As revealed in Figure 11b, the major frequencies of the source signals estimated by the proposed method are 29 Hz, 917 Hz and 713 Hz, respectively, which are consistent with the frequencies set in the experiment. However, from Figure 12, both the major frequency of the motor (29 Hz) and the major frequency of loudspeaker 1 (713 Hz) appear in the same separated signal, as shown in the Fourier spectrum of the first signal separated by Zhen's method; 29 Hz and 713 Hz would thus be mistaken as coming from the same source. The major frequencies of the first signal estimated by Reju's method are also 29 Hz and 713 Hz, as shown in Figure 13. These results tend to illustrate that the source signals have been well estimated by the proposed method.

Reju's method identifies SSPs based on the character of a single SSP, and its performance degrades in noisy cases. In Zhen's method, to increase computational efficiency, SSPs are identified only among TF vectors randomly selected from the TF vectors of the mixed signals; if no or very few SSPs corresponding to a source are included in the selected TF vectors, that source is estimated with large error. This may be the main reason why the performance of Zhen's method is not as good as that of the proposed method.

After obtaining the estimated source signals, their contributions to the mixed signals can be calculated; they are presented in Table 5. The real source contributions also need to be obtained by experiment [1].
When one source is stopped, the decrease in vibration energy of the mixed signals observed by the sensors is regarded as the contribution of the stopped source. The real source contributions obtained in this way are also given in Table 5. The source contributions of the proposed method are closer to the real source contributions than those of Zhen's method and Reju's method.
Table 5. Contribution comparison of estimated source signals.

The absolute errors of the source contributions are calculated and listed in Table 6. All contribution errors of the proposed method are smaller than those of Zhen's method and Reju's method, implying that the proposed method has higher accuracy in source contribution estimation. The largest contribution error of the proposed method is only 6.44%, whereas three contribution errors exceed 12% for Zhen's method and four exceed 15% for Reju's method. Accurate estimation of the source signals is the precondition for accurate estimation of the source contributions: to some extent, the accuracy of the contribution estimates increases with the accuracy of the source signal estimates. As shown in Figures 12 and 13, the first signals separated by Zhen's method and Reju's method contain the major frequencies of two sources (the motor and loudspeaker 1). Therefore, the contributions of their first separated signals include the contributions of two real sources, which inflates those contribution estimates. Moreover, since part of the contribution of loudspeaker 1 is mis-assigned to their first separated signals, the contributions of their third separated signals are smaller than the real contributions. Owing to its more accurate estimation of the source signals, the contribution errors of the proposed method are smaller than those of the contrast methods.

Table 6. Comparison of contribution errors of estimated source signals.

The running time is also used to evaluate the efficiency of the methods. The time costs of the proposed method, Zhen's method and Reju's method are 1.86 s, 6.42 s and 0.32 s, respectively. Reju's method identifies SSPs from the property of a single SSP, giving it higher efficiency than Zhen's method and the proposed method. The proposed method identifies SSPs only at some optimal frequency bins, and SSPs are identified by directly searching for identical TF vectors in the selected bins, which could be the reason why the efficiency of the proposed method is higher than that of Zhen's method.
It should be noted that the proposed method is designed for off-line processing, because it needs a batch of data to identify SSPs. For a real-time monitoring system, however, the data can be processed piecewise by the proposed method. In the experiment, the running time of the proposed method was only 1.86 s for 2 s of data; therefore, the data can be split into fixed-length segments and analyzed segment by segment.
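A sketch of such piecewise processing (a hypothetical helper, not part of the paper):

```python
def process_piecewise(x, fs, segment_seconds, process):
    """Run a processing function over consecutive fixed-length segments.

    x: (N, T) multichannel record; process: callable applied per segment."""
    seg_len = int(segment_seconds * fs)
    starts = range(0, x.shape[-1] - seg_len + 1, seg_len)
    return [process(x[..., k:k + seg_len]) for k in starts]
```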
Once the source signals are recovered and the source contributions calculated, the influence of each source on the mixed signals can be determined. The vibration sources estimated by the proposed method can be used for machinery condition monitoring and fault diagnosis when the source signals are difficult to obtain directly. The main vibration sources can also be identified according to their contributions, so that measures can be taken to reduce their impact.
Conclusions
To identify the major vibration and noise sources of mechanical systems, a novel source contribution quantitative estimation method is proposed for UBSS. The accuracy of the source contribution results relies largely on the accuracy of source recovery: only by recovering the source signals more accurately can higher accuracy of source contribution estimation be obtained. The numerical case studies show that the proposed method can not only estimate source signals from their mixtures in underdetermined cases, but also quantitatively estimate the source contributions with average deviations below 2%. The experimental studies with a cylindrical structure also demonstrate the effectiveness of the proposed method in source restoration and quantitative contribution estimation. The comparative results tend to validate that the proposed method performs more effectively than the contrast methods.
|
v3-fos-license
|
2023-05-04T01:15:49.106Z
|
2023-05-03T00:00:00.000
|
258461441
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://aclanthology.org/2023.sigtyp-1.3.pdf",
"pdf_hash": "c9587450354548ecfa3e136ce5825f6c850abd77",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43340",
"s2fieldsofstudy": [
"Linguistics"
],
"sha1": "ae6359e1738bb00a0fc29beb7da5b856a3ca39fe",
"year": 2023
}
|
pes2o/s2orc
|
Identifying the Correlation Between Language Distance and Cross-Lingual Transfer in a Multilingual Representation Space
Prior research has investigated the impact of various linguistic features on cross-lingual transfer performance. In this study, we investigate the manner in which this effect can be mapped onto the representation space. While past studies have focused on the impact on cross-lingual alignment in multilingual language models during fine-tuning, this study examines the absolute evolution of the respective language representation spaces produced by MLLMs. We place a specific emphasis on the role of linguistic characteristics and investigate their inter-correlation with the impact on representation spaces and cross-lingual transfer performance. Additionally, this paper provides preliminary evidence of how these findings can be leveraged to enhance transfer to linguistically distant languages.
Introduction
It has been shown that language models implicitly encode linguistic knowledge (Jawahar et al., 2019; Otmakhova et al., 2022). In the case of multilingual language models (MLLMs), previous research has also extensively investigated the influence of these linguistic features on cross-lingual transfer performance (Lauscher et al., 2020; Dolicki and Spanakis, 2021; de Vries et al., 2022). However, limited attention has been paid to the impact of these factors on the language representation spaces of MLLMs.
Despite the fact that state-of-the-art MLLMs such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) use a shared vocabulary and are intended to project text from any language into a language-agnostic embedding space, empirical evidence has demonstrated that these models encode language-specific information across all layers (Libovický et al., 2020; Gonen et al., 2020). This leads to the possibility of identifying distinct monolingual representation spaces within the shared multilingual representation space (Chang et al., 2022).
Past research has focused on the cross-linguality of MLLMs during fine-tuning, specifically looking at the alignment of representation spaces of different language pairs (Singh et al., 2019; Muller et al., 2021). Our focus, instead, is directed towards the absolute impact on the representation space of each language individually, rather than the relative impact on the representation space of one language compared to another. Isolating the impact for each language enables a more in-depth study of the inner modifications that occur within MLLMs during fine-tuning. The main objective of our study is to examine the role of linguistic features in this context, as previous research has shown their impact on cross-lingual transfer performance. More specifically, we examine the relationship between the impact on the representation space of a target language after fine-tuning on a source language and five different language distance metrics. We observe such relationships across all layers, with a trend of stronger correlations in the deeper layers of the MLLM and significant differences between language distance metrics.
Additionally, we observe an inter-correlation among language distance, impact on the representation space and transfer performance. Based on this observation, we propose a hypothesis that may assist in enhancing cross-lingual transfer to linguistically distant languages and provide preliminary evidence to suggest that further investigation of our hypothesis is merited.
Related Work
In monolingual settings, Jawahar et al. (2019) found that, after pre-training, BERT encodes different linguistic features in different layers. Merchant et al. (2020) showed that language models do not forget these linguistic structures during fine-tuning on a downstream task. Conversely, Tanti et al. (2021) have shown that during fine-tuning in multilingual settings, mBERT forgets some language-specific information, resulting in a more cross-lingual model.
At the representation space level, Singh et al. (2019) and Muller et al. (2021) studied the impact of fine-tuning on mBERT's cross-linguality layer-wise. However, their research was limited to evaluating the impact on cross-lingual alignment by comparing the representation space of one language to another, rather than assessing the evolution of a language's representation space in isolation.
Experimental Setup
In this paper, we focus on the effect of fine-tuning on the representation space of the 12-layer multilingual BERT model (bert-base-multilingual-cased). We restrict our focus to the Natural Language Inference (NLI) task and fine-tune on each of the 15 languages of the XNLI dataset (Conneau et al., 2018) individually. We use the test set to evaluate zero-shot cross-lingual transfer performance, measured as accuracy, and to generate the embeddings that define the representation space of each language. More details on the training process and its reproducibility are provided in Appendix A.
Measuring the Impact on the Representation Space
We focus on measuring the impact on a language's representation space in a pre-trained MLLM during cross-lingual transfer. We accomplish this by measuring the similarity of hidden representations of samples from different target languages before and after fine-tuning in various source languages. For this purpose, we use the Centered Kernel Alignment (CKA) method (Kornblith et al., 2019). When using a linear kernel, the CKA score of two (column-centered) representation matrices $X \in \mathbb{R}^{N \times m}$ and $Y \in \mathbb{R}^{N \times m}$, where N is the number of data points and m is the representation dimension, is given by

$$\mathrm{CKA}(X, Y) = \frac{\left\| Y^{\top} X \right\|_F^2}{\left\| X^{\top} X \right\|_F \, \left\| Y^{\top} Y \right\|_F},$$

where $\|\cdot\|_F$ is the Frobenius norm.
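For reference, linear CKA can be computed in a few lines (a sketch consistent with the formula above, not the authors' code):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two (N, m) representation matrices."""
    X = X - X.mean(axis=0)                  # centre the columns first
    Y = Y - Y.mean(axis=0)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator
```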
Notation. We define $H^i_{S \to T} \in \mathbb{R}^{N \times m}$ as the hidden representation of N samples from a target language T at the i-th attention layer of a model fine-tuned in the source language S, where m is the hidden layer output dimension. Similarly, we denote the hidden representation of N samples from language L at the i-th attention layer of the pre-trained base model (i.e., before fine-tuning) as $H^i_L \in \mathbb{R}^{N \times m}$. More specifically, the representation space of each language will be represented by the stacked hidden states of its samples.
We define the impact on the representation space of a target language T at the i-th attention layer when fine-tuning in a source language S as the dissimilarity between the representations before and after fine-tuning:

$$\mathrm{Impact}^i_{S \to T} = 1 - \mathrm{CKA}\!\left(H^i_{S \to T},\, H^i_T\right).$$
Measuring Language Distance
In order to quantify the distance between languages, we use three types of typological distances, namely the syntactic (SYN), geographic (GEO) and inventory (INV) distances, as well as the genetic (GEN) and phonological (PHON) distances between source and target language. These distances are pre-computed and extracted from the URIEL Typological Database (Littell et al., 2017) using lang2vec. For our study, such language distances based on aggregated linguistic features offer a more comprehensive representation of the relevant language distance characteristics. More information on these five metrics is provided in Appendix B.
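These distances can be queried as in the following sketch; we assume the l2v.distance interface of the lang2vec package and ISO 639-3 language codes:

```python
import lang2vec.lang2vec as l2v

# URIEL distances between English and German (ISO 639-3 codes), for the
# five metrics used in this study.
for metric in ("syntactic", "geographic", "inventory",
               "genetic", "phonological"):
    print(metric, l2v.distance(metric, "eng", "deu"))
```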
Correlation Analysis
Relationship Between the Impact on the Representation Space and Language Distance. Given the layer-wise differences in mBERT's cross-linguality (Libovický et al., 2020; Gonen et al., 2020), we measure the correlation between the impact on the representation space and the language distances across all layers. Figure 1 shows almost no significant correlation between representation space impact and inventory or phonological distance. Geographic and syntactic distance mostly show significant correlation values at the last layers. Only the genetic distance correlates significantly with the impact on the representation space across all layers.

Figure 1: Pearson correlation coefficient between the impact on a target language's representation space when fine-tuning in a source language and different types of linguistic distances between the source and target language, for each layer. Same source-target language pair data points were excluded in order to prevent an overestimation of effects. (* p < 0.05, ** p < 0.01, two-tailed).
Relationship Between Language Distance and Cross-Lingual Transfer Performance. Table 1 shows that all distance metrics correlate with cross-lingual transfer performance, which is consistent with the findings of Lauscher et al. (2020). Furthermore, we note that the correlation strengths align with the previously established relationship between language distance and representation space impact, with higher correlation values observed for syntactic, genetic and geographic distance than for inventory and phonological distance.

Relationship Between the Impact on the Representation Space and Cross-Lingual Transfer Performance. In general, cross-lingual transfer performance clearly correlates with the impact on the representation space of the target language, but this correlation tends to be stronger in the deeper layers of the model (Table 2; * p < 0.01, two-tailed).
Does Selective Layer Freezing Allow Improving Transfer to Linguistically Distant Languages?
In the previous section we observed an inter-correlation between cross-lingual transfer performance, the linguistic distance between the target and source language, and the impact on the representation space. Given this observation, we investigate the possibility of using this information to improve transfer to linguistically distant languages. More specifically, we hypothesize that it may be possible to regulate cross-lingual transfer performance by selectively interfering with the previously observed correlations at specific layers. A straightforward strategy would be to selectively freeze, during the fine-tuning process, those layers at which a significant negative correlation between the impact on their representation space and the distance between source and target languages has been observed. By freezing a layer, we manually set the correlation between the impact on the representation space and language distance to zero, which may simultaneously reduce the significance of the correlation between language distance and transfer performance.
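Such freezing is straightforward with Hugging Face Transformers; the sketch below is our own illustration of the strategy, not the authors' training code:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)   # 3 XNLI classes

def freeze_encoder_layers(model, layer_indices):
    """Disable gradients for the given (0-indexed) mBERT encoder layers."""
    for i in layer_indices:
        for param in model.bert.encoder.layer[i].parameters():
            param.requires_grad = False

# e.g. Experiment C freezes layers 1, 2 and 6 (1-indexed) -> 0, 1, 5 here.
freeze_encoder_layers(model, [0, 1, 5])
```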
Wu and Dredze (2019) already showed that freezing early layers of mBERT during fine-tuning may lead to increased cross-lingual transfer performance. With the same goal in mind, Xu et al. (2021) employ meta-learning to select layer-wise learning rates during fine-tuning. In what follows, however, we do not focus on pure overall transfer performance. Our approach is to specifically target transfer performance improvements for target languages that are linguistically distant from the source language, rather than trying to achieve equal transfer performance increases for all target languages.
Experimental Setup
For our pilot experiments, we focus on English as the source language. Additionally, we choose to carry out our pilot experiments on layers 1, 2, 5 and 6, as the representation space impact at these layers exhibits low correlation values with transfer performance (Table 2) and high correlations with different language distances (Figure 2 in Appendix C). This decision is made to mitigate the potential impact on the overall transfer performance, which could obscure the primary effect of interest, and to simultaneously target layers which might be responsible for the transfer gap to distant languages. We conduct three different experiments aiming to regulate correlations between specific language distances and transfer performance. To diversify our experiments, we aim to decrease the transfer performance gap both for a single language distance metric (Experiment A) and for multiple distance metrics (Experiment C). Furthermore, in another experiment we deliberately aim at increasing the transfer gap (Experiment B).
Results
Table 3 provides the results of all three experiments.

Experiment A. The 2nd layer shows a strong negative correlation (-0.66) between representation space impact and inventory distance to English. Freezing the 2nd layer during fine-tuning has led to a less significant correlation between inventory distance and transfer performance (+0.0116).

Experiment B. The 5th layer shows a strong positive correlation (0.499) between representation space impact and phonological distance to English. Freezing the 5th layer during fine-tuning has led to a more significant correlation between phonological distance and transfer performance (-0.012).

Experiment C. The 1st, 2nd and 6th layers show strong negative correlations between the impact on the representation space and the syntactic (-0.618), inventory (-0.66) and phonological (-0.543) distance to English, respectively. Freezing the 1st, 2nd and 6th layers during fine-tuning has led to a less significant correlation of transfer performance with syntactic (+0.0029) and phonological (+0.011) distance.
Conclusion
In previous research, the effect of fine-tuning on a language representation space was usually studied in relative terms, for instance by comparing the cross-lingual alignment between two monolingual representation spaces before and after fine-tuning. Our research, however, focused on the absolute impact on the language-specific representation spaces within the multilingual space and explored the relationship between this impact and language distance. Our findings suggest that there is an inter-correlation between language distance, impact on the representation space, and transfer performance, which varies across layers. Based on this finding, we hypothesize that selectively freezing layers during fine-tuning, at which specific inter-correlations are observed, may help to reduce the transfer performance gap to distant languages. Although our hypothesis is only supported by three pilot experiments, we anticipate that it may stimulate further research to include an assessment of our hypothesis.
Limitations
It is important to note that the evidence presented in this paper is not meant to be exhaustive, but rather to serve as a starting point for future research. Our findings are based on a set of 15 languages and a single downstream task, and may not generalize to other languages or settings. Additionally, the proposed hypothesis has been tested through a limited number of experiments, and more extensive studies are required to determine its practicality and effectiveness.
Furthermore, in our study, we limited ourselves to using traditional correlation coefficients, which are limited in terms of the relationships they can capture, and it is possible that there are additional correlations that could further strengthen our results and conclusions.
2. Geographic Distance refers to the shortest distance between two languages on the surface of the earth's sphere, also known as the orthodromic distance.
3. Inventory Distance is the cosine distance between the inventory feature vectors of languages, sourced from the PHOIBLE database (Moran et al., 2019).
4. Genetic Distance is based on the Glottolog (Hammarström et al., 2015) tree of language families and is obtained by computing the distance between two languages in the tree.
5. Phonological Distance is the cosine distance between the phonological feature vectors of languages, sourced from WALS and Ethnologue.
C Additional Figures
Figure 2 provides Pearson correlation coefficients between the impact on the target language representation space when fine-tuning in English and different types of linguistic distances between English and the target language, for each layer. English-English data points were excluded in order to prevent an overestimation of effects.
Table 2: Pearson correlation coefficients between cross-lingual transfer performance and the impact on the representation space of the target language. (* p < 0.01, two-tailed).
Table 3: Pearson correlation coefficients quantifying the relationship between cross-lingual transfer performance and different language distance metrics after freezing different layers during fine-tuning. The first row contains baseline values for full-model fine-tuning. The last column provides the average cross-lingual transfer performance (CLTP), measured as accuracy, across all target languages. English was the only source language.
|
v3-fos-license
|
2020-11-19T09:16:52.158Z
|
2020-11-11T00:00:00.000
|
228867620
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1996-1073/13/22/5887/pdf",
"pdf_hash": "17e9bb2bb70172633073a5d0c069f9baf52d8bc4",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43342",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "108a9632d855102734ba21872b7e643855a6e45f",
"year": 2020
}
|
pes2o/s2orc
|
A Methodology to Systematically Identify and Characterize Energy Flexibility Measures in Industrial Systems
Industrial energy flexibility enables companies to optimize their energy-associated production costs and support the energy transition towards renewable energy sources. The first step towards achieving energy flexible operation in a production facility is to identify and characterize the energy flexibility measures available in the industrial systems that comprise it. These industrial systems are both the manufacturing systems that directly execute the production tasks and the systems performing supporting tasks or tasks necessary for the operation of these manufacturing systems. Energy flexibility measures are conscious and quantifiable actions to carry out a defined change of operative state in an industrial system. This work proposes a methodology to identify and characterize the available energy flexibility measures in industrial systems regardless of the task they perform in the facility. This methodology is the basis of energy flexibility-oriented industrial energy audits, in juxtaposition with the current industrial energy audits that focus on energy efficiency. This audit will provide industrial enterprises with a qualitative and quantitative understanding of the capabilities of their industrial systems, and hence their production facilities, for energy flexible operation. The audit results facilitate a company's decision making towards the implementation, evaluation and management of these capabilities.
Introduction
Energy systems worldwide are undergoing a radical transition to low-carbon energy sources. This transition is necessary for countries to achieve their nationally determined contributions (NDCs) as per the Paris Agreement of 2015. The International Renewable Energy Association (IRENA) global roadmap for energy transformation, ReMap, has quantified that, for countries to achieve their NDCs, renewable energy sources should account for two-thirds of the total primary energy supply worldwide by 2050 [1].
ReMap also calls for large-scale electrification of the energy demand. Currently, electricity accounts for 20% of the final energy demand worldwide; according to the roadmap this ought to be 49% by 2050. Therefore, to meet the intended NDCs, considerable electrification of the final energy demand and a tripling of the installed capacity of renewable electricity sources, when compared to its current levels, should occur simultaneously around the globe. Additionally, due to their extended availability and continuously reducing costs, variable renewable energy sources (VRE), particularly wind and solar energy, are expected to be the primary sources of 61% of the total electricity generated worldwide [2].

(Figure: based on [12,14,15], own illustration.)
Manufacturing (MA) is the central (value-adding) technical unit of the factory. It consists of all industrial systems that directly execute production tasks in the production processes, i.e., production equipment and human workforce that directly add value to the manufactured product. Although the industrial production systems that make up the MA technical unit differ in configuration, they all consist of an arrangement of two basic components, Production Machines (PM) and Workstations (WS). Groups of PMs and/or WSs that are commonly operated by a specific employee group are referred to as Manufacturing Cells. A series of Manufacturing Cells, that sequentially work together connected by a material-flow, constitute a Manufacturing Line. Based on their end goal, Manufacturing Lines can be in turn be grouped in Manufacturing Segments. Manufacturing Segments are self-contained groups of Manufacturing Lines limited by distinct boundaries from an organizational or management perspective. The segmentation of the MA technical unit is usually carried out by physically establishing separate areas, or even allocating different buildings to each segment [12].
The Auxiliary Systems (AS) technical unit consists of industrial systems that do not directly add value to the manufactured product but support the industrial production systems in the MA technical unit in the execution of their task. The industrial systems in the AS technical unit are further classified as centralized if they support a complete Manufacturing Segment or decentralized if they serve just a specific Manufacturing Line, Cell, PM or WS. Examples of industrial systems that belong to the AS technical unit include transport preparation systems like palletizing machines and logistic systems like conveyor belts and automated guided vehicles (AGVs).
The Technical Building Services (TBS) technical unit comprises industrial systems tasked with the generation, processing and/or storing of useful energy forms and media, demanded or emitted by the industrial systems within the MA and AS technical units. TBS include, for example, compressed air, process heating and cooling, and heating, ventilation and air-conditioning systems (HVAC). The industrial systems in the TBS technical unit can be further classified based on their operative function as generators, handlers or buffers [14].
The Energy/Media (EM) technical unit involves the industrial systems tasked with the buffering and conditioning of media and final energy forms supplied to, or any infrastructure intended for the generation of final energy forms directly at the factory. Final energy forms refer to all energy carriers that are in a form ready to be consumed. Examples include high-, medium-, and low-voltage electricity, natural gas, district heating, cogeneration and trigeneration systems and combustible fuels [15][16][17].
The boundary between the EM and the TBS technical units depends on the particularities of each facility, but generally, the EM technical unit will group industrial systems related to the generation, conditioning and storage of final energy forms and media at the factory-level. These final energy forms and media might be directly consumed or might then processed by the industrial systems in the TBS technical unit into useful energy forms or media for a specific application within the factory such as space and process heat, electricity, cooling media, mechanical energy (i.e., compressed air), light, etc.
Energy and media storage will take place throughout the MA, AS, TBS and, clearly, the EM technical units. If the storage serves a specific industrial system that belongs to the MA, AS or TBS technical units, the storage infrastructure belongs to this respective system. If, in turn, the storage supports multiple industrial systems across different technical units, the storage infrastructure is considered an industrial system by itself, which belongs to the EM technical unit.
The Energy and Manufacturing Control (EMC) technical unit encompasses all the overarching data processing infrastructure that integrates the information flows to plan, monitor and control the operation of all the industrial systems across the other technical units and to coordinate the material and energy flows between them [12,15].
Finally, the factory boundaries delimit the factory's physical extension, determining its energy and media inputs and outputs. The building shell surrounds the factory's buildings, defining the impact of local climate on the factory and the emissions released by the factory into the surrounding environment.
The division of a production facility into its technical units and sub-units helps to delimit its constituent industrial systems. Nonetheless, the interdependences and interactions between these systems are explained by the material, energy and information flows connecting them. The material flows encompass the chain of production processes involving the handling, processing, storage and distribution of materials and goods within the factory. These flows usually start with raw materials and media entering the facility and end with products, by-products, emissions and waste leaving it. In modern production facilities, the material flows are regulated via manufacturing orders, which are orders that stipulate the required manufacturing of a specific product in a specific volume and at a specific point in time [12].
Energy flows involve the energy transactions and conversions between the components of the industrial systems and between systems. As the factory space is essentially an open system, energy flows also include the interaction of the factory as an entity with its peripheries in the form of final energy forms entering and leaving its boundaries [14].
The information flows describe the information exchange relationships between the components in the different industrial systems in the factory and with actors in the periphery. The information flows are internal when they comprehend only the interaction among the industrial systems within the factory boundaries and external when they involve the communication of the factory as an entity with actors in the periphery [18].
In modern factories, where the different industrial systems are being progressively automated, information flows take place within a hierarchical automation infrastructure. The EMC technical unit is hence physically structured in the form of a communication and control pyramid, on which each level is defined by a specific set of hardware components, as presented in Figure 2 [19].
The base level is the field level, where sensors measure the necessary parameters and actuators execute the necessary actions to manage the operations of all industrial systems across the facility. The second level is the control level, constituted by programmable logic controllers (PLCs) and embedded control systems. At this level, control systems react in real time to discrete inputs from the field level that result in specific operative commands to an industrial system or its components. The third level is the supervisory level, consisting of the human-machine interfaces (HMI) and the supervisory control and data acquisition system (SCADA), which essentially combines the previous levels (field and control) to access data and control industrial systems and their components from a single location. The supervisory level is in charge of the control and coordination of multiple industrial systems. The fourth level is the planning level, entailing the manufacturing execution system (MES), which has a direct link to process automation and allows prompt monitoring and control of all the production processes. The top level is the management level, involving the enterprise resource planning system (ERP), which concisely maps all business practices of a company. The ERP's main function is the strategic and tactical (long- and medium-term) planning and scheduling of the activities related to procurement, storage, production, accounting and finance across the factory [14,18,19].
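The five pyramid levels can be encoded as an ordered enumeration; the sketch below is our own illustrative mapping, not a standard from the cited references.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Levels of the communication and control pyramid (cf. Figure 2)."""
    FIELD = 1        # sensors and actuators
    CONTROL = 2      # PLCs and embedded control systems
    SUPERVISORY = 3  # HMI and SCADA
    PLANNING = 4     # manufacturing execution system (MES)
    MANAGEMENT = 5   # enterprise resource planning system (ERP)

# Ordered comparisons mirror the hierarchy, e.g.:
assert AutomationLevel.SUPERVISORY > AutomationLevel.CONTROL
```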
Key Concepts of DSEF and IEF
IEF is understood as the ability of an industrial system to adapt quickly and cost-effectively to changes in the energy markets [20]. The concepts of DSEF, and therefore IEF, and of demand response (DR) are often used as synonyms. Nonetheless, the Federal Energy Regulatory Commission of the United States defines DR as "Changes in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to induce lower electricity use at times of high wholesale market prices or when system reliability is jeopardized" [21]. Meanwhile, the European Commission describes DR as "A series of programs sponsored by the electrical grid, the most common of which pays companies (commercial DR) or end-users (residential) to be on call to reduce electricity usage when the grid is stressed to capacity" [22]. As can be inferred from the definitions, DR describes the activity per se of adapting the electrical consumption to profit from financial incentives sponsored by grid operators. DSEF, on the other hand, describes the capability of an energy-consuming system, which in the case of IEF is an industrial system, to react to a triggering event and change its energy consumption. This capability enables the energy-consuming system to take part in DR schemes and programs; nonetheless, the possible applications of DSEF extend even further. In the case of IEF, potential outcomes or implementation objectives include [23]:
• An intelligent response to the volatility of energy prices: IEF, as mentioned before, has the capacity to optimize the factory's energy costs; in its simplest form, this means reducing energy costs via the reactive adjustment of consumption to price fluctuations in the electrical markets.
• Proactive marketing of the energy flexibility potentials in the grid service markets: the combination of IEF and production planning can allow the proactive offering of flexibility potentials in the ancillary service markets of the electrical grid, thus obtaining a financial incentive from the grid operators.
• Maximize the usage of local energy sources/maximize the use of the renewable energy portfolio: through IEF, the energy consumption of an industrial system can be adapted to match the production profiles of local (within the factory boundaries) or nearby electricity generation plants, thereby achieving balanced or real energy self-sufficiency in the production facility [24]. In the specific case of renewable electricity sources, IEF can reduce the carbon footprint of the factory, thus reducing potential greenhouse gas emission-related costs.
• Peak shaving and load management: peak shaving and load management are both benefits of IEF, eliminating the need for over-capacity to supply the peaks of highly variable loads and reducing time-of-use-related costs and stress on the energy distribution infrastructure.
• Improvement of the resilience of the proprietary energy infrastructure: IEF can also help the energy infrastructure recover quickly from energy supply disruptions or support self-sufficient operation, thus avoiding the considerable costs of production disruption. IEF can also serve to avoid or delay energy infrastructure expansions and their investment cost, by adapting the consumption patterns of different industrial systems to the capacity of the existing infrastructure.
Energy Flexibility Measures and their Energy Flexibility Potential
IEF acquires a usable form by its formulation in an energy flexibility measure (EFM). An EFM is a conscious and quantifiable action to carry out a defined change of an operative state in an industrial system [20]. In this definition, an operative state refers to the energy demand rate of an industrial system at a specific point in time. Therefore, a change of operative state refers to the variation of this rate of energy demand for a definite period. The energy flexibility potential (EFP) is the quantification of the change in operative state that the EFM will induce on the industrial system. The EFP is, therefore, quantitatively described by a power component, the flexible power, and a temporal component, the active duration [25].
The quantification of the EFP is dependent on the characteristics of the industrial system and the features of its context, considered for its calculation. Therefore, a reference framework needs to be established to quantify the EFP.
This reference framework can be progressively developed to introduce additional system characteristics or context features, hence making its quantification more complex but attaining a more accurate EFP value. When the EFP is calculated taking only the physical characteristics of the industrial system as a reference framework, it will be theoretical. The theoretical EFP usually only takes the power rating of the industrial system and its operation time into consideration. The technical EFP, on the other hand, is calculated by adding the system's operative characteristics to the reference framework. The operative characteristics of the industrial system are attributes related to the patterns of operation that the industrial system follows to fulfil its task effectively. The practical EFP goes further and includes the relevant characteristics of the production facility of which the system is a part. These relevant characteristics relate to the production planning strategies prevailing in the factory. The economical EFP is the share of the practical EFP that is economically feasible, meaning when the revenues from making use of the EFM outperform its costs. These revenues are a function of the pursued implementation objectives as defined in the last section. Finally, the viable EFP is the share of the economical EFP that also aligns with the company's investment approach, i.e., payback periods and risk policies, and that outperforms other relevant investments, for example, energy efficiency measures. The different types of EFPs according to the different reference frameworks are presented in Figure 3.
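Because the EFP is fully described by a power component and a temporal component, and is always quantified against one of the five reference frameworks, it can be captured in a minimal record type. The sketch below is our own illustrative encoding, not part of the cited framework.

```python
from dataclasses import dataclass
from enum import Enum

class EFPType(Enum):
    # Reference frameworks, from least to most restrictive (cf. Figure 3).
    THEORETICAL = "physical characteristics only"
    TECHNICAL = "plus operative characteristics"
    PRACTICAL = "plus production characteristics of the facility"
    ECONOMICAL = "share that is economically feasible"
    VIABLE = "share matching the company's investment approach"

@dataclass
class EnergyFlexibilityPotential:
    flexible_power_kw: float   # power component of the EFP
    active_duration_h: float   # temporal component of the EFP
    reference: EFPType         # framework used for the quantification
```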
The different characteristics that constitute each reference framework, and hence each type of EFP, are described in Section 4. The division of the EFP into different types serves two purposes: first, it makes the scope of the quantification transparent; second, it allows estimating the influence each variation in the reference framework has on the EFP of the EFM.
Categorization of Energy Flexibility Measures
Based on their nature, EFMs can be classified as technical or organizational. Organizational EFMs involve actions that take advantage of the production strategy of the factory to modify the operative state of the industrial systems [26]. Usually, organizational EFMs do not alter the aggregated energy consumption of the respective industrial system. If this is the case, organizational EFMs will not influence the energy efficiency of the industrial system. Technical EFMs, on the other hand, influence the specific load profile of the industrial system by altering its operative pattern. They usually do alter the overall energy consumption and their influence must be carefully evaluated after the EFM has been characterized.
A list of general categories of EFMs in industrial systems was originally established in Reference [27] and further standardized in Reference [23]. Nonetheless, depending on the specific nature of an EFM, particularly whether it is organizational or technical, specific EFMs only apply to industrial systems that belong to specific technical units. Table 1 lists and defines the established EFM general categories, classifies them as technical (T) or organizational (O), and pinpoints to which type of industrial system, as defined by the system's technical unit, the specific EFM category applies. This last point is referred to as applicability.
The execution of an EFM has two parts. The virtual part takes place on the data processing systems of the EMC technical unit and consists of a targeted response to a triggering event, i.e., a change in the electrical price, an activation request from the electrical grid operators, a peak in consumption, etc. Once a response is defined, the physical part of the EFM occurs in the form of an actual change of the operative state of the industrial system. The EFM is hence operatively a proportional response to a triggering event. The nature of the triggering event is determined by the intended implementation objective of the EFM. As mentioned before, the virtual part of an EFM is restricted to the EMC technical unit. The physical part, on the other hand, takes place on industrial systems belonging to the MA, AS, TBS or EM technical units.
The presented structured understanding of the factory and its industrial systems, the definition of the available EFM categories and the considerations to calculate their EFP constitute the theoretical foundation on which the proposed methodology was developed. The next section explains the proposed methodology in detail.
Methodology to Systematically Identify and Characterize Energy Flexibility Measures
The development of the proposed methodology started by establishing specific requirements that must be fulfilled. These requirements are:
1. Systematicity: as is the case with current industrial energy audits [28,29], the methodology has to follow a structured procedure in which all industrial systems in a production facility and their characteristics are progressively analyzed, and decisions regarding their energy flexibility capabilities respond to procedural considerations.
2. Focus on electrical flexibility: although different energy carriers are considered, the EFMs resulting from the application of the methodology should aim to optimize the electrical consumption of production facilities and its costs.
3. Applicable to a plethora of industrial systems and production facilities: the methodology has to apply to the heterogeneous nature of modern industrial systems and production facilities.
4. Agile: the methodology needs to be more agile, hence providing results in a shorter time-lapse, than a more exhaustive approach to identify EFMs, i.e., industrial system modelling [15].
5. Current operation-friendly: the methodology does not aim to redesign industrial systems for energy flexible operation but to identify EFMs based on their current operation patterns.
6. Outcome relevant for industrial stakeholders: the outcomes of the methodology should be qualitatively and quantitatively sufficient to inform the decision-making process of companies regarding the implementation and usage of the energy flexibility capabilities of their production facilities.
Based on the previously defined requirements, the steps presented in Figure 4 constitute the proposed methodology to identify and characterize EFMs in industrial systems. Each step is detailed in the following subsections.
Delimitation of the Available Industrial Systems and Relevant Implementation Objectives
The starting point to identify and characterize energy flexibility measures is to delimit the available industrial systems and the relevant implementation objectives for the analyzed production facility. For this purpose, the facility is conceptualized as the series of technical units described in Section 2. The different energy-consuming and -handling components in the facility are then assigned to these units according to the task they execute within the production processes. Components that work together towards completing a specific task are then grouped in systems. These groups of components constitute the available industrial systems. The focus is therefore only on industrial systems, hence systems that collaborate directly or indirectly in the production processes of the facility. Other energy-consuming elements in the facility, for example, office spaces, although potentially relevant for energy flexibility, are not within the purview of the presented methodology.
The relevant implementation objectives are the group of implementation objectives described in Section 3 that, when accomplished, will techno-economically improve the analyzed facility's energy consumption. The relevance of each implementation objective depends on the energy context (available energy markets, quality of energy supply, energy costs, energy supply contracts, etc.) in which the production facility operates. These objectives ought to be decided by the production facility stakeholders.
Determination of the Physical Characteristics of the Available Industrial Systems
Once the available industrial systems and the relevant implementation objectives in the production facility have been defined, the way each system consumes energy needs to be understood. Hence, the system needs to be energy transparent. An initial understanding of the energy consumption of the industrial system is achieved through its physical characteristics. The physical characteristics from an energy transparency perspective are [14]:
• Technical Unit: Already defined in the previous step, the technical unit to which the industrial system belongs provides relevant insights on the task the system performs in the production facility and hence its energy consumption patterns.
• Industrial system layout: The arrangement of all, but particularly the energy-consuming, components in the industrial system helps to understand the energy consumption chains, or how energy is distributed and used, across the system.
• Power rating and maximum system output: The power rating is the maximum allowable power input, meaning the aggregated maximum rate of energy transfer, of the energy-consuming components in the system. The maximum output of the industrial system is the maximum material or energy production provided by the system in each operative cycle.
• Operative Time: Aggregated utilization time of the components in the system, also understood as the duration of the task or tasks the system performs.
• Control Concept: The course of action through which the behavior of the industrial system is managed.
For appropriately defined industrial systems, the physical characteristics can easily be inferred by surveying the technical specifications of the system's components. The physical characteristics provide an initial level of energy transparency and hence a first, superficial overview of the available EFMs and their, at this point, theoretical EFP.
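For bookkeeping during an audit, these five characteristics can be captured in one record per system. The dataclass below is a minimal sketch; field names and units are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class PhysicalCharacteristics:
    """Survey record for one industrial system (illustrative field names)."""
    technical_unit: str        # MA, AS, TBS, EM or EMC
    layout: str                # arrangement of (energy-consuming) components
    power_rating_kw: float     # maximum allowable aggregated power input
    max_output: float          # maximum output per operative cycle
    operative_time_h: float    # aggregated utilization time of the components
    control_concept: str       # how the system's behavior is managed
```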
Inference of the Industrial Systems Suitable for Energy Flexible Operation
Once the available industrial systems and their physical characteristics have been settled, they need to be sorted based on their energy flexible operation suitability.
The suitability of an industrial system for energy flexible operation is assessed through three different criteria [26,30,31]:
1. Controllability: indicating how restrictive the control concept of an industrial system is in terms of additional variations in its operative state.
2. Criticality: specifying the degree to which a change of operative state in an industrial system might alter the quality of the manufactured product or the continuity of the production processes within the factory.
3. Input/output interdependence: defined by the level of decoupling between the energy input and the output of the industrial system along its operative cycle. The operative cycle of an industrial system is understood as the series of sequential tasks the system performs to achieve a unit of output.
These three criteria are gradual. Therefore, they can be divided into different cases that help to quantify the system's suitability for energy flexible operation.
Regarding the controllability of industrial systems, four different cases are discernible. These are referenced as levels with the abbreviation Co and progressive case numbers. The system can be process-controlled and hence have its operation fully defined in time and quantity, leaving virtually no possibility for energy flexible operation (Co0). The control concept of the industrial system might be dependent on state variables, i.e., temperature, hampering the system's flexible operation ability (Co1). The system might only be controllable over its operative time, switching operative state over fixed intervals, allowing for a considerable degree of energy flexible operation (Co2). Finally, the control concept might allow the industrial system to execute its tasks continuously and unrestricted in time and quantity (Co3), completely freeing the system to operate in an energy flexible manner. The controllability criterion is directly determined by analyzing the control concept of the system, defined as a physical characteristic of the system in the last step.
For the second criterion, criticality, four different cases are also distinguishable. Similarly, the criticality cases are considered as suitability levels with the abbreviation Cr and progressive case numbers. A change of operative state in the industrial system might reduce the product quality or induce a continuity failure of the respective production processes, exceeding the acceptable risk for energy flexible operation (Cr0). The change of operative state in the system could have a limited, not failure-inducing, but significantly negative influence on the continuity of the production processes (Cr1). The influence that a change of operative state in the industrial system might have on the continuity of the production processes might be limited and only marginally negative (Cr2). What constitutes a significantly or marginally negative influence is usually associated with an increase in the system's operating costs; therefore, it is case-specific and needs to be discussed with relevant stakeholders in the production facility. Alternatively, the change of operative state might have a neutral or even positive influence on the production processes (Cr3). The influence that a change of operative state in the industrial system has on the production processes is usually extrapolated from the previously mentioned dialogue with the relevant stakeholders in the factory, plus an analysis of the system's tasks in the facility; this last point is a combined evaluation of the system's technical unit and its control concept.
The final criterion, the input/output interdependence, is also subdivided into four different cases. The interdependence cases are also referenced with the abbreviation In and the case number. The energy input might be completely coupled, proportionally and instantly, to the output of the industrial system without any type of decoupling capability, thus leaving no tolerance for energy flexibility (In0). On the other hand, decoupling capabilities might exist between the energy input of the industrial system and the system's output, i.e., through energy or media storage. These capabilities might be inherent, owing to the operative characteristics of the system (In1). Alternatively, specific components in the system might exist that provide decoupling capabilities. These capabilities are limited if, when aggregated, they are smaller in capacity than the required input to complete an operative cycle (In2). Conversely, these capabilities are comprehensive if the decoupling capabilities along the system, when aggregated, are larger in capacity than the necessary input to complete an operative cycle (In3). The input/output interdependence assessment is a result of the analysis of the system layout and its control concept.
The suitability of industrial systems is assessed graphically in a radar graph where each axis signifies one of the criteria and the levels are used as a scale. This is presented in Figure 5. Industrial systems with a level zero (0) on any of the criteria are unsuitable for energy flexible operation and should not be further examined. On the other hand, those systems with a level three (3) on all the criteria are highly suitable for energy flexible operation. Industrial systems with combined levels between one (1) and three (3) present risks when operating in an energy flexible manner. These risks need to be factored in and evaluated after the EFMs have been identified and characterized.
The suitability can also be analyzed through the calculation of a suitability score, obtained by multiplying the system's level in each criterion. A score of zero (0) will render the system unsuitable for energy flexible operation. A score of one (1) will symbolize marginal suitability for energy flexible operation. A score between one (1) and eight (8) will denote moderate suitability for energy flexible operation, while a score over eight (8) will indicate high suitability. The suitability scores do not reflect the EFP or attractiveness of EFMs in the system; therefore, they should not be used to prioritize the analysis of specific suitable systems. The suitability analysis is performed through a qualitative analysis of each available industrial system based on the known physical characteristics and in close cooperation with relevant facility stakeholders. Once the industrial systems suitable for energy flexible operation have been singled out, the next step is to determine their relevant operative characteristics.
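The scoring rule above translates directly into code. The function below is a minimal sketch, assuming the Co, Cr and In levels (0-3) have already been assessed; the names and example values are our own.

```python
def suitability(co: int, cr: int, interdep: int) -> tuple[int, str]:
    """Multiply the levels (0-3) of the three criteria and classify the result."""
    for level in (co, cr, interdep):
        if not 0 <= level <= 3:
            raise ValueError("criterion levels must be between 0 and 3")
    score = co * cr * interdep
    if score == 0:
        label = "unsuitable"          # any criterion at level zero
    elif score == 1:
        label = "marginal"
    elif score <= 8:
        label = "moderate"
    else:
        label = "high"
    return score, label

# Example: Co2, Cr3, In2 -> score 12, high suitability.
print(suitability(2, 3, 2))
```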
Determination of the Relevant Operative Characteristics of the Suitable Industrial Systems
The physical characteristics serve to typify the industrial system and hence achieve an initial level of energy transparency. Nonetheless, they provide only reduced information about the dynamics of the industrial system's operation. Therefore, the operative characteristics of the industrial system serve to better understand the energy consumption patterns of an industrial system and the factors that influence them. The relevant operative characteristics of industrial systems from the energy transparency perspective are [14,23,32]:
• Typical load profile: Typical pattern of energy consumption of the industrial system. A load profile consists of the curve of energy input versus time in the industrial system for a specific period. The typical load profile is usually a synthesis of the energy consumption record for a longer period, i.e., a year. There are several techniques to obtain the typical load profile, or profiles, of a system. The state of the art consists of performing k-means clustering on the raw energy-consumption record of the system, resulting in different clusters, or profiles, and calculating the median of the data samples in each cluster to obtain the typical curve profile. The optimal number of clusters is determined by using silhouette analysis and selecting the number of clusters that provides the maximum average silhouette score. In practice, the clusters correspond to the modes of operation of the industrial system under different operating conditions. The selected approach follows the recommendations of several research works that have dealt with the optimal approach to obtain the typical load profiles of electrical loads using machine learning algorithms, highlighting the usage of silhouette scores and the k-means algorithm as the most fitting approach [33][34][35] (see the sketch after this list).
• Controlled Variable: Independent parameter(s) that determine the operating state of an industrial system. Their variation will induce a change in the operative state.
• Control horizon and latency: The control horizon is the minimum time interval between the variation of a control variable and the occurrence of the change in the operative state of the system. The control latency is the amount of time it takes signals to traverse the system or systems in the EMC technical unit.
• Operative continuity: Consistency of the operative cycles of the system. Three types can be discerned:
o Discontinuous, the operative cycle of the system consists of multiple operative states that take place in irregular intervals throughout the operative time. The intervals are divided by irregular periods in which the system is idle.
o Part continuous, the operative cycle of the system involves a single operative state that occurs in regular intervals throughout the operative time. The intervals are divided by regular periods in which the system is idle.
o Continuous, the operative cycle of the system consists of a single operative state throughout the operative time. The system is never idle during its operative time.
• Operative Steps: Amount and type of successive steps that make up the operative cycle of the system.
• Output flexibility: The ability of a system to operate at a range of different output levels without incurring major setup alterations.
• Bivalence or multivalence: The ability of a system to satisfy its energy demand with two or more energy carriers.
• Buffer Capability: Ability of an industrial system to store energy and/or media temporarily and locally. The storage capability might come from the system's operative inertia (i.e., thermal or mechanical inertia) or dedicated storage components.
• Redundancy: The ability of more than one component within a system (system level), or more than one system within a technical unit (technical unit level), to perform a specific task.
• Operative Shiftability: The ability of a system to shift the totality or a part of its operation cycle to an earlier or later time point.
• Interruptible: The ability of a system to stop its operation cycle and continue at a later time point.
• Task Flexibility: The ability of a system to execute a variety of tasks for a production process, i.e., perform a range of operations or produce a variety of products, without incurring any major setup variation.
• Routing Flexibility: The ability of a system to execute its tasks via alternative operative sequences.
The operative characteristics of the system are determined through a detailed survey and analysis of the energy consumption data of the system, its material, energy and information flows and its design specifications. The operative characteristics provide a deeper level of energy transparency and hence a more realistic overview of the available EFMs and their EFP, which will be considered technical at this point.
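As a companion to the typical load profile bullet above, the following Python sketch outlines the described k-means/silhouette approach using scikit-learn. It is a minimal illustration assuming daily load curves are available as rows of a NumPy array; the candidate cluster range and variable names are our own choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def typical_load_profiles(daily_profiles: np.ndarray, k_range=range(2, 8)):
    """Cluster daily load curves and return one median profile per cluster."""
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(daily_profiles)
        score = silhouette_score(daily_profiles, labels)  # average silhouette
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    # The median of the samples in each cluster yields the typical profile.
    return [np.median(daily_profiles[best_labels == c], axis=0) for c in range(best_k)]

# Example with synthetic data: 365 days x 96 quarter-hour readings.
rng = np.random.default_rng(0)
profiles = typical_load_profiles(rng.random((365, 96)))
```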
Determination of the Production Characteristics of the Production Facility
Besides the physical and operative characteristics of the industrial system, it is necessary to understand the production approach of the production facility in which the system finds itself. The production approach is defined by a series of production characteristics common to the facility as a whole. These characteristics allow allocating the energy consumption to its cause and hence understanding the variation of energy consumption throughout time and within the system's modes of operation. The relevant production characteristics from an energy transparency perspective are [13,14,23,32]:
• Manufacturing principle: The manufacturing principle follows the volume and variety of the manufactured product expected by the market. Four different principles are discernible [36]:
o Make-to-Stock (MTS), the product is made in its final form and stocked as finished goods.
o Assemble-to-Order (ATO), the product is assembled to its final form based on the customer's purchase order.
o Make-to-Order (MTO), the product is completely manufactured after a customer has issued a purchase order.
o Engineer-to-Order (ETO), the product is designed and manufactured after the customer's purchase order.
• Production Method: The production method is the basic approach to production planning; production methods fall into four categories [37]:
o Job processing, the production focuses on a single item at a time and usually requires a specific set of skills depending on the manufactured product.
o Batch processing, the production takes place in specific groups of pieces or completed products in small pre-set batch sizes.
o Flow processing, production involves passing sub-assemblies or individual parts from one production station to the next until the final product is completed.
o Continuous processing, similar to flow production but with no possible stop between production stations.
• Working shift model: Amount and extension of the working shifts in which the factory conducts its production processes.
• Production planning horizon: Minimum time-lapse between the end of detailed production planning and the start of production.
• Change in manufacturing orders: The possibility to change manufacturing orders once they have been issued.
The production characteristics can be determined through a survey of the production planning strategies established in the ERP. These characteristics give the final necessary perspective to achieve a level of energy transparency in the industrial system that allows the identification of the practically viable EFMs and their EFP, which will be considered practical as it includes the physical and operative characteristics of the industrial system and the production characteristics of the production facility.
Identification of the Prospective Energy Flexibility Measures
The EFMs present in an industrial system are a function of the characteristics of this system and the production characteristics of the facility to which the system belongs. These characteristics have one of four different levels of influence on each specific EFM category. These levels of influence include:
• Crucial: A characteristic is crucial if it is decisive to the existence of an EFM belonging to a specific category, meaning the way this characteristic manifests in the industrial system decides whether the specific EFM category is available in the industrial system.
• Influential: A characteristic is influential if it delimits the EFP of the EFM belonging to the specific category.
Tables 2 and 3 present the level of influence each of the characteristics examined in the last steps has on the previously defined organizational and technical EFM categories, respectively.
Table 2. Level of influence of the physical, operative and production characteristics on the existence of organizational EFMs.
The identification of EFMs is hence a cross-analysis of the manifestation of each crucial characteristic against the tested category of organizational or technical EFM. Crucial characteristics can either support the existence of the specific EFM category and hence be positive; hinder the existence of the specific EFM category, being negative and excluding the availability of this category in the analyzed industrial system; or be unclear, in which case additional analysis or information is needed to accept or discard the existence of the specific EFM category in the system. This additional information usually involves further discussion with the relevant stakeholders in the production facility.
The partial consideration of the characteristics, in case the complete list is not known, will reduce the analysis reference framework and thus the type of EFMs identified and their EFP, as described in Section 3.1.
All the EFM categories for which all crucial characteristics are positive are considered the prospective EFMs (a minimal sketch of this filter follows below). After validation with relevant stakeholders in the facility, the next step is their characterization, where the influential and relevant characteristics are considered.
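The following is a minimal sketch of this cross-analysis, assuming the crucial characteristics of each candidate EFM category have already been assessed as positive, negative or unclear; the enum, function and example categories are our own illustration.

```python
from enum import Enum

class Assessment(Enum):
    POSITIVE = "supports the EFM category"
    NEGATIVE = "excludes the EFM category"
    UNCLEAR = "needs further stakeholder discussion"

def prospective_efms(assessments: dict[str, list[Assessment]]) -> list[str]:
    """Keep only EFM categories whose crucial characteristics are all positive."""
    return [category for category, crucial in assessments.items()
            if all(a is Assessment.POSITIVE for a in crucial)]

# Example for one industrial system (made-up assessments):
print(prospective_efms({
    "Interrupt operative cycle": [Assessment.POSITIVE, Assessment.POSITIVE],
    "Bivalent operation": [Assessment.POSITIVE, Assessment.NEGATIVE],
}))  # -> ['Interrupt operative cycle']
```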
Characterization of the Validated Energy Flexibility Measures
The characterization of an EFM intends to define its scope. For this purpose, several groups of parameters, referred to as dimensions, have been defined that constitute the EFM characterization framework. The goal of this framework is to standardize the description of EFMs, facilitating their evaluation, modelling, implementation and management.
The proposed characterization framework consists of four different dimensions: functional dimension, performance dimension, temporal dimension and economic dimension. The parameters that constitute each dimension are explained in the following sub-sections.
Functional Dimension
This dimension serves to contextualize the EFM on the industrial system and the respective factory. The parameters that constitute the functional dimension are:
• Industrial system description: definition of the industrial system on which the EFM takes place. The description should at least include the system's layout, including all energy-consuming components, their performance data and an outline of the energy and material flows through which the system interacts with the other systems in the factory.
• EFM category: category of the identified prospective EFM based on the general categories defined in Table 1.
• Operative concept: a description of how the identified EFM induces a change of state in the industrial system.
• Adjustment factor and adjustment relationship: the adjustment factor is the independently controlled variable(s) that induces the change of state in the industrial system. The adjustment relationship describes the correlation between the adjustment factor and the rate of energy demand. Usually, a mathematical function describes this relationship in the form of a correlation, i.e., linear, polynomial or step.
• Amount and type of the modes of operation (MO): the MOs describe the operative states the EFM might induce in the industrial system. The modes of operation might be holding, if only one operative state is induced on the system per activation of the EFM, or modulating, if the EFM induces more than one operative state per activation. The amount and type of the MOs are determined, among other characteristics, by the typical load profile of the industrial system, particularly its operative clusters.
• Execution level: highest level across the control pyramid on which the virtual part of an EFM will take place. The execution level usually tends to be the level on which the system is controlled.
Temporal Dimension
This dimension groups the time-related parameters that characterize the EFM. The parameters that constitute the temporal dimension are:
• Active Duration, ∆t_active: temporal element of the EFP; it comprises the minimum and maximum period on which the EFM is active, meaning the duration on which the industrial system operates under the EFM-induced operative state(s).
• Planning Duration, ∆t_planning: minimum and maximum period necessary to plan the activation of an EFM. This parameter responds to the operative continuity of the industrial system. In the case of industrial systems belonging to the MA technical unit, it is also majorly influenced by the production planning horizon and the change horizon, both determined at the planning level of the EMC technical unit. The planning duration can take place before or after the occurrence of the triggering event of the EFM.
• Perception Duration, ∆t_perception: minimum and maximum period between the occurrence of a triggering event and the perception of this event by the control architecture in the EMC technical unit. The value of this parameter depends on the nature of the triggering event and the control latency in the relevant systems in the EMC technical unit. The nature of the triggering event relates to the implementation objective of energy flexible operation as defined in Section 3.
• Decision Duration, ∆t_decision: minimum and maximum period ranging from the perception of a triggering event, t_0, to the decision on the activation of the EFM. The performance, particularly the latency, of the systems that constitute the supervisory level of the EMC technical unit determines this parameter.
• Shift Duration, ∆t_shift: minimum and maximum period covering the change in the operative state. This parameter is usually a function of the latency in the control concept of the industrial system. Nonetheless, it might be influenced by its operative stages and operative continuity.
• Activation Duration, ∆t_activation: minimum and maximum period covering from the perception of a triggering event to the achievement of the EFM-induced operative state. It can be understood as the addition of the perception, decision, planning (if it is performed after the triggering event) and shift durations. Its calculation is relevant because it quantifies the overall interval between the triggering event and the fully active EFM. The calculation formula for the activation duration is presented in Equation (1):

∆t_activation = ∆t_perception + ∆t_decision + ∆t_planning + ∆t_shift (1)

• Deactivation Duration, ∆t_deactivation: minimum and maximum period between the end of the active duration of the EFM and the return of the industrial system to its original operative state. As was the case for the shift duration, this parameter depends on the control concept of the industrial system and its control horizon.
• Regeneration Duration, ∆t_regeneration: minimum and maximum period that must elapse before an EFM can be activated again after it has been deactivated. The regeneration duration can be understood as the time necessary to bring stability to the material and energy flows altered by the activation of an EFM.
• Validity, V: parameter outlining the fraction of the operative time of the industrial system on which the EFM will be available for activation. This parameter is defined by the type and amount of operative steps of the industrial system; therefore, the validity should include a reference to the specific operative step on which the EFM is available [38].
• Activation Frequency, N_activation,T: the activation frequency parameter quantifies the maximum number of times an EFM can be executed over a specific period, T, usually a calendar year. Although it might be affected by other externalities, it can be calculated using the ratio between the product of the validity and the period, T, and the complete duration of the execution of an EFM. Equation (2) describes its calculation. The activation frequency should be referenced to the active duration for which it was calculated.
N_activation,T = (V × T) / (∆t_activation + ∆t_active + ∆t_deactivation + ∆t_regeneration) (2)

Figure 6 shows a summary of the different durations in the temporal dimension of an EFM. In the figure, a representative consumption-increase EFM is presented.
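To make Equations (1) and (2) operational, the sketch below computes an activation duration and the resulting annual activation frequency. All numbers are placeholders of our own choosing, not values from the paper.

```python
def activation_duration(perception_h, decision_h, planning_h, shift_h):
    """Equation (1): interval from triggering event to fully active EFM."""
    return perception_h + decision_h + planning_h + shift_h

def activation_frequency(validity, period_h, activation_h, active_h,
                         deactivation_h, regeneration_h):
    """Equation (2): maximum activations over a period T (here in hours)."""
    cycle = activation_h + active_h + deactivation_h + regeneration_h
    return validity * period_h / cycle

dt_activation = activation_duration(0.05, 0.1, 0.5, 0.25)   # 0.9 h
# Validity of 60% of the year, 2 h active, 0.5 h deactivation, 1 h regeneration:
n_per_year = activation_frequency(0.6, 8760, dt_activation, 2.0, 0.5, 1.0)
print(round(n_per_year))  # about 1195 activations per year
```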
Performance Dimension
The performance dimension groups the characterization parameters of the EFM related to the change in the rate of energy demand. These parameters are:
• Flexibility Type: describes the direction in which the operative state will be changed by the activation of each of the MOs of the EFM. The possible flexibility types are:
o Load increase (↑): increase in the energy demand rate compared to the reference consumption profile. The increase can involve just an increase in the consumption rate or the complete switch-on of the industrial system. In a load increase, there is no consumption compensation requirement. Therefore, the activation of an EFM of this type will constitute an overall increase in the energy consumption of the system.
o Load decrease (↓): reduction in the energy demand rate compared to the reference consumption profile. Similarly to the increase, the reduction can involve both a decrease in energy consumption and a complete switch-off of the influenced industrial system. In a load decrease, consumption compensation is also not required. Therefore, the activation of this type of EFM will constitute an overall decrease in energy consumption.
o Bidirectional (↑↓): the ability of the EFM to offer both a load increase and a load decrease. Once activated in either direction, this type of EFM will not require a compensation of the altered energy consumption.
o Consumption shift (↔): temporary rearrangement of the energy consumption, increase or decrease, with proportional compensation. The consumption shift is backwards when consumption is shifted to an earlier point in time; inversely, it is forward if it is postponed to a later point in time. A special case of load shift is "valley-filling", where the tasks that generate the consumption profile are broken down and rearranged at different points in time, thus reducing peak consumption. In any of the consumption shift cases, the net energy consumption will stay constant despite activating the EFM. The different flexibility types are typified in Figure 7.
• Flexible Power, ∆P_flex: the power delta of the EFP; it describes the maximum difference in the rate of energy demand between the reference operative state and the EFM-induced operative state. The unit for this parameter is usually kW_flex.

• Flexible Energy Carrier: this parameter defines the energy carrier or carriers influenced by the activation of the EFM. As previously introduced, the focus, due to its attractiveness, is usually on the electrical energy consumption. Nonetheless, at least for the Bivalent Operation and Energy Carrier Exchange EFM categories, another energy carrier is also influenced.

• Flexible Energy, E_flex,T: the average amount of energy that could be adapted through the activation of an EFM over a specific period, T, typically a year. The flexible energy is the product of the average flexible power, the active duration and the retrieval frequency for this active duration, as presented in Equation (3). The unit for this parameter is usually MWh_flex and it must be referenced to the active duration for which it is calculated.
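To make Equation (3) concrete, the short sketch below computes the flexible energy from the average flexible power, the active duration and the retrieval frequency, exactly as the definition above states. The numeric values are hypothetical placeholders for illustration, not data from the case study.

```python
def flexible_energy_mwh(avg_flexible_power_kw: float,
                        active_duration_h: float,
                        retrieval_frequency_per_year: float) -> float:
    """Equation (3): E_flex,T = mean(dP_flex) * dt_active * f_retrieval.

    Returns MWh_flex per period T (here, a year), referenced to the
    given active duration.
    """
    return (avg_flexible_power_kw * active_duration_h
            * retrieval_frequency_per_year) / 1000.0

# Hypothetical example: 300 kW_flex adapted for 4 h, 150 activations/year
e_flex = flexible_energy_mwh(300.0, 4.0, 150.0)  # -> 180.0 MWh_flex
```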
Economic Dimension
This dimension comprises all the parameters related to the costs of implementing and executing an EFM. The parameters that constitute the economic dimension are:

• Investment Costs, C_investment: fixed, one-time expenses incurred to implement an EFM; simply put, the expenses necessary to bring the EFM to an operative status. The investment costs can be tangible, including further development of component technology, further development of the IT infrastructure and strengthening of the proprietary energy distribution infrastructure. These costs can also be intangible, like those associated with the acquisition of software tools, the hiring of third-party services or personnel training, among others.
• Activation Costs, C_activation: ongoing expenses related to the activation of the EFM. These expenses are only incurred when the EFM is executed and hence are a function of the activation frequency. Examples include increased material, energy or labor costs due to the adaptation of the operative cycle of the industrial system and potential opportunity costs due to the activation of the EFM.
• Maintenance Costs, C_maintenance,T: ongoing expenses to keep the EFM available over a specific time span, T, typically a calendar year. These costs are activation-independent and therefore unaffected by the activation frequency of the EFM. Examples include the hiring of third-party services to trade in energy markets and the additional component wear-and-tear costs associated with energy flexible operation.

• Expected payback period, τ_payback: the expected period, typically given in years, in which the EFM is expected to reach a break-even point, i.e., the point at which the revenues associated with the EFM offset its costs. The company's management usually defines the expected payback period; normally it follows their historical approach to factory-upgrade investments.

• EFM specific cost, c_flex,T: a cost summary indicator of the EFM; it represents the cost of the EFM per unit of flexible energy over a specific period (T). It is calculated through the formula presented in Equation (4),
where k represents the temporal conversion factor between τ_payback and T. Once all the different parameters across the four dimensions have been determined, the EFM has been fully characterized and the economical EFP can be determined.
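Equation (4) itself is not reproduced in this extract. The sketch below encodes one plausible aggregation, purely as an illustration: annualizing the investment over k·τ_payback, adding the period's maintenance and activation costs, and dividing by the flexible energy. The function name and this aggregation are assumptions, not the paper's formula.

```python
def efm_specific_cost(c_investment: float,
                      c_maintenance_t: float,
                      c_activation: float,
                      n_activations: int,
                      tau_payback_years: float,
                      e_flex_t_mwh: float,
                      k: float = 1.0) -> float:
    """Assumed reading of Equation (4): total EFM cost per unit of
    flexible energy over period T (e.g., EUR/MWh_flex).

    The investment is spread over k * tau_payback periods; this
    aggregation is an assumption, since the equation is not shown here.
    """
    annualized_investment = c_investment / (k * tau_payback_years)
    total_cost_t = (annualized_investment + c_maintenance_t
                    + n_activations * c_activation)
    return total_cost_t / e_flex_t_mwh
```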
Calculation of the Economical and Viable EFP of the Characterized EFMs
As previously mentioned, at this point the calculated flexible power and active duration constitute the practical EFP of the industrial system, assuming all the relevant characteristics of the industrial system and the production facility have been considered. To calculate the economical EFP, the expected gross revenues, as a function of the intended implementation objective of the EFM, have to be estimated. An exact formula for the calculation of the gross revenues depends on the targeted implementation objective, as defined in the first step of the presented methodology. Generally, the gross revenues constitute the monetary savings achieved by the activation of the EFM when compared to the reference operation of the industrial system. Once the gross revenues, R_flex, have been calculated, the EFM specific gross revenues, r_flex,gross,T, constitute the ratio between the revenues for the specific period, T, and the flexible energy for the same period, as presented in Equation (5).
The difference between r_flex,gross,T and c_flex,T provides the specific net revenues, r_flex,net,T, of the EFM for the period T, as presented in Equation (6). The r_flex,net,T value defines the economic feasibility of the EFM in its current configuration. An r_flex,net,T that is negative or equal to zero indicates that the costs are too high and that the scope of the EFM therefore needs to be optimized. This usually means reducing the activation costs by altering the flexible power, active duration or activation frequency of the EFM. If a cost reduction is not possible, the EFM is deemed economically unfeasible and needs to be rejected. When r_flex,net,T is positive, the EFM is economically feasible. Nonetheless, the scope of the EFM can be revisited to pursue the maximization of r_flex,net,T.
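Equations (5) and (6) follow directly from the text: the specific gross revenue is the ratio of the period's revenues to its flexible energy, and the net revenue is its difference with the specific cost. A minimal feasibility check, with hypothetical numbers:

```python
def specific_net_revenue(r_flex_period_eur: float,
                         e_flex_t_mwh: float,
                         c_flex_t_eur_per_mwh: float) -> float:
    # Equation (5): specific gross revenue = period revenues / flexible energy
    r_gross = r_flex_period_eur / e_flex_t_mwh
    # Equation (6): specific net revenue = gross revenue - specific cost
    return r_gross - c_flex_t_eur_per_mwh

r_net = specific_net_revenue(21000.0, 180.0, 95.0)  # -> ~21.67 EUR/MWh_flex
if r_net <= 0.0:
    print("Costs too high: optimize the EFM scope or reject the EFM")
else:
    print(f"Economically feasible: {r_net:.2f} EUR/MWh_flex net")
```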
The resulting EFP, once the r_flex,net,T of the EFM is maximized, constitutes the economical EFP. The final step is to evaluate the scope and financial benefits of the EFM, weigh it against other comparable investments, i.e., energy efficiency measures or other EFMs, and then decide on its implementation. This decision might further delimit the scope of the EFM, hence constituting the viable EFP.
At this point, the EFM is completely identified and characterized, and it can be grouped with the other identified EFMs across the facility, thus establishing the EFM catalogue of the facility.
Application of the Proposed Methodology
In the last section, the different steps to identify and characterize EFMs in industrial systems were described in detail. In this section, a representative example of an EFM identified using the described methodology is presented. The example belongs to an EFM identification analysis performed on an existing production facility. The scope of the analysis was limited to the calculation of the practical EFP. Therefore, the final step of the methodology is not presented in this example.
Delimitation of the Available Industrial Systems
The analyzed facility is a machinery assembly facility. All five technical units, as depicted in Figure 1, are present in the facility. The analysis, nonetheless, focused on the MA, TBS and EM technical units, employing the data from the EMC technical unit, as energy consumption records were available for the industrial systems in these technical units. The MA technical unit consists of three manufacturing segments: a press shop, a paint shop and an assembly production line. The TBS technical unit includes three industrial systems: a compressed air system, a chilled water air-conditioning system and a gas-fired hot water system. The EM technical unit consists of natural gas and electricity supplied to the facility, the latter supported by a photovoltaic array and two gas-fired combined heat and power (CHP) engines. The main objective for the implementation of energy flexibility in the facility is the intelligent response to the volatility of energy prices.
All the industrial systems were analyzed first for suitability and then to identify and characterize their available EFMs. The analysis extended over a period of two working weeks, once all available inputs were collected. In the following subsections, the application of the proposed methodology for a specific available industrial system, the chilled water air-conditioning system, is described.
Determination of the Relevant Physical Characteristics of the Chilled Water Air-Conditioning System
The physical characteristics of the chilled water air-conditioning system are:

1. Technical Unit: the system belongs to the TBS technical unit of the production facility.

2. Industrial system layout: the chilled water air-conditioning system consists of a 7/12 °C chilled water (CHW) circuit that provides room cooling for a production hall and an on-site data center. The cooling output is provided by three water-cooled, screw-driven mechanical chillers (CHWDX) and two hot water, single-effect absorption chillers (CHWAB), plus a free cooling module (CHWFC). The heat abatement of these units is performed by a 32/37 °C cooling water (CW) circuit with three cooling towers and three pumps. The hot water for the CHWABs is usually fed from the two CHP engines on-site but, through minor modifications, might be sourced from the 95/60 °C hot water (HW) system on-site. The cooling is delivered through a series of air-handling units to the production hall and a data center that supports the production activities. Due to their air-quality-specific operation pattern, the air-handling units were not considered in the analysis of this system. For analysis purposes, the hot water loop for the CHWABs is assumed to be an energy carrier entering the system. Therefore, the HW generation sources were not analyzed. The layout of the chilled water system is depicted in Figure 8.

3. Power rating and cooling output of the cooling units: the power rating and cooling output of the cooling generation units are summarized in Table 4.

4. Power rating and output of the other energy-consuming components: the power rating and design output of the other relevant energy-consuming components in the system, pumps and cooling towers, are presented in Table 5.

5. Operative Time: the system operates 24/7 in stand-by mode, going into active operation when there is a cooling demand. Therefore, its maximum operative time is limited to the working shifts in the production facility.

6. Control Concept: the cooling demand is a function of the ambient temperature on-site. The system is controlled at the supervisory level through a SCADA architecture that monitors the air return temperature in the air-handling units and the return water temperature in the chilled water circuit. The current control concept prioritizes the operation of the CHWDX units for cooling supply. These mechanical chiller units are activated sequentially, based on the return water temperature. Activation priority is given to CHWDX-3, due to its better performance at partial loads. The other mechanical chiller units, CHWDX-1 and CHWDX-2, are rotated to guarantee equalized running time among them. The absorption units, CHWAB-1 and CHWAB-2, are mostly activated in conjunction with the two CHP engines on site, which are activated for peak-shaving purposes in the factory. The absorption chiller units are also used to provide redundancy for the mechanical chiller units. The free-cooling module, CHWFC-1, gains priority activation when the ambient temperature drops below 10 °C.
Figure 8. Schematic of the chilled water system.

(Footnote to Tables 4 and 5: 1 Designation used in Figure 8.)
Suitability of the Chilled Water System for Energy Flexible Operation
The results of the suitability analysis for energy flexible operation of the system are presented in Table 6. Regarding its controllability: as the system is used for air temperature conditioning, it is state-variable and outdoor-temperature-dependent and hence can be classified as Co1. As there is high redundancy in the system and it belongs to the TBS technical unit, its criticality is estimated at Cr3; this is because a change of state in the system is neutral for process continuity as long as the demand is met. Finally, the interdependence is given as In1, as the system counts only on the inherent thermal inertia across the chilled-water piping grid and the conditioned rooms.

Table 6. Suitability analysis of the chilled water system (left: score, right: radar graph).
Determination of the Relevant Operative Characteristics of the Chilled Water Air-Conditioning System
The relevant operative characteristics of the chilled water air-conditioning system are:

• Typical load and output profile: there is a three-year data record of the cooling consumption in the factory on a 15 min basis. The data record also includes the cooling output and the electrical consumption of the components in Tables 4 and 5. Employing silhouette analysis and the K-means algorithm, as previously described, on the data record, the values were clustered and an average cooling output and electrical input profile per cluster were calculated. As the measurements followed a normal distribution, their spread for each 15 min period was calculated using two standard deviations above (2σ) and below (−2σ) the mean. The results are presented in Figure 9, where the average consumption profile is color-highlighted and the range (±2 standard deviations) is shown in grey. (A minimal clustering sketch is given after this list.)
• Control Variable: as mentioned, the system operates continuously, ramping the different cooling generation units up and down as a function of the return water temperature.

• Control horizon and latency: the ramping up and down of the system to a new operative state lasts between 5 and 10 min. The control components present a latency under five milliseconds.

• Operative Continuity: the system presents a discontinuous operative continuity.
• Operative steps: each of the cooling units ramps up and down in single steps depending on the number of cooling circuits it presents. The CHWDXs present 2 circuits, hence 2 operative steps, while the CHWABs and the CHWFC present a single one each.

• Output Flexibility: the aforementioned cooling circuits in each of the cooling generation units provide the output flexibility.

• Bivalence: due to the different functioning principles of the CHWDX and CHWFC units, the system can be considered as presenting bivalence.

• Redundancy: as previously mentioned, the CHWABs act as redundancy for the CHWDX units. The other components in the CW and HW circuits present 2N + 1 redundancy, while the pumps in the CHW circuit present 3N + 1 redundancy.

The system, as already mentioned, does not present any buffering capability, is not shiftable or interruptible and has no routing or task flexibility.
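The clustering step described above can be reproduced with standard tooling. The sketch below uses scikit-learn's KMeans and silhouette_score to pick the number of clusters for daily 15-min load profiles and then derives the per-cluster mean and ±2σ band. The variable names and the data layout (one row per day, 96 quarter-hour values) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_daily_profiles(profiles: np.ndarray, k_max: int = 6):
    """profiles: (n_days, 96) matrix of 15-min electrical loads.

    Selects k via silhouette analysis, then returns the chosen k, the
    cluster labels and, per cluster, the mean profile with its +/-2 sigma band.
    """
    best_k, best_score, best_labels = 2, -1.0, None
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)
        score = silhouette_score(profiles, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    stats = {}
    for c in range(best_k):
        members = profiles[best_labels == c]
        mean, sigma = members.mean(axis=0), members.std(axis=0)
        stats[c] = {"mean": mean, "upper": mean + 2 * sigma, "lower": mean - 2 * sigma}
    return best_k, best_labels, stats
```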
Determination of the Relevant Production Characteristics of the Production Facility
The relevant production characteristics of the production facility in which the system finds itself were surveyed as described in Step 5. The production characteristics not mentioned are irrelevant for this system, except for the relevant costs, which, as sensitive company information, cannot be disclosed. In addition to the provided information, it is important to mention that the facility is located in Central Europe and, therefore, the outdoor temperature ranges from −14 to 42 °C.
Identification of Prospective EFMs in the Chilled Water Air-Conditioning System
The analysis of the different characteristics explains the existence of two operative clusters in the system, as shown in Figure 9. Cluster I, the first operation profile, comprises working days with outdoor temperatures over 10 °C, where the cooling demand increases and no free-cooling option is available. Cluster II, the second operation profile, covers working days below 10 °C, where the cooling demand is comparatively reduced and a considerable part of it is supplied by CHWFC-1. Additionally, Cluster II also covers non-working days, where there is a base cooling demand, mainly for the data center. This was inferred from an analysis of the days in a calendar year on which each of the determined operative clusters is active.
The analysis of the chilled water system, using a cross-analysis matrix based on Tables 2 and 3 and evaluating each crucial characteristic as either positive or negative, showed the availability of two different EFMs in the analyzed system:

1. Adaptation of resource allocation: this EFM focuses on the possibility of switching between the CHWDX chillers and the CHWAB chillers while supplying the cooling consumption of the facility. Due to the considerable difference in the power rating between the chiller types, and hence in their electrical EER, the rotation of the absorption and mechanical chiller units induces a change in the electrical input of the chilled water system.

2. Dedicated Energy Storage: the installation of CHW storage to supply the totality or a share of the cooling demand at specific periods.
The availability of both prospective EFMs was validated with the energy managers in the facility. Due to the expected high investment costs of the dedicated energy storage, this EFM was deemed unattractive.
Characterization of the Validated EFM in the Chilled Water Air-Conditioning System
The characterization framework defines the scope of the validated EFM. The following tables determine or quantify the different parameters, and hence the different dimensions, as described in Section 4.7.

Table 7 describes the functional dimension of the EFM. In Table 7, the system description, EFM category and operative concept come from the analysis performed in the previous steps. The adjustment factor responds to the control variable in the system, the ramping up and down of the cooling generation units. Four different modes of operation (MO) have been defined for the EFM. The division is based on the number of clusters identified in the analysis of the typical load profile and on the ability of the EFM to induce an increase (↑) or a decrease (↓) in the electrical consumption. All of these modes of operation are defined as holding because, when activated, they induce only one operative state in the system. As the control of the system is performed via a SCADA system, the execution level is set at the supervisory level. As expected, the functional dimension responds directly to the physical and operative characteristics of the analyzed industrial system.

Table 8 presents the temporal dimension of the identified EFM. The minimum Active Duration is restricted to avoid compressor short cycling (>5 cycles/h), which might cause the operative failure of the cooling generation units. The maximum Active Duration is limited to one working shift in the facility, as an analysis of the typical profiles showed that MOs in the system can change from one shift to the other. The wide range in the Active Duration supports the intended implementation objective, as the duration of price volatility can extend over several hours. The Planning Duration is set to zero, as no planning is necessary; in both cases, the system can execute its task, providing cooling, without interruption. The Perception Duration depends on the specific market on which electricity is being purchased and hence ranges from 5 min for intra-day handling to 24 h for day-ahead handling. The Decision Duration is considered automatized and hence is defined by the latency of the components in the EMS. The Shift Duration responds to the ramp-up and ramp-down duration of the different cooling generation units. The Activation Duration aggregates the ramping-up of the EFM and is calculated using Equation (1); its major element is the Perception Duration, and hence the identified EFM presents a very high capability to react quickly to price volatility, achieving the intended implementation objective. The Deactivation Duration mirrors the Shift Duration, and the Regeneration Duration corresponds to the short-cycling avoidance requirement of the cooling generation units. Both are also relatively short, allowing the EFM to be used to respond to subsequent electricity price variations. The Validity responds to the production characteristics of the facility and hence to the operative clusters determined by the typical profile analysis. The Activation Frequency is calculated using Equation (2).

Table 9 presents the performance dimension of the identified EFM. The given load increase (↑) ∆P_flex values quantify the maximum and average difference between the typical electrical input and the electrical input necessary if the typical cooling output is satisfied by only using the mechanical chillers.
The load reduction (↓) ∆P_flex values, on the other hand, quantify the maximum and average difference between this typical electrical input and the electrical input necessary if the typical cooling output is primarily supplied using the absorption chillers. In reality, ∆P_flex is dynamic and hence a function of the state of operation of the system. The state of operation depends on the instantaneous cooling demand, in turn a function of both the outdoor temperature and the level of production in the facility. The given values are hence a static approximation to the dynamic ∆P_flex value.
As previously hinted in the functional dimension, the EFM presents a bidirectional flexibility type without a need for later compensation. The different ∆P_flex values respond to the different cooling demands of the typical profiles in the facility. A reduction in the cooling demand, MO-3 and MO-4, diminishes the flexible power. Moreover, the flexible power of a load increase is considerably lower than that of a load reduction because, in the reference operation, the CHWDXs have operative priority and hence are already supplying a portion of the cooling demand. The most attractive MO is MO-2, an electrical consumption reduction during production with an ambient temperature over 10 °C. This MO can achieve, on average, a reduction of 88% of the electrical consumption of the system for a period of up to 8 h. Regarding the additional characterization parameters, the flexible energy carriers respond to the operative principle of the cooling generation units in the system, and the flexible energy is calculated using Equation (3). As the physical and operative characteristics of the chilled water system and the production characteristics of the facility are considered to calculate ∆P_flex and ∆t_Active, they represent the practical EFP.
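The static ∆P_flex approximation described above can be illustrated with a back-of-the-envelope calculation: the electrical input of each supply scenario is the cooling demand divided by the electrical EER of the chillers carrying it. The EER values and the demand figure below are hypothetical, chosen only to show the mechanics; they are not the case-study data.

```python
def static_delta_p_flex(cooling_demand_kw: float,
                        eer_mechanical: float,
                        eer_absorption_electrical: float) -> float:
    """Static dP_flex for the resource-allocation EFM: difference in
    electrical input when the same cooling demand is carried by the
    mechanical chillers vs. (mostly) the absorption chillers.
    """
    p_el_mechanical = cooling_demand_kw / eer_mechanical
    p_el_absorption = cooling_demand_kw / eer_absorption_electrical
    return p_el_mechanical - p_el_absorption

# Hypothetical figures: 1200 kW cooling demand, electrical EER 4.0 for
# the CHWDXs vs. an effective electrical EER of 30 for the CHWABs
# (whose electricity use is mostly pumps and cooling towers).
dp = static_delta_p_flex(1200.0, 4.0, 30.0)  # -> 260.0 kW_flex
```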
Finally, Table 10 presents the economic dimension of the EFM and describes the calculation reasoning behind each parameter. As can be inferred from the descriptions, the implementation of the EFM represents only investment, activation and maintenance costs. The investment costs relate to the additional infrastructure needed to ensure a constant supply of HW in the facility and to the additional IT infrastructure needed to allow the reaction to dynamic electricity prices. The maintenance costs relate mainly to the additional operative hours of the CHWABs, which present relatively high maintenance costs due to their operative principle. Although a relatively high payback period is given, it is clear that MO-3 and MO-4 are prohibitively expensive, based on the average price of electricity in the EU, which is approximately 100 €/MWh [41].
Calculation of the Economical and Viable EFP of the Identified EFM
As mentioned before, the final step of the proposed methodology, the calculation of the economical and viable EFP based on the net revenues of the EFM, was not a part of the performed analysis. Nonetheless, as can be inferred, the gross revenues depend on the variability of the price paid for electricity and are specific to each MO. For MO-1 and MO-3, the load increase MOs, revenues are achieved if the electricity price is lower than the average price paid for the combination of electricity and hot water as energy inputs. This consideration will further limit the active duration and the activation frequency of these MOs, hence reducing their flexible energy and increasing their specific costs. These considerations hint that these MOs might not be economically attractive for the company to activate; nonetheless, they will be practically available if the EFM is implemented. On the other hand, for MO-2 and MO-4, the load decrease MOs, revenues are reached if the price of generating HW is lower than that of electricity. This consideration will also reduce the active duration and the activation frequency of these MOs. Nonetheless, due to the considerably low specific cost of MO-2 and its high validity, this might overall constitute a very attractive EFM for the company.
The economic EFP will hence constitute the flexible power and active duration with which the EFM generates revenues in each MO. The MOs that do not produce revenues should not be considered further. In the case of the viable EFP, the company has to make decisions on the active duration and activation frequency they intend for the EFM, weighing potential risks or negative consequences for the facility's performance, i.e., for its energy efficiency, which was not a part of this analysis.
Discussion
The initial application of the methodology provided several insights, which are discussed in this section. Under ideal conditions, the definition of the available industrial systems, Step 1, would respond only to the grouping of energy-consuming components into industrial systems and their categorization into technical units. Nonetheless, as detailed monitoring of all energy-consuming loads is not yet a standard in the industrial sector, a very relevant aspect in deciding which industrial systems will be analyzed is the availability of data records of their energy consumption and their output. The selection of implementation objectives is relevant to provide an end goal for the analysis; however, it is frequent that, once the EFMs are identified and characterized, new implementation objectives become relevant.
The physical characteristics of the industrial systems, determined in Step 2, give a general view of each system and its operation and can lead to an initial understanding of the energy flexibility capabilities of the system. Nevertheless, excessive reliance on these characteristics might be misleading. During the application of the methodology, there were cases in which EFMs initially thought to be available were deemed unavailable based on the operative characteristics of the system or the production characteristics of the facility.
The suitability analysis, conducted in Step 3, allows sorting among the available industrial systems and reduces the duration of the analysis, particularly when the production facility is very complex and hence constituted by a large number of industrial systems. Nonetheless, its qualitative nature demands caution, as a wrong assessment of any of the three criteria might discard suitable industrial systems. In the cases where a determination was not clear, the consideration of the operative characteristics of the industrial system, which normally occurs in the subsequent step, significantly helped the analysis.
In contrast to the physical characteristics, the operative characteristics of the industrial system and the production characteristics of the facility, determined in Steps 4 and 5, play a sorting role in either supporting or discarding the availability of each EFM category, as conducted in Step 6. This is particularly true for very broadly encompassing EFM categories, like dedicated energy storage, which initially seem to be available for all sorts of industrial systems.
The determination of the different parameters in the characterization framework, in Step 7, is intrinsically dependent on the nature of each industrial system and is therefore considerably difficult to standardize. Here, the experience of the person conducting the analysis, the thoroughness of the surveyed system and facility characteristics and the input of relevant stakeholders from the production facility proved vital to obtaining realistic values.
Similarly, the calculation of the economical and viable EFP, in Step 8, is very case-specific and only general guidelines can be given regarding how this step should be conducted.
In general, the application of the proposed methodology shows that it is not able to replace the accuracy of modelling the industrial system to simulate its operation under energy flexible operation, as described in References [15,38] among others. As explained, the methodology relies on typical profiles of energy consumption and patterns of operation. As these profiles and patterns are a simplification of the actual dynamic operation of an industrial system, the performance of EFMs once implemented will diverge from the provided characterization. Nonetheless, the methodology presents considerable value, as it pinpoints the industrial systems suitable for energy flexible operation from the large list of available industrial systems in a typical production facility. Moreover, it systematically identifies and characterizes the specific actions that induce energy flexible operation in these systems in the form of EFMs, which is not only novel but also provides a key input for the modelling, evaluation, implementation and management of the energy flexibility capabilities of the industrial system. Subsequent modelling of the industrial system then acts as a supplement, focusing on improving the accuracy of the values of the characterization parameters and serving as a prognosis tool to plan the management of the EFMs.
Additionally, the initial results show that the methodology is promising but can be improved by refining the tools used to establish the typical operative patterns of industrial systems. The accuracy of the results is highly dependent on the approach used to establish these patterns. It is hence crucial to examine thoroughly the available machine learning algorithms for data mining and clustering to find the best fit for the task. These algorithms provide extremely relevant insights towards understanding how energy consumption is affected, particularly by the operative characteristics of the system and the production characteristics of the facility. Therefore, the most fitting algorithms and their optimal usage will facilitate the identification of EFMs and provide a more accurate quantification of their characterization parameters.
Conclusions and Outlook
The paper presents a methodology to identify and characterize energy flexibility measures in the industrial systems that constitute a production facility. The methodology is meant to be the basis of an industrial energy audit focusing on the topic of energy flexibility, hence providing vital information for enterprises to implement and exploit the energy flexibility capabilities of their production facilities. The proposed methodology follows a procedure similar to the current standards in industrial energy auditing aimed at improving industrial energy management and identifying energy efficiency measures [28,29]. Like those standards, and as previously stated in the requirements, the proposed methodology needed to be systematic, agile, friendly to current operation and applicable to the plethora of industrial systems, and its outcomes needed to be relevant for the industrial stakeholders.

The methodology starts by establishing the available industrial systems in the facility, allowing the definition of different system boundaries depending on the morphology of the analyzed production facility and hence adapting to the heterogeneous nature of industrial systems. The fact that the expected implementation objectives of energy flexible operation are incorporated in the methodology provides a clear end goal for the analyzed production facilities and prioritizes outcomes according to the specific company needs, providing relevancy to its outcomes. The suitability analysis allows focusing only on the relevant industrial systems, reducing the analysis duration and contributing to its agility. This acts as a counterpart to the "big-consumers" approach usually used in energy efficiency auditing, which might be misleading in the case of energy flexibility. The analysis of the physical and operative characteristics of the industrial system and the production characteristics of the facility allows considering the current operative nature of the analyzed industrial systems, guaranteeing its affinity with the current operation approach. Moreover, it provides a more agile approach to analyzing the dynamic nature of industrial systems than building a dedicated system model. Overall, the methodology is systematic, as it follows a linear approach where decisions are made following previously defined criteria, allowing a multi-level analysis of the industrial systems to identify the available EFMs. EFM categories are analyzed and only discarded under specific techno-economic considerations, not on biased assumptions. The creation of the characterization framework, which consistently delimits the scope of each EFM, facilitates the subsequent evaluation, implementation and management.
The methodology is currently being implemented to identify and characterize EFMs in several production facilities within the framework of the second phase of the Kopernikus project "SynErgie". The results are expected to be used to evaluate the benefit-based performance of each EFM in order to prioritize and facilitate their implementation [42]. The characterization parameters of the EFMs will also be used as input for the simulation of the production facilities under energy flexible operation using digital twin modelling. Moreover, the outcomes of the proposed methodology will be used to develop energy management and optimization strategies for the analyzed production facilities. Continuous improvement of the methods and tools described in this article is expected as more production facilities are audited.
|
v3-fos-license
|
2017-02-12T02:06:33.996Z
|
2009-11-01T00:00:00.000
|
5717008
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://bmcstructbiol.biomedcentral.com/track/pdf/10.1186/1472-6807-10-S1-S3",
"pdf_hash": "14f05cd0076a9341106c333ec0fdb2408ee2d9ab",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43343",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "057c9cd506b168212f9b03fd445611224561be78",
"year": 2010
}
|
pes2o/s2orc
|
Generalized spring tensor models for protein fluctuation dynamics and conformation changes
Background In the last decade, various coarse-grained elastic network models have been developed to study the large-scale motions of proteins and protein complexes where computer simulations using detailed all-atom models are not feasible. Among these models, the Gaussian Network Model (GNM) and Anisotropic Network Model (ANM) have been widely used. Both models have strengths and limitations. GNM can predict the relative magnitudes of protein fluctuations well, but due to its isotropy assumption, it cannot be applied to predict the directions of the fluctuations. In contrast, ANM adds the ability to do the latter, but loses a significant amount of precision in the prediction of the magnitudes. Results In this article, we develop a single model, called the generalized spring tensor model (STeM), that is able to predict well both the magnitudes and the directions of the fluctuations. Specifically, STeM performs equally well in B-factor predictions as GNM and has the ability to predict the directions of fluctuations as ANM. This is achieved by employing a physically more realistic potential, the Gō-like potential. Although this potential, which is more sophisticated than that of either GNM or ANM, adds complexity to the derivation process of the Hessian matrix (which fortunately has been done once and for all, and the MATLAB code is freely available electronically at http://www.cs.iastate.edu/~gsong/STeM), it causes virtually no performance slowdown. Conclusions Derived from a physically more realistic potential, STeM proves to be a natural solution in which advantages that used to exist in two separate models, namely GNM and ANM, are achieved in one single model. It thus lightens the burden of working with two separate models and of relating the modes of GNM with those of ANM at times. By examining the contributions of different interaction terms in the Gō potential to the fluctuation dynamics, STeM reveals (i) a physical explanation for why the distance-dependent, inverse distance square (i.e., 1/r²) spring constants perform better than the uniform ones, and (ii) the importance of three-body and four-body interactions to properly modeling protein dynamics.
Introduction
It is now well accepted that the functions of a protein are closely related to not only its structure but also its dynamics. With the advancement of the computational power and increasing availability of computational resources, function-related protein dynamics, such as large-scale conformation transitions, has been probed by various computational methods at multiple scales.
Among these computational methods, coarse-grained models play an important role since many functional processes take place over time scales that are well beyond the capacity of all-atom simulations [1]. One type of coarse-grained models, the elastic network models (ENMs), have been particularly successful and widely used in studying protein dynamics and in relating the intrinsic motions of a protein with its functional-related conformation changes over the last decade [2][3][4][5].
The reason why ENMs have been well received as compared to the conventional normal mode analysis (NMA) lies at its simplicity to use. ENMs do not require energy minimization and therefore can be applied directly to crystal structures to compute the modes of motions. In contrast, minimization is required for carrying out the conventional normal mode analysis (NMA). The problematic aspect of energy minimization is that it usually shifts the protein molecule away from its crystal conformation by about 2 Å. In addition, in ENMs analytical solutions to residue fluctuations and motion correlations can be easily derived. On the other hand, the simplicity of ENMs leaves much room for improvement and many new models have been proposed [6][7][8][9][10][11][12].
The two most widely used ENM models are the Gaussian Network Model (GNM) and the Anisotropic Network Model (ANM). They have been used to predict the magnitudes or directions of the residue fluctuations from a single structure and have been applied in many research areas [4,5], such as domain decomposition [13] and allosteric communication [14][15][16][17]. Both models have their own advantages and disadvantages. GNM can predict the relative magnitudes of the fluctuations well but, due to its isotropy assumption, it cannot be applied to predict the directions of the fluctuations. In contrast, ANM adds the ability to do the latter, but it loses a significant amount of precision in the prediction of the magnitudes.
Gaussian network model. Gaussian Network Model (GNM) was first introduced in [2] under the assumption that the separation between a pair of residues in the folded protein is Gaussianly distributed. Given its simplicity, the model performs extremely well in predicting the experimental B-factors. The model represents a protein structure using its Cα atoms. The connectivity among the Cα's is expressed in the Kirchhoff matrix Γ (see Eq. (1)). Two Cα's are considered to be in contact if their distance falls within a certain cutoff distance. The cutoff distance between a pair of residues is the only parameter in the model and is normally set to be 7 Å to 8 Å. Let Δr_i and Δr_j represent the instantaneous fluctuations from the equilibrium positions of residues i and j, and let r_ij and r_0,ij be the respective instantaneous and equilibrium distances between residues i and j. The Kirchhoff matrix Γ is:

$$
\Gamma_{ij} =
\begin{cases}
-1 & \text{if } i \neq j \text{ and } r_{ij} \leq r_c \\
0 & \text{if } i \neq j \text{ and } r_{ij} > r_c \\
-\sum_{k,\, k \neq i} \Gamma_{ik} & \text{if } i = j
\end{cases}
\tag{1}
$$

where i and j are the indices of the residues and r_c is the cutoff distance.
The simplicity of the Kirchhoff matrix formulation results from the assumption that the fluctuations of each residue are isotropic and Gaussianly distributed along the X, Y and Z directions. The expected values of the residue fluctuations, ⟨Δr_i²⟩, and correlations, ⟨Δr_i · Δr_j⟩, can be easily obtained from the inverse of the Kirchhoff matrix:

$$
\langle \Delta r_i \cdot \Delta r_j \rangle = \frac{3 k_B T}{\gamma} \left[ \Gamma^{-1} \right]_{ij}
\tag{2}
$$

where k_B is the Boltzmann constant and T is the temperature. γ is the spring constant. The ⟨Δr_i²⟩ term is directly proportional to the crystallographic B-factors.
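As an illustration of how directly Eqs. (1) and (2) translate into code, the sketch below builds the GNM Kirchhoff matrix from Cα coordinates and computes the mean-square fluctuations from its pseudo-inverse (used because Γ is singular, with one zero eigenvalue). This is a generic GNM implementation for illustration, not the authors' released MATLAB code.

```python
import numpy as np

def gnm_fluctuations(coords: np.ndarray, cutoff: float = 7.5,
                     gamma: float = 1.0, kBT: float = 1.0) -> np.ndarray:
    """coords: (N, 3) C-alpha coordinates. Returns <dR_i^2> per residue."""
    # Eq. (1): off-diagonal entries are -1 within the cutoff,
    # diagonal entries are the contact counts
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = np.where(dists <= cutoff, -1.0, 0.0)
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
    # Eq. (2): fluctuations from the (pseudo-)inverse of the Kirchhoff matrix
    kirchhoff_inv = np.linalg.pinv(kirchhoff)
    return 3.0 * kBT / gamma * np.diag(kirchhoff_inv)

# B-factors are proportional to these values: B_i = (8 * pi**2 / 3) * <dR_i^2>
```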
Anisotropic network model. GNM provides only the magnitudes of residue fluctuations. To study the motions of a protein in more detail, especially to determine the directions of the fluctuations, normal mode analysis (NMA) is needed. Traditional NMA is all-atom based and requires a structure to be first energy-minimized before the Hessian matrix and normal modes can be computed, which was rather cumbersome. Even after the energy minimization, the derivation of the Hessian matrix is not easy due to the complicated all-atom potential. In Tirion's pioneering work [18], the energy minimization step was removed and a much simpler Hookean potential was used, and yet it was shown that the low-frequency normal modes remained mostly accurate. Since then, Hookean spring potentials have been favored in most coarse-grained Cα models [3,19,20]. One such model is best known as the Anisotropic Network Model (ANM) [3], since it carries anisotropic, directional information about the fluctuations. The potential in ANM has the simplest harmonic form. Assuming that a given structure is at equilibrium, the Hessian matrix (3N×3N) can be derived analytically from such a potential [3], where the super-element H_ij is the interaction tensor between residues i and j and, for i ≠ j, can be expressed as:

$$
\mathbf{H}_{ij} = -\frac{\gamma}{r_{0,ij}^2}
\begin{bmatrix}
x_{ij} x_{ij} & x_{ij} y_{ij} & x_{ij} z_{ij} \\
y_{ij} x_{ij} & y_{ij} y_{ij} & y_{ij} z_{ij} \\
z_{ij} x_{ij} & z_{ij} y_{ij} & z_{ij} z_{ij}
\end{bmatrix}
\tag{5}
$$

where x_ij, y_ij and z_ij are the components of the equilibrium separation vector between residues i and j. The advantages of such coarse-grained network models lie in several aspects: (i) they use the Cα's to represent the residues in a structure; (ii) they do not require energy minimization and thus can be applied directly to crystal structures to compute the modes of motions; (iii) they provide analytical solutions to the mean-square fluctuations and motion correlations.
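The ANM Hessian assembly follows the same pattern as the GNM snippet above, with 3×3 super-elements built from outer products of the separation vectors. This is again a generic illustration under the same assumptions, not the authors' code.

```python
import numpy as np

def anm_hessian(coords: np.ndarray, cutoff: float = 13.0,
                gamma: float = 1.0) -> np.ndarray:
    """Builds the 3N x 3N ANM Hessian from (N, 3) C-alpha coordinates."""
    n = len(coords)
    hessian = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = float(d @ d)
            if r2 > cutoff ** 2:
                continue
            # Eq. (5): off-diagonal super-element is -(gamma/r^2) * outer(d, d)
            block = -gamma / r2 * np.outer(d, d)
            hessian[3*i:3*i+3, 3*j:3*j+3] = block
            hessian[3*j:3*j+3, 3*i:3*i+3] = block
            # Diagonal super-elements accumulate the negated off-diagonals
            hessian[3*i:3*i+3, 3*i:3*i+3] -= block
            hessian[3*j:3*j+3, 3*j:3*j+3] -= block
    return hessian
```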
The limitations of the GNM model. GNM provides only information on the magnitudes of residue fluctuations but no directional information. Therefore, the modes of GNM should not be interpreted as protein motions or components of the motions, since the potential in GNM is not rotationally invariant [21].
The limitations of the ANM model. In contrast to that in GNM, the potential in ANM is based on simple, harmonic Hookean springs and is rotationally invariant. Thus, the modes of ANM do represent possible modes of protein motions. In doing this, however, ANM loses a significant amount of precision in predicting the magnitudes of the fluctuations. The reason is that, in GNM, the fluctuations in the separation between a pair of residues are assumed to be Gaussianly distributed and isotropic, while in ANM, because only a Hookean spring is attached between a pair of residues i and j, the fluctuation of residue j is constrained only longitudinally along the axis from i to j. The fluctuation is unconstrained transversely. The interaction spring tensor H_{i,j}^{ANM} between residues i and j in Eq. (5) becomes the following in the local frame (where the Z axis is along the direction from residue i to j):

$$
\mathbf{H}_{i,j}^{ANM} = -\gamma
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}
\tag{8}
$$

Because the fluctuation of residue j is unconstrained transversely relative to residue i, the fluctuations given by ANM are less realistic than those given by GNM, which are assumed to be isotropic. The isotropy in GNM is equivalent to an interaction spring tensor between residues i and j of the following form:

$$
\mathbf{H}_{i,j}^{GNM} = -\gamma
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\tag{9}
$$

From the two tensors H_{i,j}^{ANM} and H_{i,j}^{GNM} given in Eqs. (8) and (9), the causes of the limitations in GNM and ANM are clearly displayed. The lack of realism in ANM is an artifact resulting from its over-simplified potential. The isotropy assumption of GNM, on the other hand, does a better job than ANM in modeling the effect of residue interactions on the magnitudes of the fluctuations, but gives up completely on representing the anisotropic nature that is intrinsic to all physical forces and interactions, since only the magnitudes of the mean-square fluctuations and cross-correlations were of concern when GNM was first proposed. Therefore, to overcome the limitations of GNM and ANM, what is needed is a generalized interaction spring tensor that is both anisotropic and able to exert more proper constraints on the fluctuations than the ANM tensor H_{i,j}^{ANM} does. This calls for a model that has a physically more realistic potential than that of ANM. Since potentials with only two-body interactions can provide only longitudinal constraints, it is necessary to include multi-body interactions in the potential in order to have transversal constraints as well. The multi-body interactions provide additional diagonal and off-diagonal terms to the interaction spring tensor between residues i and j. For example, by properly including three-body interactions, the interaction spring tensor may look like (shown schematically):

$$
\mathbf{H}_{i,j} = -T(i,j)
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}
- \sum_{k} \mathbf{S}_{i,j,k}
\tag{10}
$$

where k represents the indices of the residues that interact with both residues i and j through a three-body interaction, and S_{i,j,k} denotes the corresponding 3×3 tensor contribution, which adds both diagonal and off-diagonal terms. The first tensor on the right side of the equation represents the two-body interaction, which is similar to H_{i,j}^{ANM}, except that the interaction strength T(i, j) depends on residues i and j, and thus may be distance-dependent as well.
Our contributions. To overcome the limitations of ANM and GNM, we have developed a generalized spring tensor model for studying protein fluctuation dynamics and conformation changes. It is called the generalized spring tensor model, or STeM, because the interaction between a pair of residues i and j is no longer a Hookean spring with the tensor form of Eq. (8), but takes a generalized tensor form (similar to that in Eq. (10)) that can provide both longitudinal and transversal constraints on a residue's fluctuations relative to its neighbours. We obtain the generalized tensor form by deriving the Hessian matrix from a physically more realistic Gō-like potential (Eq. (11)), which has been successfully used in many MD simulations to study protein folding processes and conformation changes [22][23][24]. In addition to the Hookean spring interactions, the potential includes bond-bending and torsional interactions, both of which had been found to be helpful in removing the "tip effect" of the ANM model [9]. The inclusion of the bond-bending and torsional interactions is reflected in the generalized spring tensor interaction between residues i and j, in such a way that the tensor now includes not only the two-body interaction between residues i and j, but also the three-body and four-body interactions that involve residues i and j (see Eq. (10)). In doing this, the STeM model is able to integrate all the aforementioned attractive features of ANM and GNM and overcome their limitations. Specifically, STeM performs equally well in B-factor predictions as GNM and has the ability to predict the directions of the fluctuations as ANM. This is accomplished with virtually no performance slowdown. The only potential drawback of this model is the significantly increased complexity in deriving the Hessian matrix. Fortunately, this has been done once and for all, and the derivation results are available electronically at http://www.cs.iastate.edu/~gsong/STeM. STeM is physically more accurate because it explicitly includes the bond-bending and torsional interactions, which capture the chain behavior of protein molecules and which are neglected in most elastic network models, where a protein is treated as an elastic rubber. Therefore, we have reason to expect that this model will further distinguish itself in studying protein dynamics where a correct modeling of bond bending and/or torsional rotations is critical.
Results and discussion
Crystallographic B-factor prediction

Table 1 shows the correlation coefficients between the experimental and calculated B-factors of the 111 proteins in the first dataset. The mean values of the correlation coefficients of ANM, GNM, and STeM are 0.53, 0.59, and 0.60, respectively. STeM provides the directional information of the residue fluctuations, as ANM does, and has an accuracy even slightly better than GNM in B-factor predictions. Figure 1 shows the distributions of the correlation coefficients between the calculated and the experimental B-factors. STeM is the only model for which there are instances where the correlation coefficient is above 0.85 and no instances where the correlation coefficient is below 0.25. This implies that the performance of STeM is steadier than that of either ANM or GNM. The scatter plot of the correlation coefficients of ANM versus STeM in Figure 2 shows that STeM performs better than ANM for 80% of the proteins in the dataset. Protein structures of higher resolution have more accurate data on atom coordinates and B-factors. We investigated whether our model's performance can be further improved when the dataset used is limited to structures with higher resolution. We selected the 12 structures with resolution better than 1.3 Å from the first dataset. The mean values of the correlation coefficients for these 12 structures are 0.56, 0.62, and 0.63 for ANM, GNM, and STeM, respectively, which gives an improvement of about 5%-6% for all three models. Since the improvement is based on a relatively small set of 12 structures, a larger dataset is needed to further examine this potential dependence of B-factor prediction accuracy on structure quality.
The contributions of different interaction terms to the fluctuations
The Gō-like potential in Eq. (11) has four different interaction terms, namely, bond stretching, bond bending, torsional interactions, and the non-bonded interactions. It is of great interest to investigate the relative contributions of these different terms to the agreement with experimental B-factors. Since only the non-bonded interaction term (V_4) is able to provide by itself enough constraints to ensure that the Hessian matrix has no more than six zero eigenvalues, V_4 is used as the base term for the evaluation of the different terms' contributions to the mean-square fluctuations. The Hessian matrix of ANM, denoted by H_ANM, is used as another baseline for comparison purposes. Table 2 lists the contributions of these different terms to the improvement of B-factor predictions as they are added to the potential.
First, it is seen that the non-bonded interactions, as present in H_{V_4} and H_ANM, play a dominant role in contributing to the B-factors. This is not surprising, since the mean-square fluctuations of a residue are mostly constrained by its interactions with its spatial neighbours, most of which are through non-bonded interactions. What is more interesting is that the H_{V_4} term alone performs better than H_ANM. This is in agreement with recent results showing that the performance of B-factor predictions can be improved by using distance-dependent force constants [25,26]. In particular, spring constants that take the form of the inverse distance square have been shown to be superior in a recent exhaustive study that experimented with different distance-dependent spring constants on a large dataset [10]. The Taylor expansion of the non-bonded interaction term (V_4) shows that it has an equivalent spring constant of the form 120ε/r²_{0,ij} (see Eq. (36)), which is exactly proportional to the inverse of the pairwise distance square. Thus, STeM provides a physics-based explanation for the choice of using inverse-distance-square spring constants.
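The equivalent spring constant can be verified directly. Assuming the standard Gō-type non-bonded pair term with well depth ε (this functional form is an assumption, since the paper's Eq. (11) is not reproduced in this extract, though it is consistent with the quoted constant), its second derivative at the equilibrium distance gives:

```latex
% Assumed standard Go-like non-bonded pair term with well depth \epsilon:
V_4(r) = \epsilon \left[ 5 \left( \frac{r_{0,ij}}{r} \right)^{12}
                       - 6 \left( \frac{r_{0,ij}}{r} \right)^{10} \right]
% First derivative vanishes at r = r_{0,ij} (equilibrium), and the
% second derivative (the effective spring constant) evaluates to:
V_4''(r_{0,ij}) = \frac{\epsilon}{r_{0,ij}^{2}} \left( 780 - 660 \right)
                = \frac{120\,\epsilon}{r_{0,ij}^{2}}
```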
The contribution to the improvement in B-factor predictions from each of the bonded interactions, such as bond stretching, is small, as was pointed out by Bahar et al. when GNM was first proposed over a decade ago [2]. However, when the contributions of all four terms are added up, they together enable the STeM model to gain a significant improvement over ANM and to reach an accuracy on a par with GNM.
Conformational change evaluation
It is known that the modes derived from the open form of a structure have better overlaps and correlations with the direction of a protein's conformation change than those derived from the closed form [20]. Here we apply the STeM model to study the conformation changes between the open and closed forms of 20 proteins, using the open forms to calculate the normal modes. Table 3 lists the overlaps and correlations with the observed conformation changes and the indices of the modes most involved in the changes. GNM is not considered, since it cannot provide directional information. The mean values of the overlaps and correlation coefficients are 0.49 and 0.61 for ANM, and 0.52 and 0.64 for STeM, an improvement of about 5% for STeM over ANM on both measures.
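For reference, the overlap used in this kind of evaluation is commonly defined as the normalized projection of the observed displacement vector onto a normal mode [20]. The sketch below implements that standard definition; the function names are ours, not the paper's.

```python
import numpy as np

def overlap(mode, delta_r):
    """Overlap between one 3N-dimensional mode and the observed change."""
    mode, delta_r = np.ravel(mode), np.ravel(delta_r)
    return abs(mode @ delta_r) / (np.linalg.norm(mode) * np.linalg.norm(delta_r))

def most_involved_mode(modes, delta_r):
    """Index (0-based) and overlap of the mode most involved in the change."""
    overlaps = [overlap(m, delta_r) for m in modes]
    best = int(np.argmax(overlaps))
    return best, overlaps[best]
```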
Since the results are obtained based on a relatively small set of 20 protein pairs, the significance of the improvement seen here needs to be further tested by conducting a more exhaustive analysis that uses a larger set of proteins and varying parameters, and preferably taking into account the effect of crystal packing as well.
We will leave this for future work. It is also worth noting that, in both the overlap and correlation calculations, the modes most involved in the conformation change tend to have lower indices in STeM than in ANM (see Table 3), which may imply that the modes of STeM are of higher quality than those of ANM.
Conclusions
Protein mean-square fluctuations and conformation changes are two closely related aspects of protein dynamics. In the past, however, two separate groups of models were needed to best explain them: the best models for predicting mean-square fluctuations cannot predict conformation changes, and the models that can predict conformation changes do not have the best performance in predicting mean-square fluctuations. There is thus an obvious gap between the models that work well for one aspect of the dynamics and those that work well for the other. Since protein mean-square fluctuations and conformation changes are closely related phenomena that share a similar physical origin, we reasoned that models based on a physically more accurate potential should be able to bridge this gap and predict both aspects of protein dynamics well. Indeed, by using a Gō-like potential, we have developed a spring tensor model (STeM) that singly predicts both mean-square fluctuations and conformation changes well. Specifically, STeM performs as well as GNM in B-factor predictions and, like ANM, can predict the directions of fluctuations. The new model does come at a cost: the derivation of the Hessian matrix in STeM is much more complex than in models that use only two-body Hookean potentials, such as ANM. However, the added complexity in the potential is necessary to resolve the aforementioned gap, which is mainly due to over-simplified potentials, and to provide a single, unified model for protein dynamics. Moreover, the derivation, though more involved, needs to be done only once.
Examining the different interaction terms in the Gō potential and their contributions to the agreement with experimental B-factors provides further benefits. Along the way, we have discovered a physical explanation for why distance-dependent, inverse-distance-square (i.e., $1/r^2$) spring constants perform better than uniform ones: the van der Waals interaction term in the potential naturally yields inverse-distance-square spring constants. By including the bond bending and torsional interactions and their contributions to the improvement in B-factor predictions, the STeM model confirms the importance of 3-body and 4-body potentials. Their importance becomes even more evident when their contribution to the interaction spring tensor is examined: the multi-body potentials are shown to be necessary for providing proper constraints on residue fluctuations, including in the transverse directions. It is worth noting that the 3-body and 4-body potentials introduced through bond bending and torsional interactions only scratch the surface of the full extent of multi-body potentials, since these interactions are restricted to consecutive residues along the protein chain. The improvement seen here calls for other generalized spring tensor models with a more thorough treatment of multi-body potentials.
Chain breaking, such as that due to missing residues, has a greater impact on STeM than on ANM or GNM, since the first, second, and third terms of the potential used to derive the model all depend on the continuity of the chain. We have not evaluated this impact in the current work, but it is a possible future research direction, and STeM would be a proper tool for evaluating the impact of chain breaking on protein motions. STeM does not always outperform ANM in B-factor predictions; it does better than ANM for 80% of the proteins studied, and it would be interesting to find out why. Crystal packing is known to have a significant impact on mean-square fluctuations, so a proper inclusion of crystal-packing effects may further enhance STeM's performance. Since STeM takes into account bond bending and
Figure 1. The distributions of the correlation coefficients between the experimental and calculated B-factors.
Figure 2. The scatter plot of the correlation coefficients by ANM and those by STeM. For 80% of the proteins listed in Table 1, STeM does better than ANM.
Methods
In this section we show the derivation of the Hessian matrix from the Gō-like potential proposed by Clementi et al. [22].
The Gō-like potential

The Gō-like potential in [22] takes the non-native and native (equilibrium) conformations as input and can be divided into four terms. The first term (defined as $V_1$ for later use) preserves the chain connectivity. The second ($V_2$) and third ($V_3$) terms define the bond-angle and torsional interactions, respectively, and the last term ($V_4$) describes the non-local interactions. The Gō-like potential has the following expression:

$$V = \sum K_r (r - r_0)^2 + \sum K_\theta (\theta - \theta_0)^2 + \sum \left\{ K_1 \left[ 1 - \cos(\phi - \phi_0) \right] + K_3 \left[ 1 - \cos 3(\phi - \phi_0) \right] \right\} + \sum_{i,j} \varepsilon \left[ 5 \left( \frac{r_{0,ij}}{r_{ij}} \right)^{12} - 6 \left( \frac{r_{0,ij}}{r_{ij}} \right)^{10} \right] \quad (11)$$

In Eq. (11), $r$ and $r_0$ represent respectively the instantaneous and equilibrium lengths of the virtual bonds between the C$_\alpha$ atoms of consecutive residues. Similarly, $\theta$ ($\theta_0$) and $\phi$ ($\phi_0$) are respectively the instantaneous (equilibrium) virtual bond angles formed by three consecutive residues and the instantaneous (equilibrium) virtual dihedral angles formed by four consecutive residues. The $r_{ij}$ and $r_{0,ij}$ represent respectively the instantaneous and equilibrium distances between two non-consecutive residues $i$ and $j$, over which the last sum runs. The Gō-like potential in Eq. (11) includes several force parameters ($K_r$, $K_\theta$, $K_1$, $K_3$ and $\varepsilon$), whose values are taken directly from [22] without any tuning: $K_r = 100\varepsilon$, $K_\theta = 20\varepsilon$, $K_1 = \varepsilon$ and $K_3 = 0.5\varepsilon$.

Anisotropic fluctuations from the second derivative of the Gō-like potential

Similar to ANM, STeM has a 3N×3N Hessian matrix that can be decomposed into N×N super-elements. Each super-element $H_{i,j}$ is a sum of four 3×3 matrices: the first is the contribution from bond stretching, the second and third are the contributions from bond bending and torsional rotations, respectively, and the fourth is the contribution from non-local contacts. The Hessian matrix is the second derivative of the overall potential in Eq. (11). Let us first consider the first term of the Gō-like potential, and let $(X_i, Y_i, Z_i)$ and $(X_j, Y_j, Z_j)$ be the Cartesian coordinates of two consecutive residues $i$ and $j$.
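Once the Hessian has been assembled from these four contributions, isotropic B-factors follow from its pseudo-inverse in the same way as in ANM. The snippet below is a generic sketch of that step, assuming a precomputed 3N×3N Hessian; it is not code from the paper, and units are left in multiples of $k_BT$.

```python
import numpy as np

def bfactors_from_hessian(H, kBT=1.0):
    """Isotropic B-factors from a 3N x 3N Hessian.

    The pseudo-inverse discards the six zero modes (rigid-body motions).
    The mean-square fluctuation of residue i is the trace of the 3x3
    diagonal super-element of H^-1, and B_i = (8*pi^2/3) * kBT * msf_i.
    """
    G = np.linalg.pinv(H)
    n = H.shape[0] // 3
    msf = np.array([np.trace(G[3 * i:3 * i + 3, 3 * i:3 * i + 3])
                    for i in range(n)])
    return (8 * np.pi ** 2 / 3) * kBT * msf
```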
The protein sets studied

To evaluate the STeM model, we apply it to compute B-factors and to study protein conformation changes, and we compare the results with those computed from ANM and GNM. For the B-factor computations, the protein dataset is from [27] and contains 111 proteins. Two proteins, 1CYO and 5PTP, were removed from the dataset because they no longer exist in the current Protein Data Bank [28]. The proteins in this first dataset all have a resolution of 2.0 Å or better. For the conformation change studies, the dataset is from [20] and contains 20 pairs of protein structures, each pair exhibiting a large structural difference.
Evaluation techniques
We used the same evaluation techniques as have been applied before [20,27]. Specifically, the following three numerical measures are used.
The correlation between the experimental and calculated B-factors
The linear correlation coefficient between the experimental and calculated B-factors is calculated using the following formula:

$$\rho = \frac{\sum_{i=1}^{N}\big(B_i^{\mathrm{exp}}-\bar B^{\mathrm{exp}}\big)\big(B_i^{\mathrm{calc}}-\bar B^{\mathrm{calc}}\big)}{\sqrt{\sum_{i=1}^{N}\big(B_i^{\mathrm{exp}}-\bar B^{\mathrm{exp}}\big)^{2}\,\sum_{i=1}^{N}\big(B_i^{\mathrm{calc}}-\bar B^{\mathrm{calc}}\big)^{2}}}$$

where $\bar B^{\mathrm{exp}}$ and $\bar B^{\mathrm{calc}}$ are the means of the experimental and calculated B-factors over all $N$ residues.

The overlap between the experimentally observed conformation changes and the calculated modes

The overlap between mode $\mathbf{v}_j$ and the observed conformational change $\Delta\mathbf{r}$ is defined, as in [20], as

$$O_j = \frac{|\mathbf{v}_j \cdot \Delta\mathbf{r}|}{\|\mathbf{v}_j\|\,\|\Delta\mathbf{r}\|}$$
Accuracy of Conventional and Digital Impressions for Full-Arch Implant-Supported Prostheses: An In Vitro Study
Both conventional and digital impressions aim to record the spatial position of implants in the dental arches. However, there is still a lack of data to justify the use of intraoral scanning over conventional impressions for full-arch implant-supported prostheses. The objective of this in vitro study was to compare the trueness and precision of conventional impressions and of digital impressions obtained with four intraoral scanners: Trios 4 from 3Shape®, Primescan from Dentsply Sirona®, CS3600 from Carestream® and i500 from Medit®. This study focused on the impression of an edentulous maxilla in which five implants were placed for an implant-supported complete prosthesis. The digital models were superimposed on a digital reference model using dimensional control and metrology software. Angular and distance deviations from the digital reference model were calculated to assess trueness. The dispersion of the values around their mean for each impression was also calculated to assess precision. The mean distance deviation in absolute value and the direction of the distance deviation were smaller for conventional impressions (p-value < 0.001). The i500 had the best results regarding angular measurements, followed by the Trios 4 and CS3600 (p < 0.001). The conventional and i500 digital impressions showed the lowest dispersion of values around the mean (p-value < 0.001). Within the limitations of our study, our results revealed that the conventional impression was more accurate than the digital impression, but further clinical studies are needed to confirm these findings.
Introduction
In recent years, interest in digital technologies has increased and impacted various industries around the world, including dentistry. In particular, the advent of computer-aided design/computer-aided manufacturing (CAD-CAM) systems has led to the development of digital technologies for intraoral impressions in the field of prosthodontics. Obtaining quality digital impressions for conventional fixed prostheses is no longer a problem. However, the quality of impressions for implant-supported complete prostheses is still a concern.
The process of osseointegration between alveolar bone and implant body leaves very little margin for error in the accuracy of the impression. In complete oral rehabilitation with a multi-supported implant prosthesis, the passivity of the framework is an essential requirement for the long-term survival of the implants and the prosthetic restoration [1][2][3]. Passivity is even more important for a screw-retained prosthesis, where stresses are applied during the screwing process in order to align the surfaces of the prosthesis and the implant [4]. Accurate transfer of the three-dimensional relationship of the intraoral implant to the master cast is an essential step in achieving a passive fit [5,6].
Conventional impressions are considered the gold standard in some clinical situations and the most commonly used technique in dentistry [7]. However, the risk of deformation during conventional impression-taking or cast-pouring increases in rehabilitations with implant-supported complete prostheses, and contributes to a lack of passive adaptation of the framework to the implants. Intraoral acquisition provides a digital model in which the implant replicas are automatically placed. The dental technician can model the prosthesis using CAD software and then machine or print the prosthetic part. Digital impression systems eliminate the error-prone plaster modeling step of conventional impressions, and the impression can be stored as an STL file indefinitely. Digital technology can provide greater reliability by eliminating casting stresses and/or the dimensional variations experienced by materials during curing or removal [8].
Different capture techniques are used in intraoral scanners. The triangulation technique (used by the i500 and CS3600 cameras) estimates the volume of the object by calculating the difference between the light incident on and reflected from the object. This acquisition process requires software with significant computing power and complex algorithms capable of reconstructing the surface in three dimensions. Parallel confocal imaging (used by the Trios 4 and Primescan cameras) is based on laser and optical scanning of the oral volume (dental, implant, periodontal) to reproduce it digitally: a series of "sections" at different depths of field are captured and assembled to reconstruct a three-dimensional representation of the object [9]. Some studies agree that not all scanners are suitable for taking digital impressions for full-arch implant-supported prostheses [10]. An inaccurate impression does not record the true position of the implants and the spatial relationships with the teeth, alveolar ridges and soft tissues [11]. Inadequate accuracy of the impression technique and/or manual steps in the fabrication of the prosthesis may lead to poor prosthetic fit and subsequent technical, mechanical and biological complications [10]. Therefore, there is still a lack of data to justify the use of digital impressions for implant-supported complete prostheses [7].
The aim of this in vitro study was to determine and compare the trueness and precision of four intraoral scanners (IOSs), Trios 4 from 3Shape®, Primescan from Dentsply Sirona®, CS3600 from Carestream® and i500 from Medit®, with those of a conventional impression for a full-arch implant-supported prosthesis. The null hypothesis was that conventional and digital impressions produce casts of similar accuracy.
Design
This in vitro study focused on the impression of an edentulous maxilla in which five implants were placed in the right central incisor, canine and first molar sectors and in the left canine and first molar sectors.
Working Model
In the first step, the maxillary working model was produced with a 3D printer from the digital file of a fully edentulous maxillary arch. The model was printed with the Formlabs Form 2® 3D printer, which uses resin as the printing material (Figure 1a). Naturactis® implant analogs of the Euroteknica® (ETK) brand, 3.5 mm in diameter, were fixed on the model in the right sector at the first molar (#1), canine (#2) and central incisor (#3) positions and in the left sector at the canine (#4) and first molar (#5) positions. They have an internal hexagonal conical connection (Ref. NLA_H35) (Figure 1b). The working model was scanned with the 3Shape D2000 scanner to obtain the digital reference model. This scanner allows multiline scanning using four 5.0-megapixel cameras with 27 blue LEDs (Figure 2). Two types of impressions were subsequently evaluated: (i) conventional impressions with elastomers, and (ii) digital impressions with four different scanners (Trios 4 from 3Shape®, Primescan from Dentsply Sirona®, CS3600 from Carestream® and Medit i500®) (Figure 2).
Conventional Impression
In a second step, the impression transfers were screwed on to the maxillary working model at 5 N·cm before taking the conventional impression ( Figure 1c). This allows the spatial position of the implant to be transferred into the impression material for precise repositioning of the implant analog on the working model. The transfers were ETK Naturactis ® short transfers with S-Naturactis screws ( Figure 1d).
Each of the three calibrated operators took three impressions using a custom open tray and polyether material, Impregum™ Penta™ Soft from 3M ® (Figure 1e). The custom impression trays were created digitally using Dental System's 3D modeling software (3Shape ® ) and the digital file of the previously scanned maxillary working model and were printed with the Formlabs' Form 2 ® 3D printer. The Impregum™ pellets (base and catalyst) were inserted into the metal cartridge of the 3M ® ESPE Pentamix™ 2 automatic mixer with a single-use mixing tip and a 3M ® elastomer syringe. Once the implant replicas were placed, the impressions were cast and scanned with the D2000 laboratory scanner to obtain a digital model for evaluations. A total of nine conventional impressions were obtained.
Digital Impression
In the third step, scanbodies were screwed onto the maxillary working model analogs at 5 N·cm prior to digital impression ( Figure 1f). The scanbodies were attached to the implants so the scanner can establish the spatial position of the implant analog in the working model.
Each of the three calibrated operators took three impressions with each of the IOSs according to the manufacturers' instructions [12][13][14]. A total of thirty-six digital impressions (nine per camera) were created. These impressions provided the digital models needed for the evaluation.
Two different scanning methods were applied, as suggested by the manufacturers [12][13][14]. The first method is common to the Trios 4, Primescan and i500 scanners. It includes a first scan of the working model without scanbodies. The scan started with the occlusal surface from the right molar sector to the left molar sector, then proceeded to the buccal surfaces of the edentulous ridge in the reverse path and finally to the palatal surfaces of the edentulous ridge from the right molar sector to the left molar sector (Figure 3). The second scan was performed with the scanbodies in place, after circular cutting and removal of the implant areas from the first scan.
The second method was performed with the CS3600 scanner. The first step of the scan was performed without the scanbodies and was identical to the previous method (Figure 3). After the circular cutting of the implant areas, one scanbody was placed and selected for scanning. Then, the scanbody was removed and the next one was placed for scanning, until all the scanbodies had been scanned on the 3D model.
Comparison of Digital Models
The evaluation criterion for the impressions was accuracy, a combination of precision and trueness (ISO 5725-1) [15]. Trueness is the difference between the mean value and the true value. Precision is the dispersion of values around the mean, which reflects the reproducibility of a measurement.
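As a simple numerical illustration of these two notions (our sketch, not the software used in the study), trueness can be computed as the distance between the mean of repeated measurements and the reference value, and precision as the dispersion of those measurements around their own mean:

```python
import numpy as np

def trueness_and_precision(measurements, reference):
    """Trueness and precision of repeated deviation measurements.

    `measurements` could be the nine inter-scanbody distances (in mm)
    obtained for one impression system; `reference` is the corresponding
    value on the digital reference model.
    """
    m = np.asarray(measurements, dtype=float)
    trueness = abs(m.mean() - reference)  # closeness to the true value
    precision = m.std(ddof=1)             # reproducibility around the mean
    return trueness, precision

# Example with made-up values (mm): nine repeated measurements vs. a reference
print(trueness_and_precision([20.10, 20.15, 20.08, 20.12, 20.11,
                              20.09, 20.14, 20.13, 20.10], 20.00))
```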
All STL files corresponding to the digital reference model, the 9 conventional impression models and the 36 digital impression models were saved in the same folder on a USB drive for comparison. To compare the accuracy of the conventional and digital impressions, several measurements were performed using the dimensional control and metrology software Geomagic® Control X™ (3D Systems®). This software compares two 3D digital models, a reference digital model and a model to be compared, by aligning and superimposing them to calculate measurement differences (distances, angles) along a particular axis. The software allowed the process to be fully automated by keeping the reference digital model with the measurements to be completed and replacing, in turn, the model to be compared with each of the other selected digital models. All the digital models corresponding to the conventional and digital impressions were automatically compared one by one to the reference digital model, and reports were generated for each comparison.
The angular and distance differences between the implants were calculated to obtain the accuracy for each type of impression. Distances were compared between scanbody pairs for each model: between 1 and 2, 1 and 3, 1 and 4, 1 and 5, 2 and 3, 3 and 4 and, finally, 4 and 5. An angular comparison of each scanbody of the reference digital model with the corresponding scanbody of the compared model was also performed. The distance deviations were evaluated in two ways: first as an absolute value, to obtain an average deviation for each impression, and second as a signed (negative or positive) raw value, to assess the direction of the deviation. The angular deviation corresponds to the angle between the reference digital model and analyzed impression vectors. The dispersion of the values around their mean for each impression was also computed to assess precision.
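The geometric quantities described above reduce to elementary vector operations. The sketch below shows how a signed inter-scanbody distance deviation and an angular deviation between scanbody axis vectors can be computed; it illustrates the underlying calculation, not the internal implementation of Geomagic Control X.

```python
import numpy as np

def distance_deviation(p_test, q_test, p_ref, q_ref):
    """Signed deviation (test minus reference) of an inter-scanbody distance."""
    return (np.linalg.norm(np.subtract(p_test, q_test))
            - np.linalg.norm(np.subtract(p_ref, q_ref)))

def angular_deviation(v_test, v_ref):
    """Angle in degrees between a scanbody axis and its reference axis."""
    v_test, v_ref = np.asarray(v_test, float), np.asarray(v_ref, float)
    cos_a = v_test @ v_ref / (np.linalg.norm(v_test) * np.linalg.norm(v_ref))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```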
IBM SPSS version 28.0 was used to analyze the data. The level of significance was set at p-value ≤ 0.05. Repeated measures analyses of variance followed by Bonferroni multiple comparisons tests were applied for statistical comparisons.
Distance Deviations in Absolute Value According to the Impression Type
The mean absolute distance deviation was significantly lower for the conventional impression and highest for the CS3600 digital impression (p < 0.001) (Figure 4).
Direction of the Distance Deviations According to the Impression Type
The direction of the distance deviation from the digital reference model differed significantly between impressions (p-value < 0.001). The conventional impressions and the Trios 4 had mostly positive deviations, and the distance deviation was significantly smaller with the conventional impression (p-value < 0.001) (Figure 5). Differences between calibrated operators were not significant (p > 0.05).
Angular Deviations According to the Type of Impression
The mean angular deviations were significantly different between impressions (p < 0.001). They were smaller with the i500 digital impression, followed by the Trios 4 and CS3600 (Figure 6). Differences between calibrated operators were not significant (p > 0.05).
Precision
The average dispersion of distance values around their mean for each impression is displayed in Table 1. The i500 had the lowest mean dispersion, followed by the conventional impression (p-value < 0.001). The Trios 4 and CS3600 had the highest dispersion, indicating the lowest precision (Table 1). The average dispersion of angular values around their mean for each impression is shown in Table 2. The i500, CS3600, Trios 4 and conventional impression had the lowest dispersion (highest precision), and the Primescan had the highest dispersion (lowest precision).
Discussion
Our study compared the accuracy of four intraoral scanners and a conventional impression for full-arch implant-supported prostheses. The first null hypothesis, that conventional and digital impressions would produce casts of similar trueness, was rejected: the mean absolute distance deviation and the direction of the distance deviation from the reference digital model were smaller for the conventional impression. Similarly, the i500 had the lowest angular deviation, followed by the Trios 4 and the CS3600. The second null hypothesis, that conventional and digital impressions would produce casts of similar precision, was also rejected: the conventional and i500 impressions showed the lowest dispersion of values around their means. The calculated power of the post hoc tests was greater than 80%, indicating that the sample was sufficiently powered to detect differences between the groups.
In terms of trueness, the conventional impression provided the best accuracy for distances, with a mean distance deviation and standard error from the reference model of 132.3 ± 21 µm. The newer IOSs, Primescan and Trios 4, performed better than the older CS3600 and i500 in terms of distance deviation. For an edentulous model, the cameras can easily confuse scanbodies of the same shape, forcing the operator to delete the area and rescan it at that level. This possible confusion between the scanbodies on an edentulous cast arises because there are few anatomical shapes and features that allow the virtual-model-building software to determine its position.
A study conducted in 2017 showed that an average distance deviation of 50 to 100 µm and a maximum angular deviation of 1° were required to ensure framework passivity in implant-supported complete prostheses [16]. Our results showed that the distance deviation of the four digital impressions was between 170 and 270 µm. Thus, digital impressions appear to have an accuracy that is not compatible with framework passivity in an implant-supported complete prosthesis. Although the conventional impression had the smallest distance deviation, it was still greater than the accepted values (132 µm) and thus had a trueness that could lead to framework non-passivity.
For angular deviations, our results revealed that three of the scanners (i500, Trios 4 and CS3600) performed better, but both conventional and digital impressions were below the 1° threshold.
In terms of precision, the distances between the scanbodies affect the dispersion of values around the mean. The greater the distance from the starting point (scanbody 1), the greater the average dispersion: it increases from distance 1-2 to distance 1-5, and from distance 2-3 to distance 4-5. Since the distance between scanbody 2 and scanbody 3 was the shortest on the baseline model, its average dispersion was the smallest. For the angular differences between each scanbody and the reference, the results showed smaller average deviations, ranging from 0.212° for the i500 scanner and 0.317° for the conventional impression up to 0.966° for the Primescan.
Combining the results of the dispersions around the mean for distances and angles, the i500 scanner achieved the best precision, followed by the conventional impression. However, the dispersion of values around the distance mean was high for the Trios 4 (233.6 µm) and the dispersion around the angular mean was high for the Primescan (0.966°). Implant ankylosis imposes several constraints on the implant-supported prosthesis [16][17][18][19]; therefore, precision and trueness are crucial outcomes [6,20]. In this context, the digital impression has many advantages [17]: (i) the elimination of impression trays and materials reduces the risks related to incomplete curing and deformation; (ii) the reduction in errors related to laboratory processing (casting, demolding and transfer placement) ensures the stability of the impression; (iii) information is processed in real time and re-interventions are easier (reuse of the computer file or partial modification); (iv) patient comfort is improved (less nausea), the impression process can be interrupted and resumed at any time without losing the information already acquired, and patients prefer digital impressions [21]; (v) the virtual model is preserved, which, under appropriate clinical conditions, makes it possible to remake the prosthetic element without patient intervention; (vi) libraries of theoretical scanbody morphologies suitable for different implant types are available; (vii) the use of the IOS saves time; and (viii) communication with the laboratory is improved: fast delivery without a courier, exchange with the prosthetist before machining, help in choosing the shade and time savings [22].
Digital impressions also have limitations. Many factors can interfere with the scanning of a dental arch, including operator or equipment errors (lack of calibration), the nature of the object to be scanned [23] and external disturbances, such as the lighting in the clinic [24,25]. The optical properties of the scanned elements (reconstruction materials and prostheses) can also alter the acquisitions [26].
Conventional impressions are considered the gold standard in some clinical situations and the most commonly used technique in dentistry [7]. However, they represent one of the weakest links in the prosthetic workflow. Conventional impressions are associated with limitations such as ongoing costs, patient discomfort, the need for well-fitted impression trays and the need to cast with dental stone. In addition, their quality depends on material handling, deformation of the impression and stone materials and the capture of all intraoral tissues [27].
Our results showed that conventional impression was the most accurate. The four scanners provided very different results. This study confirmed the results of a previous study showing that digital impression is accurate but not yet faithful enough to be used routinely in implant-supported complete prostheses. Indeed, the four intraoral scans showed great variability [28].
It is worth noting that the results of this in vitro study do not establish the clinical validity of the impressions. Clinical studies are needed to evaluate the accuracy of these impressions for implant-supported complete prostheses, as many important parameters (mouth opening, presence of blood or saliva, anatomical obstacles, etc.) may affect their accuracy. However, in vitro experiments have the advantage of limiting confounding parameters and evaluating all IOSs under the same conditions. In this context, it is important to follow the manufacturer's instructions, calibrate accurately and change the tip regularly to take full advantage of an IOS's performance. In addition, the use of the mesh/mesh (virtual model) method to evaluate all the impressions may also be a limitation of our study. Meshes are surface reconstructions, and thus geometric approximations of the scanned model, which may introduce errors in the calculation of distances between scanbodies. However, in the case of implant prostheses, the first step in CAD is to replace the scanbody meshes with the corresponding scanbody library file, which is a geometrically perfect file (NURBS, Non-Uniform Rational B-Splines). The use of these NURBS files allows more reliable linear distances to be obtained.
Conclusions
Within the limits of this in vitro study, our results revealed that the conventional impression was more accurate than the digital impression, but further clinical studies are needed to confirm these results. Nevertheless, the continuous progress of intraoral scanning technologies and the development of new acquisition processes might allow optical impressions to extend their indications in implantology and to match or even surpass conventional impressions for implant-supported complete prostheses.
Funding: This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.
Investigating the Bidirectional Associations of Adiposity with Sleep Duration in Older Adults: The English Longitudinal Study of Ageing (ELSA)
Cross-sectional analyses of adiposity and sleep duration in younger adults suggest that increased adiposity is associated with shorter sleep. Prospective studies have yielded mixed findings, and the direction of this association in older adults is unclear. We examined the cross-sectional and potential bi-directional, prospective associations between adiposity and sleep duration (covariates included demographics, health behaviours, and health problems) in 5,015 respondents from the English Longitudinal Study of Ageing (ELSA), at baseline and follow-up. Following adjustment for covariates, we observed no significant cross-sectional relationship between body mass index (BMI) and sleep duration [(unstandardized) B = −0.28 minutes, (95% Confidence Intervals (CI) = −0.012; 0.002), p = 0.190], or waist circumference (WC) and sleep duration [(unstandardized) B = −0.10 minutes, (95% CI = −0.004; 0.001), p = 0.270]. Prospectively, both baseline BMI [B = −0.42 minutes, (95% CI = −0.013; −0.002), p = 0.013] and WC [B = −0.18 minutes, (95% CI = −0.005; −0.000), p = 0.016] were associated with decreased sleep duration at follow-up, independently of covariates. There was, however, no association between baseline sleep duration and change in BMI or WC (p > 0.05). In older adults, our findings suggested that greater adiposity is associated with decreases in sleep duration over time; however the effect was very small.
Research also suggests that, in addition to an association between short sleep and increased BMI, long sleep duration, defined as 9 to 10 hours, may also increase the risk of weight gain in adults 12. Evidence has also emerged in favour of a longitudinal association between long sleep and weight loss 16. Few studies have examined the longitudinal relationship between sleep duration and BMI in older adults, and these have also yielded ambiguous results. The Whitehall II study found no evidence of an association between shorter sleep duration and changes in BMI or obesity incidence over 4 years, in a sample with a mean age of fifty-six 22. However, a prospective analysis of 3,576 Spanish older adults suggests that a sleep duration of less than or equal to 5 hours, as well as a sleep duration of 8 or 9 hours, is associated with obesity, and with weight gain over a period of two years, but only in females 13.
As far as we can determine, no studies to date have tested the prospective, bidirectional relation between BMI and sleep duration in a single study, particularly in an ageing community sample. This is important, as it has been suggested that this association could be bidirectional in nature 23, such that greater BMI may precede shorter sleep duration 12,15,20 or shorter sleep duration may precede weight gain 12,15,20. It is essential to ascertain the direction of this association to allow health professionals to better understand where to target interventions in older age groups, when disease and frailty are most likely to occur.
However, the use of BMI as an adiposity indicator in older adults may not be optimal, because its ability to predict body fat declines in this population as muscle mass decreases with age (e.g. sarcopenia) 24,25. Few studies have investigated the association of alternative adiposity measures, such as waist circumference (WC), with sleep duration, with evidence emerging in favour of a cross-sectional 26,27, but not a prospective, relation 22,28. No studies have yet examined the association between WC and potential change in sleep duration over time, nor have they incorporated bidirectional analyses.
Due to these shortcomings in the current literature, we sought to investigate the bidirectional association between BMI and sleep duration, as well as WC and sleep duration, in a large, nationally representative, ageing sample, which is rich in covariates and prospective data. Here we present findings from cross-sectional and bidirectional prospective associations between BMI and sleep duration, and waist circumference and sleep duration, in the English Longitudinal Study of Ageing (ELSA), with the inclusion of a wide range of covariates, which may affect this relationship. Our aims were twofold: i) to examine the cross-sectional relationships of BMI and waist circumference with sleep duration in older adults and ii) to ascertain the nature of the direction of this association, using longitudinal data to examine the association from baseline BMI and WC, to change in sleep duration over a 4-year period, and baseline sleep duration to change in BMI and WC, from baseline to follow-up.
Sample. English Longitudinal Study of Ageing (ELSA). Data are used from the English Longitudinal Study of Ageing (ELSA), which is an on-going national panel study of health and ageing initiated in 2002-3 (wave 1). Data have been collected from respondents at waves 2 (2004-5), 3 (2006-7), 4 (2008-9), 5 (2010-11) and 6 (2012-13), and comprise a nationally representative sample of English household residents aged fifty and over. Further details of ELSA can be found elsewhere 29. ELSA was granted ethical approval by the London Multicentre Research Ethics Committee (MREC 01/2/91) and all participants provide informed consent at each wave. All methods were carried out in accordance with approved guidelines and regulations.
Of the 11,050 ELSA respondents at wave 4, 8,210 were interviewed and clinical measurements made by a nurse; whilst at wave 6 there were 10,601 interviews and 7,731 nurse visits. We analysed data from 5,015 respondents from waves 4 and 6 of ELSA; inclusion of respondents was based on whether they had complete data for measures of adiposity, sleep duration and all covariates at both waves of data collection.
Measures. Body Mass Index.
Participants were visited in their home by a nurse who measured both height and weight at waves 4 and 6. Standing height was measured using a Leicester portable stadiometer standardized with the head in the Frankfort plane. A single weight measurement was recorded to the nearest 0.1 kg, using Tanita THD 305 scales. BMI was subsequently derived using the standard formula: weight divided by height squared (kg/m²).
Waist circumference. During the nurse visit, two measurements of waist circumference were taken at the midpoint between the lower rib and the iliac crest using measuring tape. A mean of the two measurements was then used, unless they differed by more than 3 cm, in which case a third measurement was taken and then an average of the closest two measurements was used.
Sleep duration and change in sleep duration. A question on sleep duration was included in ELSA for the first time at wave 4 and repeated at wave 6. Respondents were asked 'How many hours of sleep do you have on an average week night?' Change in sleep duration from baseline to follow-up was calculated by subtracting sleep duration at wave 6 from sleep duration at wave 4.
Covariates. Demographic, socio-economic and health behaviour measures collected at wave 4 were used as covariates in the analyses. Age was recorded as a continuous number until 90 years, with ages above 90 collapsed to the value of 91. Socio-economic position was determined by quintiles of non-pension wealth, which is regarded as the most salient measure of standard of living in older age groups 30. Frequency of alcohol consumption within the last 12 months [categorised as less than daily; daily (5-7 times per week)], smoking status (never; ex-smoker; current smoker), long-standing illness (respondents were asked: 'do you have any long-standing illness, disability or infirmity? By long-standing I mean anything that has troubled you over a period of time, or that is likely to affect you over a period of time', to which they could answer 'No' or 'Yes'), physical activity levels (sedentary; low; moderate; high) and depressive symptoms (measured with the 8-item Centre for Epidemiologic Studies Depression Scale, CES-D) were also assessed by questionnaire. CES-D responses were summed (with the exception of the item 'whether respondent felt their sleep was restless in the past week') to obtain a total score, which was then dichotomized using a cut-off of ≥ 3 31. A dichotomous season variable was created using the date on which respondents completed their interview, with 0 = "BST" (British Summer Time) and 1 = "GMT" (Greenwich Mean Time). However, no association was observed between season at baseline and sleep duration at wave 4 [β = 0.022, (95% CI = −0.047; 0.091), P = 0.533] or wave 6 [β = 0.009, (95% CI = −0.079; 0.061), P = 0.800] when adjusting for age and sex, so season was not included in subsequent analyses.
Statistical analyses. All analyses were performed in STATA, version 13. Pearson's correlations were used to examine the relationship between sleep duration at baseline (wave 4) and follow-up (wave 6). For the examination of baseline sample characteristics, sleep categories of ≤ 5 hours, 6-7 hours, 7-8 hours and > 9 hours (Table 1) were created. Analysis of variance was used to compare means for age, BMI and WC, whilst chi-squared tests were used to examine differences in categorical demographic variables (smoking status, alcohol consumption, long-standing illness, wealth, sex, ethnicity and depressive symptoms) across the 4 sleep duration groups.
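The group comparisons in Table 1 can be reproduced with standard routines. The sketch below uses toy data, not the ELSA extract: it applies a one-way ANOVA to a continuous variable across the four sleep categories and a chi-squared test to a categorical covariate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy BMI values for the four sleep-duration categories
bmi_by_category = [rng.normal(28, 4, 50) for _ in range(4)]
f_stat, p_anova = stats.f_oneway(*bmi_by_category)

# Toy contingency table: a binary covariate (rows) by sleep category (columns)
table = np.array([[20, 30, 25, 10],
                  [15, 35, 30, 12]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"ANOVA p = {p_anova:.3f}; chi-squared p = {p_chi:.3f}")
```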
Initially, quadratic regression modelling was used to investigate potential non-linear associations in the cross-sectional and prospective, bidirectional relationships between the adiposity measures (BMI and WC) and sleep duration. Multicollinearity was tested in all regression models using the variance inflation factor (VIF) to examine the extent to which predictors were correlated. A VIF of 1 indicates no correlation, whilst values > 10 are generally cause for concern 32. For the cross-sectional analyses, 4 regression models were performed to examine the association between BMI and sleep duration, with the same models run to investigate the relation between WC and sleep duration. Model 1 was minimally adjusted (age, sex, wealth, ethnicity); models 2 and 3 were additionally adjusted for health behaviours (minimally adjusted + alcohol consumption, smoking status, physical activity levels) and health problems (minimally adjusted + depressive symptoms, long-standing illness), respectively; and model 4 was fully adjusted for all covariates (minimally adjusted + health behaviours + health problems). Prospectively, the associations between adiposity (BMI and WC) and sleep duration, and between sleep duration and adiposity, were investigated using both linear and quadratic models. To examine change in sleep duration, sleep duration at wave 6 was analysed as the outcome, with BMI or WC at baseline as the exposure, adjusted for sleep duration at baseline. Conversely, in analyses examining changes in BMI and WC, BMI or WC at follow-up was analysed as the outcome, with sleep duration at baseline as the exposure, adjusted for BMI or WC at baseline. Aside from this difference, Models 1 to 4 were identical to the cross-sectional models described above.
In order to examine the role of covariates in the association of adiposity measures and sleep duration, the percentage reduction in the regression coefficient following adjustment was calculated by comparing the coefficient for each exposure from models with and without adjustment for covariates.
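A minimal sketch of one of these prospective models and of the attenuation calculation is given below, using statsmodels. The column names are hypothetical placeholders; the actual ELSA variable names and the full covariate coding differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical one-row-per-respondent extract of waves 4 and 6
df = pd.read_csv("elsa_waves_4_6.csv")

# Model 1: baseline BMI -> follow-up sleep, adjusted for baseline sleep
# and demographics (minimally adjusted)
basic = smf.ols(
    "sleep_w6 ~ bmi_w4 + sleep_w4 + age + C(sex) + C(wealth) + C(ethnicity)",
    data=df).fit()

# Model 4: fully adjusted (health behaviours and health problems added)
full = smf.ols(
    "sleep_w6 ~ bmi_w4 + sleep_w4 + age + C(sex) + C(wealth) + C(ethnicity)"
    " + C(smoking) + C(alcohol) + C(activity) + C(depressed) + C(illness)",
    data=df).fit()

# Percentage reduction in the exposure coefficient after full adjustment
b_basic, b_full = basic.params["bmi_w4"], full.params["bmi_w4"]
print(f"attenuation: {100 * (b_basic - b_full) / b_basic:.0f}%")
```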
Results
Compared with all participants at wave 4 of ELSA, those included in the present analyses were wealthier, slightly older and less likely to report having a long-standing illness (all p < 0.001). Table 1 shows baseline (wave 4) characteristics of participants according to their sleep duration category. One-way ANOVAs showed significant differences across sleep duration categories for both age and BMI, such that younger respondents slept for six to seven hours, whilst the oldest group slept, on average, for nine or more hours; the heaviest respondents had mean sleep durations of less than, or equal to, five hours. There were, however, no significant differences in baseline waist circumference across the four sleep duration categories.
Chi-squared analyses showed that sex, smoking status, alcohol consumption, limiting illness, wealth, depressive symptoms and physical activity levels were all significantly associated with sleep duration. Short sleepers (≤ 5 hours) were significantly more likely to be female, to be ex-smokers, to be less wealthy, to consume less alcohol, to report a long-standing illness and to engage in 'moderate' physical activity (Table 1). Multicollinearity was not an issue in any of our cross-sectional or prospective regression models, as all VIF values were around 1 when tested.
Mean BMIs were 28.20 kg/m² and 28.17 kg/m² at baseline and follow-up, respectively, whilst the average duration of sleep was 6.86 hours at baseline and 6.87 hours at follow-up. Respondents who slept for five hours or less had the highest mean BMI (28.89 kg/m²) both at baseline and follow-up, whilst those who slept between eight and nine hours had the lowest mean BMI (28.06 kg/m²). Overall mean WC was 96.49 cm at baseline and 96.09 cm at follow-up.
Cross-sectional associations: BMI and sleep duration at baseline; WC and sleep duration at baseline. A basic model revealed a small, inverse linear relation between BMI and sleep duration, which was attenuated and no longer significant in model 2 adjusted for health behaviours (basic model to health-behaviours adjusted model = 5% decrease in the coefficient), and then further weakened in model 3 adjusted for health problems (basic model to model adjusted for health problems = 15% decrease in the coefficient). In the final model adjusted for all covariates this effect was again, attenuated (basic model to final model = 16% decrease in the coefficient) ( Table 2). There was no interaction between sex (p = 0.822) or age (p = 0.366) and BMI at baseline on sleep duration, in any of the four regression models. We also performed quadratic cross-sectional regression models to test for a U-shaped relationship between BMI and sleep duration, but this was not significant (p = 0.987).
The pattern of results for waist circumference and sleep duration was almost identical to that of BMI and sleep duration ( Table 2). In a model adjusted only for demographics there was a significant, negative association between WC and sleep duration, which was attenuated with inclusion of health behaviours in Model 2 (basic model to model adjusted for health behaviours = 4% decrease in the coefficient). With adjustments for health problems, the coefficient was again, reduced (basic model to health problems-adjusted model = 7% decrease in the coefficient) and a final model including all covariates resulted in further attenuation (basic model to final model = 8% decrease in the coefficient). We observed no evidence of a U-shaped association between baseline WC and sleep duration (p = 0.103), nor did we find a significant interaction of age (p = 0.084), or sex (p = 0.300) with baseline WC on sleep duration.
Prospective associations I: BMI and changes in sleep duration; WC and changes in sleep duration.
The first set of prospective analyses had baseline BMI as the exposure and follow-up sleep duration as the outcome; the results are shown in Table 3. Model 1 revealed a negative association between baseline BMI and follow-up sleep duration, such that a higher BMI was associated with increasingly shorter sleep at wave 6. In Model 4, the longitudinal association between BMI and sleep duration was only slightly attenuated (Model 1 to Model 4, 6% non-significant decrease in the coefficient, p > 0.05). On average, the change in sleep duration from baseline to follow-up was −0.42 minutes per unit increase in BMI. A very similar pattern of associations was observed between WC and changes in sleep duration, such that for every centimetre increase in WC at baseline, sleep duration at follow-up decreased, on average, by 0.18 minutes. Although small, this effect remained significant after adjustment for all covariates in Model 4, and the coefficient was identical throughout the models (Table 3). The mean change in sleep duration from baseline (6.867 hours) to follow-up (6.872 hours) was 0.005 hours (0.3 minutes), standard deviation = 1.13 hours (67.5 minutes). There were no interactions between baseline age or sex with BMI and WC on follow-up sleep duration in any of the 4 models (p > 0.05). There was no evidence of a quadratic association between baseline BMI or WC and follow-up sleep duration (p > 0.05).
Prospective associations II: sleep duration and changes in BMI; sleep duration and changes in WC.
Table 4 revealed no significant associations between sleep duration and future BMI. Across all 4 models, there was no evidence of a linear association between sleep duration at baseline and BMI at 4-year follow-up, nor was a U-shaped association observed (p > 0.05). Although age at baseline was strongly associated with BMI at follow-up [B = −0.020, (95% CI = −0.027; −0.013), P < 0.001] after adjustment for BMI at baseline and all other covariates, there were no interactions between baseline age or sex with sleep duration on BMI at follow-up in any of the 4 models (all p > 0.05). Similarly, there was no significant association between sleep duration at baseline and changes in waist circumference (Table 4) in any of our 4 regression models (all p > 0.05). There were also no significant interactions between age and baseline sleep duration, or sex and baseline sleep duration, on follow-up WC, nor was there evidence of a quadratic association (all p > 0.05).
Discussion
In this large, nationally representative study of older adults, findings suggest that cross-sectionally, while both BMI and WC are inversely associated with sleep duration, these relationships are largely accounted for by variations in health status and health behaviours. Prospectively, greater BMI and WC at baseline were associated with small decreases in sleep duration over a 4-year period, independently of adjustment for a variety of covariates. In contrast, sleep duration at baseline was not associated with changes in BMI or WC over the follow-up period.
When tested, we found no statistically significant evidence of a cross-sectional U-shaped relationship between BMI and sleep duration. The finding of the longest sleep duration in those with BMI between 18.5 and 24.9 kg/m² agrees with previous research 10,11,20,33,34. This may reflect reverse causation, as long-standing illness is prevalent in older age groups and may lead to weight loss. Our findings support this notion because adjustment for health problems attenuated the observed associations. It is also important to note that, for example, the BMI and WC of a respondent with diabetes could be quite different from those of a respondent reporting cancer.
Our findings indicate that BMI and WC are not independently associated with sleep duration cross-sectionally, a result consistent with two earlier large-scale studies, which did not find an association between adiposity and sleep duration 18,35. However, our results do not accord with recent evidence in favour of this cross-sectional relationship in older adults 13,15,36. At least one of these studies, which found a significant association of BMI and WC with sleep duration in older adults, made no adjustment for physical long-standing illness or socioeconomic position in their analysis 15, which could in part explain the discrepancy between our findings and theirs. These authors also used a measure of self-reported sleep duration by which respondents were only asked to report how many hours they had slept on the two nights prior to the interview 15, rather than the more general sleep duration question in ELSA, which asked about the number of hours of sleep on an average weeknight. The richness of the available dataset enabled analyses that accounted for a number of factors, including wealth, illness and depressive symptoms, and health behaviours. Associations apparent in our data concur with several reports that associations exist between disadvantaged socioeconomic position and sleep duration [37][38][39][40][41], and between depression and sleep duration 42,43. There is also evidence for an association between socioeconomic position and obesity 44,45, as well as between BMI and depression 46. Thus, in future it would be of interest to further explore the interrelationship between BMI, these measures and sleep duration, particularly depression, which is closely related to sleep behaviours, using measures taken at several time points.
Longitudinal analyses of adiposity at baseline with sleep duration at follow-up revealed a negative association, such that higher BMI and WC were associated with decreased length of sleep. However, in ELSA there was no evidence of an association between sleep duration and change in BMI from baseline to follow-up, nor between baseline sleep duration and change in WC at follow-up.
The finding that measures of adiposity and sleep duration were associated in a prospective analysis accords with previous research, which found evidence of a trend, albeit not statistically significant, for an association between average changes in weight and average rates of change in sleep duration 16,18 . The present study found that both BMI and WC were associated with future sleep duration in a sample whose average age was 65 years. This result is in line with a study in younger adults, which found that the association between adiposity and changes in sleep duration was stronger than the reverse association between sleep duration and changes in adiposity 16 .
One plausible explanation for the prospective associations of adiposity measures and sleep duration in older adults could be obstructive sleep apnoea (OSA), a condition that causes the airways to collapse or become blocked whilst sleeping and is markedly prevalent in obese adults 47 . Older people with higher BMIs and/or waist circumferences may have a higher percentage of visceral fat than their leaner counterparts, which has been found to be a significant risk factor for OSA 47,48 . Therefore they may develop OSA, which could subsequently affect their sleep duration. Evidence suggests that when objectively measuring sleep duration, very short sleep (mean duration of 3 hours) is associated with greater OSA severity 49 , which could also be applicable to self-reported sleep duration. Another recent study found self-reported short sleep duration and OSA to be independently associated with visceral obesity, in adults aged between forty and sixty-nine years 50 .
Both body mass index and waist circumference remained associated with change in sleep independently of a wide range of covariates, including health and health behaviours. However, we cannot discount the possibility of residual confounding, as it was not possible to examine all other factors that might explain the observed association between adiposity and sleep duration.
Our observation that sleep duration was not associated with change in BMI or WC specifically in older adults accords with some 22,51 , but not all, previous reports 13 . One potential explanation may relate to the stability of BMI, as neither average BMI nor WC changed greatly over the 4 years; further follow-up of the participants may reveal associations not yet apparent. A second is that the magnitude of the association between sleep duration and changes in adiposity measures is suggested to decline with age 14,16,52 . This may explain why the results presented here, where the mean age is 65, and those of other studies such as Whitehall II 22 are null. Thus, these data suggest that obesity may be a target to ameliorate co-morbidities that occur due to poor sleep, but that sleep duration is not a target to prevent obesity in older age groups.
Our study has a number of strengths. The longitudinal design of ELSA enabled us to investigate the bidirectional association between two measures of adiposity (BMI, WC) and sleep duration within the same population. The sample size was sufficient to observe an association between BMI and WC and change in sleep duration; that we failed to observe an association between sleep duration and change in adiposity measures indicates that any such association is likely to be weak in comparison. Also, BMI and WC were measured by a nurse, rather than self-reported, unlike in some earlier studies in this area 33,53 . A further strength is that ELSA is broadly representative of the English population aged 50 years and older 29 . Sleep duration was, however, self-reported, which may be prone to error and bias 54 . Data were only available from waves 4 and 6 of ELSA; hence there were only four years between baseline and follow-up, which may have contributed to the trivial change we observed in sleep duration. This could perhaps be related to findings that usual sleep parameters do not change significantly in adults after the age of sixty 55 . We were also unable to examine potential mediators of the prospective association between adiposity measures and sleep duration. For example, respondents might sleep poorly due to their own or their partner's snoring or other symptoms of sleep apnoea, as mentioned above. Additionally, information on daytime napping or shift work was not available, which is particularly pertinent in older adults.
In conclusion, we found that in older adults, BMI and waist circumference were associated with change in sleep duration from baseline to follow-up, rather than vice versa. This is important so that interventions can be targeted appropriately in older adults.
Understanding an evolving pandemic: An analysis of the clinical time delay distributions of COVID-19 in the United Kingdom
Understanding and monitoring the epidemiological time delay dynamics of SARS-CoV-2 infection provides insights that are key to discerning changes in the phenotype of the virus, the demographics impacted, the efficacy of treatment, and the ability of the health service to manage large volumes of patients. This paper analyses how the pandemic has evolved in the United Kingdom through the temporal changes to the epidemiological time delay distributions for clinical outcomes. Using the most complete clinical data presently available, we have analysed, through a doubly interval censored Bayesian modelling approach, the time from infection to a clinical outcome. Across the pandemic, for the periods that were defined as epidemiologically distinct, the modelled mean ranges from 8.0 to 9.7 days for infection to hospitalisation, 10.3 to 15.0 days for hospitalisation to death, and 17.4 to 24.7 days for infection to death. The time delay from infection to hospitalisation has increased since the first wave of the pandemic. A marked decrease was observed in the time from hospitalisation to death and from infection to death at times of high incidence, when hospitals and ICUs were under the most pressure. There is a clear relationship between age groups, indicative of the youngest and oldest demographics having the shortest time delay distributions before a clinical outcome. A statistically significant difference was found between genders for the time delay from infection to hospitalisation, which was not found for hospitalisation to death. The results by age group indicate that younger demographics that require clinical intervention for SARS-CoV-2 infection are more likely to require earlier hospitalisation leading to a shorter time to death, which suggests that the younger individuals who succumb to infection are largely more vulnerable. The distinction found between genders for the time from exposure to hospitalisation is revealing of gendered healthcare-seeking behaviours.
Introduction
The COVID-19 pandemic has had an unprecedented impact on the global population. In the United Kingdom, as of 24 February 2021, 4,194,785 cases had been observed [1], placing sustained pressure on the healthcare system. The changing landscape of COVID-19 prevalence due to non-pharmaceutical interventions (NPIs) has led to a variable aetiological clinical impact. Moreover, since the onset of the pandemic, the virus has had varying temporal consequences for different demographics, affecting the time delay parameters; this was particularly pronounced with the March 2020 outbreak in care homes [2]. Understanding these temporal time delay dynamics of infection is key for the calculation of the infection hospitalisation rate (IHR) and infection fatality rate (IFR). This in turn has implications for the accurate modelling of the pandemic and the formulation of effective public health policy. For instance, changes to the time delay dynamics are central to estimating the incubation and illness period, which is essential for defining accurate quarantine periods for those who have been infected or exposed by a contact.
Tracking the phenotypic changes in the virus is now becoming more relevant due to the extent of antigenic drift observed in SARS-CoV-2 [3] and worrying mutations [4] that may have an impact upon vaccine effectiveness. There is limited contemporary research that looks at the time from infection to clinical outcomes and, to our knowledge, none that addresses the temporal changes or looks in detail at the distinctions by gender or by age. Much of the literature that seeks to estimate the time delay dynamics [5] has been focused on the outbreak in Wuhan, China, seen in 2019 and at the start of 2020. From this period, Linton et al. (2020) [6] calculated the mean time from infection to hospitalisation: 9.7 days (95% CI: 5.4, 17.0), hospitalisation to death: 13.0 days (95% CI: 8.7, 20.9), and infection to death: 20.2 days (95% CI: 15.1, 29.5). However, these estimates are predominantly from small samples and, due to the pandemic nature of this outbreak, are dependent upon the demographic structure, the quality of the healthcare system, and the epidemiological context in which they were collected.
The time between infection and a clinical outcome for infectious diseases is not precisely observed and therefore is often 'coarsely' recorded, that is, we observe a subset of the sample space in which the true but unobservable data actually lie [7]. Therefore, modelling of this type of data needs to adjust for its imprecise nature or it is likely that the estimates will not accurately capture the maximum likelihood or the tails of the distribution, which can be important to inform key elements of public health policy. McAloon et al. [5] found, in a meta-analysis of studies published on the incubation period of COVID-19, that this has been overlooked in much of the current literature. In this study, we employed a doubly interval censored modelling approach [8] that seeks to capture all the available information of the clinical time delay distribution.
The time delay from infection to a clinical outcome has changed in response to the evolution of intrinsic and extrinsic factors across the geography of the United Kingdom. Using the most complete clinical data presently available, we have calculated, across distinct epidemiological periods in the pandemic, the differences in the time delay distributions for hospitalisations and deaths. These periods were defined as temporally unique intervals that were found to be strongly associated with changes in the prevalence of SARS-CoV-2. We have further modelled the differences between age groups and by gender to understand and analyse distinctions between demographic groups.
Epidemiological data
Two Public Health England datasets were used in this study: the mortality line list and the Severe Acute Respiratory Infection Watch (SARI) line list [9]. The data used ranges from 1 January 2020 to 20 January 2021. The key dates used to develop the models were of symptom onset, hospitalisation, and mortality in order to measure three quantities of interest: the time from infection to hospitalisation, the time from hospitalisation to death, and the time from infection to death.
Data preparation
The two datasets used in this study were merged and split in order to measure the three quantities of interest. Subsequently, rows with missing values and duplicates were dropped. As the datasets were anonymised, it was assumed that if two lines had the same local authority area, sex, age, start date and end date, then they referred to the same person. Additionally, the data were filtered to remove erroneous negative time-delay periods and extreme outliers prior to model fitting. The data were then split into distinct epidemiological periods: the first wave (January to May), the summer (June to August), the second wave (September to November), and the third wave (December to January). The periods were defined by clear distributional changes in the time delays, with an evident seasonality and distinct peaks in prevalence and hospital admissions: • 1st Period: The first period was characterised by a sharp increase in SARS-CoV-2 incidence that peaked at 280,000 [10]; across the period, daily hospital admissions had a median of 1,466 [1], and this precipitated the first national lockdown.
• 2nd Period: The second period saw a loosening of NPIs, with the median for daily hospital admissions dropping to 162 [1] and incidence estimates peaking at 10,700 [10].
• 3rd Period: The third period was characterised by the introduction of tiers that determined the extent of the NPIs that were required locally. It saw an increase in the median for daily hospital admissions to 1,025 [1] and a peak incidence estimate of 66,800 [10].
• 4th Period: The middle of the fourth period saw the start of a national lockdown, with the highest median for daily hospital admissions of 2,529 and incidence estimates peaking at 157,000 [10].
In addition, in order to assess the dependence of the time delays on gender and age, we split the combined data by ten-year age bands and gender, using the data from January 2020 to November 2020. These dates were selected so that the full distribution of hospitalisations and deaths had been observed. We did not have reliable data on the time from infection to symptom onset, so this was informed by a literature estimate [5]. For the period analyses, each record was also assigned to one of two categories according to the date used to place it in a period: category A, based on the date of the initiating event (symptom onset or hospital admission), and category B, based on the date of the outcome event. This was used to address the inherently 'coarse' [8] nature of these data, in part due to how they were recorded.
Time delay distribution modelling
We define two events, A and B, and the times at which these events occur, α and β, with α < β. However, α and β are not known precisely; each is interval censored and only known to lie within an observation window, $\alpha \in [\alpha_1, \alpha_2]$ and $\beta \in [\beta_1, \beta_2]$. In addition, let O be an unobserved event that occurs a time $t_0$ prior to A. The probability density function governing the time from O to A is $p(t_0)$. Let the time between events O and B be T, a continuous random variable with probability density function $f(t; \theta)$ dependent on parameters $\theta$. We express the joint probability of all three events as

$p(\alpha, t_0, \beta) = p(\alpha)\, p(t_0)\, f(\beta - \alpha + t_0; \theta).$

In the absence of information informing $p(\alpha)$, let it be a uniform distribution. In other words,

$p(\alpha) = \frac{1}{\alpha_2 - \alpha_1}, \quad \alpha \in [\alpha_1, \alpha_2].$

Then, we can express the likelihood of $\theta$ and an observed data point $X_i = (\alpha_1, \alpha_2, \beta_1, \beta_2)$ as

$L(\theta; X_i) = \int_{\alpha_1}^{\alpha_2} \int_{\beta_1}^{\beta_2} \int_{0}^{\infty} p(\alpha)\, p(t_0)\, f(\beta - \alpha + t_0; \theta)\, \mathrm{d}t_0\, \mathrm{d}\beta\, \mathrm{d}\alpha.$

For multiple data points $X = \{X_i\}$, the likelihood is

$L(\theta; X) = \prod_i L(\theta; X_i),$

and a Hamiltonian Markov chain Monte Carlo method is used within Stan [11] to find the distributions of $\theta$, given the observed data.
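As a concrete illustration of this likelihood, the sketch below evaluates $L(\theta; X_i)$ for a single observation by Monte Carlo integration, assuming a lognormal $f(t; \theta)$ and the lognormal $p(t_0)$ quoted below from [5]. This is a toy stand-in: the paper fits $\theta$ by Hamiltonian MCMC in Stan rather than by direct integration, and all numbers here are illustrative.

```python
# Monte Carlo evaluation of the doubly interval censored likelihood for
# one data point; a sketch only, with illustrative parameter values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def loglik_point(a1, a2, b1, b2, mu, sigma, n=100_000):
    """Monte Carlo estimate of log L(theta; X_i) for one observation."""
    alpha = rng.uniform(a1, a2, n)              # p(alpha): uniform window
    beta = rng.uniform(b1, b2, n)               # interval-censored event B
    t0 = stats.lognorm(s=0.50, scale=np.exp(1.63)).rvs(n, random_state=rng)
    t = beta - alpha + t0                       # time from O to B
    dens = stats.lognorm(s=sigma, scale=np.exp(mu)).pdf(np.clip(t, 1e-9, None))
    dens[t <= 0] = 0.0                          # delays must be positive
    # Averaging over uniform draws leaves one factor of the beta window
    # width as the integration volume (alpha's cancels with p(alpha)).
    return np.log(dens.mean() * (b2 - b1))

# Example: symptom onset known to the day, death observed 14-15 days later.
print(loglik_point(0.0, 1.0, 14.0, 15.0, mu=2.9, sigma=0.5))
```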
Within the context of this paper, the events O, A and B refer to the quantities in Table 1. We use the literature [5] to inform the time between O and A as $p(t_0) \sim \mathrm{Lognormal}(1.63, 0.50)$. In the specific case that the time we want to measure is in fact A to B rather than O to B, we can let $p(t_0) = \delta(t_0)$, where $\delta$ here refers to a half delta function defined on $t_0 \in \mathbb{R}^{+}_{0}$. In order to account for the right truncation present within the most recent portion of the dataset, we use a modified probability density function $f_{RT}$ that accounts for this [6], of the form

$f_{RT}(t; \theta) = \frac{e^{-rt} f(t; \theta)}{\int_{0}^{\infty} e^{-ru} f(u; \theta)\, \mathrm{d}u},$

where $F(t; \theta) = \int_{0}^{t} f(u; \theta)\, \mathrm{d}u$ is the cumulative probability function of f and r is the exponential growth rate of type A events. In this paper there are two categories of type A event: symptom onset and admission to hospital. In order to calculate the growth rate, a negative binomial was fitted to modelled incidence [12] for symptom onset, and to publicly available admissions data [1] for hospitalisations.
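The growth-rate weighting can be coded directly; the sketch below assumes the exponential-discounting normalisation written above (a plausible form of the adjustment in [6], not a verbatim reproduction of the paper's implementation) and a lognormal base density with illustrative parameters.

```python
# Right-truncation-adjusted delay density under an exponentially growing
# epidemic; r is the fitted growth rate of type A events.
import numpy as np
from scipy import stats, integrate

def f_rt(t, r, mu, sigma):
    f = stats.lognorm(s=sigma, scale=np.exp(mu)).pdf
    num = np.exp(-r * t) * f(t)
    den, _ = integrate.quad(lambda u: np.exp(-r * u) * f(u), 0, np.inf)
    return num / den

# Higher growth rates down-weight long delays relative to the raw density.
print(f_rt(np.array([5.0, 10.0, 20.0]), r=0.05, mu=2.2, sigma=0.5))
```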
Models and assessing performance
For each set of data points, the probability density function f is taken to follow the Lognormal, Gamma and Weibull distributions, as these are commonly used for survival data. Their probability density functions, defined for x ≥ 0, are as follows:

Lognormal: $f(x; \mu, \sigma) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right)$

Gamma: $f(x; k, \theta) = \frac{x^{k-1} e^{-x/\theta}}{\Gamma(k)\, \theta^{k}}$

Weibull: $f(x; k, \lambda) = \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^{k}}$

We calculated the leave-one-out cross-validation (LOO) score, using Pareto-smoothed importance sampling (PSIS), and the widely applicable information criterion (WAIC) [13] for each model to compare the accuracy of the fitted Bayesian models. The WAIC score is asymptotically equivalent to LOO and can be thought of as an approximation [14]. Therefore, LOO scores were used in conjunction with Pareto k diagnostics and the R-hat convergence diagnostic to assess the best model fit. Most desirable is the lowest LOO score alongside a Pareto diagnostic of k ≤ 0.7 and an R̂ ≤ 1.05 [13].
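The paper's comparison uses Bayesian PSIS-LOO/WAIC; as a lightweight illustration of the same model-choice idea, the sketch below fits the three candidate distributions by maximum likelihood and compares them with AIC. The delay values are toy data, not from the study.

```python
# Simplified stand-in for the model comparison: fit the three candidate
# densities by maximum likelihood and rank them with AIC.
import numpy as np
from scipy import stats

delays = np.array([3.0, 5.0, 7.0, 8.0, 9.0, 11.0, 14.0, 21.0])  # toy data

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(delays, floc=0)            # fix location at zero
    ll = dist.logpdf(delays, *params).sum()
    k = len(params) - 1                          # loc fixed, not estimated
    print(name, "AIC =", 2 * k - 2 * ll)
```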
Results
We present two sets of results in this paper: (i) the evolution of the times to clinical outcome over the course of the pandemic, and (ii) the variation in those times by sex and age group. The times to clinical outcome that are measured are infection to hospitalisation, hospitalisation to death, and infection to death. The modelled estimates that were of primary interest were informed by category A (see the section on data preparation) rather than by category B, because these estimates are not influenced by historical infections in the defined periods. We report category B results (Tables A1-A3 in the S1 Appendix) as they may have utility for epidemiological modelling and when assessing external factors, such as the impact of healthcare pressure, because they capture those individuals who died or were hospitalised in that period. The choice of which date category to use impacts the whole time delay distribution. The right tail of the distribution using category B data may capture some individuals infected in an earlier period, whereas using category A to inform estimates may capture some hospitalisations and deaths from a later period. We were not aware of any selection bias for the individuals included in the datasets used for modelling, although ascertainment bias for cases would be more evident in the earlier periods when testing capacity was more limited. The sample of individuals with symptom onset included in the death data line list pertains to the reporting practices of certain testing laboratories. In the SARI dataset, highly detailed data are collected for a subset of NHS Trusts, which includes symptom onset. Tables 2 and 4 show that the Lognormal is a better fit for the infection to hospitalisation and infection to death distributions, whereas Table 3 illustrates that the Weibull is a better fit for the hospitalisation to death distribution. Tables 2-4 show the distributions of these times for the four distinct periods described in the Methods section. There is a consistent age structure for the hospitalisations and deaths, which is highly skewed towards the older demographics, irrespective of the temporal period. Noteworthy is the result that the mean time from infection to hospitalisation has remained the most constant of the three time delay quantities. This contrasts with the noticeable increases observed in the time from hospitalisation to death and infection to death over the summer and early autumn months of 2020 when prevalence was lower, with declines observed in the most recent period.
Variation in time by sex and age
Additionally, modelled results by sex and age can be seen in Tables 5-7. Fig 1 illustrates that men had a longer time delay distribution than women for infection to hospitalisation; however, there was no statistically significant difference in the time from hospitalisation to death between the sexes. For the variation by age, the mean time from infection to hospitalisation and death increases from those in their twenties to peak in patients in their forties, followed by a steady reduction with increasing patient age until 80-89. The variation observed within the time from hospitalisation to death was more modest; nevertheless, middle-aged patients displayed the longest times, as observed in infection to death. Results for people under the age of 20 were discarded because there were too few patients for a meaningful measurement of their epidemiological characteristics. Males have a greater time from infection to hospitalisation, which was statistically significant, with a p-value of 5.0 × 10^-15 using a Mann-Whitney-Wilcoxon test. The same distinction between males and females is not found for the time delay from hospitalisation to death, with a p-value of 0.93.
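A sketch of this sex comparison with SciPy is shown below. The two arrays are simulated placeholders with roughly the reported means, standing in for the per-patient infection-to-hospitalisation delays, which are not reproduced here.

```python
# Mann-Whitney-Wilcoxon comparison of delay distributions by sex;
# the samples below are toy lognormal draws, not the study data.
import numpy as np
from scipy.stats import mannwhitneyu

male_delays = np.random.default_rng(0).lognormal(2.15, 0.45, 5000)
female_delays = np.random.default_rng(1).lognormal(2.07, 0.45, 5000)

stat, p = mannwhitneyu(male_delays, female_delays, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.2e}")
```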
Discussion
The impact of SARS-CoV-2 between subgroups of the population and across periods defined by distinct temporal epidemiological trends is significant in furthering understanding of the virus and how we might expect it to change over time. Understanding the clinical time delays and the impetus that drives the changes in these distributions will help to untangle extrinsic pressure from any further phenotypic changes we encounter in the virus. This will help to inform more impactful policy decisions on the containment and suppression of transmission and allow for a clearer understanding of variants of concern. As seen in Fig 2, there was statistically significant variation between the defined periods. This is particularly apparent in Table 4, where we observe that during the first wave of SARS-CoV-2 the mean time from infection to death was 19.6±0.2 days (95% interval: 5.6, 50.0) and that in the summer period that followed, this rose to 24.7±1.4 days (95% interval: 5.8, 69.8). There has been a substantial change in testing volume and strategy over the timeline of the pandemic, impacting the complete capture of COVID-19 deaths and hospitalisations, which will be particularly significant for the January to March 2020 period. This may have introduced selection bias at the start of the pandemic, albeit the impact of this is thought to be small due to the prioritisation of testing for individuals who required clinical care. The summer period is very striking in Fig 2 for the long right tail in all three categories, which could be indicative of a change in patient clinical management, as intensive care clinicians found that sustaining patients considered extremely critical for longer could result in a higher survival rate [15]. Moreover, the survival rate for patients will have been positively impacted by the endorsement in the UK of dexamethasone [16] use on the 13 November 2020 [17], the more widespread use of individualised lung-protective ventilator strategies [18], and the support for proning [19] by the Intensive Care Society [20] in April 2020. High prevalence of SARS-CoV-2 has palpably impacted the healthcare system's ability to manage the volume of patients [21], which has been a conspicuous impetus behind temporal fluctuations in the clinical time delay distributions, as seen in Fig 3.
However, in periods of higher prevalence we may also see a compositional shift towards more severe patients being admitted, which could be seen as an adaptive response to increasing pressure on the healthcare system; nonetheless, this should not have an impact upon the time delay distributions for mortalities. This can be further seen in Table 3, where during the first period hospitalisation to death was 10.3±0.1 days (95% interval: 0.4, 34.9), while an increase was seen in the low-prevalence summer to 14.6±0.3 days (95% interval: 0.4, 53.3). This association between an increase in prevalence and a decrease in the time delay to a clinical outcome can be seen across the pandemic in Fig 2. It is perhaps the best early indicator that a healthcare system is under stress and that intervention may be required to allow hospitals to decompress [22]. An analysis during the pandemic by Public Health England [24] found that 88% of deaths were within 28 days and 96% were within 60 days of a positive COVID-19 test, with 54% of those excluded by the 28-day limit found to have COVID-19 on their death certificate. Moreover, as the results in this study indicate, the mean time to death is longer during times of low prevalence, which makes this categorisation less suitable. We did not observe a significant impact on the clinical time delay distributions from the growth of the B.1.1.7 lineage. Corroborating previous literature [5,6], we find the time from infection to death for SARS-CoV-2 is similar to SARS [25], although a shorter period to peak infectivity is now clear for SARS-CoV-2 [26]. We find that the decrease in the time from illness onset to hospital admission observed during the SARS outbreak of 2003, thought to be reflective of contact tracing, has not been observed in the SARS-CoV-2 outbreak in the UK. Table 2 illustrates how the time from infection to hospitalisation slightly increased from 8.0±0.1 days (95% interval: 2.7, 18.5) in the first wave to 9.7±0.3 days (95% interval: 4.1, 19.6) at the end of the second wave.
The time from infection to hospitalisation between genders shows a statistically significant difference, with males showing a longer modelled mean time of 8.6±0.1 days (95% interval: 2.9, 20.0) relative to 7.9±0.1 days (95% interval: 2.8, 18.0) for females. This difference is not found between genders for the time delay distribution of hospitalisation to death. This is likely related to the well-documented epidemiological phenomenon that males have a tendency towards delayed medical help-seeking [27]. Galasso et al. (2020) illustrated across eight countries that males are overall likely to be less compliant with NPIs and to treat the dangers of COVID-19 with less gravity. The greater fatality rate of males from COVID-19 [28] is a combination of biological, psychosocial, and behavioural causal factors; nonetheless, this delay in seeking out medical attention may be a contributory factor to increasing their overall IFR. We can observe the differences between age groups in Fig 1. It illustrates that the 40-49 age group has the longest time from infection to death, with a mean of 26.5±1.1 days (95% interval: 7.3, 69.3), while the shortest period was found for the 80-89 age group, with 17.6±0.2 days (95% interval: 5.3, 44.0). The distribution of the time delays to a clinical outcome seen in Fig 1 illustrates that the youngest and oldest age groups have the shortest time delays, which is revealing of the predominantly more vulnerable nature of the younger adults in the 20-39 age bands that require clinical intervention or have a severe reaction to SARS-CoV-2 infection resulting in mortality.
Conclusion
We illustrate that evaluating the temporal changes in the time delay distributions is key to informing public health policy and that these should not be regarded as static metrics but rather as quantities that, thus far, have largely been a by-product of extrinsic pressure. Monitoring these changes will aid the calibration of quarantine periods and the calculation of fatality rates, and will help in unpacking the extent of transmission. This should be monitored closely in response to new variants of concern, and further work should aim to understand their time delay dynamics. Moreover, we also recommend further analysis to assess the impact of vaccination campaigns on these trends. The patterns seen by gender are not unexpected but should help to inform public policy on how to shape the message around when to seek medical attention. Finally, we propose that fluctuations in the modelled mean time from hospitalisation to death can be used as a proxy indicator of healthcare strain, signalling that an intervention may be required to help preclude avoidable morbidity and mortality. The main limitation of this study is that we can only infer from the wider context any causal impact on the clinical time delay distributions.
ROOT ABUNDANCE OF MAIZE IN CONVENTIONALLY-TILLED AND ZERO-TILLED SOILS OF ARGENTINA
Maize root growth is negatively affected by compacted layers at the surface (e.g. from agricultural traffic) and in subsoil layers (e.g. claypans). Both kinds of soil mechanical impedance often coexist in maize fields, but their combined effects on root growth have seldom been studied. Soil physical properties and maize root abundance were determined in three different soils of the Rolling Pampa of Argentina, in conventionally-tilled (CT) and zero-tilled (ZT) fields cultivated with maize. In the soil with a light Bt horizon (loamy Typic Argiudoll, Chivilcoy site), induced plough pans were detected in CT plots at a depth of 0-0.12 m through significant increases in bulk density (1.15 to 1.27 Mg m^-3) and cone (tip angle of 60°) penetrometer resistance (7.18 to 9.37 MPa in summer, from ZT to CT, respectively). This caused a reduction in maize root abundance of 40-80 % in CT compared to ZT plots below the induced pans. Two of the studied soils had hard-structured Bt horizons (clay pans), but in only one of them (silty clay loam Abruptic Argiudoll, Villa Lía site) were the expected penetrometer resistance increases (up to 9 MPa) observed with depth. In the other clay pan soil (silty clay loam Vertic Argiudoll, Pérez Millán site), penetrometer resistance did not increase with depth but reached 14.5 MPa at 0.075 and 0.2 m depth in CT and ZT plots, respectively. However, maize root abundance was stratified in the first 0.2 m at the Villa Lía and Pérez Millán sites. There, the hard Bt horizons did not represent an absolute but a relative mechanical impedance to maize roots, as shown by the observed root clumping through desiccation cracks.
INTRODUCTION
Deep-reaching compaction in many fertile soils of the world, often managed under modern, industrial-agricultural management, is an issue of great concern (Hamza & Anderson, 2005). In such soils it is often the frequent and repeated use of conventional tillage (CT: e.g. mouldboard- and disc-ploughs, disk harrows) that results in the formation of induced plough pans in the subsoil (Canarache, 1991). This compacted layer can impede crop root growth when soil resistance measured with a cone (tip angle of 30°) penetrometer exceeds the threshold of 2-3 MPa (Gupta & Allmaras, 1987; Glinski & Lipiec, 1990; Passioura, 2002). The magnitude of restriction is closely related to the soil water content, because of the resulting increases in soil resistance (Gupta & Allmaras, 1987; Raper, 2005).
For different reasons, many of these fertile agricultural soils were converted to conservation tillage systems in the last decade. Zero tillage (ZT), the paradigm of conservation agriculture, has been implemented on about 95 million hectares worldwide (Lal et al., 2007). In the Pampas region of Argentina, half of the cultivated area passed to this system in the last 15 years (Díaz Zorita et al., 2002; Steinbach & Alvarez, 2006). Conventional tillage (ploughing, disking, harrowing, and so on) induces the development of so-called "plough pans" near the surface in many soils (Canarache, 1991; Taboada et al., 1998; Micucci & Taboada, 2006). These plough pans are expected to disappear under continuous ZT. In exchange, surface compaction problems have often been reported in fine-textured, zero-tilled topsoils (Díaz Zorita et al., 2002; Sasal et al., 2006; Taboada et al., 1998). Although conservation tillage minimizes agricultural traffic (Hamza & Anderson, 2005; Raper, 2005), wheel tracks are not erased in ZT topsoils (Botta et al., 2006). This can contribute to increased surface compaction problems.
Among the crops used in industrial agriculture, maize (Zea mays L.) is one of the most susceptible to soil compaction (Maddonni et al., 1999; Amato & Ritchie, 2002). Maize root growth is negatively affected by subsoil compacted layers, but the consequences for maize yields are not always direct (Erbach et al., 1986; Díaz Zorita et al., 2002). Soil hardness may also have a genetic origin, as in the case of many argillic Bt horizons that behave as "clay pans" (Canarache, 1991). These hard, tough B horizons may restrict the amount of available water stored in the soil profile in areas with high intra-seasonal rainfall variability (Dardanelli et al., 2003). However, clay pans are not found in all Bt horizons, since their occurrence does not depend only on the clay percentage but also on the characteristics of the soil prisms and the occurrence of desiccation cracks between them (Amato & Ritchie, 2002; Dardanelli et al., 2003). In Mollisols of the Pampas region of Argentina, farms converted to ZT systems coexist with farms where the CT system is still being used. Within a distance of a few kilometres (Figure 1), soils differ in the textural and structural properties of both the topsoil and subsoil horizons. This situation represents an interesting opportunity to study the combined effects of man-made compaction and genetic compaction on maize root growth in the field. In this study we report field results obtained in three different soils, under CT and ZT management regimes. At each site we investigated the occurrence of man-made and genetic soil compaction and the effects on maize root abundance and root distribution.
Two agricultural fields under different managements were selected per site. Each field was divided into three adjacent plots. The soil managements studied were: (a) long-term conventional tillage (CT): two or three passes of a tandem disk harrow at 0.10 m depth and a spike-tooth harrow were applied continuously over the last years; after maize sowing, weeds were mechanically controlled; (b) zero tillage (ZT): zero-till planting after long-term CT, with herbicide weed control. The number of years under continuous ZT was 9, 6, and 7 for the Chivilcoy, Villa Lía, and Pérez Millán sites, respectively. The selected agricultural fields did not differ in technological level or crop rotation. The crop sequence over three years consisted basically of maize - wheat (Triticum aestivum L.) / soybean double cropping and full-season soybean.
Determinations
In winter (June 2004), composite soil samples were taken from the 0-0.05, 0.05-0.15 and 0.15-0.30 m layers to determine the pH in soil suspensions (1:5 distilled water), and the organic C content by the Walkley and Black method (Nelson & Sommers, 1982).
Three soil cores of 6 cm diameter were taken from each plot from the 0-0.05, 0.10-0.15 and 0.25-0.30 m layers to determine bulk density (BD) by the core method (Burke et al., 1986). Cores were collected between rows, unaffected by recent traffic compaction. At the Chivilcoy and Villa Lía sites, three replicated measurements of soil penetrometer resistance (PR) were regularly made at each plot. A cone-shaped probe (tip angle of 60°; basal diameter 1.4 cm) was driven into the soil (0.45 m deep) through consecutive falls of a 2 kg load falling from a height of 0.515 m (Burke et al., 1986). Soil PR was recorded as the number of falls required to penetrate each 5 cm of soil, to a depth of 0.54 m. Apart from each penetrometer measurement, soil samples were taken from the 0-0.05, 0.1-0.15 and 0.25-0.3 m layers to determine the gravimetric water content (SWC) by oven-drying at 105 °C in the laboratory. The magnitude of soil desiccation was inferred by the SWC/FC quotient (FC = field capacity). FC (g g^-1 soil) was calculated using a pedo-transfer function (Campbell, 1985) validated for Pampean soils (Damiano & Taboada, 2000): FC = 0.258 - 0.002 × Sand (%) + 0.0036 × Clay (%) + 0.03 × SOM (%), where FC = soil water retention at -33.3 kPa matric potential and SOM = soil organic matter.
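The pedo-transfer function and the desiccation quotient lend themselves to a small helper; the sketch below encodes them directly, with the example inputs being illustrative values rather than measurements from the study.

```python
# Pedo-transfer function for field capacity (Campbell, 1985, as validated
# for Pampean soils) and the SWC/FC desiccation quotient used in the text.
def field_capacity(sand_pct: float, clay_pct: float, som_pct: float) -> float:
    """Field capacity (g water per g soil at -33.3 kPa matric potential)."""
    return 0.258 - 0.002 * sand_pct + 0.0036 * clay_pct + 0.03 * som_pct

def desiccation_quotient(swc: float, fc: float) -> float:
    """SWC/FC quotient: values near 1 indicate soil close to field capacity."""
    return swc / fc

# Example: a silty clay loam topsoil with 10 % sand, 30 % clay, 3 % SOM
# (illustrative values only).
fc = field_capacity(10, 30, 3)
print(fc, desiccation_quotient(0.28, fc))
```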
Description of the region
The Rolling Pampa of Argentina is one of the most important temperate crop areas of the Southern Hemisphere and covers around 5 Mha. The climate is temperate and humid, with an average annual rainfall of 940 mm, concentrated in the spring-summer seasons (Hall et al., 1992), and a mean annual temperature of 17 °C (Soriano, 1991). The entire area is covered by Mollisols, which developed from aeolian sediments (loess) under grassland vegetation (Soriano, 1991). In the West, soils are mainly Typic Argiudolls with loamy texture and a light Bt horizon, while in the East soils are Abruptic and Vertic Argiudolls with silty loam or silty clay loam topsoil texture and a strong Bt horizon (Salazar Lea Plaza & Moscatelli, 1989).
Soils were cultivated with maize from September 2004 to March 2005. Sowing dates, seed density and maize hybrids were similar at each site for both tillage treatments. In summer (December 2004-January 2005), at maize flowering, 1 m deep soil pits were dug in each plot to determine the horizontal and vertical distribution of maize roots by a semi-quantitative method (Manichon, 1987). Each pit was 0.6 m wide, with a maize plant in the centre. The abundance and vertical distribution of maize roots were determined using a 0.5 × 0.3 m rectangle subdivided into 0.05 × 0.05 m squares. In each square, root abundance was determined using a semi-quantitative scale (0 to 5). The root abundance of each 0.05 m soil layer was the average of 12 squares. Beside each pit the soil PR was evaluated in two measurements, by the same method as in the winter. On the opposite side of the pits, soil samples were taken at different depths for SWC determination in the laboratory. The magnitude of desiccation was inferred by SWC/FC quotients.
Statistical inference
At each site a factorial arrangement was applied. The factors were tillage treatment (CT and ZT) and soil depth. Differences were inferred from the analysis of variance, and in the case of significant treatment effects (p < 0.05), means were compared using contrasts (Student's test).
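A minimal sketch of this tillage-by-depth factorial ANOVA with statsmodels is shown below; the file and column names (tillage, depth, bd) are hypothetical, and the follow-up contrasts step is omitted for brevity.

```python
# Two-way factorial ANOVA (tillage x depth) on bulk density, as a sketch
# of the inference described above; assumes a tidy long-format table.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("bulk_density.csv")  # hypothetical: tillage, depth, bd
model = smf.ols("bd ~ C(tillage) * C(depth)", data=df).fit()
print(anova_lm(model, typ=2))         # inspect effects at p < 0.05
```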
Soil profiles
Soils differed in their horizon sequences, topsoil texture and the textural development of the Bt horizons. In Chivilcoy, the sandy loam Ap and A horizons and the light-textured Bt horizon do not indicate a priori the occurrence of genetic mechanical impedances in the soil profile (Table 1). On the other hand, the Villa Lía and Pérez Millán soils not only have silty topsoils but also clayey and well-developed prismatic Bt horizons (Table 1). In Villa Lía the Bt horizon has no vertic features, unlike in Pérez Millán where it has slickensides and desiccation cracks. Soil organic C content was higher in the sandy loam Chivilcoy topsoil than in the silty loam and silty clay loam Villa Lía and Pérez Millán topsoils (Table 2). This is likely due to different historical agricultural intensities at the study sites. In addition, organic C content was higher in ZT than in CT plots (0 to 0.15 m) at the Chivilcoy and Villa Lía sites, indicating modest C sequestration under zero tillage (Lal et al., 2007; Steinbach & Alvarez, 2006). Soil pH was slightly acid in Chivilcoy and Pérez Millán, and acid in Villa Lía (Table 2), evidencing once more the intense agricultural use of this soil. Soil pH was essentially the same in CT and ZT soils. Soil organic C losses and, to a lesser extent, soil acidification have been reported as expected consequences of long-term cropping in Pampas soils (Senigagliesi & Ferrari, 1993; Steinbach & Alvarez, 2006).
Soil bulk density
Soil BD was significantly affected by both soil depth and tillage, without interaction between them (Table 3). Soil depth effects were mainly caused by the lower BD determined in all cases in the 0-0.05 m layer, due to C enrichment. Tillage effects differed from site to site. In Chivilcoy, BD was significantly lower in ZT than in CT plots in all evaluated layers. A higher BD value was noted in the CT soil compared with ZT in the 0.05-0.15 m layer. This BD increase may be caused by an induced plough pan at this depth (Canarache, 1991). Contrastingly, in Villa Lía and Pérez Millán soil BD was significantly higher in ZT than in CT plots, although in different soil layers (Table 3). Such increases in BD can be the result of surface compaction under zero tillage (Thomas et al., 1996).
Soil water content
In the winter, SWC was determined in the top 0.3 m at the Chivilcoy and Villa Lía sites (Table 4a,b). The SWC/FC quotient indicated the magnitude of soil desiccation on each sampling date. In the winter, soils were close to field capacity (SWC/FC of 0.73-0.95), with significantly higher SWC in ZT than in CT plots in the topsoil at Chivilcoy. In Villa Lía, SWC/FC quotients were lower and only differed between managements in the top 5 cm (CT > ZT). In the summer, only the Villa Lía soil showed significant differences between tillage treatments. Even in summer, at maize crop flowering, the magnitude of soil desiccation was not high (i.e. SWC/FC quotients were high) in the deep profile of Chivilcoy (0.95-1 m layer) and in the intermediate profile (0.45-0.5 m layer) of the Villa Lía and Pérez Millán sites (Table 4). In the latter, the high SWC was due to water storage in the clayey Bt horizons at depth (Amato and Ritchie, 2000).
Soil penetrometer resistance
Soil resistance is highly dependent on soil water condition (Gupta & Allmaras, 1987); lower PR values were thus observed in winter than in summer (Figure 2). Soil PR was significantly affected by a tillage-by-depth interaction in the winter. PR values increased from 1-3 MPa in the topsoil to 5-7 MPa in the subsoil (0.45 m) in the Chivilcoy and Villa Lía soils. Taking into account the wet soil conditions in these periods (Table 4), the higher PR values were the result of profile anisotropy in the deep Bt horizons. The statistical differences observed between treatments at two soil depths were of little relevance in magnitude.
In summer, soil PR was affected by significant and independent effects of depth and tillage in Villa Lía (Figure 2). Depth effects differed from site to site. In Chivilcoy, soil PR was significantly lower in the topsoil than in the rest of the profile. Despite a higher water content under CT than ZT, PR was higher at 0.15 m. Taking into account the higher BD determined in the CT soil at the same depth (Table 3), this PR increase can be ascribed to a plough pan induced by long-term conventional tillage (Canarache, 1991). It is likely that after nine years of ZT this induced compaction had been alleviated at the Chivilcoy site. Similar recoveries were also found in other medium-textured soils of the region (Taboada et al., 1998; Micucci & Taboada, 2006). Soil PR was as high as 15 MPa in the 0.1-0.2 m layer of the fine-textured Pérez Millán soil. This PR increase occurred in both CT and ZT soils, so that it is questionable to ascribe it to an induced plough pan. No clear explanation was found for its persistence in ZT plots either, since the high number of years (seven) would have allowed the recovery of porosity in the untilled soil (Rhoton, 2000). Therefore, the PR data recorded here could be ascribed to a genetic cause. At the same depth, there is a thin BA horizon with a lower organic C content than the overlying A horizon and a lower clay content than the deeper Bt horizon (Tables 1 and 2). Considering the very low SWC found in summer at 0.2-0.25 m depth in both CT and ZT plots of the Pérez Millán soil (Table 4), it can be concluded that this BA horizon behaved as a natural compacted layer in the summer. Soil PR decreased sharply below 0.2 m, despite the strong Bt horizon existing at this depth (Table 1). Unlike the desiccated upper horizons, a higher SWC was determined at this depth (Table 4). The combined action of water and clay plasticity may have allowed easy penetration of the probe in this clayey subsoil (Pilatti & Orellana, 2000).
Although the Villa Lía soil also has a silty clay loam topsoil and clayey subsoil, soil PR behaved completely differently here (Figure 2d). Soil PR was not only significantly higher in CT than in ZT plots, but also increased significantly with depth. This PR increase was not related to the higher SWC found in this Bt horizon (Table 4), and can be ascribed to the low cracking potential of this subsoil. In contrast, soil cracking is very high in the vertic Pérez Millán subsoil, which could help to decrease soil hardness. The Bt horizons of the Villa Lía and Pérez Millán soils differed substantially from each other in clay content (Table 1) and have different clay mineralogies. The expansible clay mineralogy of the Pérez Millán soil determined a lower mechanical impedance, incompatible with a clay pan. This different behaviour can be ascribed to the characteristics of the soil prisms and the occurrence of desiccation cracks among them (Amato & Ritchie, 2002; Dardanelli et al., 2003).
A threshold of 2 MPa, measured with probes with a tip angle of 30°, has often been mentioned to detect mechanical impedances for crop root growth (Gupta & Allmaras, 1987; Glinski & Lipiec, 1990; Passioura, 2002). However, soil resistance can increase by 68 % when 60° probes are used instead of 30° probes (Voorhees et al., 1975). Soil PR values determined here exceeded both threshold limits by far. In Argentina, Pilatti & Orellana (2000) observed that a threshold of 6 MPa is more suitable for Pampean soils similar to those studied here, using a 60° probe.
Maize root abundance
The abundance of maize roots was affected by independent and highly significant effects of tillage and depth (Figure 3a,b,c). Tillage effects were most evident in Chivilcoy, where maize root abundance was significantly higher in ZT than in CT plots below 0.1 m. This can be explained by the higher BD (Table 3) and PR (Figure 2c) observed in the CT plot, due to the occurrence of induced plough pans. This situation represents what is known as a "shadow effect", as shown by Tardieu (1988) for surface-compacted soil layers: even when there is no mechanical impedance in the rest of the profile, the occurrence of surface compaction affects root exploration in the zone of the profile below this compacted layer. Map distributions of maize roots showed that a major soil volume of the CT plot has little or no roots below 0.75 m (Figure 4a). In the ZT plot of Chivilcoy, maize root exploration occurs throughout the profile (Figure 4b). Differentiated water and nutrient absorption by maize can therefore be expected in each tillage treatment (Tardieu, 1988; Passioura, 2002).
Although the soil profiles differed greatly in summer at the Villa Lía and Pérez Millán sites (Figure 2d,e), the root abundance profiles did not (Figure 3b,c). Tillage affected maize root abundance significantly in both soils, although the magnitude of this effect was generally of little relevance. An exception was the higher maize root abundance under ZT in the Pérez Millán profile below 0.4 m (Figure 3c). Because of this root abatement, only 10 % relative maize root abundance was observed below 0.8-0.9 m. However, taking into account the sharp PR drop with depth in Pérez Millán, the decreases in maize root abundance with depth were due to the sharp PR increase detected in the BA horizon under both CT and ZT.
Figure 1. Geographical location of the studied sites in the province of Buenos Aires, Argentina.

Figure 2. Soil penetrometer resistances in different layers of conventionally-tilled (CT) and zero-tilled (ZT) soils in winter and summer. Standard errors of means are indicated by bars. ANOVA tables are also included. (*) indicates statistical differences between managements at a specific soil depth.

Figure 3. Relative abundance of maize roots in 1 m deep soil profiles in conventionally-tilled (CT) and zero-tilled (ZT) soil profiles at the study sites. Standard errors of the means are indicated by bars. ANOVA tables are also included.

Figure 4. Spatial distribution of maize roots in 0.6 m wide and 1 m deep soil profiles under a) conventional tillage (CT) and b) zero tillage (ZT) at the studied sites.
Table 1. Soil properties of the studied sites. Data were obtained from published soil maps (INTA, 1980a, b) and checked in situ.

Table 4. Soil water retention at -33.3 kPa matric potential (field capacity = FC), gravimetric soil water contents (SWC) in winter and in summer, and the SWC/FC quotients.
In Situ Analyses Directly in Diarrheal Stool Reveal Large Variations in Bacterial Load and Active Toxin Expression of Enterotoxigenic Escherichia coli and Vibrio cholerae
The cause of diarrheal disease is usually determined by screening for several microorganisms by various methods, and sole detection is used to assign the agent as the cause of disease. However, it has become increasingly clear that many infections are caused by coinfections with several pathogens and that the dose of the infecting pathogen is important. We quantified the absolute numbers of enterotoxigenic E. coli (ETEC) and Vibrio cholerae directly in diarrheal fluid. We noted several events where both pathogens were found, but also a large dose dependency. In three samples, we found ETEC as the only pathogen among those sought. These isolates belonged to globally distributed ETEC clones and were the dominant species in stool, with active toxin expression. This suggests that certain superior virulent ETEC lineages are able to outcompete the gut microbiota and be the sole cause of disease and hence need to be specifically monitored.
RESULTS
Coinfections with ETEC, V. cholerae, and other enterobacteria are frequent in watery diarrhea. During the diarrheal peak period in March to April 2006, 35 surveillance stool samples were randomly collected from children and adults seeking care for diarrheal disease at the hospital ward at icddr,b in Dhaka, Bangladesh. Seven collected samples did not meet the inclusion criteria and were therefore not analyzed. A total of 28 samples from children (n = 11, ages 2 to 17 years, median age of 12 years) and adults (n = 17, ages 18 to 54 years, median age of 35 years) were included in the study. The routine standard analyses for detection and surveillance of diarrheal pathogens at the icddr,b are culture analysis on MacConkey agar plates followed by multiplex PCR analysis on 3 to 10 pooled colonies for pathogenic E. coli and culture on selective taurocholate-tellurite-gelatin agar (TTGA) plates for detection of V. cholerae. Stool samples are by routine also tested for Salmonella spp. and Shigella spp. on selective plates. By use of these methods, 18 MacConkey agar plates were found positive for growth of E. coli-like bacteria, either as only E. coli (4 of the 18 plates) or as mixtures of E. coli with other bacteria, i.e., Klebsiella and other non-lactose fermenters, and/or V. cholerae, as presented in Table 1. Of the original 28 samples, 17 were positive for V. cholerae, which corresponds to 60%. All V. cholerae-positive samples were also found positive for other enterobacteria. None of the samples contained Salmonella or Shigella. Furthermore, the 18 samples positive for E. coli bacteria were tested for the presence of ETEC by multiplex PCR, showing that ETEC was present in 8 of the samples, corresponding to 29% of all the collected samples. Coinfections with ETEC and V. cholerae were found in 4 samples, corresponding to 14% of all the collected samples (Table 1). No significant correlation with age groups was found for either single infections or coinfections. Of the seven samples for which only E. coli colonies were detected on the MacConkey plates, four (samples 4, 5, 16, and 18) showed no growth of V. cholerae. These samples tested positive for ETEC toxins by use of multiplex PCR, suggesting that ETEC was the major pathogen.
Determination of the ratio of ETEC CFU to total E. coli CFU in the diarrhea samples. ETEC was detected in the samples where only E. coli colonies, and no other lactose-fermenting bacteria, grew on MacConkey agar. To determine the ETEC frequency in these samples, the numbers of ETEC CFU per total E. coli CFU were examined. The total numbers of E. coli in the samples were determined by quantitative culturing using serial dilutions. Isolated colonies, 50 in total, were picked randomly from the dilution plates and analyzed by toxin multiplex PCR. As presented in Table 1, the ETEC percentage of the total number of E. coli-like CFU was then calculated by dividing the number of ETEC-positive colonies by 50, i.e., the total number of analyzed colonies. For two of the samples, sample 4 and sample 5, 50 out of 50 colonies were determined as positive for either LT (sample 4) or both LT and STh (sample 5), suggesting that they were pure ETEC. For sample 18, 96% of the colonies were determined as ETEC expressing LT and STp. For sample 9, 60% of the E. coli strains were STp positive, while in sample 17, only 4% were LT positive. The results from the individual colonies largely agreed with the initial toxin profiles determined for the diarrheal samples, with the exception of sample 16. This sample was originally scored as an E. coli LT/STp-only infection, but in the analysis of the 50 colonies, it was found to contain multiple ETEC toxin profiles (Table 1). In addition, the fraction of ETEC in E. coli in the sample was 66%, indicating a mixed infection with several ETEC strains and other E. coli strains. Regarding sample 1, this sample was not tested, and in sample 15, the original ETEC isolate could not be found among the tested 50 colonies. In total, six representative isolates, E2264 to E2269, were collected from the remaining six ETEC-positive samples and stored in freeze medium. The colonization factor profiles of these strains were then tested using dot blot and PCR analysis, showing the presence of CS7 in sample 4 and CS5/CS6 in sample 5. For the other ETEC strains, the CF profiles could not be determined (Table 1).

qPCR quantification of ETEC and V. cholerae toxin genes. Since variations in ETEC frequency occur in diarrheal stool, we next sought to determine the absolute numbers of ETEC and V. cholerae per milliliter of watery stool. The DNA copy numbers of ETEC and V. cholerae were determined in the 18 E. coli-positive stool samples by use of qPCR analysis and primers specific for estA1 STp, estA2 to estA4 STh, eltB LT, and ctxB CT, together with standard curves of known copy numbers. Gene loci for all four toxin genes were detected in a majority of the samples, and the amounts varied between 0 copies and 2 × 10^8 copies per ml, as presented in Table 2. Higher numbers of toxin gene copies were found in samples that tested positive for ETEC toxins than in samples that were negative for ETEC in the previous culture analyses. Samples 4, 5, 16, and 18, which were all positive for LT ETEC, contained between 1 × 10^7 and 6 × 10^7 LT gene copies per ml. Furthermore, sample 4 and sample 5 contained, in comparison with the others, very low copy numbers (0 and 150 copies per ml stool, respectively) of the V. cholerae CT gene. Hence, these samples likely represent true ETEC-only diarrheas. The two other LT ETEC-positive samples, sample 15 and sample 17, contained few or no LT ETEC bacteria per total amount of E. coli in the culture analysis, and the levels of ETEC LT gene copies detected by qPCR were correspondingly 3 orders of magnitude lower (2 × 10^4 to 3 × 10^4) than in the other ETEC-positive samples. The copy number of eltB was additionally found at levels between 10^3 and 10^5 copies per ml in the samples that tested negative for ETEC in culture. The gene encoding heat-stable toxin STp was detected in the STp-positive sample 9 (9 × 10^3 copies) and sample 16 (2 × 10^4) and in high numbers in sample 18 (2 × 10^8), as presented in Table 2. Sample 18 was by quantitative culture and multiplex PCR determined to contain 6.3 × 10^7 E. coli bacteria per ml diarrheal fluid, and 96% of the colonies were LT and STp positive in culture analysis. Hence, the qPCR results and quantitative culture corroborate a dissemination concentration of 10^7 to 10^8 gene equivalents per ml diarrheal stool in this patient. A similar concentration was also found for sample 5, which contained 100% ETEC and a total E. coli count of 2.7 × 10^7 CFU per ml. The LT and STh gene copy numbers of this sample were 1 × 10^7 (eltB) and 2 × 10^7 (estA3 and estA4), respectively. The gene copy numbers for STh were estimated to be 7 × 10^7 for sample 16 and 3 × 10^7 for sample 1, whereas all samples that were negative for STh in culture analyses showed levels of between 10^2 and 10^4 copies of estA2 to estA4 per ml (Table 2).
The gene counts for V. cholerae ctxB in the analyzed samples varied from none to almost 2 × 10⁸ gene copies per ml (Table 2). Low (≤200 copies) or absent levels were found in sample 4 and sample 5, both detected as 100% ETEC. Low levels (≤400 copies) were also found in samples 8, 10, 13, and 16. For sample 10, only around 25 V. cholerae colonies in total were detected on the culture plates, which corroborates the low counts detected by qPCR. In the other samples with low copy numbers of ctxB, samples 8, 13, and 16, no growth of V. cholerae was detected in culture analysis either. The samples that scored positive for V. cholerae in culture showed gene copy numbers of between 10⁴ and 10⁸ per ml. In addition, sample 18, which was scored as V. cholerae negative by culture, contained 5 × 10⁵ ctxB copies per ml. Taken together, these results suggest that low levels of ETEC and V. cholerae are continuously present in a majority of diarrheal stools from patients and that these levels are difficult to detect by routine culture analyses.
Toxin production and secretion determined directly in diarrheal liquid stool samples by GM1-ELISA. The diarrheal stool samples were further analyzed for the presence of the translated ST, LT, and CT by GM1 enzyme-linked immunosorbent assay (GM1-ELISA). Seven of the eight stool samples that had been scored as positive for ETEC were tested using GM1-ELISA and inhibition GM1-ELISA (Table 3). In the three tested samples that were culture positive for V. cholerae (samples 1, 9, and 15), CT was detected by LT-39, an antibody that detects both LT and CT, as well as by the CT-Wi monoclonal antibody (MAb), which is more specific for CT. Cholera toxin was also detected in sample 18, which was estimated by qPCR analysis to contain approximately 5 × 10⁵ V. cholerae bacteria per ml but was V. cholerae negative in culture. The toxin ELISA results thus confirmed the qPCR results for this sample. Two of the samples, sample 4 and sample 5, were detected as virtually pure ETEC samples, with undetectable levels of V. cholerae toxin genes by qPCR. For sample 4, traces of LT were detected in the pellet fraction using the LT-specific MAb LT-80, but the amount was close to the lower detection limit of the assay. In sample 5, however, positive results were obtained using MAbs LT-80 and LT-39 but not MAb CT-Wi, suggesting that the toxin found was indeed LT and not CT. In addition, this sample was positive only in the bacterial supernatant fraction and not in the pellet, indicating that LT was actively secreted during infection. ST was detected in the supernatants of sample 1, sample 9, and sample 16, for which STh or STp had already been detected by toxin multiplex PCR. Sample 5 additionally showed trace amounts of ST. Furthermore, ST was detected in sample 15, in both pellet and supernatant, whereas the same toxin was not detected by culture analysis followed by multiplex PCR, and only approximately 10⁴ copies of estA2 to estA4 were detected by real-time PCR analysis (Table 2). In contrast, no ST was detected in sample 18, for which high levels of STp had been detected by both culture/multiplex PCR and real-time PCR analysis. These analyses show that toxins are present in diarrheal stool and distributed to the environment. In addition, both ETEC and V. cholerae evidently actively secrete LT and CT during acute infection and dissemination.
Gene expression of the ETEC and V. cholerae toxin genes. Toxin levels in stool might not indicate active transcription and translation. To investigate whether active toxin transcription occurs in ETEC and V. cholerae in diarrheal stool, the expression of the toxin genes in the two pathogens was measured by extracting RNA from the bacterial pellet of liquid diarrheal samples and reverse transcribing the RNA to cDNA. The relative expression of the ctxB (CTB subunit) gene, the ETEC toxin genes estA2 to estA4 and estA1 (STh and STp, respectively), and eltB (LTB subunit) was determined in a fixed concentration of total RNA converted to cDNA (15 ng per PCR mixture). The mRNA expression of the CT-, LT-, and ST-encoding genes per 15 ng of total cDNA was found to largely correspond to the toxin profiles, as seen in Table 4. Higher expression levels of ctxB mRNA were found in samples 1, 9, and 12, and particularly in sample 15, than in the other samples. These samples also had correspondingly high levels (2 × 10⁵ to 2 × 10⁸) of ctxB DNA copies (Table 2). The highest mRNA levels for eltB and the ST-encoding genes were found in the samples that had tested positive for the respective ETEC toxins (Tables 1 and 2).
For sample 15, no gene expression of the ETEC toxin mRNAs could be determined, whereas sample 8, for which no pathogen was found in culture, showed expression of eltB and estA2 to estA4. These results, together with the DNA data, suggest that an LT- and STh-positive ETEC infection was missed in the culture analysis of this patient. The results show that toxin gene expression levels generally correlate with the numbers of the respective pathogen in the stool. Calculations of expressed gene copy numbers (mRNA) divided by genomic copy numbers (DNA), i.e., gene expression per genome equivalent, showed that toxin gene expression levels in ETEC and V. cholerae are similar.
Whole-genome sequencing of ETEC isolates. The results presented above suggest that ETEC might be underestimated in cases of cholera-like liquid diarrhea, as well as that certain ETEC clones, such as LT CS7 (sample 4) and LT STh CS5 plus CS6 (sample 5), can manifest as monocultures of a single clonal infection. In order to investigate the genetics of the collected ETEC strains and to be able to correlate the collected ETEC with worldwide ETEC infections, whole-genome sequence (WGS) analysis was performed. For isolates E2264 (sample 4) and E2265 (sample 5), PacBio sequencing was performed to gain more complete information about the genetic details. These two strains were of specific interest since they were detected in ETEC-only infections and since they belong to ETEC lineages that have persisted over time and spread globally (10). The four other recovered isolates, E2266 (sample 9), E2267 (sample 16), E2268 (sample 17), and E2269 (sample 18), were sequenced using Illumina MiSeq sequencing. The details of the sequencing are provided in Tables S1 to S4 in the supplemental material. The genomes of the six isolates were annotated and analyzed using CGE in silico multilocus sequence typing (MLST), PlasmidFinder, and plasmid MLST (pMLST), as well as ResFinder, VirulenceFinder, and ARG-annot. In addition, pathogenic E. coli virulence genes were identified using BLAST analysis.
PacBio analysis of E2264 and E2265 revealed large virulence plasmids and additional plasmids with transfer systems and antibiotic resistance. Since two stool samples, sample 4 and sample 5, contained 100% ETEC of two commonly isolated lineages of ETEC, PacBio sequencing was employed to further analyze these isolates. The PacBio analysis revealed that ETEC 2264 (sample 4), in addition to the chromosome, contained 3 plasmids. The largest plasmid (E2264_p112045) had a size of 112,045 bp and contained 128 putative open reading frames (ORFs). Among these were the tra genes traD and traI, which may have helicase activity, as well as the genes encoding the heat-labile enterotoxin A and B subunits for production of LT (orf66 and orf67) and the CS7 operon subunit precursors (orf75 to orf81). The distance between LT and CS7 was 3,986 bp, and the intervening sequence contained two conserved hypothetical proteins and four transposase genes. The six CS7 operon genes found were positioned, in the 5′ to 3′ direction, in the order D, F, E, C, B, and A. The second largest plasmid (E2264_p77345) of ETEC 2264 was 77,345 bp long and contained 93 ORFs. This plasmid contained a high number of tra genes and antibiotic resistance genes. The tra genes encountered, which are involved in conjugative transfer of plasmids between bacteria, were traA, traD, traE, traG, traH, traK, traL, traM, traN, traP, traQ, traT, traU, traV, traX, and traY. Among these are the gene encoding the pilus subunit (traA), genes for regulation of traA, and genes for pilus assembly, as well as genes for nicking and unwinding of DNA (24-26). In addition, six ORFs involved in antibiotic resistance were found: genes for resistance to tetracycline (orf63), trimethoprim (orf81), beta-lactams (orf83), and sulfonamide (orf87), and finally strA (orf88) and strB (orf89), coding for streptomycin resistance. The third plasmid (E2264_p45777) of ETEC 2264 was 45,777 bp long and contained 45 ORFs. This plasmid contained the gene encoding the secreted autotransporter serine protease EatA (ETEC autotransporter A) (orf25) (27). Sequence comparison of EatA using BLAST (NCBI) demonstrated high homology with a vast number of sequences of E. coli origin (97 to 99% homology with 7 samples, ≥71% homology with 84 samples) and also with a few sequences from Shigella.
ETEC 2265 contained two plasmids in addition to the chromosome. The largest plasmid, E2265_p142359, was 142,359 bp long and harbored 189 ORFs, including genes encoding important virulence factors such as eatA (orf66), the csfA to -F operon encoding CS5 (orf79 to orf84), the cssABCD operon encoding CS6 (orf96 to orf99), and the estA3 and estA4 genes encoding STh (orf107). The plasmid also contained an aatPABCD operon encoding a membrane transporter initially described in enteroaggregative E. coli (EAEC) and a cexA-like gene located directly upstream of aatPABC (28). Plasmid E2265_p142359 also contained several tra genes: traM, traJ, traA, traL, traE, traY, traK, traB, and traP. The second plasmid, E2265_p88757, was 88,757 bp long and contained 101 ORFs, including the eltAB operon (orf81 and orf82), as well as a high number of tra genes: traM, traA, traL, traE, traK, traB, traV, traC, trbL, traW, traU, trbC, traN, traF, trbB, traH, traG, traS, traT, traD, traI, and traX.
Scoary analysis of the genomes did not reveal any unique single-pathogen infection profiles for the ETEC. The results indicate that three of the diarrheal samples collected in this study, E2264, E2265, and E2269, might represent true ETEC infections without other coinfecting pathogens. In order to investigate whether these isolates differed from the three isolates in which ETEC was found in mixed infections with other E. coli strains and/or with V. cholerae (E2266, E2267, and E2268), the pan-, core, and accessory genomes were determined for the six sequenced isolates (Fig. 1). The pan-genome of the six isolates comprised 7,537 genes; of these, 3,502 genes were common to all isolates and constituted the core genome. Scoary analysis to determine whether any genes were significantly associated with the three ETEC-only infections did not reveal any significant results. The only differences found were the larger genomes of E2264, E2265, and E2269 compared with isolates E2266, E2267, and E2268 (see Table S1 in the supplemental material).
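For readers less familiar with pan-genome terminology, the pan/core/accessory partition used above reduces to simple set operations over per-isolate gene sets. The following Python fragment is a minimal sketch of that idea, not the actual analysis pipeline used in the study; the gene names and sets are hypothetical placeholders.

```python
# Minimal sketch of a pan/core/accessory-genome tally from per-isolate
# gene sets. Gene names below are hypothetical; in practice the sets
# would come from the annotated genomes of the six isolates.
isolate_genes = {
    "E2264": {"eltA", "eltB", "traD", "eatA"},
    "E2265": {"eltA", "eltB", "estA3", "eatA"},
    "E2269": {"eltB", "estA1", "traD"},
}

pan_genome = set().union(*isolate_genes.values())        # genes in any isolate
core_genome = set.intersection(*isolate_genes.values())  # genes in all isolates
accessory_genome = pan_genome - core_genome              # everything else

print(f"pan: {len(pan_genome)}, core: {len(core_genome)}, "
      f"accessory: {len(accessory_genome)}")
```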
DISCUSSION
In this study, the prevalence and virulence factor profiles of ETEC and V. cholerae in watery stool from patients with cholera-like diarrhea were investigated by genetic sequencing and molecular analysis as well as traditional culture methods. In total, 28 samples were randomly collected at the hospital ward at icddr,b in Dhaka, Bangladesh. Of these, 17 tested positive for V. cholerae and 8 tested positive for ETEC (Table 1). Quantitative culture of bacterial load on MacConkey plates and analysis of the ETEC toxin profiles by PCR showed that some samples were dominated by ETEC while others contained mixtures with other bacteria. Three of the ETEC samples, samples 4, 5, and 18, were detected as pure or almost pure ETEC infections, while in samples 1, 9, 15, 16, and 17, ETEC occurred in coinfections with V. cholerae and/or with various undetermined E. coli and Enterobacteriaceae strains. Coinfections with various pathogens are quite common among patients with diarrhea (29, 30). For example, Paschke et al. previously reported coinfections in as many as 60.5% of travelers to tropical and subtropical parts of the world who suffered from diarrhea, as well as in 12.5% of travelers to the same areas with no disease symptoms (17). The coinfection rate of the two pathogens investigated in this work was 14%, which is higher than the 2% previously reported by Chowdhury et al. (11) but similar to the results of Begum et al. (20) in the same geographical area.
Since both V. cholerae and ETEC cause massive diarrheas with a daily loss of several liters of fluid during the acute phase, we wanted to investigate the concentration of the bacteria in the stool samples. By use of quantitative real-time PCR, we could determine that both pathogens were shed at concentrations on the order of 10⁷ to 10⁸ gene copies per ml of stool (Table 2). These numbers were corroborated by the results of the quantitative culture and by other studies reporting between 10⁷ and 10⁸ CFU of diarrheal pathogens per milliliter or gram of stool (31, 32) and up to 10⁸ to 10⁹ pathogen gene copies per gram of stool (30). Toxin gene copy numbers in ETEC have been determined to be between 1 and 16 copies per cell (32, 33), indicating that quantification of gene copies might overestimate bacterial load up to 10-fold. Regardless, shedding of 10⁷ to 10⁸ bacteria per ml of stool by patients, who can lose several liters of fluid per day, is probably one of the main factors contributing to the large epidemic outbreaks of diarrhea caused by these pathogens.
One of the findings of this work was that the use of highly sensitive molecular methods detected low levels of ETEC and V. cholerae in a majority of the stool samples that were negative in culture (Table 2). These low levels are probably not causing symptoms and may constitute a background pathogenic microbiota in individuals living in areas of endemicity. They could also mean that the rates of coinfection are significantly higher than previously reported. Recent studies have, however, highlighted the importance of pathogen load for disease manifestation (30, 34, 35). High levels of ETEC in stool were linked to clinical manifestation of diarrhea in ETEC-challenged volunteers (35). In the same study, it was found that E. coli 16S (i.e., ETEC) dominated the microbiota in volunteers who developed diarrheal symptoms but was low in asymptomatic volunteers. Even though severe diarrheas caused by ETEC and by V. cholerae manifest very similarly, V. cholerae is more frequently identified as the disease-causing pathogen. The development of quantitative molecular diagnostics, including real-time PCR and other PCR methods for identification of the pathogens, may aid in finding the major disease-causing pathogen(s) in an infection (30, 36). With the use of these techniques, ETEC has been shown to be frequently underestimated (32). It might also turn out that a high proportion of cholera cases are, in fact, mixed infections, since we detected E. coli or other species in all V. cholerae-positive stool samples.
Although the heat-labile CT of V. cholerae and LT of ETEC have similar actions, cholera is generally regarded as causing a more severe disease outcome. This has been explained by differences in the amount of toxin that the bacteria secrete. V. cholerae is generally considered to secrete most of its produced CT, while ETEC retains >50% of the LT intracellularly, either in the periplasm or associated with the membrane lipopolysaccharides (LPS), under laboratory conditions (37, 38). ST is also considered to cause a more severe disease outcome than LT (39, 40), and, accordingly, infection with LT ETEC has often been detected in asymptomatic patients (17). The reason for this difference is unknown but might be explained by differences in secretion. Thus, in order to study toxin secretion in more detail, the 8 samples identified by culture analysis as containing ETEC were further analyzed for both ETEC and V. cholerae toxin production and secretion (Table 3). Using quantitative ELISA and monoclonal antibodies specific for the different toxins, we observed that for V. cholerae, CT was detected both in the bacterial fraction and in the supernatant, indicating that approximately one-third of the toxin produced was actually retained in the bacteria (Table 3). For ETEC, similar levels of LT were detected in the bacterial cells and in the supernatant, suggesting that LT is secreted to a relatively large extent. ST was mainly found as secreted toxin, but in one of the samples, ST was localized to the bacterial cells. This implies that the notion that V. cholerae always secretes CT, that ST is always secreted by ETEC, and that most LT is retained in the periplasm of ETEC does not hold true during infection. Our results indicate that disseminating ETEC and V. cholerae both actively transcribe and secrete toxins during shedding in diarrheal stool, which further supports the epidemic nature of these pathogens. Since ETEC toxins were also found secreted in the liquid diarrhea at levels equal to those of V. cholerae, and since the toxin gene expression levels and pathogen loads were similar for the two pathogens, the explanation for the more severe nature of V. cholerae infections remains elusive. However, since these results are based on only a few stool samples, further studies are needed.
Quantitative culture of bacterial load and analysis of ETEC toxin profiles by PCR in this study showed that some samples were dominated by ETEC while others contained mixtures with other bacteria. Six ETEC isolates were recovered and used for further analysis. Half of the ETEC isolates, E2264, E2265, and E2269, from samples 4, 5, and 18, respectively, came from pure or almost pure ETEC infections. The other isolates, E2266, E2267, and E2268, recovered from samples 9, 16, and 17, respectively, were found in coinfections with V. cholerae and with various undetermined E. coli and Enterobacteriaceae strains (Table 1). All six isolated strains were whole-genome sequenced, using Illumina or Pacific Biosciences (PacBio) sequencing, in order to further analyze their genomic content.
Moreover, the toxin and colonization factor profiles of the isolated ETEC strains were investigated. For E2264 (sample 4) and E2265 (sample 5), the genetic analysis confirmed the CF profiles detected by dot blot analysis, CS7 and CS5/CS6, as well as the toxin profiles of LT and LT/STh, respectively (Table 1). These toxin/CF profiles are common and widespread worldwide (10). The E2264 isolate belongs to a subtype of ETEC lineage 3 expressing LT and CS7, which is frequently isolated globally (10, 20, 41). The E2265 isolate belonged to ETEC lineage 5, a clonal group that expresses the toxins LT and STh and the colonization factors CS5 and CS6. Recent reports indicate that ETEC expressing LT/STh/CS5/CS6 is the most common ETEC pathotype isolated in Dhaka (20). Both of these strains were detected as pure ETEC infections in this study, suggesting that these toxin/CF profiles are beneficial for single infections. A study performed in Dhaka in 2011, using whole-genome sequencing of several isolates recovered from individual patients in a setup similar to this study, identified one patient for whom all analyzed isolates constituted a clonal expansion of L5 (LT/STh CS5/CS6) isolates (42), which further confirms that L5 isolates are able to outcompete the normal microbiota and cause serious infections. The other four strains were scored as CF negative in the initial dot blot analysis; however, sequence analysis revealed CF profiles for three of them: E2267 (sample 16) was positive for CS14, E2269 (sample 18) was positive for CS27b, and E2268 (sample 17) harbored genes for a CF profile similar to CS13/CS23. No CF could be detected for E2266 (sample 9), suggesting that this strain either lacks CFs, expresses a kind of colonization factor that has not yet been discovered, or lost the relevant plasmid(s) prior to sequencing. Nonetheless, this STp strain without a detectable CF was the major infecting pathogen among the E. coli of its sample while still occurring in a coinfection. Strain E2267 showed an STp/LT/CS14 profile, whereas E2268 was positive for LT in combination with new CFs that are very similar, but not identical, to CS13/CS23. Both of these ETEC strains were found in coinfections. The E2269 strain also showed a new toxin-CF combination. This novel type of ETEC, expressing LT, STp, and CS27b, warrants continued surveillance: it was detected in an almost pure ETEC infection and is evidently potent as a single-infection pathogen. The LT/STp/CS27b ETEC of MLST ST-4493 had not been described at the time the strains in this paper were isolated. The novel CF CS27b was first described by Nada and coworkers in 2011 (21). Recent publications have indicated that ETEC strains previously noted as CF negative might express a novel group of CS18- and CS20-like CFs, to which CS27b belongs (21, 22). These new CFs are not detectable using traditional dot blot techniques and might be missed by the PCR methods presently in use; therefore, more information about the various ETEC toxin-CF combinations is needed. The matter is, however, complex, and no consensus regarding CF/toxin profiles and disease outcome has been reached to date (9).
Next, we sought to determine why certain ETEC toxin-CF combinations manifest as single infections. Scoary analysis of the genomes, however, did not reveal any unique gene profiles that could explain the single-pathogen infections seen for samples 4, 5, and 18. Analysis of antibiotic resistance genes was performed to determine whether the single-infection strains are more resistant. Two of the single-pathogen infectants, E2264 (sample 4) and E2269 (sample 18), carried multiple genes for antibiotic resistance. In contrast, using ResFinder, no acquired resistance genes were found for E2265 (sample 5), while ARG-annot confirmed chromosome-borne ampicillin resistance by ampC (see Table S4 in the supplemental material). This isolate has been described previously (43, 44).
Regarding the ETEC strains found as coinfecting pathogens, E2266 (sample 9) and E2267 (sample 16) harbored four and three antibiotic resistance genes, respectively. In this group as well, one of the three strains, E2268 (sample 17), harbored no presumed plasmid-borne resistance genes except ampC. For E2268, the lack of acquired resistance genes coincides with the lack of the ETEC-specific ParM gene (23), which was detected in all the other strains. This strain might thus be somewhat less potent, considering that only 4% of the E. coli isolates of its sample were determined to be ETEC, or it might be highly specialized for occurring in coinfections. The high frequency of resistance genes detected in some isolates is perhaps not surprising, considering that a test of the drinking water in Dhaka some years ago revealed that 36% of the E. coli isolates were multiresistant, of which 26% were positive for extended-spectrum beta-lactamases (45). The presence of antibiotic resistance might thus not be important for diarrheal virulence.
Nevertheless, in this work we have shown, although with a limited number of samples, that specific CF/toxin profiles might be associated with either single ETEC infections or multipathogen infections. Two of the isolates, E2264 (sample 4) and E2265 (sample 5), were found to belong to globally successful ETEC lineages that have been described previously (10). Hence, although the isolates described in this work were collected a decade ago, they remain relevant and frequently detected pathogens. Here, we used second-generation (Illumina) and third-generation (PacBio) sequencing technologies, the latter developed in the last 5 years, which allowed us to perform de novo assemblies and characterize ETEC plasmids. The genome data will be useful for deeper studies of pathogen genomics.
Recent work on global diarrhea in the GEMS and MAL-ED studies has identified STh-expressing ETEC as a major contributor to diarrhea (40). The identification in this study of potent single infections by LT/CS7 and LT/STp/CS27b ETEC indicates that a focus on STh-expressing ETEC might be an oversimplification. Indeed, Del Canto et al. recently described clonal CS27b ETEC isolated from Chile, Pakistan, India, and Bangladesh (22), indicating that LT/STp/CS27b is an emerging virulent ETEC type. Given this, we propose that preventive efforts and vaccine strategies against ETEC should focus on globally spread ETEC lineages.
MATERIALS AND METHODS
Ethics statement. The collected samples were part of the icddr,b 2% surveillance system routine, approved by the Research Review Committee (RRC) and Ethical Review Committee (ERC) of icddr,b, Dhaka, Bangladesh, as described previously (10, 20). Samples were excluded from this study if agents other than Enterobacteriaceae and Vibrio were present or if the patient reported having had antibiotic treatment prior to hospitalization. Informed oral consent was obtained from adult patients, or from caregivers or guardians of children, for collection of stool specimens, according to the hospital policy. The ERC has approved verbal consent and voluntary participation, and subjects may refuse participation without compromise of their care. All patients were treated for their clinical conditions, e.g., dehydration, after sample collection. Consenting individuals were assured of the nondisclosure of their names and identities. The ETEC strains collected and analyzed in this study were deposited in the ETEC culture collection of the University of Gothenburg and in the group of Å. Sjöling. Permission to use the ETEC strain collection was granted by the Regional Ethical Board of Gothenburg, Sweden (Ethics Committee reference 088-10).
Bacterial growth and detection of ETEC and V. cholerae. Watery diarrheal stool samples were collected from the hospital ward at the International Centre for Diarrhoeal Diseases Research in Bangladesh (icddr,b) during the diarrheal peak season in March to April 2006 and brought to the adjacent laboratory for culture of bacteria. The clinical criteria for admission were moderate to severe watery diarrhea requiring hospitalization.
The collected liquid diarrheal samples were serially diluted in phosphate-buffered saline (PBS) and cultured on MacConkey agar plates for determination of E. coli CFU per milliliter. E. coli was distinguished from other lactose-fermenting bacteria, including Klebsiella, Citrobacter, and Enterobacter, by visual inspection. Only the E. coli-like colonies were picked for subsequent analyses. The same serially diluted samples were cultured in parallel on selective taurocholate-tellurite-gelatin agar (TTGA) plates to determine growth of V. cholerae (46). The V. cholerae serotype, Inaba or Ogawa, was identified by an agglutination test (47). To verify the presence of ETEC among the samples containing E. coli-like bacteria, ETEC toxin multiplex PCR was performed as described previously (48). In short, 6 to 10 colonies from the MacConkey agar plates were pooled, boiled in 500 µl MilliQ water, and tested for the presence of the genes encoding the ETEC toxins LT, STh, and STp. Individual colonies were collected from the ETEC-positive plates, and the toxin profile was retested and confirmed by multiplex PCR. One representative isolate was saved in freeze medium at −80°C. Furthermore, the colonization factor profiles of these representative isolates were determined by multiplex PCR and by dot blot analysis, as previously described (48, 49).
Determination of the proportion of ETEC to total number of E. coli bacteria in stool.
To determine the percentage of ETEC among the total number of E. coli-like colonies in the diarrheal stool samples, 50 E. coli-like colonies were randomly collected from the original ETEC-positive MacConkey agar plates. The colonies were individually boiled for 10 min in MilliQ water and tested by toxin multiplex PCR (48). The percentage of ETEC per total E. coli in each sample was calculated by dividing the number of toxin-positive colonies by 50 (the total number of analyzed E. coli colonies).
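As a concrete illustration of this calculation, the per-sample ETEC fraction reduces to a simple count over the colony calls. The sketch below is illustrative only; the multiplex PCR calls are hypothetical, not data from the study.

```python
# Hypothetical multiplex PCR calls for the 50 colonies from one sample;
# "none" marks a colony negative for all three toxin genes.
colony_calls = ["LT"] * 46 + ["LT/STp"] * 2 + ["none"] * 2

n_total = len(colony_calls)                          # 50 colonies per sample
n_etec = sum(call != "none" for call in colony_calls)
etec_percent = 100.0 * n_etec / n_total

print(f"ETEC fraction: {n_etec}/{n_total} = {etec_percent:.0f}%")  # 96%
```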
Collection of bacterial pellet and supernatant from diarrheal liquid for DNA and RNA analysis. From the original watery diarrheal samples, DNA and RNA were extracted for molecular quantification of DNA as well as for gene expression analyses of both ETEC and V. cholerae. The diarrheal samples were first centrifuged at 1,000 × g for 5 min to separate mucus and solid particles from the bacterium-containing supernatant. From the remaining supernatant, three separate samples of 1 ml each were collected and additionally centrifuged for 5 min at 16,000 × g, followed by separation of the bacterial pellet and supernatant. The first centrifuged sample was saved for subsequent determination by GM1-ELISA (described below) of the amounts of toxins secreted by the bacteria or associated with the bacterial cells. For this sample, the pellet was dissolved in PBS and sonicated before it was frozen at −70°C, and the corresponding supernatant was frozen separately at −70°C. These samples were stored for a maximum of 1 day at −70°C, after which they were analyzed. For the second and third 1-ml samples from the same stool sample, the bacterial pellets were collected and immediately frozen at −70°C for subsequent DNA and RNA extraction, respectively.
Detection of ETEC and cholera toxins in diarrhea samples by GM1-ELISA. In order to detect the toxins produced by ETEC and V. cholerae in the diarrheal samples, the pellet and supernatant samples were analyzed by GM1-ELISA for detection of CT and LT and by inhibition GM1-ELISA for detection of ST. The procedures have been described previously (48, 50). With the use of in-house monoclonal antibodies specific for ST (ST-1), LT (LT-80), and CT (CT-Wi), as well as an antibody that recognizes both LT and CT (LT-39), the total amount of ST, LT, or CT could be determined in the supernatant (secreted toxin) and in the pellet fraction (toxin associated with the bacterial cell membrane or cytoplasm). All antibodies were produced at the Department of Microbiology and Immunology, Sahlgrenska Academy, University of Gothenburg. Threefold dilution series and reference toxin standards of known concentrations were used to determine the amount of each toxin present in both the pellet and the supernatant of the samples.
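To illustrate how a dilution series and toxin standards translate an ELISA readout into a concentration, the sketch below interpolates a sample's optical density against a standard curve. All numbers are hypothetical and the interpolation scheme is an assumption for illustration; it is not the laboratory's actual calculation.

```python
import numpy as np

# Hypothetical standard curve: reference toxin concentrations (ng/ml)
# versus measured optical density (OD).
std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
std_od = np.array([0.08, 0.21, 0.55, 1.10, 1.80])

def toxin_conc(sample_od: float, dilution_factor: float = 1.0) -> float:
    """Interpolate a toxin concentration from the standard curve,
    working on a log-concentration scale, then undo the dilution."""
    log_conc = np.interp(sample_od, std_od, np.log10(std_conc))
    return (10.0 ** log_conc) * dilution_factor

# A sample read at OD 0.70 after a threefold dilution:
print(f"{toxin_conc(0.70, dilution_factor=3.0):.2f} ng/ml")
```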
DNA and RNA extraction from diarrhea samples. The second and third 1-ml samples from the diarrhea sample preparation were used for DNA and RNA extraction and subsequent qPCR analyses of gene copy numbers and gene expression. Bacterial DNA extractions from the frozen pellets were performed with the QIAamp stool DNA kit (Qiagen, Hilden, Germany) as described by the manufacturer and in previous studies (33, 51). The extracted DNA was kept at −20°C until analysis. RNA extraction was performed with the RNeasy kit (Qiagen, Hilden, Germany), and a DNase protocol (Qiagen) was included to remove genomic DNA as described previously (52). Extracted RNA was analyzed on an agarose gel to determine integrity, and the concentration was measured at 260 nm using a NanoDrop spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). cDNA was prepared from 200 ng RNA from each sample using the QuantiTect cDNA kit (Qiagen) with an additional DNase step included in the protocol. The cDNA was stored at −20°C until further analysis.
qPCR quantification of ETEC and V. cholerae in diarrhea samples. Quantitative PCR (qPCR) was performed to determine the total amount of ETEC bacteria in the DNA samples using primers specific for the eltB (LT), estA1 (STp), and estA2 to estA4 (STh) genes (33) and ctxB (CT) (52). For ETEC, a standard curve for each gene was generated by PCR amplification using the respective real-time PCR primers and a toxin-positive ETEC strain as the DNA template. The PCR products were purified using the QIAquick PCR purification kit (Qiagen), and the concentrations were determined on a NanoDrop spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). The PCR product copy number was determined as described previously using Avogadro's number and the molecular weight of the PCR product (53). Tenfold serial dilutions between 5 × 10⁸ and 5 copies/µl were prepared and stored at −20°C until further use. To determine the total number of V. cholerae bacteria in the diarrheal samples, real-time PCR was performed using the same conditions as for ETEC. A standard curve was generated by manually counting bacteria of the N16961 El Tor V. cholerae reference strain using a Neubauer improved counting chamber (Hausser Scientific, VWR International) at a magnification of ×40. The counted bacteria were diluted in 10-fold serial dilutions to the same concentrations as described above and used as a standard curve. Real-time PCRs were run in duplicate in 96-well plates (Applied Biosystems) with a total volume of 20 µl in each reaction mixture. The PCR mix contained 10 µl SYBR green real-time PCR master mix (Life Technologies), 10 pmol of each primer, 6 µl water, and 2 µl DNA. Negative controls and a standard curve were included in each PCR run. The number of bacteria per 1 ml of liquid diarrheal sample was calculated by using the settings for absolute quantification on the ABI 7500 real-time PCR instrument and by assuming that one gene copy equals one bacterium.
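The copy-number conversion referenced above (reference 53) rests on standard arithmetic: a double-stranded DNA amplicon of length L bp has a molar mass of roughly L × 650 g/mol, so a measured mass concentration converts to molecules via Avogadro's number. A minimal sketch follows; the amplicon length and concentration are hypothetical examples, not values from the study.

```python
AVOGADRO = 6.022e23       # molecules per mole
G_PER_MOL_PER_BP = 650.0  # approximate molar mass of one dsDNA base pair

def copies_per_ul(conc_ng_per_ul: float, amplicon_len_bp: int) -> float:
    """Convert a purified PCR product's concentration to copies per microliter."""
    grams_per_ul = conc_ng_per_ul * 1e-9
    moles_per_ul = grams_per_ul / (amplicon_len_bp * G_PER_MOL_PER_BP)
    return moles_per_ul * AVOGADRO

# Hypothetical 120-bp amplicon measured at 2 ng/ul:
print(f"{copies_per_ul(2.0, 120):.2e} copies/ul")  # ~1.5e10 copies/ul
```

Tenfold serial dilutions of such a stock then span the dynamic range needed for the standard curve.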
Determination of toxin gene expression per bacterium in stool samples. The same primers and qPCR conditions were used in reverse transcriptase qPCR (RT-qPCR) for gene expression analysis. The cDNA was used as the template, and each sample was analyzed in duplicate. A no-RT reaction mixture containing only RNA for each sample was run in duplicate to confirm that the detected expression was not due to genomic DNA amplification. The real-time PCR was run on an ABI 7500 using SYBR green and standard amplification conditions, as described above, in a reaction volume of 20 µl. The gene copy number per bacterial genome was calculated by dividing the number of gene transcripts per milliliter of sample by the gene copy number per milliliter of sample, as described previously (52).
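The normalization described here is a simple ratio. A minimal sketch with hypothetical numbers:

```python
def expression_per_genome(mrna_copies_per_ml: float,
                          dna_copies_per_ml: float) -> float:
    """Transcripts per genome equivalent: mRNA copies divided by gene
    (DNA) copies measured in the same volume of stool."""
    if dna_copies_per_ml <= 0:
        raise ValueError("no genomic copies detected; ratio undefined")
    return mrna_copies_per_ml / dna_copies_per_ml

# Hypothetical example: 4e6 transcripts/ml over 2e7 gene copies/ml
print(expression_per_genome(4e6, 2e7))  # 0.2 transcripts per genome
```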
Whole-genome Illumina sequencing. In order to investigate the genetics of the collected ETEC strains, and to be able to correlate the collected ETEC with worldwide ETEC infections, whole-genome sequence (WGS) analysis was performed. For E2264 (sample 4) and E2265 (sample 5), PacBio sequencing was performed to gain more complete information about the genetic details, since these strains were of specific interest due to their being found in ETEC-only infections (described below). The four other ETEC strains were sequenced using Illumina MiSeq sequencing. The ETEC strains, stored at −80°C, were plated on LB agar plates and incubated for 24 h at 37°C. One large bacterial colony from each plate was selected and washed in 300 µl MilliQ water, after which the bacterial DNA was extracted using the DNeasy Blood & Tissue kit from Qiagen according to the manufacturer's instructions. The DNA concentration was measured using a Qubit 2.0 fluorometer (Invitrogen). Sequencing libraries were prepared using the TruSeq Nano kit (Illumina, San Diego, CA) with a mean fragment length of 900 bp. Libraries were sequenced on the MiSeq platform with v3 chemistry, 2 × 300 bp, generating a coverage of >100× for all strains.
PacBio sequencing. DNA for Pacific Biosciences (PacBio) sequencing was prepared from ETEC isolates grown in LB medium to an optical density at 600 nm (OD 600 ) of 0.3. DNA was extracted by the Qiagen Genomic-tip 500/G kit according to the manufacturer's instructions (Qiagen, Hilden, Germany). For each sample, one DNA aliquot was sheared into 10-kbp fragments using a Genemachines HydroShear instrument (Digilab, Marlborough, MA, USA) and a second aliquot was sheared into 2-kb fragments using a Covaris instrument (Covaris, Woburn, MA). SMRTbell templates were constructed according to the manufacturer's instructions (Pacific Biosciences, Menlo Park, CA, USA). Each library was sequenced on 1 SMRT cell on a Pacific Biosciences RSII sequencer according to the manufacturer's instructions with 4-h movie time.
Assembly and annotation. The reads from the 10-kb PacBio sequencing library were assembled using HGAP3 from SMRTportal v2.3 (Pacific Biosciences, Menlo Park, CA, USA) with default settings. The 2-kb PacBio libraries were assembled using Falcon (Pacific Biosciences, Menlo Park, CA, USA) with settings allowing high coverage for plasmid assembly.
Illumina raw reads were trimmed and filtered using TrimGalore! (54), applying the quality cutoff Q30 and keeping only reads longer than 30 bp. Filtered reads were de novo assembled using SPAdes v3.10.1 (55), and the resulting assembly files were filtered for very-low-coverage contigs and contigs shorter than 500 bp before they were ordered against the complete E2265 PacBio sequence using the Mauve order contigs tool (56).
The resulting draft and complete genomes were annotated with the prokka annotation pipeline v. 1.1.12b (57) using the E24377A (CP000800.1) ETEC proteome as primary annotation source. Summary statistics from the sequencing, assembly, and annotation were collected using MultiQC v1.0 (58) and are shown in Table S1 (PacBio) and Table S2 (Illumina) in the supplemental material.
Statistical analyses. Statistical analyses were performed using GraphPad Prism version 7.0. P values of <0.05 were considered significant.
ACKNOWLEDGMENTS
The PacBio sequencing and assembly were performed by the National Genomics Infrastructure, NGI, Science for Life Laboratory, Uppsala, Sweden. We thank Christian Tellgren-Roth at the NGI Science for Life Laboratory, Uppsala, for bioinformatic support.
Maternal Trauma and Adolescent Depression: Is Parenting Style a Moderator?
Current research suggests that parents who experience symptoms of trauma transfer distress to their children. The purpose of this study was to understand the possible moderating effect of mothers' parenting style on this relationship for adolescents. This study differs from much of the existing literature in that the adolescents themselves are the reporters of their own well-being. The level of maternal trauma, use of parenting styles, and adolescent depression were examined for a clinical sample of 113 mother-adolescent dyads. Results indicate that mothers who experience high levels of trauma symptoms are more likely to parent using authoritarian or permissive behaviors. Although mothers' level of trauma alone was not related to adolescents' depression, an interaction was found such that mothers experiencing high levels of trauma symptoms who parented with an authoritarian style had adolescents who experienced more depression than those whose mothers were less authoritarian. These findings are discussed in light of the larger literature on "secondary trauma", or the transfer of distress, which often focuses on young children, with mothers as the reporters of both their own and their children's functioning. Clinical implications are also considered.
Introduction
Children are often negatively affected when mothers exhibit symptoms of trauma (Matsakis, 2004). This transfer of distressing symptoms is referred to as secondary trauma, and it can be exacerbated when the relationship with the traumatized person is one of dependence, as is the relationship between a mother and a child (Catherall, 2004). Secondary trauma in children is common when mothers experience a range of traumatic events including disasters, emotional abuse, interpersonal or community violence, war, or a host of other threatening or destabilizing situations (e.g., Kiser & Black, 2005; Van Ee, Kleber, & Mooren, 2012). Further, the more extensive a mother's trauma history is, the greater the negative outcomes for the child are (Thanker, Coffino, & Lieberman, 2013), and maternal trauma has an added negative impact on children, even when the children have experienced trauma themselves (Dulmus & Wodarski, 2000). Negative child outcomes from maternal trauma include problems with attachment, lower school performance, poor social functioning, and a higher incidence of being diagnosed with child disorders such as conduct disorder or attention deficit hyperactivity disorder (Rossman, 1999).
Many factors may moderate or protect against the impact of maternal trauma, including the mental health of the other parent, the stability of the spousal relationship, and the support of the family's community (Garbarino, Bradshaw, & Kostelny, 2005). One possible moderator that has received considerable attention is parenting, or the characteristics of interactions between a mother and a child. The majority of empirical evidence suggests that trauma-related stress negatively impacts the mother-child relationship and may undermine effective parenting (Lombardo & Motta, 2008). Kiser and Black (2005) conducted a meta-analysis of the clinical and research literatures addressing the connections between chronic traumatic exposure and family processes, specifically focusing on low-income urban families, and found that mothers exposed to high levels of trauma more consistently engaged in parenting characterized by insensitivity, reactivity, harshness, and lack of responsiveness. More recently, Van Ee and colleagues (2012) found that for mothers experiencing war-related trauma, higher levels of post-traumatic stress symptoms were related to less emotional availability to their infants and young children.
While several studies have documented the impact of maternal trauma on certain parenting behaviors and, subsequently, on children's well-being, few studies have examined the moderating effects of parenting style per se. Parenting style refers to a systematic pattern of parenting practices that encompasses a parent's overall approach to responsiveness and control (Maccoby & Martin, 1983). In her seminal work on parents' interactions with their preschoolers, Diana Baumrind (1971) identified three prototypical parenting styles: authoritative, authoritarian, and permissive. Authoritative parenting involves high levels of parental acceptance and involvement, adaptive control techniques, and granting age-appropriate autonomy to children. In contrast, authoritarian parenting is characterized by low acceptance and involvement, coercive control, and low autonomy granting, while permissive parenting is characterized by inappropriately high levels of indulgence and involvement, low control, and developmentally inappropriate autonomy granting. Baumrind's findings, and those of others who extended her work, consistently indicate that authoritative parenting is the most effective parenting strategy and is associated with a wide range of positive outcomes for children (Berk, 2005). Further, an authoritative parenting style has also been linked to high levels of child resilience, protecting the child from the negative impacts of family stress or hardship (Pettit, Bates, & Dodge, 1997). Therefore, it is possible that children may be protected from the negative impact of maternal trauma when mothers are able to engage in authoritative parenting.
However, a review of the literature on the intergenerational effects of trauma (Kaitz et al., 2009) illuminates how difficult it may be for mothers who have experienced high levels of trauma to engage in authoritative parenting. While not examining the three parenting styles by name, Kaitz and her colleagues describe compilations of parenting practices analogous to Baumrind's parenting styles. The authors summarize studies indicating that mothers who are affectively disturbed by trauma or who are highly symptomatic of posttraumatic stress are unable to provide sensitive guidance, regulation, or fun during encounters with their children. They explain these parenting challenges with evidence from the literature concerning maternal depression and anxiety. In particular, mothers' interactions with their children are disrupted by the traumatic experience such that depressed mothers are unable to accurately appraise and respond to their children's needs. This disconnect may result in the mother being unaware of her child's needs and not making appropriate demands or providing appropriate control for the child. This withdrawn stance, typical of the permissive parenting style, may be useful for mothers, as it can protect traumatized mothers from further distressing emotional arousal. Kaitz and colleagues also conclude that some mothers respond to trauma by exaggerating their responsiveness to children, resulting in an overly controlling manner of interaction. These parenting behaviors align with the authoritarian style.
In order to further our understanding of factors that may protect children from the deleterious effects of maternal trauma, the current study will examine the moderating effects of parenting style. Additionally, while the bulk of the research on maternal trauma and parent-child interaction has looked at school-age children with mothers as the reporters of child outcomes (e.g., Wyman et al., 1999), we will focus on the potential moderating effects of parenting style for adolescent children, with the adolescents themselves as reporters of their well-being. Thus, the present study addresses three questions. First, do adolescents report higher levels of depression when their mothers report the experience of trauma symptoms? Second, are mothers who report symptoms of trauma more likely to engage in a particular parenting style? And third, do mothers' styles of parenting serve as a moderator between maternal trauma symptoms and their adolescents' experience of depression?
Sample
The data for this study were collected from 113 families seeking mental health therapy at a university-based family therapy clinic in the eastern US. Families were selected for participation in the present study based on several requirements. First, the therapy treatment unit was a family that included one mother and at least one child between the ages of 12 and 18. If more than one child in the appropriate age group was present for therapy, selection of the child participant alternated between the oldest and youngest child present.
Procedure
Each family in the study initiated therapy services by first calling the clinic and completing an intake interview over the telephone. The intake worker gathered information such as the presenting problem, demographic information, and family structure. The assigned therapist contacted the family by telephone and explained that the first session would involve paperwork assessments and that this initial session would be free of charge.
During the first session, all present family members 12 years of age and older signed an informed consent agreement and completed the entire questionnaire battery, which assessed variables such as depression, trauma symptoms, relationship styles, issues of family conflict, family and social support, drug and alcohol use, relationship distress, and parenting practices. Of these, the Trauma Symptom Inventory, Parenting Practices Questionnaire, and Beck Depression Inventory were used for the purposes of this study.
Measures
Trauma Symptom Inventory. Mothers' experience of trauma was assessed using the Trauma Symptom Inventory (TSI; Briere, 1995). The TSI was created for use in clinical settings to assess the experience and severity of trauma-related symptoms. The original 100-item TSI has 13 subscales: three validity scales and ten clinical scales. However, due to the extensive battery of assessments required of new clients, the clinic administers a shortened version of the TSI, which includes 42 items and five clinical scales: intrusive experiences, defensive avoidance, dissociation, anger/irritability, and anxious arousal. These five scales correspond to the five DSM-IV diagnostic criteria for PTSD. The TSI outcome data have been normed to the general population (men and women, 18 years and older) and to university, clinical, and U.S. Navy recruit samples. Race accounted for only 2% to 3% of the variance on the TSI scales; therefore, Briere (1995) recommends that the TSI clinical scales not be adjusted for race.
Respondents are instructed to answer each question based on how often in the past six months each symptom was experienced, ranging from 0 "Never" to 3 "Often". The raw scores for each subscale are totaled, converted to T scores, and then compared to normative T scores. Higher total raw and T scores generally indicate greater degrees of symptomatology, with a total T score above 65 considered clinically significant (Briere, 1995). The five clinical subscales used on the clinic's abbreviated version are internally consistent (mean alpha coefficients range from .84 to .87) and have sufficient convergent and predictive validity (predicting PTSD status in over 90% of cases). Also, the TSI has high incremental validity, meaning its scores predicted "victimization variance" beyond what was accounted for by other trauma symptom measures (Briere, 1995: p. 43).
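The raw-to-T conversion is the standard linear standardization (T scores have a mean of 50 and a standard deviation of 10 in the normative sample). The following is a minimal sketch of that arithmetic; the normative mean and SD below are hypothetical, as the actual values come from the TSI manual.

```python
def t_score(raw_total: float, norm_mean: float, norm_sd: float) -> float:
    """Standardize a raw subscale total against normative data
    (T scores: mean 50, SD 10)."""
    return 50.0 + 10.0 * (raw_total - norm_mean) / norm_sd

# Hypothetical norms: mean 14, SD 7 for one subscale
print(t_score(28.0, norm_mean=14.0, norm_sd=7.0))  # 70.0, above the 65 cutoff
```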
Parenting Practices Questionnaire. In order to measure the mother's parenting style, the Parenting Practices Questionnaire (PPQ; Coolahan, 1997) was used. There are 62 items on the PPQ, which cluster on three parenting styles: authoritarian, authoritative, and permissive. The authoritative parenting scale consists of 27 items that measure dimensions of warmth, reasoning/induction, good-natured/easy-going, and democratic. The authoritarian parenting scale consists of 20 items along four dimensions of verbal hostility, corporal punishment, nonreasoning/punitive, and directiveness. Finally, the permissive parenting scale consists of 15 items that emphasize lack of follow-through, ignoring misbehavior, and lack of parenting self-confidence.
The PPQ instructs the participant to select the response that best indicates how often certain parenting behaviors are performed. The participant answers on a Likert-type scale that ranges from 1 "Never", through 3 "About half the time", to 5 "Always". Respondents receive scores on each of the three parenting scales. The three parenting style scales are internally consistent, with Cronbach alphas of .87, .74, and .77 for the authoritative, authoritarian, and permissive parenting scales, respectively. The PPQ has also been shown to have good construct validity, with 93% of items loading on only one of the three dimensions (Coolahan, 1997).
Beck Depression Inventory. In order to measure the child's report of depression, the Beck Depression Inventory (BDI; Beck, Rush, Shaw, & Emery, 1979) was used. The BDI consists of 21 items that describe the symptoms and attitudes typically expressed by depressed individuals. It is a proven, reliable measure, with good internal consistency (mean coefficient alpha = .86) and stability (correlation coefficients between .48 and .86). The BDI also has excellent content, concurrent, discriminant, construct, and factorial validity. Scores on the BDI are related to suicidal ideation, alcoholism, and adjustment disorders, and they discriminate depression from anxiety disorders (Beck, Steer, & Garbin, 1988).
The BDI begins with instructions for the participant to rate his or her feelings over the past week. Each response is reported on a Likert-type scale that ranges from 0 to 3, where higher scores indicate more severe depression symptoms. Responses to the 21 items are summed for a total BDI score. The total score indicates the level of depression: scores less than 10 indicate none to minimal depression, scores of 10-18 indicate mild to moderate depression, scores of 19-29 indicate moderate to severe depression, and scores of 30-63 indicate severe depression. Scores above 15 are considered clinically significant (Beck, 1996). Although the BDI was developed for use with adult populations, it is accurate in detecting depression among adolescents ages 13 to 18 (Sitarenios & Kovacs, 1999).
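The scoring rule above maps directly onto a small lookup; the sketch below simply encodes the severity bands as described:

```python
def bdi_severity(total: int) -> str:
    """Map a summed BDI total (0-63) to the severity bands given above."""
    if not 0 <= total <= 63:
        raise ValueError("BDI totals range from 0 to 63")
    if total < 10:
        return "none to minimal"
    if total <= 18:
        return "mild to moderate"
    if total <= 29:
        return "moderate to severe"
    return "severe"

print(bdi_severity(16))  # "mild to moderate" (also clinically significant, >15)
```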
Results
The present study was designed to examine the relation between maternal trauma symptoms and adolescent depression, as well as to examine the moderating effect of parenting styles.
For the first question, regarding the relation between the mother's level of trauma symptoms and her child's level of psychological distress, a Pearson correlation was conducted. Contrary to expectations, the results indicated no significant relationship between the mother's level of trauma and the child's level of depression, r(113) = .08, p = .38.
For the second study question, three Pearson correlations were conducted to test the relations between the mother's level of trauma symptoms and her use of each parenting style. Results differed by parenting style. Mother's level of trauma symptoms and her level of authoritative parenting behaviors were not significantly correlated, r(113) = -.07, p = .50. However, mother's level of trauma symptoms and her levels of authoritarian, r(113) = .25, p < .01, and permissive, r(113) = .45, p < .01, parenting behaviors were significantly positively correlated. Mothers who experienced higher levels of trauma symptoms were more likely to use authoritarian and permissive parenting behaviors than mothers who experienced lower levels of trauma symptoms.
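For readers who want to reproduce this style of analysis, the sketch below runs a Pearson correlation on simulated data; the variables are synthetic stand-ins, not the study's data, and the built-in association strength is arbitrary.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 113
trauma = rng.normal(40.0, 15.0, size=n)                    # simulated TSI totals
permissive = 0.4 * trauma + rng.normal(0.0, 12.0, size=n)  # built-in association

r, p = pearsonr(trauma, permissive)
print(f"r = {r:.2f}, p = {p:.3f}")
```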
Three step-wise multiple regressions, one for each parenting style, were conducted to test the third study question, investigating the moderating effect of parenting style. For each regression, an interaction variable was created by multiplying the mother's level of trauma symptoms by her respective level of parenting behaviors on the authoritative, authoritarian, and permissive parenting scores. The level of child depression was the dependent variable. Maternal level of trauma was entered first, the parenting style was entered second, and the respective interaction variable was entered third. As can be seen in Table 1 and Table 2, there was no moderating effect for authoritative or permissive parenting. However, a significant moderating effect was found for authoritarian parenting, as shown in Table 3.
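A moderated regression of this kind can be expressed compactly as depression ~ trauma + style + trauma × style, with moderation indicated by a significant interaction coefficient. The sketch below, on simulated data, is one plausible way to run such a model (here with statsmodels); it is an illustration under stated assumptions, not the authors' analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 113
df = pd.DataFrame({
    "trauma": rng.normal(40.0, 15.0, n),         # simulated TSI totals
    "authoritarian": rng.normal(44.0, 10.0, n),  # simulated PPQ scores
})
df["interaction"] = df["trauma"] * df["authoritarian"]
# Simulated outcome with a built-in interaction effect:
df["depression"] = 0.003 * df["interaction"] + rng.normal(0.0, 5.0, n)

X = sm.add_constant(df[["trauma", "authoritarian", "interaction"]])
fit = sm.OLS(df["depression"], X).fit()
print(fit.summary().tables[1])  # coefficient table, incl. the interaction term
```

In practice, predictors are often mean-centered before forming the product term to reduce collinearity between the main effects and the interaction.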
In order to understand the direction of the moderating effect for authoritarian parenting, children's depression means were examined under conditions of high and low maternal trauma and authoritarian parenting. For these analyses, the level of maternal trauma symptoms was divided into two categories based on a median split between "low scores" (0-39) and "high scores" (40-121). The level of authoritarian parenting was also divided into two categories based on a median split between "low scores" (25-43) and "high scores" (44-82). As can be seen in Table 4, under conditions of low maternal trauma, the mother's use of authoritarian parenting strategies does not appear to affect child depression. The effect does seem to be evident, however, under conditions of high trauma. Mothers experiencing high levels of trauma symptoms who parent with a high use of authoritarian behaviors have children who experience more depression than those whose mothers use low levels of authoritarian behaviors.
Discussion
The purpose of this study was to investigate the impact of symptoms resulting from maternal experience of trauma on parenting styles and adolescent depression. Previous research shows a transfer of distress from traumatized mothers to their children, and a deleterious effect of trauma on mothers' parenting (Dulmus & Wodarski, 2000; Kaitz et al., 2009).
The current study both aligns with and differs from previous research on the complex relationships between maternal trauma, parenting behaviors, and child psychological distress. One of the most interesting differences in the findings of this study is that it did not replicate findings of secondary trauma in children of mothers who experience high levels of trauma symptoms. A possible explanation for this finding is the use of adolescent participants. Much of the literature that documents secondary trauma in children uses samples of children younger than age 12. It is possible that adolescents are better protected from their parent's trauma symptoms than younger children, due to their more developed coping skills and support systems, such as peers or other significant adults like teachers or coaches. Younger children are more likely than adolescents to be dependent on and to spend time with their mothers. Thus, they may be more vulnerable to the transfer of distress. Additionally, adolescents are at a more advanced level of processing and may be able to better reason about and understand their parent's trauma in a way that protects them from a transfer of distress. It is also possible that adolescents who are negatively impacted by their parent's trauma symptoms respond in other ways not assessed in the current study. For example, adolescents may respond with anger or antisocial behaviors such as substance abuse, rather than with symptoms of depression or anxiety. It is also worth noting that the lack of transfer was found in a sample in which the adolescents themselves reported on their level of depression. The link between maternal trauma and negative outcomes for children is often found in studies in which the mother is the reporter of both her own trauma and her child's well-being.
Similar to previous research, the current study supported the link between the experience of trauma and less than optimal parenting. Mothers in the current study were more likely to use authoritarian or permissive parenting behaviors when they also experienced high levels of trauma symptoms. Furthermore, mothers' use of authoritarian and permissive parenting behaviors were highly correlated. In a secondary analysis, authoritarian behaviors and permissive behaviors were significantly positively related, r(111) = .35, p < .01, such that mothers who reported high levels of authoritarian behaviors were also likely to report high levels of permissive behaviors. Previous research supports this relation: stressed mothers are more likely than non-stressed mothers to be generally lenient in their behavioral standards for their children and yet also engage in harsh tactics for control (Cummings et al., 2000; Kaitz et al., 2009). The combination of internalizing trauma symptoms, such as depression and emotional numbing, and externalizing trauma symptoms, such as hyper-arousal and aggression, may lead mothers to tend towards both the withdrawn behaviors of permissive parenting and the hyper-vigilant behaviors of authoritarian parenting.
While maternal trauma per se did not seem to impact adolescents' level of depression in the current study, how mothers parent through their traumatic symptoms did appear to have an impact. Based on previous research, it is not surprising that mothers with trauma symptoms tend to use both authoritarian and permissive parenting behaviors; however, it was only the use of an authoritarian parenting style by these mothers that was related to adolescent depression. Mothers who experienced higher levels of trauma symptoms and who used higher levels of authoritarian parenting had adolescents who experienced higher levels of depression. Authoritarian parenting is characterized by high levels of criticism and coercive control and by low levels of warmth and acceptance (Baumrind, 1971). Some authors suggest that the authoritarian behaviors of traumatized parents stem from their tendency to go to exaggerated lengths to protect their children from the kinds of traumatic experiences they themselves encountered (Kaitz et al., 2009). In general, this type of authoritarian behavioral control may lead to infantilizing children and restricting their development of autonomy and confident decision making. This may explain, in part, why it was this parenting style specifically that was most problematic for adolescents. At the time of life when, developmentally, a child may be attempting to separate from parents and develop more independence, the restrictive nature of authoritarian parenting could be most distressing for the adolescent. It is important to remember, however, that an authoritarian parenting style alone was not related to adolescent depression. This parenting style had to occur in the presence of high maternal trauma for the effect to be seen. It may be that having a restrictive mother who does not demonstrate warmth and who, herself, seems distressed can have a debilitating effect on an adolescent.
When reviewing the findings of this study, limitations should be considered. First, it is possible that the use of a convenience clinical sample affected the means of the sample. Although the clinic serves an ethnically and socioeconomically diverse population, the sample may not be representative of the larger population, in that these were families who were both experiencing some family problems and were seeking help for the problem. Additionally, interpretations of the findings need to be made in the correlational context of the study design. Given that causal connections cannot be established, it must be acknowledged that the adolescents' depression could have preceded the mothers' use of an authoritarian parenting style or their trauma.
In spite of these limitations, the current study both adds to the body of knowledge on secondary trauma and suggests important avenues to consider as this work moves forward. First, while previous research has established that secondary trauma in children is likely when mothers experience trauma, these findings suggest that this relationship may be different for adolescents. Due to their potentially more developed cognitive processing and coping skills, as well as their increased amount of time spent away from their mothers, it is possible that the impact of maternal trauma is less direct for adolescents than it is for younger children. Future studies should seek to more clearly understand the potential and mechanisms for secondary trauma in adolescents. Second, using mothers as reporters of their own trauma and parenting style and adolescents as reporters of their own depression, this study was able to establish a connection between a particular parenting style and adolescent depression under conditions of high maternal trauma. However, given that the direct effect of maternal trauma was not found as it has been in previous work where mothers were the reporters of child outcomes, it raises the question of who is the optimal reporter of child well-being. It is certainly possible that the disengaged or hypervigilant symptoms of trauma may hinder mothers' ability to accurately assess their children's emotional and behavioral states. While younger children may not be capable of providing such information, the findings of this study suggest that future work on secondary trauma should consider including the input of additional reporters, such as teachers or other non-traumatized parents or caregivers, when examining child outcomes.
Finally, the results of the present study have implications for clinical practice. Maternal experience of trauma is nuanced, as is its impact on parenting behaviors and child well-being, and may be best assessed with a comprehensive assessment format (Briere & Scott, 2006). Given that authoritarian parenting behaviors were related to higher depression among adolescents under conditions of high maternal trauma, clinicians using psychoeducation could coach mothers to parent through their trauma symptoms in other ways to reduce the transfer of distress. Clinicians using a comprehensive, systems-oriented framework may facilitate a more meaningful dialogue with their clients about their traumatic experiences and how those experiences impact different aspects of their lives, including parenting behaviors.
Conclusion
The findings from this study do support the ongoing concern that when parents experience trauma, children may suffer too. Identifying the mechanisms through which children are affected, and how they may differ based on characteristics of both the child and the parent, is important as we strive to develop strategies and interventions to assist children who may experience secondary trauma.
Table 4. Children's depression (BDI) scores for low and high maternal trauma and authoritarian parenting.
Guanylyl Cyclase Activator Improves Endothelial Function by Decreasing Superoxide Anion Concentration
Introduction: Soluble guanylyl cyclase (sGC) activation in vascular smooth muscle has the potential to induce vasodilation. Chronic sGC activation enhanced vascular function in a congestive heart failure animal model. Therefore, sGC activation can lead to vasodilation and improvement in endothelial function. Objective: To investigate whether a selective sGC activator can revert endothelial dysfunction and to investigate its mechanism of action. Methods: Wistar rats were split into two groups: normotensive (2K) and hypertensive (2K-1C) rats. Intact aortic rings were placed in a myograph and incubated with 0.1 µM ataciguat for 30 min. Cumulative concentration-effect curves were generated for acetylcholine (ACh) to assess endothelial function. The pD2 and maximum relaxant effect (Emax) were measured for ACh. In endothelial cell culture, superoxide anion (O2•−) was detected by using fluorescent probes, including DHE and lucigenin. Results: Ataciguat improved the relaxation induced by acetylcholine in 2K-1C (pD2: 6.99 ± 0.08, n = 6) compared to the control (pD2: 6.43 ± 0.07, n = 6, p < 0.05). Relaxation of aortic rings from 2K rats was also improved (pD2: 7.04 ± 0.13, n = 6) compared to the control (pD2: 6.59 ± 0.07, n = 6, p < 0.05). Moreover, Emax was improved by ataciguat treatment in the 2K-1C aorta (Emax: 81.0 ± 1.0; n = 6) and the 2K aorta (Emax: 92.98 ± 1.83; n = 6), compared to the control (Emax 2K-1C: 52.14 ± 2.16, n = 6; and Emax 2K: 76.07 ± 4.35, n = 6, p < 0.05). In endothelial cell culture, treatment with ataciguat (0.1, 1, and 10 µM) resulted in a reduction of the superoxide anion formation induced by angiotensin II. Conclusions: Our findings indicated that ataciguat effectively enhanced endothelial function through the inactivation of superoxide anions.
Introduction
The vascular system encompasses a monolayer of cells known as the endothelium. These cells release mediators that control several physiological functions, such as vascular homeostasis, by maintaining the balance between vasodilation and vasoconstriction, platelet adhesion, and atheromatous plaque formation and progression, among others. Endothelial dysfunction (ED) is recognized as a predictor of the onset of cardiovascular diseases, as it not only accompanies their progression but also involves a reduction in the release of vasodilators and an increase in vasoconstrictor factors by endothelial cells. Nitric oxide (NO), endothelium-derived hyperpolarizing factor (EDHF), and prostacyclin are components acting as vasodilators. Endothelin-1 (ET-1), angiotensin II, thromboxane A2, and prostaglandin H2 [1-3] are key contributors acting primarily as vasoconstrictors. Furthermore, ED manifests with aging, even in individuals who are healthy [4,5]. With aging, blood vessel walls undergo thickening and increased stiffness, leading to a decline in various processes such as the regulation of vascular tone and the loss of endothelium-dependent factors responsible for inducing vasodilation, including nitric oxide (NO) [3,6].
Several studies have indicated that vascular relaxation in the aorta of renal hypertensive rats is impaired due to various factors, one of which is the elevated generation of superoxide anion (O2•−) [7-9]. At high concentrations, superoxide anions react with NO, so NO bioavailability can be significantly reduced when O2•− is present. This results in the formation of peroxynitrite, a potent oxidant that interacts with DNA, lipids, and proteins, triggering cell signaling and/or increasing oxidative injury and inducing necrosis or apoptosis [10].
These reactive species initiate cellular responses that span from subtle adjustments in cell signaling to effects on the activity of the soluble guanylate cyclase (sGC) enzyme. NO interacts with the sGC heme group, leading to a change in the conformation of sGC that renders it active. The sGC then catalyzes the conversion of guanosine triphosphate (GTP) to cyclic guanosine monophosphate (cGMP), which serves as an intracellular second messenger and activates cGMP-dependent protein kinase (PKG), inducing several biological effects. The sGC activity is modulated by sGC activators or sGC stimulators; stimulation depends on the oxidative status of the sGC heme moiety, and the ferrous heme (Fe2+) state is essential for activation by NO. Nitric oxide serves as a stimulator of sGC and displays low affinity for ferric heme (Fe3+). Consequently, the direct activation of sGC is regarded as a promising strategy for addressing various cardiovascular disorders linked to endothelial dysfunction [11].
Ataciguat (HMR 1766) binds to sGC when the iron is in the ferrous heme (Fe2+) state, the ferric heme (Fe3+) state, or even when this group is absent. This characteristic is very important in a biological environment with low redox potential, in which nitric oxide cannot exert its biological effects through sGC stimulation [11,12]. This compound has been suggested for managing cardiovascular diseases linked to oxidative stress and pulmonary hypertension [13]. Since its initial publication in 2004, only a limited number of studies have explored the effects of HMR-1766. Notably, in a rat model of congestive heart failure, prolonged treatment with ataciguat resulted in enhanced vascular function, increased NO sensitivity, and diminished platelet activation [13]. Moreover, within endothelial cells, ataciguat triggers the synthesis of nitric oxide through a mechanism reliant on nitric oxide synthase activation, which in turn potentiates ataciguat-induced relaxation of isolated aortic rings [14].
In this context, the objective of this study is to examine whether the sGC activator ataciguat can reverse the endothelial dysfunction caused by angiotensin II through the inactivation of superoxide anion.
Experimental Animals
Male Wistar rats weighing 180-200 g were employed in the study. The rats were kept in a light-dark cycle with unrestricted access to both standard rat chow and water. Renovascular hypertension was induced in the rats utilizing the two-kidney one-clip (2K-1C) model, as outlined in previous descriptions [7,15]. This model involves the constriction of only one renal artery to decrease chronic renal perfusion and activate the endogenous renin-angiotensin-aldosterone system. Following a midline laparotomy, animals anesthetized with tribromoethanol (2.5 mg/kg, i.p.) received a silver clip with an internal diameter of 0.20 mm placed around the left renal artery. Normotensive two-kidney rats (2K) underwent laparotomy without additional procedures. Post-surgery, the animals were placed in a heated enclosure at approximately 28 °C for optimal anesthesia recovery. In non-anesthetized animals, systolic blood pressure (SBP) was measured on a weekly basis using an indirect tail-cuff method (MLT125R pulse transducer/pressure cuff connected to the PowerLab 4/S analog-to-digital converter; AD Instruments Pty Ltd., Bella Vista, NSW, Australia). To enhance the blood pressure signal, animals were situated in a heated enclosure at approximately 30 °C. Rats were classified as hypertensive if their SBP exceeded 160 mmHg six weeks after surgery. All protocols adhered to the guidelines of the Animal Care and Use Committee of the Federal University of São Carlos and received approval from this committee (CEUA no. 1626101216).
Vascular Reactivity Studies
At the six-week post-surgery mark, male Wistar rats, both hypertensive and normotensive, underwent anesthesia with isoflurane. Following euthanasia via decapitation, the thoracic aortas were isolated, and 3 mm long aortic rings were placed in 5 mL bath chambers for isolated organ studies. The chambers contained Krebs solution at 37 °C and pH 7.4 and were continuously aerated with 95% O2 and 5% CO2. The aortic rings were mounted in an isometric myograph (Mulvany-Halpern model 610, DMT-USA, Marietta, GA, USA), and responses were recorded using the PowerLab 8/SP data acquisition system (ADInstruments Pty Ltd., Colorado Springs, CO, USA).
The aortic rings underwent tension adjustment to 1.5 g, which was recalibrated every 15 min during a 60 min equilibration period before introducing the designated drug. The assessment of endothelial integrity was based on the extent of relaxation induced by 1 µM acetylcholine following the contraction of the aortic ring caused by phenylephrine (0.1 µM). A ring was discarded if acetylcholine-induced relaxation was below 60% in 2K-1C or 80% in 2K rat aortas. Aortic rings from normotensive (2K) and hypertensive (2K-1C) rats were exposed to ataciguat (at a concentration of 0.1 µM) or a control solution (PBS) for a duration of 30 min. Subsequently, the aortic rings underwent three washes to eliminate the incubated drug. Following pre-contraction with phenylephrine (0.1 µM), an endothelium-dependent relaxation curve was generated using concentration-effect curves for acetylcholine. Each experiment utilized aortic rings obtained from separate animals.
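The pD2 (the negative log10 of the EC50) and Emax values reported in the Results are typically extracted by fitting a sigmoidal concentration-response model to each ring's relaxation data. A minimal sketch follows; the data arrays are hypothetical illustrations, and the Hill-type logistic is a standard choice rather than necessarily the exact model the authors fitted.

```python
# Fitting a Hill-type logistic to a cumulative ACh concentration-response
# curve to extract pD2 and Emax. Data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

log_conc = np.array([-9.0, -8.5, -8.0, -7.5, -7.0, -6.5, -6.0, -5.5])  # log10 [ACh] (M)
relax = np.array([2.0, 8.0, 20.0, 40.0, 62.0, 75.0, 80.0, 81.0])       # % relaxation

def hill(logc, emax, pd2, slope):
    # pD2 = -log10(EC50); relaxation rises sigmoidally with log concentration
    return emax / (1.0 + 10.0 ** (slope * (-pd2 - logc)))

(emax, pd2, slope), _ = curve_fit(hill, log_conc, relax, p0=[80.0, 7.0, 1.0])
print(f"Emax = {emax:.1f} %, pD2 = {pd2:.2f}")
```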
Nitric Oxide (NO) Measurements
HUVEC were seeded in 96-well plates at a concentration of 5 × 10⁴ cells per well. The plates were then incubated for 24 h in a humidified incubator at 37 °C with 5% CO2. Following the 24 h incubation, the treatment was removed, and the plates were gently washed with phosphate buffered saline (PBS). To detect intracellular NO, the cells were incubated with the selective fluorescent probe 4,5-diaminofluorescein (DAF-2, 10 µM) for 30 min. This probe reacts with dinitrogen trioxide (N2O3), an oxidation product of NO, producing the fluorescent compound DAF-2T [16].
This technique enables the quantification of the intracellular concentration of NO, considering its short half-life. The measurements were carried out using a SpectraMax Gemini XS fluorometer (Molecular Devices, San Jose, CA, USA) with an excitation wavelength of 435 nm and an emission wavelength of 538 nm. Angiotensin II incubation was implemented to mimic endothelial dysfunction, while A23187 was utilized to induce an elevation in intracellular calcium, thereby activating NOS.
Measurement of General Reactive Oxygen Species (ROS) Production
HUVEC were seeded in 96-well plates at a concentration of 5 × 10⁴ cells per well and incubated for 24 h in a humidified incubator with 5% CO2 at 37 °C. After the initial 24 h, the treatment was removed, and the plates were gently washed with phosphate buffered saline (PBS). Subsequently, the cells were incubated for 30 min with ataciguat at concentrations of 0.1, 1, or 10 µM, followed by a 30 min incubation with angiotensin II (Ang II) at 0.1 µM. The detection of intracellular superoxide radical (O2•−) was performed using dihydroethidium (DHE) at a concentration of 50 µM, with an incubation period of 20 min. Fluorescence intensity measurements were conducted using a fluorescence microplate reader (SpectraMax Gemini XS, Molecular Devices) with excitation at 510 nm and emission at 595 nm. Angiotensin II incubation was carried out to induce ROS formation, mimicking endothelial dysfunction, and to examine whether ataciguat decreased intracellular ROS concentration through a mechanism independent of nitric oxide formation, as detailed below. Additionally, tempol was employed as a positive control for an agent that reduces intracellular ROS concentration.
ROS detection was also carried out in the presence of hydroxocobalamin (Hcb), a nitric oxide scavenger, to assess whether the reduction of superoxide anions is not attributable to a reaction with NO resulting in peroxynitrite formation. Additionally, this experiment was conducted in the presence of an sGC inhibitor, 1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one (ODQ), to confirm whether the effect of ataciguat on ROS concentration is linked to intracellular cGMP accumulation. Cells were treated with ataciguat at 10 µM, ODQ, and Hcb for 30 min, followed by treatment with angiotensin II at 0.1 µM for another 30 min. The intracellular detection of superoxide radicals (O2•−) was accomplished using dihydroethidium (DHE) at 50 µM, with an incubation period of 20 min. The fluorescence quantification was conducted as previously described for DHE.
Measurement of Superoxide Anion Production
In this experiment, the lucigenin probe (5 µM), specifically designed for detecting superoxide anions, was employed. HUVEC were seeded in 96-well plates at a concentration of 5 × 10⁴ cells per well and were maintained in a humidified incubator with 5% CO2 at 37 °C for 24 h. The cells were then treated with ataciguat at concentrations of 0.1, 1, or 10 µM together with lucigenin for 30 min, followed by treatment with Ang II at 0.1 µM for an additional 30 min. The increase in fluorescence intensity was monitored using a fluorescence microplate reader (SpectraMax Gemini XS, Molecular Devices) at an excitation/emission wavelength pair of 510/595 nm. Angiotensin II incubation was conducted to induce O2•− formation, mimic endothelial dysfunction, and determine whether ataciguat reduced the intracellular O2•− concentration. Tempol was utilized as a positive control for an agent that reduces intracellular O2•− concentration.
Statistical Analysis
Statistical analysis of the results was conducted using GraphPad Prism version 3.0. One-way ANOVA was employed, with post hoc testing using the Newman-Keuls method, to assess statistical significance. Data from each set of experiments are presented as mean ± S.E.M., in which 'n' indicates the number of animals utilized. Values were considered significant at p < 0.05.
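As an illustration of this analysis pipeline, the sketch below runs the omnibus one-way ANOVA in Python with hypothetical group data. Because the Newman-Keuls procedure is not available in scipy or statsmodels, the closely related Tukey HSD test is shown as a plainly labeled substitute for the pairwise comparisons.

```python
# One-way ANOVA plus pairwise post hoc comparisons; group values are
# hypothetical. Tukey's HSD stands in for the Newman-Keuls test used
# in the paper, which common Python packages do not implement.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

pbs = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
ang_ii = np.array([2.4, 2.6, 2.3, 2.7, 2.5])
ang_atac = np.array([1.3, 1.4, 1.2, 1.5, 1.3])

F, p = f_oneway(pbs, ang_ii, ang_atac)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate([pbs, ang_ii, ang_atac])
groups = ["PBS"] * 5 + ["AngII"] * 5 + ["AngII+ataciguat"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```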
A similar outcome was observed in aortic rings from normotensive rats. Treatment of 2K aortic rings with 0.1 µM ataciguat (pD2: 7.04 ± 0.13, n = 6) improved endothelium-dependent relaxation induced by acetylcholine compared to 2K rings treated with PBS (pD2: 6.59 ± 0.07, n = 6). Notably, aortic rings from hypertensive rats treated with ataciguat exhibited superior results compared to normotensive rats without ataciguat treatment.
In HUVEC cells, we conducted NO quantification through fluorescence intensity measurement (FI) to assess whether the enhancement in aortic ring vasodilation in hypertensive rats treated with ataciguat is triggered by NO production.
We quantified ROS through fluorescence intensity measurement (FI) using a nonselective ROS probe (DHE) and a selective superoxide probe (lucigenin) to assess whether the enhancement in aortic ring relaxation in hypertensive rats treated with ataciguat is attributed to a reduction in reactive oxygen species.
As a positive control to induce superoxide production, we used angiotensin II.
To investigate whether the ataciguat-induced reduction in ROS and superoxide is attributed to sGC activation and/or NO formation, experiments were conducted in the presence of hydroxocobalamin (NO scavenger) or ODQ (sGC inhibitor), as can be seen in Figure 4. In the presence of hydroxocobalamin, no alteration in the ataciguat effect was observed (Ang II + ataciguat + Hcb: 2.56 ± 0.16, n = 5, p < 0.05), suggesting that the decrease in ROS induced by ataciguat is not dependent on NO production. However, sGC inhibition (ODQ) nullified the ataciguat effect (Ang II + ataciguat + ODQ: 5.86 ± 0.14, n = 5, p < 0.05), indicating the involvement of sGC in the ataciguat effect (Figure 4).
Discussion
The primary discovery of this study was that ataciguat enhances endothelial function in vessels, both with and without endothelial dysfunction, through the inactivation of superoxide anions.
To mitigate the direct vasodilation induced by ataciguat, we incubated aortic rings with a concentration that had been previously confirmed not to induce vasodilation [14]. Despite the low concentration, ataciguat was still effective in enhancing endothelium-dependent relaxation in isolated aortic rings, both with and without endothelial dysfunction. In rats with cardiac heart failure, chronic treatment with ataciguat demonstrated improvements in both endothelium-dependent (induced by acetylcholine) and non-endothelium-dependent (induced by an NO donor) relaxation in aortic rings [13]. These effects were linked to a decreased concentration of ROS in aortic rings, as measured after chronic ataciguat treatment [13]. A high concentration of superoxide anions was detected in the aortic rings of rats with renovascular hypertension (2K-1C) [7], contributing to endothelial dysfunction [15,17]. Therefore, the observed improvement in endothelium-dependent relaxation in this study following in vitro treatment with ataciguat can be attributed to a reduction in superoxide anions in aortic rings.
Increased ROS formation results in decreased nitric oxide (NO) bioavailability and is regulated by various endogenous neurohumoral systems [18]. For activation, NO requires the iron of the sGC heme group to be in the ferrous state (Fe2+). However, in conditions of endothelial dysfunction, the sGC iron is in its oxidized state (Fe3+) or may even be absent [19]. Consequently, heightened oxidative stress reduces the expression and impairs the NO-induced activation of heme-containing sGC, diminishing the efficacy of vasodilatory therapy with NO donors or eNOS-stimulating compounds.
To assess whether the enhancement in endothelial function induced by ataciguat resulted from a reduction in ROS formation, an experiment with human umbilical vein endothelial cells (HUVEC) was conducted using the dihydroethidium (DHE) probe. An increase in fluorescence intensity was observed when angiotensin II was added, indicating an elevation in ROS production. These findings align with previous publications that have demonstrated increased ROS production in cells stimulated with Ang II [17]. Treatment with various concentrations of ataciguat (0.1, 1, and 10 µM) decreased the intensity of DHE fluorescence in Ang II-stimulated cells, demonstrating results similar to treatment with tempol, an SOD mimetic. This result indicates that ataciguat reduces ROS concentration in HUVECs.
To determine whether the reactive oxygen species formed by Ang II stimulation in HUVECs is the superoxide anion (O2•−), the lucigenin probe was employed [20]. Our findings demonstrated that after treating HUVECs with angiotensin II, there was an increase in lucigenin fluorescence, suggesting that the ROS being formed is indeed the superoxide anion (O2•−). Using lucigenin yielded similar results to those obtained with the DHE probe, indicating that ataciguat induces a decrease in superoxide anion levels.
Oxidative stress is characterized by an increase in ROS coupled with a decrease in antioxidant capacity. ROS, particularly the superoxide radical (O2•−), react with nitric oxide (NO), forming peroxynitrite (ONOO−), a highly oxidant species capable of causing protein nitration and inducing lipid peroxidation [21]. To assess whether the reduction in ROS in endothelial cells induced by ataciguat does not occur due to a reaction with NO, an experiment was conducted using hydroxocobalamin (Hcb), an NO scavenger. It was observed that in the presence of Hcb, there was a decrease in ROS production similar to that seen with ataciguat treatment, indicating that the reduction in ROS induced by ataciguat is not dependent on NO production.
To explore whether the decrease in ROS induced by ataciguat is dependent on sGC stimulation, we utilized an sGC inhibitor (ODQ). ODQ nullified the effect of ataciguat, indicating that the ataciguat effect is contingent on sGC stimulation. The mechanism of sGC inhibition by ODQ involves the oxidation of the sGC heme [22]. Given that ataciguat activates sGC when the iron is in the ferrous heme (Fe2+) state, the ferric heme (Fe3+) state, or even without this group [11], our findings indicate that the reduction of superoxide anion induced by ataciguat is dependent on the sGC ferrous heme (Fe2+). Furthermore, this effect is not attributed to nitric oxide stimulation of sGC, as the NO scavenger did not nullify it. In addition, we conducted experiments in HUVECs by stimulating NO production (A23187) and superoxide anion generation (angiotensin II). In these experiments, we observed that ataciguat can prevent the degradation of NO induced by angiotensin II and can induce NO production by endothelial cells. These findings align with the ROS and superoxide detection results, suggesting that ataciguat can mitigate the effects of ROS. These results are consistent with the literature, which demonstrates that ataciguat is a potent stimulus for cGMP production in cells exposed to oxidative stress [23]. Thus, these results provide additional insights into the biological effects induced by ataciguat, which may enhance various physiological functions dependent on endothelial health, including blood pressure control, the formation and progression of atheromatous plaques, and platelet aggregation, among others.
It is essential to emphasize that the measurement of endothelial function is accessible for clinical use through methods such as flow-mediated dilatation, serving as a crucial tool to guide clinical interventions [24].
Conclusions
In summary, our findings collectively suggest that the activation of sGC by ataciguat can enhance endothelial function through the inactivation of superoxide anion, as illustrated in the central figure.
Figure 3. (A) Quantification of reactive oxygen species (ROS) in HUVEC. Values are mean ± S.E.M. fluorescence intensity obtained in HUVEC cells after a 30 min treatment with ataciguat, tempol, and angiotensin II. *** indicates the difference between ataciguat 0.1 µM + angiotensin II vs. PBS + angiotensin II and tempol + angiotensin II vs. PBS + angiotensin II (p < 0.001). (B) Quantification of superoxide anions (O2•−) in HUVEC. Values are mean ± S.E.M. fluorescence intensity obtained in HUVEC cells after a 30 min treatment with ataciguat, tempol, and angiotensin II. * indicates the difference between PBS + angiotensin II vs. PBS; ataciguat 0.1 µM + angiotensin II vs. PBS + angiotensin II; ataciguat 1 µM + angiotensin II vs. PBS + angiotensin II; ataciguat 10 µM + angiotensin II vs. PBS + angiotensin II; and tempol + angiotensin II vs. PBS + angiotensin II (p < 0.05).
Temporal Dynamics in Clinical and Laboratory Manifestations During COVID-19 Progression
An analysis of the evolution during hospitalization of clinical and laboratory findings from 78 confirmed COVID-19 patients and the associated risk factors.
Background
In December 2019, an outbreak of a respiratory disease in humans (COVID-19) raised an acute and severe global concern [1-3], and it was rapidly proved to be caused by a virus named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1,4]. By June 1, 2020, a total of 5,939,234 confirmed COVID-19 cases and 367,255 associated deaths were recorded all over the world [5]. The number of confirmed COVID-19 cases reported to the World Health Organization (WHO) continues to rise worldwide.
Although the majority of COVID-19 patients have an uncomplicated or mild illness (81%), some will develop a severe illness (14%), and approximately 5% will require intensive care unit treatment. Of those critically ill, the mortality is as high as 61.5% [6]. The early detection of these severe patients could decrease the mortality rate but remains a major clinical challenge. Currently, severe cases are mainly diagnosed empirically based on a set of clinical characteristics, such as respiratory rate (≥30 times/min), mean oxygen saturation (≤93% in the resting state), or arterial blood oxygen partial pressure/oxygen concentration ratio (≤300 mmHg) [7]. In fact, patients exhibiting these clinical manifestations have already progressed to the clinically severe phase with a high risk of death. Therefore, it is critical to develop new approaches to predict which cases will likely become clinically severe. This would anticipate the initiation of the appropriate treatment to reduce the risk of rapid deterioration.
Previous studies have focused on the difference between non-severe and severe patients [8-11]. It was reported that severe patients have higher white blood cell (WBC) and neutrophil counts, and higher levels of C-reactive protein, D-dimer, lactate dehydrogenase, and interleukin-6 [8,10,12,13]. However, the kinetic changes of these abnormal indicators throughout the course of COVID-19, especially during deterioration, and how they relate to the clinical features, remain unknown. In this study, we retrospectively investigated and compared the dynamic changes of the clinical presentation and laboratory indicators between stable non-severe cases and those with an initial mild illness that progressed into a severe case of COVID-19 during hospitalization. We hypothesized that the clinical and laboratory profiling of these COVID-19 patients may shed light on early warning indicators of severe cases.
Ethics statement
Data collection and analysis of patients were approved by the Ethics Committee of Union Hospital, Tongji Medical College, Huazhong University of Science and Technology (2020-0120). Written informed consent was waived as part of a public health outbreak investigation, and oral consent was obtained from each patient.
Patients
This was a retrospective single-center study that included 78 confirmed COVID-19 patients treated at the Wuhan Union Hospital from January 1, 2020, through February 30, 2020. The flow chart, from a total of 243 patients down to the 78 patients of the study, is shown in Figure S1. Of the 243 patients, 158 cases were classified as severe or critically ill at admission, and seven cases lacking relevant data were excluded.
The clinical classification followed the Novel Coronavirus Pneumonia Diagnosis and Treatment Plan (Trial Version 7) developed by the National Health Committee of the People's Republic of China [7]. Laboratory confirmation of SARS-CoV-2 infection was performed within biosafety level 2 facilities at the Wuhan Union Hospital [14].
Baseline Data Collection
The epidemiological characteristics, clinical symptoms, laboratory findings, medical treatment, and outcome were obtained from electronic medical records. The laboratory tests included routine blood tests, biochemical indicators, lymphocyte subsets, plasma cytokines, inflammation markers, and coagulation function.
Follow-up
After admission, the patients were re-examined for laboratory indexes, and symptoms, treatments, and outcome events were recorded. The endpoint was discharge or death. The discharge criteria were based on the Novel Coronavirus Pneumonia Diagnosis and Treatment Plan (Trial Version 7) [7].
Statistical analysis
Student's t-tests or Mann-Whitney tests were applied to continuous variables, while chi-square tests were applied to categorical variables. Time-course data were aligned to the date of symptom onset and aggregated over 3-4-day intervals to account for data not being available for all patients at all time points during the disease course. For each interval, the means were compared by the Sidak-Bonferroni method. Time from symptom onset to discharge was compared using the log-rank (Mantel-Cox) test. All analyses were performed using SPSS 13.0 (SPSS Inc.). p < 0.05 was considered statistically significant.
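A minimal sketch of these between-group comparisons in Python is shown below; the input file and column names are hypothetical placeholders, and scipy stands in for the SPSS procedures actually used.

```python
# Sketch of the exacerbated-vs-stable comparisons; file and column
# names are hypothetical.
import pandas as pd
from scipy.stats import ttest_ind, mannwhitneyu, chi2_contingency

df = pd.read_csv("covid_cohort.csv")
exa = df[df["group"] == "exacerbated"]
sta = df[df["group"] == "stable"]

# Continuous variables: t-test if roughly normal, Mann-Whitney otherwise
print(ttest_ind(exa["bmi"], sta["bmi"], equal_var=False))
print(mannwhitneyu(exa["age"], sta["age"], alternative="two-sided"))

# Categorical variables (e.g., smoking history): chi-square test
table = pd.crosstab(df["group"], df["smoker"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```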
Demographic and clinical characteristics of COVID-19 patients
A total of 78 patients with COVID-19 were enrolled in the study. The flow chart is displayed in Figure S1. They were all classified as non-severe patients on admission and received antiviral therapy after admission, such as ribavirin, arbidol, and/or IFN-α [7].
Of the 78 patients, 18 patients exacerbated even with treatment after admission and then rapidly developed severe illness. The conditions of the remaining 60 patients were stable and gradually improved until discharge. Based on this, the 78 patients were divided into two groups, the exacerbated group (18 patients) and the stable group (60 patients). Table 1 summarizes the patient demographics and clinical characteristics. The median age of the two groups was statistically different, at 57.5 years for the exacerbated group and 35 years for the stable group (p = 0.0001). The body mass index (BMI) was also significantly different between the two groups, with 23.98 (IQR: 21.92-28.54) for the exacerbated patients and 22.77 (IQR: 21.09-24.46) for the stable patients (p = 0.0441). The percentage of patients with a smoking history was 22.2% and 3.3% for the exacerbated and stable groups, respectively (p = 0.0083).
Time To Event Analysis In COVID-19 Patients
Of the 18 exacerbated patients, 17 cases eventually recovered and were discharged. The one patient who died (on day six after symptom onset) had been admitted with gastric cancer and was confirmed as COVID-19 positive after surgery. The median time from onset to exacerbation was 7.5 days (IQR: 2-11.5) for the exacerbated patients (Fig. 1).
The disease course was defined from the day of symptom onset to discharge, and the data revealed that the median disease course was longer for the exacerbated group compared with the stable group patients (23 vs. 19 days, p = 0.0015, Table 1 and Fig. S2). The average hospitalization duration was 11.5 (IQR: 7-16) days for stable patients and 17 (IQR: 14.75-23.25) days for the exacerbated patients (p = 0.0006). Nevertheless, the two groups did not differ in the median duration from illness onset to a first hospital visit and hospital admission (Table 1), which indicated that the deterioration of patients after admission was not due to pre-hospitalization delay.
Dynamic Changes In Febrile Patients
Of the 78 patients, a total of 88.5% exhibited fever (temperature > 37.3 °C) during COVID-19 (Table S1). After 12 days from onset, the highest temperatures of all febrile patients subsequently improved, and the trend and magnitude were similar between the exacerbated and stable groups (Fig. 2A). The proportion of febrile patients also significantly declined on day 12 (Fig. 2B), but there were no differences in the proportion of febrile patients between the two groups within 12 days from onset. Interestingly, at 13-17 days from symptom onset, the proportion of febrile patients in the exacerbated group was significantly higher than in the stable group (33.3% vs. 5%, p = 0.001, Table S1, and Fig. 2B), indicating a longer duration of abnormal temperature in the exacerbated group.
Dynamic Profiling Of Laboratory Findings
To elucidate the dynamic changes in laboratory indicators throughout the disease course, routine blood tests and lymphocyte subsets (Fig. 3), biochemical and coagulation indicators (Fig. 4), and plasma cytokines and inflammatory biomarkers (Fig. 5) were collected during hospitalization at 3-day or 4-day intervals.
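As an illustration of how such time-course data can be aligned to symptom onset and binned into these intervals, a sketch follows; the input table and column names are hypothetical.

```python
# Aligning laboratory values to days since symptom onset and averaging
# over the 3-4-day intervals used in Figs. 3-5. Names are hypothetical.
import pandas as pd

labs = pd.read_csv("lab_timecourse.csv")  # long format: one row per test
bins = [0, 3, 7, 12, 17, 22, 27]
labels = ["0-3", "4-7", "8-12", "13-17", "18-22", "23-27"]
labs["interval"] = pd.cut(labs["days_since_onset"], bins=bins,
                          labels=labels, include_lowest=True)

# Mean of each indicator per interval and group
print(labs.groupby(["interval", "group"], observed=True)["wbc"].mean())
```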
At days 0-3 from symptom onset, the exacerbated cases exhibited higher WBC (8.1 vs. 4, Fig. 3E). The level of D-dimer between the two groups showed the same trend as before (Fig. 4I).
However, the differences in CD4+ T cells between the two groups did not reach statistical significance at any time point (Fig. 3G). There was a noticeable rise in the platelet count (PLT) at this time in the exacerbated patients, which was significantly higher than in the stable patients (Fig. 3D). Additionally, the WBC (Fig. 3A), neutrophil counts (Fig. 3B), and the NLR (Fig. 3C), LDH (Fig. 4D), and IL-6 (Fig. 5C) levels were greatly decreased during this period, all of which showed the same tendency between the two groups.
At days 23-27 from onset, except for PLT (Fig. 3D), there were no discernible differences in any of the laboratory indicators between the exacerbated and stable patients (Figs. 3-5).
Conclusions
This is the first study to describe the clinical features and consecutive laboratory characteristics of COVID-19. Of the 78 patients who were classified as non-severe at admission, 18 patients progressed to severe illness (the exacerbated group) after hospitalization. In comparison, the remaining 60 patients (the stable group) were stable and gradually recovered. In our study, the exacerbated group had a higher frequency of older patients and smokers when compared with the stable group. These data show that advanced age and smoking are important risk factors for disease progression.
In this study, the median time from onset to deterioration was 7.5 days for the exacerbated patients. Before this time point (days 0-7 from onset), compared with the stable group, we observed significantly higher WBC and neutrophil counts, higher levels of NLR, LDH, and D-dimer, and lower levels of albumin in the exacerbated group. We suggest that these abnormal laboratory indexes might be the earliest warning indicators to predict the patients with a high risk of deterioration before the appearance of the clinical manifestations of severe COVID-19. In the second week (days 8-12), lower lymphocytes, CD3+ T cells, and CD8+ T cells, and higher levels of CRP, ESR, ALT, AST, and IL-6 subsequently appeared in the exacerbated patients.
By evaluating the condition of patients with fever, we observed that the highest temperature and the proportion of febrile patients both declined on days 13-17 from symptom onset. Day 12 from symptom onset might be an inflection point, suggesting an improvement in the disease course. In the late third week (days 18-22), except for AST, albumin, PLT, and ESR, the remaining laboratory indicators of the exacerbated patients greatly improved and did not differ from the stable patients. Notably, the median disease course of stable patients was 19 (IQR: 16-23) days, suggesting that most stable patients would recover and be discharged during that week. In the fourth week (days 23-27), except for PLT, there were no differing indicators between the two groups. A study reported that platelets could trigger B cells to increase their production of IgG1, IgG2, and IgG3, suggesting that the platelet content can contribute to B-cell function and alter adaptive immunity [15,16]. Similarly, the elevated level of PLT, which was observed in the late stage of the disease course, was suggested to be associated with the recovery of COVID-19 in the current data.
Based on clinical symptoms alone, it is difficult to determine whether a patient is at risk of progression. There was no difference in symptoms between the exacerbated and stable groups at admission. Nonetheless, notable differences in laboratory indexes were observed during the whole course of COVID-19, indicating the importance of monitoring laboratory indexes in a timely manner, not just the clinical features. However, considering the rapidly growing number of COVID-19 cases, the inadequate responses, and insufficient medical staff, it was difficult to take care of every patient, and extra blood tests would undoubtedly have increased the burden on the nurses and on patients as well. In addition, such testing implies more patient-clinician interaction and laboratory personnel exposure, increasing the risk of transmission. Therefore, based on our findings, monitoring the laboratory indicators selectively according to the timetable from symptom onset may help to efficiently diagnose the patients at high risk of rapid deterioration, before clinical manifestations. In the early stage, the clinician should pay more attention to the WBC and neutrophil counts, as well as the levels of NLR, LDH, and D-dimer. Any increase in these levels should be an alert for disease progression, and then the focus should be on the levels of CRP, IL-6, AST, and ALT. An elevation of these indicators may be a sign that the COVID-19 patient has progressed or has a high risk of progressing to severe conditions. According to these changes, clinicians may take effective measures in a timely manner to reduce the risk of deterioration and adjust the schedule and items of follow-up indexes to benefit the patient as much as possible.
This study has some limitations. First, it was a retrospective study in a single center in Wuhan, which may have resulted in unavoidable bias.
Second, the sample size of exacerbated patients was smaller than that of patients without deterioration. Third, an accurate risk assessment model has not been established due to the small number of exacerbated cases enrolled in the current study.
In summary, advanced age and smoking history are risk factors for COVID-19 progression. High WBC and neutrophil counts, and high levels of LDH and D-dimer, could be early indicators for patients with a high risk of progression in the first week after symptom onset. The secondary changes included elevated IL-6, CRP, ESR, AST, and ALT in the next week, and, after that, changes in lymphocyte subsets. Thus, we have shown that it is feasible to predict severe COVID-19 patients based on a panel of laboratory indexes. Our data offer an optimal laboratory detection time point, which may help to efficiently anticipate the recognition of suspected patients with a high risk of progression into severe COVID-19.
Declarations
Ethics approval and consent to participate: Data collection and analysis of patients were approved by the Ethics Committee of Union Hospital, Tongji Medical College, Huazhong University of Science and Technology (2020-0120).
Consent for publication: Not applicable
Availability of data and material: All the clinical data can be shared if necessary.
Competing Interests: The authors declare that they have no competing interests. Authors' contributions: QZ conceived the idea, designed and supervised the study, had full access to all data and took responsibility for the integrity of the data. LLY and WBP contributed to collecting, analyzing, and interpreting the data. XSW, XRW and WBY recruited the patients. XW,
Supplementary Files
This is a list of supplementary files associated with this preprint: supplementaryinformation.doc
Imagining Adventure in Middlebrow Fiction
The first half of the twentieth century saw the rise of a new type of novel that straddled the divide between popular entertainment and legitimate culture by combining 'high' and 'low' literary forms and catering en masse for the tastes of an expanding middle-class reading public. In this article we want to explore the ways in which the novels La Madone des sleepings (1925) by the bestselling French novelist Maurice Dekobra and Venetiaansch avontuur [Venetian adventure] (1931) by the Dutch author Johan Fabricius fit into this broad category of the middlebrow novel and how their use of adventure as a structural device might complicate the common view of the middlebrow novel as a form of domestic realism.
Maurice Dekobra is often introduced as the author who invented a new type of novel, viz. the so-called cosmopolitan novel. As his biographer, Philippe Collas, notes, "avec ses premiers romans cosmopolites, Dekobra rompt avec une tradition pour se lancer vers de nouveaux horizons, ouvrant de nouvelles voies, offrant des sensations inexplorées jusqu'alors" (241). Immediately, however, the novelty of this cosmopolitan dimension is qualified. Collas rightly points out that cosmopolitanism as a novelistic theme or setting was as such far from unprecedented. Instead, he claims, it is the realistic treatment of the different locales, all over the world, that makes Dekobra's successful novels stand out from earlier explorations of foreign territories, as they can be found for instance in the works of Chateaubriand, Loti, and Stevenson (241). A similar argument is made by Tom Genrich, who argues that "Dekobra participates in the inter-war realist-historicist illusion, with his major novels containing passages that privilege Reality or Truth over their artistic representation" (124). Although there might certainly be some truth in these observations, it remains to be seen to what extent the combination of cosmopolitanism and realism in novels such as La Madone des sleepings really epitomizes something new. Instead, there are good reasons to argue that Dekobra's cosmopolitanism is in at least two ways firmly rooted in the prewar cultural context, and hence in the context of a bygone past.
To begin with, La Madone des sleepings is a cosmopolitan book in that it explores different geographical contexts, moving from the very heart of civilization (London) to the exotic (yet not very attractive) borderland of the Caucasus (Nikolaïa). Such a geographical organization is reminiscent of some major 19th-century novel types (such as the historical novel, see for instance Moretti 1998) and, in particular, of the adventure novel, which saw its heyday from the 1870's onwards (Letourneux 2010, 9). The typical novel of adventures is often based on a kind of conflict between the ordinary, western world and an univers dépaysant at the very end of civilization, in which the hero is threatened by sauvagerie (Letourneux, 20, 22). The development of the genre is clearly linked to the 19th-century notion of empire, the exploration and 'cultivation' of hitherto 'unknown' and 'savage' territories, and - as Rosalind Williams shows, drawing on the example of, among others, Jules Verne - can be seen as a reaction to the growing intuition at the end of the century that the expansion of 'human empire' is coming to a close, as the 'mapping' and 'conquering' of the world is reaching its completion.
Dekobra's books, however, are cosmopolitan in yet another sense. Their main characters are cast as members of a high society that is depicted as profoundly international. Gérard Dextrier, the male protagonist of Mon coeur au ralenti and La Madone des sleepings, for instance, is a French member of the high society who is in need of money, begins a new life in the US, marries a wealthy American woman and finally moves to London, where he meets Lady Diana Wynham, the female protagonist of La Madone des sleepings and La gondole aux chimères (and, later on, La Madone à Hollywood), who was born in Scotland, lives in London, has travelled in sleeping cars throughout the continent and ends up living in Venice. These people of the smart set seem to cross national borders without any difficulty. Hence, Dekobra's cosmopolitanism seems to be rooted in a kind of nostalgic longing for the prewar Europe of the fin de siècle and the belle époque, in which the upper class could travel freely across the civilized world - until World War I passports were generally not required for travel inside Europe. Travel was facilitated by the expanding infrastructure of modern tourism, which got an enormous boost in the last quarter of the 19th century, due for instance to the increasing density of the railway system and the construction of luxurious grand hotels (see for instance Gottwaldt 2005 and Knoch 2005). These landmarks of modern tourism, as they had materialized by the end of the 19th century, are key to the spatial and geographical organization of Dekobra's early cosmopolitan novels: the Orient Express, grand hotels (such as the Adlon Hotel in Berlin (since 1907) and the Bristol Hotel in Vienna (since 1892)), restaurants, tourist attractions like Venice, etc. If the kind of cosmopolitanism epitomized by the novels of Dekobra became a real hype in novel-writing in the 1920's and 1930's (Genrich 2004), on a par with, for instance, the fashion of novels set in hotels, from Thomas Mann's Der Tod in Venedig (1913) to Vicky Baum's Menschen im Hotel (1929) (see Matthias 2006), this hype was at least anticipated since the end of the nineteenth century and had something definitely nostalgic, and even conservative, in its depiction of social relations.
The invention of the middlebrow
Given the widespread character of literary cosmopolitanism in the 1920's and its dependence on prewar cultural models, Dekobra was neither the writer who introduced cosmopolitanism into literature, nor did he invent the genre of the cosmopolitan novel, as the latter is clearly based on older generic models (such as the novel of adventures). What he did, rather, was to adapt older themes, geographies and generic models to new cultural constellations. What is crucial in this process of reinterpretation is that his novels are not only set in cosmopolitan contexts, yet are cosmopolitan themselves, in that they almost immediately began to circulate internationally and can be seen as early examples of the phenomenon of the international bestseller. In this way, Dekobra made the elite cosmopolitanism of the turn of the century, epitomized for instance by luxurious tourism, accessible to a much broader audience, ingeniously mixing elitist social assumptions and highbrow cultural references with popular generic models such as the novel of adventures. Hence, what is new about Dekobra is not so much his cosmopolitanism or his realistic journalistic style, but the fact that he, together with many others, participated in the relatively new and rapidly expanding cultural domain of middlebrow culture, which emerged more or less simultaneously with the rise of modernism and its mechanisms of distinction (for a genealogy of these mechanisms, see McGurl 2001).
The fact that the link between Dekobra and middlebrow culture has never been systematically explored - and that he is mainly remembered as the popular author par excellence, an essentially lowbrow figure - has several reasons. First, the emergence of the middlebrow as a distinctive cultural sphere in between the high and the low has been more pronounced in the Anglo-American cultural context, in contrast to other literary contexts (such as the French), where a more dichotomous (instead of tripartite) model for thinking cultural hierarchies (high vs. low) seems to have prevailed. Recent scholarly research, however, is finding more and more evidence of the truly transnational character of the emergence of middlebrow culture, which gave rise to similar effects in different European countries. Second, the middlebrow novel has typically been understood as a specific subtype of the novel. Although, in her seminal book on the 'feminine middlebrow novel', Nicola Humble admits that "the middlebrow literature of this period encompassed a wide range of genres, including romances and country-house sagas, detective stories, children's books, comic narratives, domestic novels and the adolescent Bildungsroman" (12), she (like most others) de facto gives priority to books that concentrate on femininity, middle-class consciousness and domestic life (mainly in the context of the modern city) and that can be understood as updated versions of 19th-century realist domestic fiction (11). From this narrow interpretation of the middlebrow it follows that the novels by Dekobra can only be understood as more marginal examples, given their affinities with the novel of adventures, which is almost by definition at odds with the realist domestic novel. Finally, there is a third reason why Dekobra may be conceived of as a rather atypical case. Though it is undoubtedly true that the middlebrow novel is by definition a hybrid form "that straddles the divide between the trashy romance or thriller on the one hand, and the philosophically or formally challenging novel on the other" (Humble, 11), it is equally true that the successful middlebrow novel is increasingly conceived of as a form of its own, with its own generic features, its own thematic concerns and stylistic peculiarities. From this it follows that, in the interwar years, the middlebrow novel increasingly functioned independently from highbrow modernism as well as from popular genres, according to its own models for the production, distribution and consumption of literature. A common feature, for instance, which sets the middlebrow novel apart from highbrow as well as lowbrow genres, is the concern for a certain kind of pedagogy; generally speaking, middlebrow novels want to instill in their readers a savoir-vivre. In order to facilitate the transfer of models and values it is essential for the reader to experience a kind of homology between the lives of the fictional characters and his/her own life. In contrast, one could argue that the early cosmopolitan novels by Dekobra represent an earlier stage in the process of the integration of highbrow and lowbrow elements and generic patterns, since both are still fairly easily discernible. On top of that, the books contain numerous elements that seem to complicate the transfer of models and values which is essential to the pedagogy of middlebrow fiction.
It follows that the case of Dekobra is a rather complex one. His work definitely instantiates new developments in literary fiction, related to the rise of the so-called middlebrow novel, and yet his books are clearly reminiscent of older novelistic, cultural and social models, and they are atypical with regard to some of the main directions of this new kind of novel writing. In order to explore the ambiguous position of Dekobra vis-à-vis the middlebrow, we will compare his major work La Madone des sleepings with a novel by the Dutch novelist Johan Fabricius, whose position as a bestselling writer is similar to Dekobra's, and who is known for having introduced an element of cosmopolitanism and adventure into the Dutch literary context of the 1930s. This juxtaposition can help to disentangle some of the abovementioned paradoxes in Dekobra's cosmopolitan novels. Conversely, since these cases share a sense of cosmopolitanism and adventure, they can add something to the debate on the interwar middlebrow novel, which has primarily been associated with femininity, with domestic life and the home, and with the nation.
La Madone des sleepings: coming home, and leaving again
As Nicola Humble points out, the 'feminine middlebrow novel' showcases a marked "fascination with domestic space" (11) (and the gender and class relations attached to it), which can to a certain extent be understood as a relic of the 19th-century realist novel. In contrast, the novel of adventures appears to be based on a completely different spatial and geographical structure, in which the home is left behind and domesticity gives way to travel and to adventurous action in all kinds of exotic places, often situated at the 'edges' of the civilized world. This distinction, however, is far from absolute. Humble, for instance, argues that the "middlebrow women's fiction of this period" was not only concerned with domesticity, but also "indulged in a curious flirtation with bohemianism" (5), which can be understood as the integration of the idea of adventure into the universe of the home. Conversely, Mathieu Letourneux convincingly argues that the place of the home is not absent from the novel of adventures, as "la forme du récit distingue deux moments enchâssés, l'aventure et le quotidien": while l'aventure forms the story's subject matter ("qui en forme la matière"), le quotidien, "ce chronotope que l'on quitte et qu'il s'agit de retrouver, joue un rôle essentiel dans la dynamique du récit".
In other words: the home often functions as both the point of departure and the ultimate destination of the hero's adventurous travels. Hence, many novels are not so much concerned with the home, or with adventure as such, as with the negotiation between these two poles, which allows for an endless range of variations between 'home' and 'away'.
At first sight, La Madone des sleepings qualifies as a typical adventure story. The male hero of the story, the Frenchman Gérard Dextrier, also known as prince Séliman, secretary of Lady Diana Wynham, leaves their house at Berkeley Square in London in order to try to secure her interests in an oil concession in Georgia, on Soviet territory. First he travels to Berlin, where he meets two Russian characters: Varichkine, a representative of the Russian government in Europe, and his wife Madame Moravieff, who will turn out to be Gérard's main opponent in his quest for oil (and money) to relieve the financial needs of Lady Diana. From Berlin he travels further east, by train, via Vienna, 'Budapest, Brasov, Bucarest, Constanza' (139) and Constantinople, where he embarks for Batumi in Georgia. Upon finally arriving in the small seaside town of Nikolaïa, he is arrested by the Soviets, at the instigation of Irena Moravieff, and awaits his death in an underground prison. This movement through space can be seen as a movement away from home and from western civilization, towards a place that is the complete opposite of a cozy and comfortable house and is situated at the frontiers of civilization. If the houses the protagonists inhabit in the civilized world are depicted as luxurious storehouses of culture - with "colonnes doriques" (28), "quatre copies de déesses grecques [qui] cachaient mal leur impudeur millénaire au fond de leurs niches de marbre rose et gris" (28-29), "livres à reliures anciennes" (255) - in his prison Gérard is bereft of all marks of civilization, and will die naked: "Vous vous dévêtirez avant de mourir… Ce sera une sensation nouvelle pour vous… Vous vous souviendrez alors de vos garçonnières parisiennes où vous accomplissiez ce rite pour immoler une vertu complaisante… Mais cette fois-là, la chute sera définitive… Ni fleurs, ni porto" (211). In the end, however, Gérard and three other prisoners succeed in a spectacular escape, and he is miraculously rescued by a ship owned by his wife, Griselda Turner. It is clear that the escape by ship can be understood as a return to civilization, which is explicitly voiced by Gérard himself: "Messieurs, […] nous ferons les présentations à bord du Northern Star, quand nous aurons regagné cet asile flottant où les règles de la civilité occidentale reprendront toute leur valeur" (230). In the case of Gérard, this escape from la sauvagerie and return to civilization really means a return 'home', as the situation helps to restore his relationship with his wife.
This circle of adventure - from univers familier to univers dépaysant and back again - is not entirely closed, however, and this complication follows from the fact that the novel is based on the adventures of two different protagonists - Gérard Dextrier and Lady Diana Wynham - whose adventures are of a different kind. Lady Diana, to name but one example, never leaves the civilized world. After having waved goodbye to Gérard in Berlin, she stays at home, and yet one cannot say that she epitomizes domesticity, or that she adheres to 19th-century gender roles in which women are confined to the private sphere of the home. After Gérard has returned, he is invited by Lady Diana to her home at Glensloy Castle, near Loch Lomond, in Diana's native region of Scotland. Glensloy Castle is in many respects the home par excellence, since it connects Lady Diana to her birth ground, as she was "élevée sur les rives élégiaques des lochs aux eaux tranquilles" (53). This setting seems to provide anchorage for the traveling and cosmopolitan nature of la Madone des sleepings; in this context she receives the alternative nickname of the Lady of the Lakes (251), after the Arthurian legend, which carries quite different overtones from the association with sleeping cars. More than any other house in the novel, this house is associated with culture and tradition, featuring for instance "une page manuscrite de Sir Walter Scott dans un petit cadre d'or entre les deux hautes fenêtres" (251). Increasingly, however, the house is associated with death (as Lady Diana wants to commit suicide there); it is "une cage dorée" (255), "un château qui abritait une condamnée" (263), and hence, despite all its luxury, it is not dissimilar to Gérard's prison in the Caucasus. In the last scene of the novel, we find Lady Diana leaving by train, escaping Glensloy Castle in search of new adventures: "Ma vie depuis six mois a été monotone […]. Il est grand temps que je pimente mon menu et caracole sans but précis dans la pampa de l'Aventure" (281). This pampa of adventure, however, is not situated on the verge of civilized society but at its very heart, as Lady Diana will travel to Venice (as will become clear in Dekobra's next book, viz. La gondole aux chimères) with the aim of seducing wealthy men who can finance her exuberant lifestyle.
In short, La Madone des sleepings manifests a chiastic structure with regard to the tension between domesticity and adventure. Whereas the male protagonist Gérard Dextrier, a chevalier errant (283), finally comes home, the female protagonist Lady Diana, who is homebound while Gérard is away, finally decides to leave her house and her birth ground in order to become la Madone des sleepings again. When the focus shifts from Gérard to Diana, the nature of adventure changes from something at the outskirts of the civilized world to something inside it. In a similar vein, for instance, the threat of communism is framed as something that differs from the war between nations because it is a war fought on the inside: "On ne se bat plus entre Français, Allemands ou Bulgares, on se bat sans explosifs, entre bourgeois et prolétaires, à l'intérieur des nations. C'est la lutte en vase clos" (93); "Soyez sincère, Lady Diana, et dites-moi si, dans votre luxueuse maison de Berkeley Square, vous n'êtes pas campée jour et nuit en face de l'Ennemi… Quel ennemi ?... Mais votre femme de chambre qui vous envie et votre chef qui vous vole, en attendant mieux…" (94). In this way, La Madone des sleepings can be understood as a novel that renegotiates the opposition between domesticity and adventure, the inside and the outside, that informed 19th-century fiction and ran through adventure tales and domestic fiction alike. Although Dekobra's cosmopolitanism may be based on a certain sense of nostalgia, Diana's act of leaving Glensloy Castle seems to imply a departure from 19th-century culture (the imaginary universe of Walter Scott), from 19th-century values (domesticity, nationalism and the expansion of (human) empire), from 19th-century social hierarchies (based on inherited property) and gender roles, etc. In the new constellation of the 20th century, it could be argued, the division between domesticity and adventure has to a certain extent become obsolete. This has strong repercussions for the home, which loses its function as the ultimate cornerstone of bourgeois society. As Enda Duffy convincingly shows in his account of detective fiction and its relation to speed, the detective story - that other popular genre of the period, though of more recent date than the adventure tale - "faced the new phenomenon of mass traffic in the early years of this century by raising anxieties to assuage them, by denigrating the notion of home as fixed structure of refuge, and by indulging in escape fantasies which marked movement and participation in mass traffic as a gesture of freedom"; "the idea of space as refuge, and in particular of the home as sanctuary and guarantor of personal prestige and identity, was coming under attack" (64-65). This is exactly what happens in La Madone des sleepings: the 'home' is relocated from the stable structure of Glensloy Castle to the moving object of the sleeping cars. This is even true for Gérard, whose new home is the Northern Star, a yacht, which combines the conveniences of the house with the ability to travel freely from port to port. If the house (and, by extension, the nation) is no longer the foundational force of society, what force has taken its place? The force of capital, since in contrast to land, money moves freely and enables people to move freely. In the end, it is not the house but the cheque that awaits the adventurous hero: "Tandis que vous, Gérard, tout vous sourira désormais… L'Amour et l'Argent… La princesse Séliman vous attend, reconquise… La sérénité la plus parfaite vous guette au coin de l'Eldorado" (282); "Muet adieu de la Femme à la conquête d'un Graal rempli de chèques barrés…" (283). In this way, La Madone des sleepings can be said to navigate between old and new social and economic realities. To name but one example, the shift from the house to money is paralleled by a shift from the old continent to the new. While Glensloy Castle is the materialization of European tradition, the Eldorado where the money comes from is America. This is true for Gérard, who had married a wealthy American woman in Mon coeur au ralenti, as well as for Diana, who will receive financial support from Jimmy, "jeune merle importé d'Amérique", in La gondole aux chimères. The latter's disposition leaves no doubt: "Le charme de Venise, darling? C'est la cheminée d'usine de Santa Elena; c'est la digue du chemin de fer qui relie la ville à la civilisation" (Dekobra 1926, 9).
Venetiaansch avontuur: adventure as a product of the imagination
The novel Venetiaansch avontuur (Venetian Adventure, 1931) by the highly successful Dutch novelist Johan Fabricius is in many respects comparable to Dekobra's early cosmopolitan novels. To begin with, there are numerous evident intertextual connections: like La gondole aux chimères, the novel by Fabricius is set in Venice; as in La Madone des sleepings, a major role is played by Americans on a ship; etc. The name of Dekobra is even mentioned explicitly: he is called "een zeer zondig auteur, dien ze eigenlijk niet mogen lezen" (a very naughty author, whom they are in fact not allowed to read, 109). Besides such more superficial reminiscences - which, by the way, do not impinge on the originality of Venetiaansch avontuur - there are also more profound and structural similarities, related for instance to the role of adventure in the structure of the plot and to the redefinition of adventure in a twentieth-century world governed by capital and marked by shifts in social hierarchies.
Venetiaansch avontuur tells the story of the young Viennese Walther Drachentöter (Walther 'Dragon-killer'), who is not satisfied with his job as an office clerk in the firm Julius Kleingeld & Zonen (Julius 'Small-Change' and Sons), which deals in shirt boards made of rubber. As his name already suggests, he is a man with romantic ideals, who reads and writes poetry and is the author of radical essays in the periodical De ontketende Prometheus (Prometheus Unbound), pleading for the spiritual emancipation of the suppressed working class. Desperate to flee his ordinary life, he travels to Venice, hoping to win the love of one or another marriageable "travelling dollar princess" (27), whose money would free him from daily routine forever. Walther's quest is roughly similar to Diana's, since money would likewise allow him to buy freedom: "Ik wil me vrij kunnen bewegen… Wat van de wereld zien. - Ik heb altijd moeten rekenen. Schillinge. Halve Schillinge" (I want to be able to move freely. See something of the world. I have always had to count. Schillings. Half Schillings, 26). At first sight, Walther's voyage follows the traditional paths of adventure, since Venice is framed as an exotic place, totally different from Vienna: "het avontuur van een reis, de scènerie van een vreemde, romantische wereld" (the adventure of a journey, the scenery of a strange, romantic world, 36). From the moment he arrives in Venice, however, the city shows him a double face, since it qualifies as exotic/strange and prosaic/ordinary at the same time. Throughout the entire book there is the idea that the city (and its romantic and poetic effects) is nothing but a chimera, an illusion, a dream or a stage performance (see for instance pages 50-51). This idea is so pervasive that it sometimes appears as if the whole idea of adventure is nothing more than a projection of Walther's imagination in overdrive. Behind that illusion lurks an image of Venice as a world that is as ordinary as the one Walther has left behind. From his arrival onwards, Venice is portrayed as a gigantic tourist trap, in which people continuously need to pay attention to how much they are spending: "Hij [Walther] haalt een potloodje uit zijn zak en begint weer eens te rekenen. Zijn kamer kost hem twaalf lire per dag, maar daar komt nog toeristentaxe en de fooi voor Marcolina bovenop; hij kan, met de caffé-latte er ook nog bij, gerust op twintig rekenen." (He [Walther] takes a pencil out of his pocket and, once again, starts counting. His room costs him twelve lira a day, but on top of that come the tourist tax and the tip for Marcolina; with the caffé-latte included he can safely count on twenty.)
From this perspective, Walther's life in Venice is little different from his life in Vienna, where he had to count every Schilling.
This idea is further developed in passages like the one in which Walther tries to seduce the Dutch girl Miep, who is on vacation in Venice with her parents. As her father is the "biggest shareholder in some Indonesian plantations, not only tobacco, but also tea and rubber" (209), the daughter represents one of Walther's last chances to acquire wealth by marriage. Upon hearing about Indonesia, Walther immediately starts dreaming again, complementing his visions of Venice with visions of the East: "Hij ziet een heerlijk jungle-visioen voor zijn geest verrijzen, palmen, bruine inlanders… Een tropischen morgen met helder vogelgefluit, bonte orchiedeeën, apen…" (He sees a marvelous jungle vision passing through his mind, palm trees, brown natives… A tropical morning with the bright whistling of birds, multi-colored orchids, apes…, 210). Walther is attracted by the idea of marrying the daughter and being sent to one of his plantations by the father. The plan of living in the East, however, is immediately rejected by daughter and father alike: "De natuur is vol onbekende gevaren. Er heerschen ziekten, waartegen geen kruid gewassen is. De Inlanders zijn absoluut niet te vertrouwen" (Nature is full of unknown dangers. Diseases reign there for which there is no cure. The natives are totally unreliable, 210). Unlike Gérard Dextrier in La Madone des sleepings, Walther will not be travelling to the outskirts of civilization. What is more, Miep and her parents ultimately confront him with the insight that adventure, even in Venice - i.e. in its reduced form within the confines of civilization - may prove to be a chimera, as all escape routes out of the ordinary and out of domesticity appear to lead back to the ordinary and to domesticity. Walther fears that if he were to follow Miep to her native country, his fantasies about a rural and idyllic Holland, with "windmills and farmer girls and Gouda cheese", would give way to a "gray everydayness" populated by "tired typing girls" (217) and "Sunday walks in a Sunday best suit" (218). In other words: his adventures in Venice would ultimately and unavoidably lead back to a domestic life in the North. Read in this way, Venetiaansch avontuur is not so much an adventure story as a novel about the impossibility of adventure, at least of adventure understood as an escape from domesticity and an exploration of an univers dépaysant. Adventure can only exist in the imagination.
If Walther is certainly not a Gérard Dextrier, he may well be a Diana Wynham, as from the moment he arrives in Venice, he engages in one flirtation after another, with different girls, just as Diana's list of lovers is seemingly endless (13). In fact, it is as if he cannot even look at a girl without falling in love with her. The sheer number and almost arbitrary character of these flirtations has a comical effect, but is key to our understanding of the novel, as it reveals the central conflict between romantic ideals (based on authenticity and unique singularity) on the one hand and a capitalist economic logic (based on multiplication and reproduction) on the other - a tension lurking behind every corner in the age-old, yet very touristic, streets of Venice. This conflict is inherent in Walther's plans from the very start, as he is in search of a romantic kind of freedom, which he hopes to realize by acquiring lots of money: "Een bourgeois droomt van het naakte leven. Een verlorene, die noch de handen, noch de lichaamskracht, noch het zelf- of Godsvertrouwen heeft van den vrijen man uit het volk. […] Voor hem is er maar één weg naar de vrijheid: geld. Zoveel geld, dat hij zijn koortsfantasie overwinnen, het naakte leven, dat hij verafgoodt, vergeten kan" (A bourgeois dreams of the bare life. [He is] a lost soul, who lacks the hands, the bodily strength, the self-confidence or the trust in God of the free man from the people. […] For him, there is only one way to freedom: money. So much money that he can conquer his feverish fantasy and forget the bare life he worships, 86). Whereas in La Madone des sleepings this connection between freedom and money is never challenged, it proves to be highly problematic here, as is manifested for instance by Walther's relation to the Tiller/Miller girls.
The Miller girls Susie, Phoebe and Peggy - who are at first mistakenly identified by Walther as members of John Tiller's popular travelling dance troupes - embody everything the protagonist could dream of: they are the young and beautiful daughters of an American billionaire who travels the seven seas in a luxurious yacht, decorated with tasteful artwork. Besides, they are fond of Walther and invite him on board. There, Walther is passionately kissed by one of the girls, who immediately proposes marriage. This would mean that Walther would join them on their sea travels, living off the money of their 'dad': "I, Walther Drachentöter, […] am going to get engaged to the daughter of a billionaire tonight, and I will become one myself. I can go to China, to Australia, or to the Hawaiian Islands, wherever I like" (137). There is, however, one major problem: since the three American girls are almost identical triplets, Walther is not able to identify the girl he has kissed. Was it Susie, Phoebe, or Peggy? This situation can be seen as a manifestation of the clash between two cultures: on the one hand the culture of the old continent, based on romantic notions such as authenticity, singularity and distinction, and on the other hand American culture, based on capitalism and the logic of economic multiplication and reproduction. Whereas Walther is (at least at first sight) depicted as a representative of the former - he can even cite Petrarch in the original (55, 109) - the girls are presented as typical exponents of American (non-)culture: they cannot distinguish Italian from German (109), barely differentiate between popular authors like Edgar Wallace and classics like Shakespeare or Wordsworth (109), have grown up with images from Hollywood cinema and the sounds of popular music (110), etc. They do not bother much about social differences and hierarchies either (115-116). Moreover, they are depicted as the products of a homogenizing system of production, as they are all dressed the same way, after the latest fashion, even to the extent that Walther at first imagines them to be actors in a marketing stunt ("een reclame-truc") of a fashion firm (54). Hence, whereas Walther ought to find the 'one', he is only confronted with the 'many'. In the new social context, American capitalist culture, based on the lack of hierarchies and distinctions and on the laws of reproduction and multiplication, is pushing aside obsolete romantic ideas about singularity and authenticity (in love, in art, as well as in the experience of a city like Venice). The tragedy of the situation is that Walther needs the 'many' (i.e. lots of dollars) to realize his dreams about the 'one'. As he is increasingly embarrassed by the fact that he cannot distinguish between the three Miller sisters, he breaks off the budding love affair, and the American ship leaves the port of Venice without the Austrian 'adventurer'. His other attempts at seducing rich girls - some of which are successful and others definitely not - all to a certain degree reiterate this initial dilemma between the one and the many, the singular and the multiple, romanticism and pragmatism, idealism and capitalism.
Near the end of the novel, however, after many failed love affairs, when Walther is really running out of money, he experiences a moment of epiphany: "Als hij echter in zijn kamer is en zijn hoofd in de waschkom onderdompelt, gebeurt er iets zeer verrassends met hem: hij herinnert zich plotseling zeer duidelijk een bepaalde intonatie in Susie's stem; hij weet in ditzelfde oogenblik met groote zekerheid, dat hij haar stem van die der beide anderen zou kunnen onderscheiden. Het is een eigenaardige wijze om alle zinnen met een vraagteeken te laten eindigen" (212). (However, when he is in his room and plunges his head into the washing bowl, something very surprising happens to him: suddenly he recalls very clearly a certain intonation in Susie's voice; at that moment he knows with great certainty that he would be able to distinguish her voice from the voices of the two other girls. It is a peculiar way of ending all sentences with a question mark.)
After a period of wandering in Venice, in which he has made friends with another group of Americans, who have initiated him into the American way of life, the major dilemma is finally solved and Walther is at last able to identify the girl he had kissed (the 'one' out of the 'many') by interpreting the intonation of her sentences. The book ends with Walther's decision to head to Naples by train, where the yacht of the sisters is anchored. The question arises, however, of how we should interpret this final twist of the plot. In our view, there are at least two options, not surprisingly an idealist and a more realist one. In an idealist reading, Walther has indeed finally reached a moment of insight; he has learned to single out the right girl from an undifferentiated mass. Such a reading suggests that ideas of singularity can indeed survive in a new social and economic context dominated by Americanism and capitalism. In this respect, it is a telling detail that, on the last page of the novel, when he imagines his reunion with his girl (whom he has finally been able to identify as Peggy), Walther compares his own love story to the plots of ancient mythology: "Oh, Peggy, is het dan niet als een Grieksche mythe zooals ik jou eindelijk gevonden heb? Door hoeveel vuren heb ik moeten gaan voor ik er in slaagde de drieëenheid te splitsen, waarin jij gevangen zat als een bedrieglijk spiegelend kristal" (256). (O Peggy, is it not like a Greek myth, the way in which I have finally found you? How many fires did I have to cross before I was able to split the trinity in which you were imprisoned as in a deceptively mirroring crystal.)
Walther is referring here neither to the narratives of popular culture (such as sentimental Hollywood cinema), nor to the narratives of the great authors he cherishes (like Petrarch and Dante), but to a still more primordial narrative, underlying them all and providing his own 'adventures' with universal and transhistorical overtones.
In contrast to this idealist reading, however, a more realist interpretation is possible too. By the time of Walther's epiphany, the reader has come to know the young Viennese as a somewhat naïve romantic who takes his own egocentric dreams and desires for real. Since the novel ends immediately after Walther's decision to travel to Naples, there is no way for the reader to check whether his method of singling out the right girl is accurate, or merely another offspring of his vivid and often misguided imagination. This would mean that Walther keeps chasing his dreams in a world in which ideals of singularity and authenticity are out of place, just as the city of Venice is nothing more than a chimera (as opposed to the modernized city of Naples?). The final reference to Greek mythology would then be nothing more than another sign of the mythomania of Walther, who continually tries to legitimate his own worldly 'adventures' by referring to the prestigious, yet outdated, cultural models of the past - in fact, he only uses his knowledge of the classics to impress easily impressionable young girls.
By way of conclusion
It is clear by now that La Madone des sleepings by Maurice Dekobra and Venetiaansch avontuur by Johan Fabricius definitely share some middlebrow characteristics (such as the mixing of elite cultural references with popular culture), while they also challenge more reductive definitions of the middlebrow by introducing themes, settings and plot structures that are not typically associated with the domestic realism commonly linked to middlebrow fiction (mainly in the Anglo-American context). One can think here of the systematic exploration of the idea of adventure as an alternative to domesticity. In their use of adventure as a structural device, however, the two novels differ considerably. In contrast to Gérard Dextrier's expedition into the Caucasus, Walther Drachentöter's trip to Venice does not - in spite of the title of the novel - follow the pattern of the typical adventure story, which is based on a transition from an univers familier to an univers dépaysant. What is more, adventure is not only relocated from the outskirts of civilization to its very heart (as in the case of Lady Diana Wynham), but is still further interiorized and transferred to the mind and imagination of the characters. Hence, in Venetiaansch avontuur adventure is to a certain extent imaginary rather than real, and this has strong repercussions for the generic identity of the novel. Whereas La Madone des sleepings is clearly based on structures borrowed from popular genres (such as the adventure tale, but also, for instance, erotic literature), Venetiaansch avontuur may contain numerous references to popular culture but is structurally different, as it comes closer to the essentially middlebrow genres of travel fiction or documentary realism, and even to the highbrow form of the psychological novel. Both novels can rightly be labeled cosmopolitan novels, but this label covers different generic structures.
On top of that, there is a difference as far as the depiction of social and cultural hierarchies is concerned. Middlebrow literature has often been described as a literature of the growing middle classes in the first half of the 20th century. It is a kind of literature written for and read by new middle-class groups and dealing with middle-class themes and issues. As Nicola Humble sees it: "In the obsessive attention [middlebrow writing] paid to class markers and manners it was one of the spaces in which a new middleclass identity was forged, a site where the battle for hegemonic control of social modes and mores was closely fought by different factions of the newly dominant middle class" (5). The precariousness of these emergent middle-class identities can explain many of the dilemmas and paradoxes experienced by the young Walther Drachentöter, who, as a lower-middle-class bourgeois, is continuously navigating between different social identities and roles, ranging from the romantic bohemianism of the poet, via his own position as an office clerk, to the luxurious life of the smart set he dreams of. The open ending of the story suggests that this process of negotiation between social and cultural hierarchies has not (yet) come to a definitive conclusion. The identity of the middle class is not fixed (yet) and rests on volatile compromises. The social identity of the main characters in La Madone des sleepings is totally different. Although the problems Lady Diana faces may seem similar to those experienced by Walther - she is in need of lots of money to buy her freedom and to secure her social position - she remains aristocratic by nature (which does not necessarily mean: by birth). She is looking for new means to live her aristocratic life in new social contexts, but her social identity is fixed and will not change (she would rather die: at a certain moment she considers killing herself). Her identity is stable, like the stock characters of popular fiction. If both novels engage with the issue of social hierarchy in a democratizing world - a central theme in the middlebrow imagination - Lady Diana Wynham seems to represent an attempt to escape these realities in a nostalgic, or even escapist, fashion, whereas in Venetiaansch avontuur it is up to the reader to decide whether the idea of (a certain) cultural or social aristocracy can still survive in a middle-class world ruled by capital (be it the symbolic capital of young Europeans or the material capital of young Americans). It remains to be seen whether Walther Drachentöter is indeed, as his name suggests, an heir of the adventurous race of dragon killers, or rather just another representation of the middle-class Everyman.
Notes
1. This article has been realized in the context of the research project Dutch Middlebrow Literature: Production, Distribution, Reception, a joint project of the University of Groningen, the Open University in the Netherlands and the University of Nijmegen, funded by the NWO.
Sculpting DNA-based synthetic cells through phase separation and phase-targeted activity
Synthetic cells, like their biological counterparts, require internal compartments with distinct chemical and physical properties where different functionalities can be localised. Inspired by membrane-less compartmentalisation in biological cells, here we demonstrate how micro-phase separation can be used to engineer heterogeneous cell-like architectures with programmable morphology and compartment-targeted activity. The synthetic cells self-assemble from amphiphilic DNA nanostructures, producing core-shell condensates due to size-induced de-mixing. Lipid deposition and phase-selective etching are then used to generate a porous pseudo-membrane, a cytoplasm analogue, and membrane-less organelles. The synthetic cells can sustain RNA synthesis via in vitro transcription, leading to cytoplasm and pseudo-membrane expansion caused by an accumulation of the transcript. Our approach exemplifies how architectural and functional complexity can emerge from a limited number of distinct building blocks if molecular-scale programmability, emergent biophysical phenomena, and biochemical activity are coupled, mimicking the interplay observed in live cells.
Introduction
Though there is no agreed definition of cellular life, 1 there is general consensus on the fundamental characteristics of living cells, which include information processing, adaptability, growth and division, metabolism, and compartmentalisation. 2,3 Bottom-up synthetic biology aims to create synthetic cells featuring a subset, or all, of these fundamental characteristics by combining elementary molecular building blocks. 2,4 This radical approach, although regarded as more challenging than traditional top-down cell engineering, circumvents the complexities inherent to working with and modifying living cells. 1,5 Particularly challenging in the context of synthetic cell engineering is the production of sufficiently complex and robust cell-like architectures containing multiple compartments that serve to localise, segregate, and regulate function and environment. 6,7 These enclosures are often membrane-based, constructed from phospholipids and/or fatty acid vesicles, [8][9][10][11][12] polymersomes, 13,14 or proteinosomes, 15,16 but alternative membrane-less architectures are emerging in the form of hydrogel capsules, coacervates, or synthetic condensates. [17][18][19][20][21] Microfluidics is a typical strategy for producing synthetic cell scaffolds, being particularly effective at generating monodisperse and nested structures in small quantities. 22 However, the microfluidic approach can be difficult to implement and scale, requiring bespoke, often complex chips and specialised equipment. 23 Like bulk emulsification approaches, microfluidic methods often rely on water-oil mixtures to assemble stabilised droplets or giant vesicles, which adds challenges and workflow steps associated with the handling and removal of the non-aqueous phase. [24][25][26][27][28][29][30][31][32][33] It may instead be more desirable to use techniques which exploit simple physical principles such as phase separation and self-assembly, while avoiding the use of specialised equipment and non-aqueous components altogether.
The low cost of DNA oligonucleotides, 34 in conjunction with the predictable thermodynamics and kinetics of their interactions, 35,36 and the availability of computational design tools, 37,38 make DNA nanotechnology 36,39,40 highly attractive for engineering advanced biomimetic devices, including synthetic cells. The latter are often formed through the programmable condensation of DNA building blocks in aqueous environments, hence removing the complexities of microfluidics, emulsification, and other chemical manufacturing routes. Artful examples of DNA-based, cell-like devices have been reported, self-assembled from multiblock single-stranded DNA, 41,42 or from branched DNA junctions known as nanostars. [18][19][20]43,44 Among condensate-forming DNA motifs, amphiphilic nanostructures, obtained by labelling DNA junctions with hydrophobic moieties, have shown significant potential owing to the stability of the self-assembled phases, their programmable nanostructure, and the possibility of engineering localised functionality and response to various environmental stimuli. [45][46][47][48][49][50][51][52][53] While DNA-based synthetic cell implementations hold substantial promise due to their facile preparation, robustness, and algorithmic programmability, routes towards generating architectures with sufficiently complex internal structure, akin to those achievable with microfluidic methods, are lacking. In this work, we take advantage of the programmable phase behaviour of amphiphilic DNA nanostructures, alongside their ability to host stimuli-responsive elements and sustain enzymatic pathways, [46][47][48]51 to construct cell-like devices that display a complex internal architecture and spatially resolved activity. The DNA-based synthetic cells feature a porous lipid shell as a pseudo-membrane, and internal membrane-less organelles. These organelles can be engineered to exhibit different functionalities, from selective etching, which creates a cytoplasm-like space within the lipid shell, to hosting the in-situ transcription of RNA aptamers by a polymerase, which also induces spatially localised morphological changes in the synthetic cells. Our results demonstrate that the synergy between biophysical and biochemical responses can produce structural and functional complexity akin to that observed in live cells, even in minimalist systems composed of a small number of molecular building blocks.
Results and discussion
Generating internal heterogeneity in DNA condensates with phase separation. Our synthetic cells are primarily constructed from C-stars - DNA junctions made amphiphilic through the addition of cholesterol moieties - whose structure is sketched in Fig. 1(a). Subject to a one-pot thermal annealing, single-component samples of four-arm C-stars have been shown to self-assemble into polyhedral crystalline aggregates 45,46 or, as shown in Fig. 1(b), spherical, cell-size condensates. 46 The latter, despite their macroscopic appearance, can be either crystalline or amorphous. 46,51 Aggregate morphology (whether polyhedral or spherical) and the degree of crystallinity have been found to depend on C-star design features, size, quenching rate, and ionic conditions. [45][46][47] The nano-porous structure of the condensates enables internal diffusion of oligonucleotides and proteins. 51

Figure 1: Size-induced phase separation leads to the emergence of core-shell DNA condensates and phase-targeted disassembly. (a) Schematic of two amphiphilic DNA nanostar (C-star) populations, A and B, with different arm lengths - A with 35 base pairs (bp) and B with 50 bp arm length. The end of each arm is decorated with a cholesterol molecule. 45 Simplified schematics adjacent. (b) Schematic (left) and bright-field micrograph (right) of C-star condensates comprised of C-star motif A1, with a 35 bp arm length. Condensates form upon annealing single-stranded C-star components from high temperature (see Methods). [45][46][47]52 Within the condensates, C-stars interact through hydrophobic forces, with the cholesterol moieties likely co-localising in micelle-like regions. (c) Schematic (left), and confocal (middle) and bright-field (right) micrographs of binary, de-mixed C-star condensates comprised of C-star populations A1 (Cy5-labelled, blue) and B3 (fluorescein-labelled, green, 50 bp arm length). When co-annealed, two different C-star populations self-assemble to form phase-separated condensates with a distinct core-shell morphology, with the small-arm-length constructs localising primarily in the core, and the larger motifs favouring the shell. (d) Schematic of C-star disassembly driven by toehold-mediated strand displacement. 46 An invader strand binds to the toehold and displaces a bridge strand, triggering condensate disassembly by dissociating the central C-star junction from cholesterol-DNA micelles. (e) Confocal and bright-field micrographs of another binary C-star condensate system, comprised of populations A1 and B2, both of which have been labelled with a fluorescent probe on their inner junction - Cy5 for population A1 (shown in blue, inner phase), and fluorescein for population B2 (shown in green, outer phase, arm length 50 bp). (f) Confocal and bright-field micrographs showing the targeted disassembly of the outer, B2-rich, phase of the binary condensate described in panel (e), exploiting toehold-mediated strand displacement as shown in panel (d). Timestamps mark time elapsed after adding the invader strand. (g) Disassembly of the condensate shown in panel (f), quantified by monitoring the cross-sectional area of the two phases measured with image segmentation. Abrupt disassembly is seen for the outer, B2-rich, phase. The A1-rich inner phase shows a small, sharp shrinkage when the outer phase disassembles, ascribed to the presence of B2 motifs in the A1-rich phase (incomplete de-mixing). The steady decrease in apparent area of the A1-rich phase is an artefact of photobleaching. All scale bars 10 µm.
In this study, we primarily consider two populations of four-armed C-stars - A and B - with different lengths of the double-stranded (ds) DNA arms: 35 base pairs (bp) for type A and 50 bp for type B. Various designs for A-type (A1, A2) and B-type (B1, B2, B3) C-stars were used, hosting different modifications. The oligonucleotide components of each motif and their sequences are listed in Supplementary Tables S1 and S2. Figure 1(c) shows that condensates formed by A-B C-star mixtures (A1-B3 in this case) display two nested regions: a core enriched in the smaller construct A (identified through Cy5 labelling, shown in blue), surrounded by a spherical shell enriched in B (fluorescein-labelled, shown in green), as also confirmed by confocal Z-stacks in Supporting Video V1. Because A and B components can interact through identical cholesterol-cholesterol hydrophobic forces, phase separation is likely mediated by packing considerations driven by size mismatch, similar to de-mixing in binary mixtures of colloids of different size or in colloid-polymer systems. [54][55][56][57][58] Size-induced de-mixing may be enhanced by the tendency of C-stars to crystallise. 45,46 The core-shell morphology can be rationalised with interfacial energy considerations, with the B-rich phase likely to have a lower surface energy for interaction with water due to its lower hydrophobic content (cholesterol/DNA molar ratio) compared to the A-rich phase. 59,60 Size-induced phase separation is a simple but robust bottom-up approach for establishing addressable micro-environments within cell-like condensates, reminiscent of intracellular phase separation. 61 As we will explore later, individual building blocks can be modified with different responsive moieties, enabling spatial separation of functionalities. The robustness of size-induced phase separation means that the protocols developed for A-B systems are applicable to designs with slightly different arm lengths (C1, 48 bp and D1, 28 bp; see Supplementary Table S1). In addition, the formation of phase-separated binary condensates is straightforward and readily scalable, requiring a simple one-pot thermal annealing (Methods).
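To make the hydrophobic-content argument concrete, the short sketch below offers an illustrative back-of-the-envelope estimate; it is not a calculation from the source, and the ~650 Da per base pair and ~387 Da per cholesterol figures are standard approximations. It assumes four arms and one cholesterol per arm, as in the C-star design described above.

```python
# Back-of-the-envelope comparison of hydrophobic content for four-arm
# C-star motifs A (35 bp arms) and B (50 bp arms).
# Assumptions (illustrative only): one cholesterol per arm, four arms;
# ~650 Da per double-stranded base pair; ~387 Da per cholesterol.

ARMS = 4
BP_MASS = 650.0    # Da per base pair (approximate)
CHOL_MASS = 387.0  # Da per cholesterol (approximate)

def hydrophobic_content(arm_length_bp: int) -> tuple[float, float]:
    """Return (cholesterol per base pair, cholesterol/DNA mass ratio)."""
    n_bp = ARMS * arm_length_bp
    per_bp = ARMS / n_bp                             # molar-type ratio
    mass_ratio = ARMS * CHOL_MASS / (n_bp * BP_MASS)
    return per_bp, mass_ratio

for label, bp in [("A (35 bp)", 35), ("B (50 bp)", 50)]:
    per_bp, mass_ratio = hydrophobic_content(bp)
    print(f"{label}: {per_bp:.4f} cholesterol/bp, mass ratio {mass_ratio:.4f}")
```

On these rough numbers, the A motif carries about 43% more cholesterol per unit of DNA than the B motif, consistent with the argument that the B-rich phase, with its lower hydrophobic content, presents the lower-energy interface with water and therefore forms the shell.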
Targeted disassembly of C-star domains.
As a first example of spatially distributed functionality, we sought to enable the triggered disassembly of a selected C-star phase within the condensates, hence inducing a programmable, localised morphological response. This is preferable to the use of enzymatic approaches for condensate disassembly, such as deoxyribonucleases, which would degrade the DNA populations indiscriminately. 62 Selective network disassembly relies on toehold-mediated strand displacement (TMSD), whereby an invader oligonucleotide displaces a DNA strand initially linking the sticky cholesterol moiety with the nanostar, as described in earlier work by Brady et al., 46 and shown in Fig. 1(d). Following disassembly, the targeted C-star motifs split into non-cholesterolised DNA stars and dispersed cholesterol-DNA micelles. 46 Figure 1(e) shows phase-separated binary condensates in which the larger, B-type, C-stars in the outer phase (fluorescein-labelled B2, shown in green) are modified to disassemble, while A-type C-stars accumulating in the core are non-responsive (Cy5-labelled A1, shown in blue). Condensate disassembly following the addition of the invader strand can be observed in Fig. 1(f) as a loss of signal in the B2 channel. Figure 1(g) shows the time evolution of the cross-sectional area of the A-rich and B-rich phases for the condensate in panel (f), confirming the abrupt disassembly of the outer shell. The delay in disassembly following invader addition (t = 0) results from its slow diffusion through the imaging chamber, while the progressive decrease in apparent area of the A-rich phase likely arises from the effect of photobleaching (visible in Fig. 1(f)) on the image segmentation pipeline (Methods). We also note a small, but sharp, decrease in the apparent area of the A-rich phase, simultaneous with B-phase disassembly. This shrinking response is likely a consequence of the disassembly of B-type motifs initially present in the A-rich phase, and the consequent relaxation of the remaining condensate. Incomplete de-mixing, namely the presence of B-type motifs in the A-rich phase (and vice versa), is expected in the binary condensates, and is further discussed and quantified below. Bright-field micrographs in Fig. 1(f) confirm the disassembly of most of the outer phase. However, low-density (low-contrast) material is left behind after the green phase has disappeared, which progressively coalesces with the A1-rich cores. Contrast enhancement of the fluorescence images, shown in Fig. S1, reveals a weak signal in the Cy5 channel, identifying the low-density material as being composed of A-type C-stars initially present in the B-rich shell, consistent with incomplete de-mixing. Similar behaviour is observed in the disassembly of B3 C-stars in binary A1-B3 condensates, shown in Supporting Video V2.
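Area quantification of the kind shown in Fig. 1(g) can be reproduced with a simple segmentation pipeline. The sketch below is a minimal illustration only, not the authors' actual pipeline: the array shape, pixel size, per-frame Otsu threshold, and speckle-removal step are all assumptions made here.

```python
# Minimal sketch: per-frame cross-sectional area of two de-mixed phases
# from a two-channel confocal time series.
# Hypothetical input: array of shape (frames, 2, height, width),
# channel 0 = Cy5 (A-rich phase), channel 1 = fluorescein (B-rich phase).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

PIXEL_AREA_UM2 = 0.1 * 0.1  # assumed pixel size of 0.1 um x 0.1 um

def phase_area(frame_2d: np.ndarray, min_size: int = 50) -> float:
    """Cross-sectional area (um^2) of the bright phase in one channel."""
    mask = frame_2d > threshold_otsu(frame_2d)             # per-frame threshold
    mask = remove_small_objects(mask, min_size=min_size)   # drop speckle noise
    return float(mask.sum()) * PIXEL_AREA_UM2

def track_areas(movie: np.ndarray) -> np.ndarray:
    """Areas of the A-rich and B-rich phases per frame, shape (frames, 2)."""
    return np.array([[phase_area(frame[ch]) for ch in (0, 1)] for frame in movie])

# Demonstration on synthetic data (replace with real data, e.g. via tifffile):
rng = np.random.default_rng(0)
movie = rng.random((5, 2, 128, 128))
movie[:, 0, 40:80, 40:80] += 2.0    # mock A-rich core
movie[:, 1, 20:100, 20:100] += 2.0  # mock B-rich shell region
print(track_areas(movie))
```

A per-frame threshold partially compensates for slow intensity loss, but, as noted above for Fig. 1(g), photobleaching can still bias the apparent area.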
SUVs accumulate at the interface of C-star condensates, forming a lipid shell. Having shown how phase separation in multi-component C-star condensates enables spatial engineering of functionality, we then proceeded to boost their biomimetic relevance by introducing a lipid layer. Small Unilamellar lipid Vesicles (SUVs, 100 nm in diameter, as confirmed by dynamic light scattering shown in Fig. S2), prepared from the zwitterionic lipid DOPC and stained with fluorescent Texas Red DHPE (0.8 % molar ratio), were introduced into samples of pre-formed, single-component condensates (Methods). As sketched in Fig. 2(a), we observed adhesion of SUVs on the surface of the condensates, where they appear to assemble into a continuous layer in confocal micrographs (Fig. 2(a), bottom right). We note here that the condensates have adopted a polyhedral morphology, reflecting the BCC crystalline phase formed by C-stars with this design (A1). 46 As sketched in Fig. 2(a) (right), SUV adhesion is most likely mediated by the insertion of the C-star cholesterol moieties within the phospholipid bilayer - a known effect exploited for membrane functionalisation with DNA nanostructures. [63][64][65][66][67][68] In a related system, Walczak et al. reported the adhesion of small C-star aggregates onto the surface of cell-size Giant Unilamellar Vesicles (GUVs), 48,53 observing that C-star particles are able to permeabilise liposomes and cause their rupture. To assess whether similar destabilisation occurs for SUVs depositing onto cell-size C-star condensates, we encapsulated calcein inside SUVs lacking phospholipid fluorescent labelling. Figure 2(b) shows that the calcein signal (shown in yellow) remains localised on the condensate surface, confirming that (at least some of) the SUVs are neither disrupted nor rendered leaky. We then prepared lipid shells in which only a small fraction of the SUVs (1 in 800) were doped with Texas Red DHPE, while the remainder consisted entirely of unlabelled lipids. As shown in Fig. 2(c), we observed a speckled pattern on the lipid layer, whose persistence over time demonstrates that neighbouring SUVs are unable to exchange fluorescent lipids and are thus not undergoing significant fusion. Monitoring the speckle arrangement over time shows no Brownian motion, indicating that the SUVs remain effectively static over experimentally relevant timescales. Taken together, the absence of substantial leakiness and the lack of lipid exchange or diffusion hint at a morphology of the lipid layer as depicted in Fig. 2(a), in which the SUVs mostly retain their identity rather than reconfiguring into a continuous bilayer.
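The "no Brownian motion" observation could be made quantitative with a mean-squared-displacement (MSD) analysis of tracked speckles. The numpy-only sketch below is a hypothetical illustration: it assumes speckle positions have already been extracted by an upstream particle-tracking step, and the noise level in the example is invented.

```python
# Minimal sketch: mean-squared displacement (MSD) of tracked speckles.
# Hypothetical input: positions of shape (frames, n_speckles, 2) in um,
# assumed to come from an upstream particle-tracking step.
import numpy as np

def msd(positions: np.ndarray) -> np.ndarray:
    """MSD (um^2) versus lag time, averaged over all speckles."""
    n_frames = positions.shape[0]
    out = np.empty(n_frames - 1)
    for lag in range(1, n_frames):
        disp = positions[lag:] - positions[:-lag]       # (frames - lag, n, 2)
        out[lag - 1] = (disp ** 2).sum(axis=-1).mean()  # average over pairs/speckles
    return out

# Static speckles with ~20 nm localisation noise (invented numbers):
rng = np.random.default_rng(3)
base = rng.uniform(0, 10, size=(1, 30, 2))           # fixed speckle positions
pos = base + rng.normal(0, 0.02, size=(100, 30, 2))  # jitter only, no drift
print(msd(pos)[:5].round(4))  # flat, noise-limited plateau => no diffusion
```

A flat, noise-limited MSD plateau would confirm static speckles, whereas a linearly growing MSD would indicate diffusing SUVs.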
It is thus expected that the lipid layer would have significant porosity. This hypothesis is tested in Fig. 2(d), where we used confocal microscopy to measure the permeability of the lipid layer. C-star condensates formed from the A1 motif, either with or without a lipid shell, were soaked in solutions of various fluorescent probes. The ratio (ξ) of fluorescence intensities recorded within (I_internal) and outside (I_external) the condensates was used as a proxy to determine whether the lipid shells are permeable to these probes (see Methods). 46 For all tested probes, which included calcein, a non-cholesterolised fluorescein-labelled DNA nanostar with 28 bp arm length, 20 kDa tetramethylrhodamine isothiocyanate (TRITC)-labelled dextran, and Texas Red-labelled streptavidin, we observe that ξ is unaffected by the lipid shell. This evidence confirms that the shell possesses pores sufficiently large to allow for diffusion of all tested probes.

Figure 2 (caption, partial): (d) Partitioning of fluorescent molecular probes into A1 C-star condensates with and without a lipid shell, gauged using the ratio ξ of probe fluorescence intensity inside (I_internal) and outside the condensate (I_external), measured from confocal micrographs (examples in insets). Symbols mark the mean of three independent repeats in which a median of 12 condensates were sampled, and error bars show the standard error. Data show that the lipid shell is permeable to the tested probes, and that dextran and streptavidin preferentially accumulate within the condensates due to the hydrophobic nature of TRITC and Texas Red. For calcein, ξ is also estimated for a sample of DOPC Giant Unilamellar Vesicles (GUVs), nominally impermeable to the dye. (g) As in Fig. 1(g), we observe abrupt disassembly of the outer, B3-rich, phase, here accompanied by a slight shrinkage of the A1-rich inner phase. All scale bars 10 µm, with the exception of 20 µm for panel (c). For images marked with a half-shaded circle, contrast enhancement has been applied by linear rescaling of the pixel intensity to aid visualisation.
As a comparison, we present data for the penetration of calcein into electroformed GUVs, which should be largely impermeable to the dye. As expected, ξ is significantly lower for GUVs than for condensates, although greater than zero due to out-of-plane fluorescence signals. We also note that dextran and streptavidin accumulate within the condensates, driven by favourable hydrophobic interactions between the cholesterol moieties and the hydrophobic, rhodamine-derived fluorophores they host. 46 A similar test shows that shell porosity is not reduced by longer condensate-SUV incubation times (Fig. S3).
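A partition ratio of this kind can be estimated with a few lines of image analysis. The sketch below is an assumed, minimal implementation rather than the authors' protocol: the Otsu segmentation, the 5-pixel interface margin, and the use of medians are choices made here purely for illustration.

```python
# Minimal sketch: partitioning ratio xi = I_internal / I_external for a
# fluorescent probe, from one confocal image plus a condensate channel.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_dilation, binary_erosion, disk

def partition_ratio(probe_img: np.ndarray,
                    condensate_img: np.ndarray,
                    margin_px: int = 5) -> float:
    """Median probe intensity inside vs outside the condensates.

    The condensate channel is used only to build inside/outside masks;
    a margin is eroded/dilated away to exclude the blurry interface.
    """
    mask = condensate_img > threshold_otsu(condensate_img)
    inside = binary_erosion(mask, disk(margin_px))      # safely interior pixels
    outside = ~binary_dilation(mask, disk(margin_px))   # safely exterior pixels
    return float(np.median(probe_img[inside]) / np.median(probe_img[outside]))

# Demonstration on synthetic data:
rng = np.random.default_rng(1)
cond = rng.random((128, 128)); cond[40:90, 40:90] += 3.0  # bright condensate
probe = rng.random((128, 128)) + 0.5
probe[40:90, 40:90] += 1.0                                # probe accumulates inside
print(f"xi = {partition_ratio(probe, cond):.2f}")         # >1, i.e. accumulation
```

Values of ξ near 1 indicate free equilibration across the shell, while ξ substantially above 1 indicates accumulation, as seen for dextran and streptavidin.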
Lipid shells remain stable after condensate disassembly. The porosity of the lipid shell allows rapid exchange of material, which we can exploit to externally trigger the disassembly response discussed earlier through the addition of an invader strand. In Fig. 2(e) we observe condensates of responsive B1 C-stars dissolve from within the lipid shell, confirming that the latter is sufficiently porous to first allow inward diffusion of the invader, and later the escape of the disassembled DNA fragments (the largest of which is a non-cholesterolised DNA star with 12 bp arm length). Surprisingly, we note that, in the vast majority of cases, the lipid shells remain stable after condensate disassembly is complete. Therefore, despite the previously noted evidence that SUVs largely retain structural identity (Fig. 2), a mechanism must exist through which the SUVs remain physically connected, as we will discuss further.
Lipid-coated condensates display diverse responses upon disassembly, two examples of which are shown in Fig. 2(e). In panel (e) ii, the condensate disassembles asymmetrically, and a "bubble" is formed which deforms the lipid shell. In panel (e) iii, disassembly occurs more symmetrically but, as before, an expansion and smoothing of the lipid shell is observed. Though the behaviours differ, both examples suggest an initial build-up of osmotic pressure within the lipid shell due to the release of DNA nanostructures. 20 At later times, the pressure is released as the DNA diffuses out through the pores in the shell, evident in the absence of significant DNA-associated fluorescent signal remaining within the shells. Figure 2(e) iv shows the time-dependent cross-sectional area of the DNA condensates, confirming abrupt disassembly upon invader addition for the examples in Fig. 2(e) ii and iii. The different onset times noted for the two condensates are a consequence of the delayed diffusion of the invader in the imaging chamber (Methods). Disassembly of lipid-coated DNA condensates can also be achieved by non-specific means, namely using DNase I. This is demonstrated in Fig. S4, where the lipid shell is also observed to persist once disassembly of a C-star condensate (population D1, arm length 28 bp) is completed.
Figure 2(f) shows TMSD disassembly of a B1 condensate coated in a non-fluorescent lipid shell. As expected, the fluorescent signal from the DNA in the condensate disappears as disassembly progresses. However, on enhancing image contrast post-etching, we note a faint signal co-localised with the lipid shell (Fig. 2(f), right). Supporting Video V3 shows a confocal z-stack of a Texas Red-labelled lipid shell remaining after disassembly of a fluorescein-labelled B1 condensate - with the contrast of the fluorescent channels enhanced, the same co-localisation of the DNA and lipid fluorescent signals is noted. We argue that a small amount of DNA nanostructures may fail to disassemble in the region immediately in contact with the lipid shell, or in the gaps between the SUVs, possibly due to poor accessibility of the toehold domain. This protective effect of the lipids may be reminiscent of that observed by Julin et al., who found that cationic lipid-coated DNA origami were significantly more resistant to DNase I digestion than non-coated origami. 69 It is thus reasonable to hypothesise that residual amphiphilic DNA material may play a role in conferring stability to the lipid shell following condensate disassembly, by cross-linking neighbouring SUVs. DNA-mediated SUV cross-linking would be compatible with the observations reported in Fig. 2(b) and (c) concerning the lack of content leakage and the lack of significant lipid exchange between the SUVs in the shell. Nonetheless, other DNA-independent mechanisms may still play an important role, such as membrane adhesion or SUV hemifusion, encouraged by the presence of amphiphiles. 70,71

The ability to create semi-permeable lipid shells, combined with that of building condensates with distinct regions hosting addressable functionality, offers a route to construct architectures which resemble a prototypical biological cell. As demonstrated in Fig. 2(g), this can be achieved by forming a lipid shell around a binary A1-B3 condensate, in which the B3-rich outer phase can be disassembled with TMSD. Upon disassembly, a gap, reminiscent of a cytoplasm, is formed between the lipid shell and the A1-rich "organelles". In Fig. S5, we observe a similar gap formed by the disassembly of the B3 motif in A1-B3 binary condensates enveloped in a lipid shell comprising SUVs encapsulating calcein. Here, we note that the calcein signal persists, indicating a lack of significant content leakage. As demonstrated in Fig. 1(g), disassembly can be quantified by monitoring the cross-sectional area of the core and shell regions. Consistently, in Fig. 2(g) iii, we note a sharp, complete disassembly transition for the B-rich phase and a simultaneous shrinkage of the A-rich core.
In-situ transcription triggers morphological changes in DNA-based synthetic cells. In a recent contribution, Leathers et al. showed that C-stars can be modified with DNA templates, which are transcribed by T7 RNA polymerase to produce RNA aptamers in situ, thus allowing the DNA-based synthetic cells to produce new nucleic acid building blocks. 51 Here we seek to combine these transcription capabilities with the more complex cell-like architectures we present, which differ from the implementation of Leathers et al. by featuring regions with distinct physical characteristics and composition.
We consider binary systems of C-stars that differ both in size and functionality, sketched in Fig. 3(a). One population is composed of B1 C-stars (50 bp arm length), which host a modification enabling disassembly through TMSD, as described previously. The second population comprises A2 C-stars (35 bp arm length), which are modified so that an overhang on one of the arms connects to a transcribable ssDNA Template (T) through a ssDNA Bridge (B) strand; see inset in Fig. 3(a). The Template codes for an extended and brighter version of the Broccoli RNA light-up aptamer, 51 which binds to and induces fluorescence in DFHBI, 72 while the Bridge contributes to forming the double-stranded promoter region required to initiate transcription by T7 RNA polymerase. 51,73 As expected, one-pot annealing of all ssDNA components produced phase-separated condensates, visible in bright-field images (Fig. 3(b)ii, right). However, when labelling the Bridge strand with a fluorescent probe (Alexa Fluor 647), confocal images reveal a non-uniform distribution of the Bridge and Template (BT) duplex in the inner phase, with a greater concentration found at the outer interface of the A-rich core compared to its centre (Fig. 3(b)ii, left). We also note a visible signal from the Bridge in the outer shell, indicating a significant presence of fluorescently-labelled A2 motifs in the B-rich region. A microscopy-informed schematic of the internal morphology of the condensates is sketched in Fig. 3. Control experiments with A2-only condensates, summarised in Fig. S6, reveal that the presence of the bulky BT duplex is, in itself, sufficient to produce size-induced phase separation, leading to the appearance of a template-enriched shell and a template-depleted core, as visible by comparing condensates lacking and including Bridge and Template strands (Fig. S6(a) and (b), respectively). We thus argue that the non-uniform template distribution seen in Fig. 3(b) for binary condensates could be a direct consequence of size-induced phase separation caused by the template.
Besides exhibiting a non-uniform distribution, the BT construct, due to its substantial contribution to the molecular weight of the A-type stars, may also influence the degree of phase separation between A-type and B-type motifs in the binary condensates. To test this hypothesis, we prepared binary condensates comprising fluorescein-labelled B1 C-stars and unlabelled A2 C-stars, the latter of which either contained or did not contain the Bridge and Template strands. Confocal microscopy confirms that in both condensate types, the outer phase is enriched in fluorescein-labelled B1 C-stars (shown in cyan in Fig. 3(c)). Using image segmentation, we extracted the average B1 fluorescent signal in the inner A-rich phase (I_inner) and in the outer B-rich phase (I_outer), and computed the parameter χ = I_inner / I_outer. χ can be used as a proxy for the extent of A-B mixing in the condensates, taking values close to zero for complete de-mixing and close to 1 for complete mixing. A higher value of χ is found when BT constructs are present, indicating that the modifications hinder A-B phase separation, likely by making the steric encumbrance of A-type motifs closer to that of B-type motifs. It should also be noted that, even in the absence of BT, χ remains substantially larger than 0 (∼ 0.45), indicating incomplete de-mixing, consistent with the observations made on the response of the system to disassembly of the outer phase (Fig. 1(g) and Fig. 2(g)).
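As a concrete illustration of this measurement, the following minimal sketch computes χ from a single fluorescence frame. It is purely illustrative: the image, the masks, and all numbers are hypothetical stand-ins for the segmentation output used in the actual analysis.

```python
import numpy as np

def mixing_parameter(image, inner_mask, outer_mask):
    """Compute chi = I_inner / I_outer from a B1-channel fluorescence image.

    image:      2D array of fluorescence intensities (B1 channel)
    inner_mask: boolean array selecting the A-rich core
    outer_mask: boolean array selecting the B-rich shell
    """
    i_inner = image[inner_mask].mean()   # mean B1 signal in the A-rich phase
    i_outer = image[outer_mask].mean()   # mean B1 signal in the B-rich phase
    return i_inner / i_outer             # ~0 for de-mixed, ~1 for fully mixed

# Toy example: a synthetic condensate whose core is dimmer in the B1 channel.
img = np.full((100, 100), 10.0)
yy, xx = np.ogrid[:100, :100]
core = (yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2
shell = ((yy - 50) ** 2 + (xx - 50) ** 2 < 40 ** 2) & ~core
img[core] = 4.5                          # partial de-mixing (chi ~ 0.45)
print(mixing_parameter(img, core, shell))
```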
Having characterised phase separation and BT distribution in A2-B1 binary mixtures, we proceeded to create cell-like architectures able to sustain transcription, henceforth referred to as synthetic cells. To this end, the BT-containing A2-B1 binary condensates were coated in a lipid shell as discussed above, and washed multiple times to remove any free DNA and unattached SUVs (see Methods). The removal of freely-diffusing Broccoli-templating DNA was verified with transcription reactions run on the supernatants extracted after each wash, as shown in Fig. S7(a). The B1 motifs were then disassembled with TMSD, as sketched in Fig. 4(a). Epifluorescence images show a significant quantity of labelled RNA-templating A2 C-stars (red) in the B1-rich outer phase pre-etching, as expected given the hindering effect that the BT duplex has on A-B de-mixing, quantified in Fig. 3(c). During disassembly of the B1 C-stars, some of the BT-linked A2 motifs collapsed to form a low-density mesh surrounding the core of the synthetic cells. Other constructs were able to detach and escape the lipid shells, as confirmed by control experiments in which RNA transcription reactions were carried out with the supernatants removed from the samples post-B1 etching, summarised in Fig. S7(b). Despite the synthetic cells having been sufficiently washed pre-etching (Fig. S7(a)), the Broccoli signal from the supernatant was again substantial post-etch, indicating a significant leakage of the Broccoli-templating A2 constructs during removal of the outer phase (Fig. S7(b)). Bright-field micrographs in Fig. 4(b) also show the lipid shell shrinking and wrinkling during the disassembly, more substantially than for other, template-free, designs (Fig. 2(g)ii). As we discuss later, this response is likely a result of less complete de-mixing, where a significant quantity of A2-rich material is present in the outer phase and collapses onto the inner core after etching, which may pull on the lipid layer, causing its contraction.
Transcription of the etched and subsequently washed synthetic cells causes an expected increase in the Broccoli fluorescent signal, and a distinct morphological response, both of which are sketched in Fig. 4(c) and shown in epifluorescence micrographs in Fig. 4(d). Broccoli fluorescence builds up within the A2-rich dense inner phase, within the cytoplasm-like region and the low-density material left behind by B1 disassembly, and later in the medium surrounding the synthetic cells. More surprisingly, we note a dramatic change in the morphology of the disordered A2-rich mesh surrounding the core of the synthetic cells, which expands and inflates the synthetic cell to a size and shape akin to those observed prior to etching. Additional images and further examples of the transcription-induced synthetic-cell expansion can be found in Figs. S8-S10 and Supporting Videos V5-V13. Fluorescent labelling of the SUVs used in Fig. 4(d), Fig. S8 and Supporting Videos V5-V7 demonstrates that the lipid shell follows the expansion of the synthetic-cell pseudo-cytoplasm, which, however, also swells in the absence of lipids, as shown in Fig. S9 and Supporting Videos V11-V13.
In order to rationalise this morphological response, we used image analysis to extract the outer dimensions of the synthetic cells and the intensity of the Broccoli fluorescence signal, measured both in the solution surrounding the synthetic cells (I_bkg) and in their cytoplasm (I_cy; see Fig. S11). The results of this analysis are presented in Fig. 4(e)i for synthetic cells prepared with the Alexa Fluor 647-labelled Bridge strand, with or without a (non-fluorescent) lipid shell. Regardless of the presence of a lipid shell, construct size increases as transcription progresses and reaches a maximum after approximately 8 hours, followed by a slight decrease in size which is less pronounced for the lipid-coated synthetic cells. The fluorescence intensity ratio ζ = I_cy / I_bkg should approximate the ratio of aptamer concentrations inside and outside the constructs. As might be expected, at early times we see a pronounced growth of ζ from its initial value of ∼ 1, due to localised Broccoli transcription within the synthetic cells. A maximum is then reached, followed by a steady decrease back towards ζ ∼ 1, due to aptamer leakage and the consequent increase of I_bkg (also notable in Figs. S9 and S10 and Supporting Videos V8-V13), combined with the progressive reduction in polymerase activity. A clear difference is observed between lipid-coated and non-coated constructs, with the former displaying a higher maximum ζ-value, which is also reached at later times. These differences indicate that, albeit permeable, the lipid shell is able to slow down outward diffusion of the Broccoli aptamer. This hypothesis is consistent with the absolute values measured for I_bkg, which are higher for lipid-less constructs compared to the complete synthetic cells, as shown in Fig. 4(e)ii. The accumulation of the Broccoli aptamer within the synthetic cells (whether lipid-coated or not) hints at a potential mechanism for the observed size increase, where a transient osmotic pressure build-up from the transcript causes swelling of the low-density A2-rich material remaining around the cores of the constructs following removal of the B1-rich shell. A similar response was observed by Saleh et al. during the enzymatic digestion of DNA hydrogel droplets, where the formation of internal cavities was ascribed to osmotic pressure from disassembled DNA fragments. 20
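A similar sketch illustrates the ζ analysis; the intensity traces below are synthetic stand-ins, not fits to the data, and serve only to show how the ratio and the timing of its maximum would be extracted from measured I_cy and I_bkg time series.

```python
import numpy as np

# Hypothetical time series: mean Broccoli intensity in the cytoplasm (I_cy)
# and in the surrounding solution (I_bkg), sampled every 30 min over 16 h.
t_hours = np.arange(0, 16, 0.5)
i_cy = 100 * (1 - np.exp(-t_hours / 2)) * np.exp(-t_hours / 12) + 10
i_bkg = 40 * (1 - np.exp(-t_hours / 6)) + 10

zeta = i_cy / i_bkg                  # proxy for the aptamer-concentration ratio
t_peak = t_hours[np.argmax(zeta)]    # time at which localisation is maximal
print(f"max zeta = {zeta.max():.2f} at t = {t_peak:.1f} h")
```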
Conclusion
In summary, we have presented a strategy for constructing cell-like architectures that mimic multiple characteristics of biological cells, including a porous lipid-based shell, a cytoplasm-like cavity, and dense membrane-less internal compartments. These biomimetic devices are produced via bulk self-assembly of amphiphilic DNA nanostructures that undergo phase separation, combined with liposome deposition and phase-specific etching, hence negating the need for complex microfluidics or emulsion-based methods. The membrane-less organelles can be modified to host a DNA template. This template can be transcribed to produce RNA, whose localised synthesis causes a swelling response in the cytoplasm and lipid shell of the synthetic cells, due to transient osmotic pressure build-up. The ability to initially accumulate, and then progressively release, nucleic acids of tailored sequence could make our solution a valuable starting point for the development of therapeutic agents, e.g. for vaccines 74 and gene therapy. 75 Our proof-of-concept implementation dovetails the structural and dynamic control afforded by DNA nanotechnology with amphiphile self-assembly, phase-separation phenomenology, and in vitro transcription, hence exemplifying how increasingly complex architectures and responses can be engineered from the bottom up when complementary molecular tools are synergistically combined. The synthetic cell assembly strategy we outline is general, and can be systematically expanded by localising functional moieties other than the DNA templates in the co-existing phases, e.g. grafted enzymes 76 or nanoparticles. 77 Despite being used here only to form a semi-permeable shell, the liposomes could further be targeted with lipid-specific functional modifications, such as membrane receptors, while their ability to retain contents could be exploited to encapsulate and conditionally release molecular cargoes. This modularity and design versatility further strengthens the applicative outlook of our solution, potentially unlocking the design of multi-functional therapeutic synthetic cells that combine nucleic acid synthesis with targeting abilities and the possibility of co-delivering small-molecule drugs and macromolecules encapsulated in the liposomes and/or embedded in the DNA matrix. 46

Acknowledgement

LM, LDM, and DT acknowledge support from the European Research Council (ERC) under the Horizon 2020 Research and Innovation Programme (ERC-STG No 851667 - NANOCELL). GF acknowledges funding from the Department of Chemistry at Imperial College London. MPP acknowledges support from a UK Research and Innovation New Horizons Grant (EP/V048058/1) and an EPSRC Doctoral Prize Fellowship (EP/W524323).
AL and LDM acknowledge support from a Royal Society Research Grant for Research Fellows (RGF/R1/180043).
LDM also acknowledges support from a Royal Society University Research Fellowship (UF160152). MJB is supported by a Royal Society University Research Fellowship (URF/R1/180172) and acknowledges funding from a Royal Society Enhancement Award (RGF/EA/181009) and an EPSRC New Investigator Award (EP/V030434/2). The authors acknowledge the Facility for Imaging by Light Microscopy (FILM) at Imperial College London and thank Stephen Rothery for his assistance at the facility. The authors thank Elisa Franco for useful feedback on the manuscript.
Continuous patrolling in uncertain environment with the UAV swarm
Research on unmanned aerial vehicle (UAV) swarms has developed rapidly in recent years; in particular, UAV swarms equipped with sensors are becoming a common means of achieving situational awareness. Because UAV swarms with complex control structures are currently under-researched, we propose a patrolling task planning algorithm for a UAV swarm with a double-layer centralized control structure operating in an uncertain and dynamic environment. The main objective of the UAV swarm is to collect as much environment information as possible. To summarize, the primary contributions of this paper are as follows. We first define the patrolling problem. After that, the patrolling problem is modeled as a Partially Observable Markov Decision Process (POMDP) problem. Building upon this, we put forward a myopic and scalable online task planning algorithm. The algorithm contains an online heuristic function, a sequential allocation method, and the mechanism of bottom-up information flow and top-down command flow, reducing the computation complexity effectively. Moreover, as the number of control layers increases, this algorithm guarantees performance without increasing the computation complexity for the swarm leader. Finally, we empirically evaluate our algorithm in specific scenarios.
Introduction
UAVs have developed rapidly in recent years [1,2], with applications such as agricultural plant protection, pipeline inspection, fire surveillance and military reconnaissance. In August 2016, Vijay Kumar put forward the "5s" development trend of UAVs: small, safe, smart, speedy and swarm. In particular, swarm intelligence [3,4] is the core technology of the UAV swarm, attracting more and more researchers. The study of swarms began with the behavior study of insect communities by Grasse in 1953 [5]. For example, the behavior of a single ant is quite simple, but an ant colony composed of these simple individuals shows a highly structured social organization, which can accomplish complex tasks far beyond an individual's ability.
The UAV swarm considered here is a large-scale multi-agent system [6] with complex relationships. Complex relationships can generate complex behaviors, adapting to complex environments and accomplishing complex missions. Compared to a small-scale multi-UAV system, the UAV swarm holds many new advantages, such as lower cost, higher decentralization, higher … time as possible based on our algorithm. Additionally, our algorithm has practical significance.
The paper is organized as follows. In Section 1, we introduce the background of our research. In Section 2, the related literature is reviewed. In Section 3, we formally define the UAV swarm patrolling problem. Given this, the UAV swarm patrolling problem is formulated as a POMDP in Section 4; in that section, the patrolling algorithm is provided to calculate policies for every sub-swarm leader and information gathering UAV. After that, we put forward a proof of the decision-making mechanism and corollaries about scalability and the performance bound in Section 5. In Section 6, our algorithm is evaluated empirically through simulation experiments, by comparison with benchmark algorithms on the same problem background. Finally, we conclude and point out further research work in Section 7.
Related work
In this section, we review related work on dynamic environment models, command and control structures, and approaches to the patrolling task planning problem.
Generally speaking, approaches that gather situational awareness without considering threats are typically categorized as information gathering problems, where agents aim to continuously collect and provide up-to-date situational awareness. One of the challenges in this type of problem is to predict the information at other coordinates in the environment with limited data. As for the environment model, Gaussian Processes [15] have often been used in recent years, effectively describing the space-time relationship of the environment. Additionally, a topology graph is an abstract way to model the environment from a different perspective. Compared to a topology graph, Gaussian Processes display more details about the environment; however, a topology graph abstracts the core elements of the environment, which helps to concentrate on the research object. As for environmental dynamics, most environment models in previous work are static [16]. Presently, Markov chains are widely used to model non-static random environment objects, such as the physical behavior of wireless networks [17], the storage capacity of communication system channels [18] and the communication channel sensing problem [19]. In these papers, the Markov model is given somewhat different assumptions. As for the patrolling problem with the UAV swarm, the Markov chain is one of the most popular models. For instance, the ground target is modeled as an independent two-state Markov chain in paper [20]. Paper [21] models the patrolling environment with a threat state and an information state as a K-state Markov chain. Paper [22] uses the Markov chain to represent the hidden movement of targets. In this paper, we assume the UAV swarm patrolling environment is a topology graph that changes according to a K-state Markov chain.
Due to the large number of UAVs, the command and control structure of the UAV swarm should be taken into consideration. Nowadays, there are many control structures for the inner-loop controller of a UAV [23,24], which are different from our research; what we are concerned with is the relationship among UAVs in the swarm. Generally speaking, control structures of the swarm can be divided into general structures and computable structures. General structures are coarse-grained and can be applied to a variety of fields; in general structures, the research object is described by qualitative methods, lacking quantitative analysis. The AIR [25] model divides control structures into four basic patterns: the directed control structure, the acknowledged control structure, the virtual control structure and the collaborative control structure. The 4D/RCS [26] model provides a theoretical basis for unmanned ground vehicles on how their software components should be identified and organized. The 4D/RCS is a hierarchical deliberative architecture that plans up to the subsystem level to compute plans for an autonomous vehicle driving over rough terrain. Paper [27] proposes a scalable and flexible architecture for real-time mission planning and dynamic agent-to-task assignment for the UAV swarm. Compared to general control structures, computable control structures are fine-grained and quantitative. For example, aiming at the centralized control structure and the decentralized control structure, paper [28] introduces three methods to solve cooperative task planning for the multi-UAV system. Paper [29] proposes a task planning method for a single-layer centralized control structure in a dynamic and uncertain environment. However, most computable control structures are presently single-layer. Thus, in order to effectively manage large-scale UAV systems, computable control structures with complex relationships should be taken into consideration.
There are many approaches to solve the task planning problem [30], such as mathematical programming, the Markov decision process (MDP) and game theory [28]. As for the continuous information gathering problem, MDP-based algorithms are more appropriate due to the property of multi-step programming. For instance, in fully observable environments, paper [31] proposes an MDP-based algorithm to compute policies for all the UAVs. Moreover, POMDP and Decentralized POMDP (Dec-POMDP) [32] are widely applied to partially observable environments. However, most of the research on the patrolling problem of the UAV swarm uses single-layer control structures, and our work in this paper mainly extends to the double-layer control structure. Due to the exponential growth in the number of possible courses of action of the UAVs, solving this formulation using current POMDP solvers [33] is hard. Partially Observable Monte Carlo Planning (POMCP) [34] extends some benchmark algorithms to solve multi-agent POMDPs. POMCP breaks the curse of dimensionality and the curse of history, providing a computationally efficient best-first search that focuses its samples in the most promising regions of the search space. However, for the large-scale multi-agent patrolling problem, the state space is still too large to apply POMCP to the multi-POMDP problem directly.
The UAV swarm patrolling problem
In this section we present a general patrolling problem formalization of the UAV swarm with double-layer centralized control structure. Here, we introduce the patrolling problem of the UAV swarm in three aspects: overview of the patrolling problem, the physical environment and patrolling UAVs.
Overview of the patrolling problem
The environment is modeled as an upper-layer environment and a lower-layer environment for the different decision makers; both correspond to the same real environment. The control structure likewise falls into an upper-layer control structure and a lower-layer control structure for the different decision makers. There are three types of UAVs: the swarm leader, the sub-swarm leader and the information gathering UAV (I-UAV for short). The swarm leader and the sub-swarm leaders are decision makers. The upper-layer environment, the lower-layer environment and the three types of UAVs are shown in Fig 1. The lower-layer environment provides information for the sub-swarm leaders, and the I-UAVs follow the sub-swarm leaders' commands. In turn, the sub-swarm leaders provide information for the swarm leader and follow the swarm leader's commands. The difference between the two layers lies in the granularity of time, layout graph, action, and information belief.
The swarm leader is represented by a blue hexagon. There are several sub-swarms in a swarm, and every sub-swarm contains several I-UAVs. The leader of a UAV swarm is called the swarm leader, the leader of a sub-swarm is called the sub-swarm leader, and UAVs which are directly subordinate to a sub-swarm leader are called I-UAVs. In reality, the swarm leader may be a high-intelligence UAV in the UAV swarm, a ground control station, or an early warning aircraft. The main function of the swarm leader is to allocate courses of action to each sub-swarm leader. Sub-swarm leaders are represented by yellow five-pointed stars. They play the role of actors in the upper-layer environment, while they are decision makers in the lower-layer environment, each leading a sub-swarm and allocating courses of action to its I-UAVs. I-UAVs are represented by red rhombuses and are directly controlled by their superior sub-swarm leader. The function of an I-UAV is to collect environmental information. Additionally, the upper-layer control structure and the lower-layer control structure are both centralized control structures, and there are no interactions between UAVs with peering relationships. Here, let l denote the lower-layer environment, h the upper-layer environment, u the sub-swarm leader, and w the I-UAV. The meanings of the symbols used in this paper are shown in Table 1.
The physical environment
The physical environment is defined by its spatial-temporal and dynamic properties, encoded by the lower-layer environment and the upper-layer environment based on the control structure, specifying how and where UAVs can move. In fact, the physical environment is an area of interest, such as a mountain forest, a battlefield, or farmland, where people urgently need continuous intelligence information. Each vertex in the undirected graph refers to an area in reality, and an edge indicates that two vertices are connected. Definition 1 (Layout graph) The layout graph is an undirected graph G = (V, E) that represents the layout of the physical environment, where the set of spatial coordinates V is embedded in Euclidean space, and the set of edges E denotes the movements that are possible.
Our model contains the upper-layer layout graph and the lower-layer layout graph, denoted as G_h and G_l respectively. The upper-layer layout graph and the lower-layer layout graph correspond to the same physical environment, and there is a correspondence between G_h and G_l. Definition 2 (Information level) The information level qualitatively represents the content of the information of interest, denoted as I_k ∈ {I_1, I_2, ..., I_K}, where K is the number of levels. The information level vector is denoted as I = [I_1, I_2, ..., I_K]. Each vertex has a certain information level at a time. We regard the physical environment as dynamic and partially observable, so the information level of each vertex changes with time. Specifically, an I-UAV can only access its current location in G_l and gather the information there. When an I-UAV visits a vertex, the information level of this vertex is reset to I_1; in other words, there is no more new information immediately after the most recent visit.
Definition 3 (Information value)
The information value is a quantification of the information level, denoted as f(I_k), I_k ∈ {I_1, I_2, ..., I_K}. The function f : I_k → R+ assigns an information value to each information level. The information value vector is denoted as F = [f(I_1), f(I_2), ..., f(I_K)].
In order to reduce the decision complexity of the swarm leader, the significant and interesting information is extracted from the lower-layer layout graph. Moreover, the information value of vertices that UAVs haven't visited for some time may increase. Thus, we regard the function f(·) as monotonically increasing. The information value transition matrix P is as follows:

P = [p_ij]_{K×K}, where p_ij = Pr(I(t+1) = I_j | I(t) = I_i).   (1)

Table 1. A summary of the notation used throughout this paper.

  G      An undirected graph encoding the physical layout of the environment (Definition 1)
  I      The information level of each node (Definition 2)
  f(I)   The information value of each node (Definition 3)
  γ      The discounting factor
  C_k    The policy set of k allocated agents in the sequential allocation method

Assumption 1 The change of information value obeys an independent and discrete-time multi-state Markov chain according to Eq 1.
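To illustrate Assumption 1, the sketch below simulates the information-level dynamics of a few vertices under a hypothetical 5-level transition matrix. The entries of P are placeholders chosen only for illustration, since the paper merely assumes that P is known in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-level transition matrix P (rows sum to 1); illustrative only.
P = np.array([[0.6, 0.4, 0.0, 0.0, 0.0],
              [0.0, 0.6, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.6, 0.4, 0.0],
              [0.0, 0.0, 0.0, 0.6, 0.4],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
F = np.array([0, 1, 2, 3, 4])             # information value of each level

def step(levels):
    """Advance the information level of every vertex by one time step (Eq 1)."""
    return np.array([rng.choice(5, p=P[k]) for k in levels])

levels = np.zeros(10, dtype=int)          # 10 vertices, all starting at I_1
for _ in range(3):
    levels = step(levels)
print(levels, F[levels])                  # unvisited vertices drift upward
```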
Here, we assume the information value state transition matrix P is known in advance; P_h represents the upper-layer transition matrix, while P_l represents the lower-layer transition matrix. Additionally, the concept of stochastic dominance is widely used in many applications [35], such as economics, finance, and statistics. Specifically, the first-order stochastic dominance of two K-dimension vectors x = {x_1, x_2, ..., x_K} and y = {y_1, y_2, ..., y_K} is defined as x ≽_1 y if:

∑_{k=j}^{K} x_k ≥ ∑_{k=j}^{K} y_k, for all j ∈ {1, 2, ..., K}.
Assumption 2 Information value vector F follows stochastic dominance.
Intuitively, if a vertex v has higher information value than other vertices currently, the vertex v might have higher information value at the next moment.
Assumption 3 Information value transition matrix P is a monotone matrix.
Generally, if there are no UAVs gathering information in an area, the unknown information of this area may increase with time. A monotone matrix [36] P satisfies the condition that its rows, viewed as probability vectors, are increasing in the stochastic dominance order, i.e. P_{i,·} ≼_1 P_{i+1,·} for all adjacent rows. As a consequence, for two compact information belief vectors (see Eq 13) b_n and b_n', if b_n ≽_1 b_n', then b_n·P ≽_1 b_n'·P [17]. If there are no UAVs visiting vertices v_n and v_n' at the moment, their information belief vectors will also maintain stochastic dominance at the next moment. Additionally, if b_n ≽_1 b_n', then b_n·F ≥ b_n'·F, which means that the stochastically dominating belief vector may have higher expected information value.
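Both properties can be verified numerically. The sketch below implements the dominance test as reconstructed above together with the row-wise monotonicity check; the matrix entries and the numerical tolerance are our own illustrative choices.

```python
import numpy as np

def dominates(x, y):
    """First-order stochastic dominance x >=_1 y for probability vectors:
    every upper tail of x is at least as heavy as the same tail of y."""
    tx = np.cumsum(x[::-1])[::-1]          # sum_{k=j}^{K} x_k for each j
    ty = np.cumsum(y[::-1])[::-1]
    return bool(np.all(tx >= ty - 1e-12))

def is_monotone(P):
    """A matrix is monotone if its rows increase in the dominance order."""
    return all(dominates(P[i + 1], P[i]) for i in range(len(P) - 1))

P = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.6, 0.4],
              [0.0, 0.0, 1.0]])
b, b2 = np.array([0.2, 0.3, 0.5]), np.array([0.5, 0.3, 0.2])
print(is_monotone(P))                      # True
print(dominates(b @ P, b2 @ P))            # dominance is preserved by P
```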
Definition 4 (Time)
Time is modelled by a discrete set of temporal coordinates t 2 {0, 1, . . .}, henceforth referred to as time steps.
The lower-layer time step is denoted as t_l, and the upper-layer time step is denoted as t_h. Here, a time step contains an OODA (Observation, Orientation, Decision, Action) loop for all the agents in the same layer, and there is a correspondence between the two time scales.
Definition 5 (Time Step Ratio) Time step ratio is the ratio of the real time of one upper-layer time step to that of one lower-layer time step, denoted as M.
The relationship between the upper-layer time step and the lower-layer time step is formalized below. Definition 6 (Corresponding Relationship of Time) Let the functions Θ_t(·) and Θ_t^{-1}(·) denote the corresponding relationship of time.
The corresponding relationship between t_h and t_l is as follows:

t_h = Θ_t(t_l) = Floor(t_l / M),

where Floor denotes that the fraction is rounded down. We use the term "region block" to represent a square area in graph G_l. Each vertex v in G_h corresponds to a region block. The side length of a region block is d_r, so a region block includes d_r × d_r lower-layer vertices.
Example 2 The lower-layer layout graph G_l includes 300 × 200 vertices, and each region block G_r includes 20 × 20 vertices. Then the upper-layer layout graph G_h is a rectangular area with 15 × 10 vertices. Therefore, the hierarchical environment greatly reduces the decision-making complexity for the swarm leader.
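As a worked illustration of Definitions 5 and 6 and of the region-block correspondence, the sketch below uses the numbers from Example 2; the row-major indexing convention is our own assumption, not part of the paper.

```python
import math

M = 20      # time step ratio (Definition 5); the paper sets M = d_r
d_r = 20    # side length of a region block, in lower-layer vertices
W = 300     # width of the lower-layer layout graph (Example 2)

def theta_t(t_l):
    """Upper-layer time step corresponding to lower-layer step t_l."""
    return math.floor(t_l / M)

def region_block(v_l):
    """Upper-layer vertex (region block) containing lower-layer vertex v_l,
    with vertices indexed row-major on a W-wide grid."""
    row, col = divmod(v_l, W)
    return (row // d_r) * (W // d_r) + col // d_r

print(theta_t(45))        # lower-layer step 45 -> upper-layer step 2
print(region_block(305))  # second row, column 5 -> region block 0
```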
The patrolling UAVs
There are three types of UAVs, namely, the swarm leader, the sub-swarm leader, and the information gathering UAV.
Definition 8 (Swarm leader) A swarm leader is an entity capable of making decisions for all the sub-swarm leaders.
The role of the swarm leader is to manage the whole UAV swarm; its function is similar to that of a ground workstation or an early warning aircraft. However, because of the hierarchical control structure, the swarm leader mainly focuses on the state of the sub-swarm leaders and the upper-layer environment. In this paper, we regard the communication ability between the sub-swarm leaders and the swarm leader as strong enough, regardless of the communication distance between them.
Definition 9 (Sub-swarm leader) A sub-swarm leader is a physical mobile entity capable of making decisions for its subordinate UAVs. The sub-swarm leader is denoted as u
The behaviors of a sub-swarm leader can be divided into a decision-making part and an action-executing part. The sub-swarm leader is an actor in G_h, following the commands of the swarm leader; however, the sub-swarm leader becomes a decision maker in G_l, controlling several I-UAVs. Briefly speaking, the sub-swarm leader plays the role of a bridge, connecting the swarm leader and the I-UAVs.
Actions of sub-swarm leaders are atomic in G_h. It means that a sub-swarm leader can move from an upper-layer vertex v^h_i to a neighboring vertex v^h_j ∈ adj_{G_h}(v^h_i) in one time step t_h. Meanwhile, different sub-swarm leaders can visit the same vertex at the same time. The sub-swarm leader performs the same actions in G_l as it performs in G_h. Based on formula 5, the time step ratio M must be no less than the side length d_r of a region block, in order to ensure that the sub-swarm leader can reach the target area in time. In this paper, we set M = d_r.
Definition 10 (Information Gathering UAV) An information gathering UAV (I-UAV for short) is a physical mobile entity capable of taking observations. The I-UAV is denoted as
I-UAVs collect environment information by visiting lower-layer vertices. I-UAVs are distributed in the lower-layer layout graph G_l, and different I-UAVs can visit the same lower-layer vertex v_l at the same time. The movement of an I-UAV in G_l is atomic: we assume that the I-UAV is fast enough to move from one vertex to an adjacent vertex within one time step in reality. In addition, the cost of I-UAV movement is not taken into account in this paper.
If an I-UAV visits a lower-layer vertex v_l, it automatically gathers the current information value of this vertex. After the visit, the information level of vertex v_l is reset to I_1, indicating that the information of v_l has been collected and that there is no new information currently. Nevertheless, the environment changes dynamically with time according to formula 1. I-UAVs can only access the information at the current moment and cannot observe the state of a vertex at the next moment.
Assumption 4 The communication capacity of I-UAVs is limited. The feasible area of the I-UAVs is a square region block centered on the current position of their superior sub-swarm leader.
In other words, the feasible area for the I-UAVs moves with the movement of the sub-swarm leader in G_l.
Definition 11 (Corresponding Relationship of Action) Let Θ_a denote the corresponding relationship of sub-swarm leader actions between the upper-layer layout graph and the lower-layer layout graph. For the convenience of description, we define the concept of a team. There are two types of teams in our model: teams of I-UAVs and teams of sub-swarm leaders. Policies of agents are decided by the team leader. The team leader is the swarm leader in G_h, while it is the sub-swarm leader in G_l.
Definition 12 (Team)
The team is a multi-agent system with a single-layer centralized control structure.
The UAV swarm myopic patrolling algorithm
As for the centralized control structure, the information flow is bottom-up, while the control flow is top-down. In this section, we introduce the UAV swarm patrolling algorithm from the perspective of the control flow. Given the problem described in the previous sections, we first introduce the multi-agent patrolling formulation. Then we introduce the objective of the patrolling problem. After that, we introduce the UAV swarm patrolling algorithm.
Team of agents patrolling model
The swarm patrolling model can be divided into a multiple sub-swarm leaders patrolling model and a multiple I-UAVs patrolling model. Because they have the same control structure and similar environments, the formulations of the two models are similar. Without loss of generality, we take a team as an example. The team leader obtains the joint observation values, takes the joint actions, and gets the joint return values. The multi-agent patrolling problem can thus be modeled as an MPOMDP problem, while the MPOMDP problem can be regarded as a POMDP problem, denoted as ⟨S, A, O, F, Ω, R, B⟩:

• S is the joint state set of all the agents in the team, including the joint position state set and the joint information state set, denoted as S = [S^V, S^I]. A joint position state is denoted as s^V = [s^V_1, s^V_2, ..., s^V_U] ∈ S^V, and a joint information state is denoted as s^I = [s^I_1, s^I_2, ..., s^I_U] ∈ S^I.

• A is the joint action set of all the agents in the team. A joint action is denoted as a = [a_1, a_2, ..., a_U] ∈ A. The team leader determines what actions the agents should perform. Specifically, the action for an agent is a movement from its current vertex to an adjacent vertex, or remaining at its current vertex.
• O is the joint observation set of all the agents in the team, denoted as o = [o_1, o_2, ..., o_U] ∈ O. We set o = s, which means the observation is equal to the current information state.
• F is the joint state transition function set, including the position state transition function and the information state transition functions, denoted as F = [F^V, F^I_1, ..., F^I_{|V|}]. As for the position transition function, an agent always reaches its target neighbour vertex, so the position state transition function is deterministic:

Pr(s^V(t+1) = s^V_goal | s^V(t), a(t)) = 1,

where s^V_goal denotes the expected destination. In addition, the information state transition function is:

Pr(s^I(t+1) = s^I_goal | s^I(t)) = P(s^I(t), s^I_goal),

where s^I_goal denotes the expected target state. This transition function is based on Eq 1.
• Ω is the joint observation function set of all the agents in the team. Since we set o = s, the observation function is deterministic: Pr(o(t) = s(t)) = 1.

• R is the joint reward function set of all the agents in the team. The reward of the swarm is equal to the sum of the rewards of all the sub-swarms, the reward of a sub-swarm is equal to the sum of the rewards of its I-UAVs, and the reward of an I-UAV is equal to the information value of the vertex it visits currently:

R(t) = f(s^I(t)).   (9)
• B is the compact information belief vector, a compact representation of the standard information belief vector. The standard information belief vector is the posterior probability distribution over the possible information states. The compact belief is proposed according to Assumption 1, i.e. that the information state of each vertex changes independently. The standard information belief is a sufficient statistic for the design of the optimal policy at any time step [37], and the compact information belief B is an equivalent description of the standard information belief [29]. Without loss of generality, we take vertex v_n as an example. Its belief is:

b_n(t) = [p^n_{I_1}(t), p^n_{I_2}(t), ..., p^n_{I_K}(t)],   (11)

where p^n_{I_k}(t) is the posterior probability of information level I_k at time step t, and ∑_{k=1}^{K} p^n_{I_k}(t) = 1. Now the number of information states of the lower-layer environment reduces to ∑_{n=1}^{|V|} K_n, decreasing the computation complexity and memory complexity significantly. The update function of b is:

b_n(t+1) = Λ if v_n = v, and b_n(t+1) = b_n(t) · P otherwise,   (12)

where Λ denotes the unit vector whose first element is 1, v is the vertex visited by an agent, and v_n is a vertex in G. Moreover, let B_l denote the lower-layer compact information belief (L-belief for short), and let B_h denote the upper-layer compact information belief (H-belief for short). The upper-layer information belief derives from the lower-layer information belief; let Θ_b(·) denote the relationship between the H-belief and the L-belief:

B_h(t_h) = Θ_b(B_l(t_l)),   (13)

where t_h = Θ_t(t_l). The qualitative criterion for the extraction method is to reduce the computation complexity while retaining the sufficient and key information. We therefore use an average filter, which is simple while containing the general information of the lower-layer environment. Specifically, taking an upper-layer vertex v^h_n (corresponding to a region block) as an example, the relationship between the H-belief and the L-belief is:

p^{h,n}_{I_k}(t_h) = (1 / N_r) ∑_{i=1}^{N_r} p^{l,i}_{I_k}(t_l),   (14)

where t_h = Θ_t(t_l), and N_r is the number of lower-layer vertices in the region block. p^{l,i}_{I_k}(t_l) represents the probability that the information level of vertex v^l_i is I_k at t_l, and p^{h,n}_{I_k}(t_h) is the probability that the information level of v^h_n is I_k at t_h. Example 4 Taking a region block as an example, it corresponds to an upper-layer vertex v^h_n. This region block includes four lower-layer vertices, denoted as {v^l_1, v^l_2, v^l_3, v^l_4}.
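The belief machinery of Eqs 12 and 14 can be prototyped in a few lines. The sketch below is illustrative only: it updates the L-beliefs of a four-vertex region block after a single I-UAV visit and then computes the block's H-belief by the average filter, using a placeholder transition matrix.

```python
import numpy as np

K = 5
Lam = np.eye(K)[0]                         # unit belief: information reset to I_1

def update_beliefs(B, visited, P):
    """One-step L-belief update (Eq 12): visited vertices are reset to Lam,
    while the others are propagated through the transition matrix P."""
    B_next = B @ P                         # b_n(t+1) = b_n(t) P if not visited
    B_next[list(visited)] = Lam            # b_n(t+1) = Lam     if visited
    return B_next

def h_belief(B_l, block_indices):
    """H-belief of one upper-layer vertex: average of the L-beliefs of the
    N_r lower-layer vertices in its region block (average filter, Eq 14)."""
    return B_l[block_indices].mean(axis=0)

P = np.array([[0.6, 0.4, 0, 0, 0], [0, 0.6, 0.4, 0, 0], [0, 0, 0.6, 0.4, 0],
              [0, 0, 0, 0.6, 0.4], [0, 0, 0, 0, 1.0]])
B = np.tile(Lam, (4, 1))                   # region block of 4 vertices, all at I_1
B = update_beliefs(B, visited={0}, P=P)    # one I-UAV visits vertex 0
print(h_belief(B, [0, 1, 2, 3]))           # upper-layer belief of the block
```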
The objective of the patrolling problem

Definition 13 (Policy) The policy is a set of courses of action made by the team leader, denoted as π.
In addition, let π_D denote a policy for which the horizon of the team leader (the number of time steps we look ahead) is D. Let D_h denote the horizon of the swarm leader, and D_l the horizon of the sub-swarm leader. The policy for an agent is defined as follows:

π_D(t) = [a(t), a(t+1), ..., a(t+D−1)].   (15)

Moreover, the patrolling objective of the UAV swarm is to acquire the maximum reward, and our algorithm is to find policies which acquire the maximum reward:

max ∑_{t_l} γ^{t_l} ∑_{i=1}^{U} ∑_{j=1}^{W_i} R^l_{i,j}(o_l(t_l)),   (16)

where R^l_{i,j}(o_l(t_l)) is the reward of I-UAV w_{i,j} when the observation is o_l(t_l), U is the number of sub-swarms, W_i is the number of I-UAVs in the i-th sub-swarm, and γ ∈ [0, 1] is the discount factor.
The swarm patrolling algorithm
In this section, we introduce the patrolling algorithm. Firstly, we propose the patrolling algorithm of single agent. After that, the team of agents patrolling algorithm is put forward based on single agent patrolling algorithm. Finally, we put forward the swarm patrolling algorithm.
Single agent patrolling algorithm (SAPA).
To effectively predict the information value state of the layout graph, we use the character of the environment: based on formula 1, we know the information value state transition property. So we propose a heuristic function, denoted as H(t), to predict the reward after performing a policy:

H(t) = ∑_{k=0}^{D−1} γ^k · b̂(t+k) · F,

where b̂(t+k) is the expected belief of the vertex that may be visited by the agent at t + k.
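Under this reconstruction of H(t), evaluating the heuristic for one candidate D-step path amounts to propagating the current belief of each visited vertex forward and scoring it against F. A minimal sketch with a toy two-level chain follows; all values are hypothetical.

```python
import numpy as np

gamma = 0.9

def heuristic(path_beliefs, P, F):
    """H(t) = sum_{k=0}^{D-1} gamma^k * b_hat(t+k) . F for one candidate path.

    path_beliefs[k] is the current belief of the vertex the policy visits at
    t + k; it is propagated k steps through P to obtain b_hat(t+k)."""
    H = 0.0
    for k, b in enumerate(path_beliefs):
        b_hat = b @ np.linalg.matrix_power(P, k)   # expected belief at t + k
        H += gamma ** k * (b_hat @ F)              # discounted expected reward
    return H

P = np.array([[0.6, 0.4],
              [0.0, 1.0]])                 # toy 2-level information chain
F = np.array([0.0, 1.0])                   # information values of the levels
b0 = np.array([1.0, 0.0])                  # a freshly visited vertex (level I_1)
print(heuristic([b0, b0, b0], P, F))       # score of a 3-step path, ~0.878
```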
The update function of b̂(t+k) is based on formula 12. However, the information transition matrix P differs between teams of sub-swarm leaders and teams of I-UAVs: if the agent is an I-UAV, the team leader is a sub-swarm leader in G_l, and P_l = P; if the agent is a sub-swarm leader, the team leader is the swarm leader in G_h, and P_h = P^M, because the time step ratio is M.

4.3.2 Team of agents patrolling algorithm (TAPA).
The team of agents patrolling problem is an MPOMDP problem, which can be simplified to a POMDP. The joint action space of this POMDP is the Cartesian product of the actions of all the sub-swarm leaders, and it is generally hard to solve this formulation due to its huge state space. To deal with this problem, the sequential allocation method is used to decrease the state space. In the sequential allocation method, there are two types of double counting: synchronous double counting and asynchronous double counting.
Synchronous double counting occurs when a vertex is visited by different agents at the same time. In this condition, the environment information would be counted redundantly. We regard that the first I-UAV allocated to visit the vertex acquires the information value, while the other I-UAVs visiting the vertex get nothing.
Asynchronous double counting occurs when the j-th (i < j) agent decides to visit vertex v at t_1 (t_1 < t_2) after the i-th agent has already decided to visit this position at t_2, where t_1, t_2 ∈ {0, 1, ..., D − 1}. In this condition, the expected value of vertex v is overestimated, because the j-th agent does not consider that the i-th agent has decided to visit the vertex. So the penalty factor is introduced to reduce the expected information value of vertex v for the j-th agent.
Definition 14 (Penalty Factor) The penalty factor, denoted as p̃, is the difference between the expected reward and the revised expected reward in the condition of asynchronous double counting.
The formula is as follows:

p̃ = r_expected − r_revised,   (17)

where r_expected ∈ R+ denotes the expected reward of the i-th agent without regard to the visit of the j-th agent, and r_revised ∈ R+ denotes the revised expected reward of the i-th agent with regard to the visit of the j-th agent:

r_revised = γ^{t_2} · b̃(t_2) · F,   (18)

where b̃(t_2) denotes the revised H-belief or L-belief at t_2, which is as follows:

b̃(t_2) = Λ · P^{(t_2 − t_1)}.   (19)
Definition 15 (Revised Heuristic Function)
The revised heuristic function (H̃-function for short) is the heuristic function with the penalty factor added in, denoted as H̃(·).
The formula is as follows:

H̃(t) = H(t) − p̃_all,   (20)

where p̃_all is the sum of the penalty factors incurred when evaluating a policy. Now we describe the process of the sequential allocation algorithm. Firstly, the allocation sequence of the agents is sorted randomly. Secondly, we calculate the optimal policies of all the agents sequentially: when calculating the revised expected reward of the k-th agent, its current position v_k(t), the information belief vector B(t) and the already computed optimal policies C*_{k−1} are taken into account. The revised expected reward is equal to the revised heuristic function:

R̃(π_k | B(t), C*_{k−1}) = H̃(t).   (21)

The core loop of Algorithm 1 is:

6: for π ∈ Π_D(t) do
7:     Calculating B̂ and B̃ from t to t + D − 1
8:     Calculating the revised expected reward R̃ of π
9:     Comparing π with π*, storing the optimal policy
10: end for
11: Storing the optimal policy and path in C
12: end for
13: Returning actions a(t) of all the agents
14: end function

The sequential allocation method is to greedily compute policies for each single agent sequentially, instead of computing a joint policy for the team. The sequential allocation method [31] for multiple agents is defined as follows:

π*_k = arg max_{π_k} R̃(π_k | B(t), C*_{k−1}),   (22)
where R̃(·) is the revised expected reward function, B(t) is the compact belief vector of the vertices at t, and C*_k is the set of optimal policies computed for the 1st to the k-th agents, denoted as C*_k = {π*_1, ..., π*_k}, k ∈ {0, 1, ..., K − 1}, with C*_0 = ∅. The procedure of the team of agents patrolling algorithm is shown in Algorithm 1.
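The greedy conditioning on C*_{k−1} that underlies Eq 22 can be sketched compactly. The toy below deliberately simplifies a "policy" to a single target vertex and applies a full penalty for synchronous double counting; it illustrates only the sequential allocation idea, not the full Algorithm 1.

```python
def sequential_allocation(n_agents, policies, revised_reward):
    """Greedy sequential allocation: compute the optimal policy for each agent
    in turn, conditioning on the policies already fixed (the set C*_{k-1})."""
    C_star = []                                    # C*_0 is the empty set
    for _ in range(n_agents):
        best = max(policies, key=lambda pi: revised_reward(pi, C_star))
        C_star.append(best)                        # C*_k = C*_{k-1} + {pi*_k}
    return C_star

# Toy instance: a "policy" is a target vertex; the revised reward is the
# vertex value, fully penalised if an earlier agent already claimed it.
values = {0: 4.0, 1: 3.0, 2: 1.0}

def revised_reward(pi, chosen):
    return values[pi] - (values[pi] if pi in chosen else 0.0)

print(sequential_allocation(3, list(values), revised_reward))  # [0, 1, 2]
```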
In the beginning, the new information belief vector B(t) is computed based on formula 12 (for the lower-layer vertices) or formula 13 (for the upper-layer vertices). Then the optimal policies of all the agents are calculated sequentially: firstly, all the feasible policies are enumerated according to Assumption 4; secondly, the expected belief vector B̂ and the revised expected belief vector B̃ are calculated according to Eq 19; thirdly, the revised expected reward is calculated according to Eq 21; fourthly, after comparing the revised expected reward with the stored maximum reward, the optimal policy is updated and stored in C.
The UAV swarm patrolling algorithm (USPA).
The information flow and the command flow are the two main interactive processes between the different layers. Specifically, the information flow is a bottom-up process, while the command flow is a top-down process.
Firstly, the I-UAVs visit the lower-layer vertices and collect information value. The sub-swarm leaders calculate the information beliefs of all the vertices and transfer the lower-layer information belief B_l(t) to the swarm leader, after which the swarm leader calculates the upper-layer information belief vector B_h(t). The function for updating the lower-layer belief vector B_l(t) is based on formula 12, and the function for calculating the upper-layer belief vector B_h(t) is based on formula 13. Secondly, the swarm leader makes decisions for all the sub-swarm leaders, and each sub-swarm leader then makes decisions for its agents. The algorithm to calculate the policies π of the agents is based on the TAPA algorithm (see Algorithm 1). The UAV swarm patrolling algorithm is given as Algorithm 2.
Theoretical analysis
In this section, we analyse the performance of SAPA, TAPA, and USPA. Firstly, the performance of SAPA is qualitatively analysed. Then the performance of TAPA is analysed based on theory 1 and corollary 1. After that, we analyse the performance of USPA through corollary 2 and corollary 3.
As for single UAV patrolling, it is an open problem to design a patrolling algorithm for each UAV. SAPA may not yield the optimal policy; however, it is a myopic policy that exploits the dynamic property of the environment and still has heuristic capability. In particular, SAPA is time-saving compared with POMCP [34].
As for TAPA, the sequential allocation method is used to calculate policies, instead of computing the joint policies. The collected information value satisfies the properties of monotonic increase and diminishing increments [38], so our model still guarantees a lower limit of performance compared with joint policies [31,39]. Here, the method of joint policies is to calculate the best reward over the Cartesian product of the policies of all the agents.
The accumulated function for the swarm leader is denoted as Q_h(u), and the accumulated function for the i-th sub-swarm leader is denoted as Q_{l,i}(w). Theory 1 Let f : 2^E → R be a non-decreasing submodular set function [31]. The greedy algorithm that iteratively selects the element e ∈ E that has the highest incremental value with respect to the previously chosen elements I ⊆ E, until the resulting set I has the desired cardinality k, has an approximation bound f(I_G)/f(I*) of at least Bound(k) = 1 − ((k − 1)/k)^k, where I* ⊆ E is the optimal subset of cardinality k that maximises f.
Proof Functions Q_h(u) and Q_{l,i}(w) can be separated under certain conditions. When we take only the upper-layer environment into consideration, Q_h(u) is an independent function; when the swarm leader has made its decision and it is the sub-swarm leaders' turn to make decisions, Q_{l,i}(w) is an independent function. Due to the same decision-making mechanism, without loss of generality, we take Q_h(u) as an example. The non-decreasing property reflects the fact that adding more agents never reduces the observation value they receive as a team (since existing agents do not change their policies). To prove submodularity, for every pair of policy sets π′ ⊆ π″ ⊆ X and every policy π ∉ π″, π ∈ X, formula 27 must hold:

Q(π ∪ π″) − Q(π″) ≤ Q(π ∪ π′) − Q(π′).   (27)
Without loss of generality, we take policy π as an example. The right-hand side of formula 27 is equal to:

Q(π ∪ π′) − Q(π′) = Q(π | π′) + Q(π′) − Q(π′) = ∑_{i=0}^{D−1} ( γ^i · b̂_{π′}(t+i) · F ) − p̃_{π′},

while the left-hand side of formula 27 is equal to:

Q(π ∪ π″) − Q(π″) = Q(π | π″) + Q(π″) − Q(π″) = ∑_{i=0}^{D−1} ( γ^i · b̂_{π″}(t+i) · F ) − p̃_{π″}.

Generally speaking, to prove that this holds, we just need to prove that adding a policy π to a set of policies π″ instead of π′ reduces the reward and increases the penalty. Two situations may occur when a new policy π is added into π″.
In the second situation, there are some path cross points between π and π″ − π′. There are two cases for their path cross points: t_{c1} ≤ t_{c2} and t_{c1} > t_{c2}, where t_{c1} is the time at which π″ − π′ visits the cross point and t_{c2} is the time at which π visits it.
Thus, formula 27 is satisfied and Corollary 1 is proved.
Corollary 2 The reward lower bound of the centralized control model with k layers is (1 − 1/e)^k of the optimal reward.
Proof Without loss of generality, we take the double-layer control structure as an example. The information flow is bottom-up and is summarized at the swarm leader, which gives an evaluation of the whole reward. In Corollary 1, we proved the performance bounds of the different accumulated functions independently; here we take them as a whole.
In the upper-layer control structure, each region block corresponds to an upper-layer vertex and each sub-swarm is regarded as a mobile entity. When the swarm leader makes decisions, it regards each sub-swarm as able to gather the optimal reward of its region block. Nevertheless, we use the sequential allocation method for all the sub-swarm leaders. The approximate lower bound is:

Bound(W) = 1 − ((W − 1)/W)^W,

where W is the number of I-UAVs in the sub-swarm. As W → ∞, Bound(W) → 1 − 1/e. It means the sub-swarm leader can gather at least 1 − 1/e of the joint-policy reward in the region block. Similarly, the sequential allocation method is also used in the decision process of the swarm leader. The approximate lower bound is:

Bound(U) = 1 − ((U − 1)/U)^U,

where U is the number of sub-swarm leaders. As U → ∞, Corollary 2 is proved.
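The bounds used in this proof can be checked numerically:

```python
import math

def bound(k):
    """Greedy approximation bound for a non-decreasing submodular function:
    Bound(k) = 1 - ((k - 1) / k) ** k, which tends to 1 - 1/e as k grows."""
    return 1 - ((k - 1) / k) ** k

for k in (1, 2, 5, 30):
    print(k, round(bound(k), 4))           # 1.0, 0.75, 0.6723, 0.6383
print("limit:", round(1 - 1 / math.e, 4))  # ~0.6321
```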
Corollary 3 As the number of UAVs increases, the computation complexity for the swarm leader will not change.
Proof Without loss of generality, we take a UAV swarm with l layers as an example. Let each decision-making node manage N sub-nodes, let the horizon for each decision-making node be D, and let each action have K choices. There are N + N^2 + ... + N^l nodes (except the swarm leader) in the swarm. When making decisions for a sub-node, the number of possible action states is K^D. However, if the swarm leader made decisions for all the nodes in the swarm by the sequential computing method, the number of action states would be:

(N + N^2 + ... + N^l) · K^D.

In this paper, we allocate the decision-making process of the swarm leader to all the decision-making nodes. Each node only cares about the behaviors of its direct sub-nodes, so the number of states for a decision-making node is N · K^D. In other words, our algorithm greatly reduces the computation complexity for the swarm leader. Thus, Corollary 3 is proved.
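The counting argument of Corollary 3 can be made concrete with a toy calculation; the parameter values below are arbitrary.

```python
def nodes(N, l):
    """Number of nodes below the swarm leader in an l-layer tree where every
    decision maker manages N sub-nodes: N + N^2 + ... + N^l."""
    return sum(N ** i for i in range(1, l + 1))

N, l, K, D = 10, 2, 5, 1
flat = nodes(N, l) * K ** D   # swarm leader plans for every node itself
layered = N * K ** D          # each decision maker plans only for its N sub-nodes
print(flat, layered)          # 550 vs 50 action states per decision step
```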
Empirical evaluation
In this section, we evaluate the performance of our algorithm in an abstract multi-agent information gathering problem. Firstly, the case experiment is conducted by setting experience parameters. Secondly, we perform parameter sensitivity analysis experiment based on the case experiment. In the experiments, we focus on the macro planning process, other than how to control each UAV.
Case experiment
We consider a disaster response scenario where an earthquake happened in a suburban area [40], where rescuers need urgent continuous intelligence information. This section includes problem statement, calculation expectation, experiment setup, and experiment result.
6.1.1 Problem statement.
Earthquakes have catastrophic effects on people. After an earthquake, ground infrastructure in the disaster area may be destroyed. The UAV swarm is one of the most effective ways to acquire the latest real-time information quickly. In this scenario, a UAV swarm with a large number of UAVs is allocated to gather the newest information about the unknown environment. We assume the UAV swarm has good communication quality, and unforeseen circumstances such as communication interruption and mechanical breakdown are not taken into consideration. Note that we focus on the patrolling problem from a high-level perspective. The environment is modeled as the layout graph, with information attached to the vertices; a vertex in the layout graph corresponds to an area in the real world.
To effectively manage the UAV swarm, the swarm requires a command and control structure. In this scenario, we focus on the double-layer centralized structure. There is one swarm leader in the UAV swarm, making decisions for several sub-swarm leaders, and each sub-swarm leader controls its information gathering UAVs. The total process is as follows: firstly, the information gathering UAVs collect environment information; each sub-swarm leader then calculates the information belief of its layer and transfers it to the swarm leader. Secondly, the swarm leader calculates the information belief of the total environment and makes decisions for the sub-swarm leaders; after that the sub-swarm leaders make decisions for their subordinate information gathering UAVs.
6.1.2 Calculating expectation.
Some performance indicators, such as information value and time, are evaluated through the experiments. In this experiment, we mainly take the total information value and the swarm leader's decision time into consideration. On one hand, the goal of our model is to collect as much information as possible, and the total information value gathered by the I-agents reflects the overall performance of the algorithm. On the other hand, the decision time is an important performance indicator for evaluating the computation complexity of the algorithm. Meanwhile, we compare our algorithm with other algorithms. Theoretically, our algorithm not only gathers much information, but also requires less computing time for each decision maker.
There are three algorithms in Section 4. Intuitively, the team of agents patrolling algorithm (TAPA for short) consists of many single agent patrolling algorithms (SAPA for short), while the UAV swarm patrolling algorithm (USPA for short) is made up of several team of agents patrolling algorithms. Thus, we benchmark USPA against a random algorithm and a baseline algorithm. Specifically, these algorithms are as follows: • USPA represents the UAV swarm patrolling algorithm. The UAV swarm has a double-layer centralized command and control structure in USPA.
• POMCP represents Partially Observable Monte Carlo Planning [34]. It is a promising approach for online planning, and it can efficiently search over long planning horizons. The UAV swarm has single-layer centralized command and control structure in POMCP.
• RA represents the random algorithm. The agent moves to a random position adjacent to or remain at the agent's current position. The UAV swarm has single-layer centralized command and control structure in RA.
Experiment setup.
Parameters are set based on experience. We first introduce the parameters of the lower-layer environment of USPA, which correspond to the parameters of the environment of POMCP and RA, because POMCP and RA use a single-layer environment. Then we describe the specific parameters of the upper-layer environment of USPA.
The lower-layer environment is modeled as the lower-layer layout graph G_l. Let the target area be 40 million square meters, with each lower-layer vertex v_l corresponding to an area of 100 thousand square meters. The lower-layer layout graph therefore has 400 vertices, with a side length of 20 vertices. Each vertex v_l carries information, described by an information level and an information value. In the disaster response scenario, the newest information, such as the degree of damage to buildings, roads and people, needs to be collected and merged into a situation map of the disaster. Due to weather and aftershocks, the environment may change dynamically and uncertainly, so the disaster situation information of the target area changes dynamically over time. Here we focus on the degree of change of information: intuitively, the larger the degree of change, the more new information the area may contain. The information level is modeled with five levels: I_1 = no new information, I_2 = a little new information, I_3 = some new information, I_4 = lots of new information, I_5 = completely new information. The corresponding information value vector is set as f(I) = [0, 1, 2, 3, 4]. The initial information level of every vertex is set to I_1. Each UAV is abstracted as a patrolling agent moving on the layout graph, and 30 I-agents are allocated on the layout graph. We assume that it takes 5 minutes for all the agents in this layer to complete an OODA (Observation, Orientation, Decision, Action) cycle; that is, one time step t_l corresponds to 5 minutes in the real world. The horizon D_l for the leader is 1. Additionally, the reward acquired by an agent equals the information value of the vertex at that moment. Let the discount factor be γ = 0.9. In order to predict the information at other vertices, information beliefs are necessary. Let the initial information belief of every vertex be Λ = [1, 0, 0, 0, 0], with all vertices following the same information value transition matrix P_l.

The upper-layer environment is modeled as the layout graph G_h. In our model, the upper-layer and lower-layer environments correspond to the same target area, but the time, actions, layout graph, and information beliefs differ, with corresponding relationships between them; the correspondences of time, layout graph, actions and information belief are described in Definitions 6, 7, 11, and 14 respectively. In the upper-layer environment, the swarm leader is the decision maker and the sub-swarm leaders are actuators, while in the lower-layer environment, the sub-swarm leaders are the decision makers and the I-agents follow their commands. Let each upper-layer vertex v_h correspond to 1.6 million square meters. The upper-layer layout graph has 25 vertices, with a side length of 5 vertices, and each upper-layer vertex corresponds to 16 lower-layer vertices. In addition, the information of an upper-layer vertex differs from that of a lower-layer vertex: an upper-layer vertex carries only an information belief rather than a specific information level, because it contains many lower-layer vertices with different information levels. Additionally, each sub-swarm has 3 I-agents, so the 30 I-agents are divided into 10 sub-swarms. In the upper-layer environment, one time step t_h corresponds to 20 minutes, and the time step ratio M is 4. Let the horizon D_h be 1, and let the upper-layer information value transition matrix P_h be P_l^M.
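The transition matrix display itself did not survive in this copy, so the sketch below uses a hypothetical P_l purely for illustration; only the mechanics — the per-vertex belief update Λ ← Λ·P_l and the upper-layer matrix P_h = P_l^M — follow the definitions above.

```python
import numpy as np

# Hypothetical 5-state information value transition matrix (rows sum to 1).
# The actual P_l from the paper was lost in extraction.
P_l = np.array([
    [0.6, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.6, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
f = np.array([0, 1, 2, 3, 4])         # information value vector f(I)
belief = np.array([1.0, 0, 0, 0, 0])  # initial belief Λ = [1,0,0,0,0]

M = 4
P_h = np.linalg.matrix_power(P_l, M)  # upper-layer matrix P_h = P_l^M

for t in range(3):                    # belief evolution of an unvisited vertex
    belief = belief @ P_l
    print(f"t_l={t+1}: expected information value = {belief @ f:.3f}")
```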
We run 20 rounds for each algorithm, with 400 lower-layer time steps t_l per round. The performance of each algorithm is then evaluated using the indicators above. The algorithms run on a machine with a 2.5 GHz Intel dual-core CPU and 8 GB RAM.
6.1.4 Experiment result. Fig 3 shows the total information value; the y axis represents the total information value gathered by the 30 I-agents. From this figure, the performance of USPA is 36.77% better than that of RA. However, the machine does not have enough memory to compute POMCP. Indeed, each I-agent has about 5 neighbouring positions at each vertex and each vertex has 5 information levels, so the joint action space and the joint observation space are near 5^30 in size, making it hard to evaluate the performance of POMCP in this scenario. Table 2 shows the decision time for the swarm leader. The second row gives the average time the swarm leader takes to make a decision for its direct subordinates, and the third row gives the mean square error (MSE for short) over 20 rounds; the unit of both is seconds. The symbol "-" indicates that the memory space was exceeded. The run time of RA is much lower than that of USPA, but the difference is not significant from a macro perspective, because a lower-layer time step corresponds to 5 minutes in this scenario. In general, for the information gathering problem in an earthquake area, USPA can theoretically be applied to a UAV swarm with a large number of UAVs, because it meets the expectations of this scenario: the decision time for the swarm leader is quite short and the total reward is high enough.
6.2 Parameter sensitivity analysis experiment
The parameter sensitivity analysis experiment uses the setting of the case experiment as its background. In this section, we evaluate parameters which may influence the performance indicators, adjusting them to check whether USPA still meets the expectations. Specifically, these parameters are the number of sub-swarms (NoS for short), the number of layers (NoL for short), and the horizon. The practical implications are then summarized from the experiment results.
6.2.1 Evaluation of the number of sub-swarms.
In this scenario, the number of I-agents in a sub-swarm is fixed at 3, so the total number of I-agents changes with the number of sub-swarms; the other parameters are the same as in the case experiment. We construct 6 scenarios with different numbers of sub-swarms and compare USPA with POMCP and RA. Fig 4 shows the total information value acquired by the I-agents; it contains 6 sub-figures, one per scenario, with the y axis representing the total information value acquired by all I-agents. From these figures, we find that the reward increases monotonically as the number of sub-swarms increases, because the total number of I-agents grows and more information can be gathered. Additionally, in Scenario A the reward of POMCP is 10.63% better than that of USPA, while the reward of USPA is 67.18% better than that of RA. However, as the number of sub-swarms increases, the joint action space and joint observation space grow exponentially, exceeding the memory of the machine, so experiments based on POMCP become infeasible. Overall, the reward of USPA is slightly below that of POMCP and better than that of RA. Table 3 shows the average decision time of the swarm leader and its mean square error; the unit of time is seconds, and the symbol "-" indicates that the memory space was exceeded. The table shows that as the number of sub-swarms increases, the decision time of the swarm leader increases accordingly. From a macro perspective, there is little difference between the run time of RA and that of USPA.
6.2.2 Evaluation of the number of layers.
From Section 6.2.1 we know that as the number of I-agents increases, the reward increases. In this section, we evaluate the influence of the number of layers while the number of I-agents, the simulation time, and the physical layout graph for the I-agents are fixed. Let the number of I-agents be 81; let the simulation time for the I-agents be 270; let the lower-layer layout graph have 81 × 81 vertices; let the time step ratio be M = 3; and let each region block be a square area of 3 × 3 vertices. Some parameters, however, change with the number of layers. Each agent has about 5 neighbours and each vertex has 5 information levels, so the joint action space and joint observation space have size about 5^81. The other parameters are the same as in the case experiment. For L layers, let l = L be the highest layer and l = 1 the lowest. There are 4 scenarios in the experiment.
• Scenario B: The number of layers is 2. The swarm leader controls 27 sub-swarms, while each sub-swarm leader controls 3 I-agents. Because the time step ratio is M = 3 and the simulation time for the lowest layer is 270, the simulation time of the highest layer is 90. Each region block is 3 × 3, so the layout graph of the highest layer has 27 × 27 vertices.
• Scenario C: The number of layers is 3. The swarm leader controls 9 sub-swarms, while each sub-swarm leader controls 3 subordinate agents. Because the time step ratio is M = 3 and the simulation time for the lowest layer is 270, the simulation time of the highest layer is 30. Each region block contains 3 × 3 vertices, so the layout graph of the highest layer has 9 × 9 vertices.
• Scenario D: The number of layers is 4. The swarm leader controls 3 sub-swarms, while each sub-swarm leader controls 3 subordinate agents. Because the time step ratio is M = 3 and the simulation time for the lowest layer is 270, the simulation time of the highest layer is 10. Each region block contains 3 × 3 vertices, so the layout graph of the highest layer has 3 × 3 vertices.

From the experiment results, we know that the reward of USPA is at least 34.37% better than that of RA. In addition, as the number of layers in USPA increases, the reward decreases. Indeed, the decision-making process of the swarm leader suffers from hysteresis: based on Eq 4, the real time of one upper-layer time step t_h equals the real time of M · t_l lower-layer time steps. In Scenario D, one time step of the 4th layer corresponds to 27 time steps of the 1st layer, meaning the environment changes 27 times while the swarm leader makes a single decision. Thus, as the number of layers increases, the hysteresis grows and the reward decreases. Table 4 shows the average decision time for the swarm leader and the mean square error over 20 rounds. The time of RA is clearly less than that of USPA. Meanwhile, as the number of layers increases, the decision time decreases, because the number of sub-swarm leaders directly subordinate to the swarm leader decreases. Therefore, reduced decision time comes at the cost of reduced reward.
6.2.3 Evaluation of horizon.
In this section, we evaluate the influence of the horizon. In order to compare with POMCP, we decrease the number of I-agents and the size of the layout graph. We consider 4 I-agents, divided into 2 sub-swarms of 2 I-agents each, so the joint action space and joint observation space have size about 5^4. The lower-layer layout graph contains 9 × 9 vertices, the upper-layer layout graph contains 3 × 3 vertices, and each region block contains 3 × 3 vertices. Additionally, there are 2 types of horizons in USPA: the upper-layer horizon D_h for the swarm leader and the lower-layer horizon D_l for the sub-swarm leaders; here we set D_h = D_l. There is no horizon for RA. The other parameters are the same as in the case experiment. There are 4 scenarios in the experiment.
• Scenario A: The horizon is 1.
• Scenario B: The horizon is 2.
• Scenario C: The horizon is 3.
• Scenario D: The horizon is 4.

The y axis of the resulting figure represents the total information value gathered by all I-agents. From the figure, the rewards of POMCP and USPA are much better than the reward of RA. Meanwhile, the ratio of the reward of POMCP to the reward of USPA changes with the horizon: the ratios are 1.06, 1.03, 1.09 and 1.15 for Scenario A, Scenario B, Scenario C, and Scenario D respectively. In principle, the larger the horizon, the further ahead an agent can predict. However, USPA uses a sequential allocation method: the first assigned agents gather more information, and agents assigned later avoid the previous paths. Therefore, for USPA, the reward first increases as the horizon increases, but once the horizon exceeds a certain threshold, the reward decreases. Table 5 shows the average decision time for the swarm leader and the mean square error over 20 rounds; the unit of time is seconds. For both POMCP and USPA, the decision time increases with the horizon. The time of POMCP is much larger than that of USPA, while the time of USPA is much larger than that of RA.
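The sequential allocation effect described above is easy to illustrate. Below is a minimal sketch, not the paper's actual planner: agents are assigned one after another, each greedily picking the highest-value vertex within its horizon that earlier agents have not already claimed. The function name and toy value map are illustrative assumptions.

```python
def sequential_allocation(agents, values, neighbours, horizon):
    """Assign each agent, in order, a target vertex within `horizon` steps
    of its position; later agents avoid vertices already claimed."""
    claimed, plan = set(), {}
    for agent, pos in agents.items():
        # Collect all vertices reachable within `horizon` steps (BFS).
        frontier, reachable = {pos}, {pos}
        for _ in range(horizon):
            frontier = {n for v in frontier for n in neighbours[v]} - reachable
            reachable |= frontier
        candidates = reachable - claimed
        target = max(candidates, key=lambda v: values[v]) if candidates else pos
        claimed.add(target)
        plan[agent] = target
    return plan

# Toy line graph 0-1-2-3-4 with per-vertex information values.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
values = {0: 1.0, 1: 0.5, 2: 3.0, 3: 0.2, 4: 2.0}
print(sequential_allocation({"u1": 1, "u2": 3}, values, neighbours, horizon=1))
```

Here the second agent would also prefer vertex 2, but because the first agent has claimed it, the second settles for vertex 4; a longer horizon enlarges each agent's candidate set but also the overlap with earlier claims, matching the threshold behaviour observed above.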
6.2.4 Experiment summary.
In this section, we conducted experiments along three dimensions: the number of sub-swarms, the number of layers, and the horizon, comparing USPA with POMCP and RA. The results show that USPA meets our expectations: the I-UAVs gather a large amount of information, and the decision makers need only very short computing times. Moreover, our algorithm has practical implications. First, more sub-swarms yield more reward, so when conditions permit, as many UAVs as possible should be deployed. Second, for the same I-UAVs, target area and time budget, the number of layers of the UAV swarm should not be too large, because reduced decision time comes at the cost of reduced reward; a flat command and control structure is the better option when time allows. Third, when using the sequential allocation method, the horizon of a decision maker should not be too long; it is better to find the most suitable value by weighing reward against decision time.
Conclusion and future work
In this paper, we developed a patrolling task planning algorithm for a UAV swarm with a double-layer centralized control structure in an uncertain and dynamic environment. Unlike previous work, we take the complex command and control relationships within the swarm into consideration. Based on the model of the double-layer environment, we give models of the three types of UAVs, and the UAV swarm patrolling problem is then modeled as a POMDP. In order to reduce the state space, a compact information belief vector is proposed, exploiting the independent evolution of each vertex. An information heuristic function is then put forward to increase the reward, based on properties of multi-state Markov chains. Although the swarm leader can obtain information from the sub-swarm leaders, building the compact information belief and the information heuristic function is critical: it increases the autonomous decision ability of the swarm leader and reduces the interaction frequency. On this basis, we construct the single agent patrolling algorithm, the team of agents patrolling algorithm, and the UAV swarm patrolling algorithm. Our algorithm is scalable and has guaranteed performance, and it reduces the computational burden on the swarm leader as the number of layers increases. Finally, we conducted simulation experiments to evaluate its performance.
This paper makes several contributions. Generally, our algorithm can be applied in a wide range of domains that exhibit the general properties of sub-modularity, temporality, locality and multi-layer structure. The integration of a computable structure with a myopic algorithm can be applied to further UAV swarm scenarios. Specifically, the algorithm improves patrolling efficiency while guaranteeing performance. It is scalable and easy to extend to more control layers, and it alleviates the computing pressure on the centralized control node by distributing the computing work to other sub-decision nodes. Our algorithm therefore provides an effective way to solve the patrolling problem for large-scale multi-UAV systems. However, there is a conflict between the number of layers and the number of sub-nodes subordinate to a decision node. On the one hand, the computing capability of a decision node is finite, so it cannot control an unlimited number of UAVs; on the other hand, as the number of layers increases, the performance of our algorithm degrades. There is thus a trade-off between the number of layers and the number of UAVs.
The main challenge in extending our work is taking swarm intelligence into consideration. In this paper, we have considered the double-layer centralized control structure, but it is just one possible control structure. In fact, a UAV swarm differs from general multi-agent systems in having swarm intelligence and swarm behavior: its complexity derives from the combination of bottom-up autonomy and top-down command. Swarm intelligence reduces the burden on the UAV operator and improves search efficiency, but exploiting it will require radically different techniques. Intuitively, swarm intelligence is reflected in adaptivity: the UAV swarm can autonomously adjust to different environments and missions, and different control structures suit different environments and missions. A feasible way forward is therefore to construct mixed control structures. Specifically, the centralized decision problems can be modeled as POMDPs, while the decentralized decision problems can be modeled as Dec-POMDPs or distributed constraint optimization problems (DCOPs).
Design and synthesis of some isoindoline derivatives as analogues of the active anti-inflammatory Indoprofen
Searching for new targets for anti-inflammatory drug design, agents with the isoindole skeleton were focused on, on the basis of preliminary studies of NSAIDs as COX-1 and/or COX-2 enzyme inhibitors. Thus several novel N-substituted isoindoline derivatives, as possible biologically active compounds, were prepared as analogues of Indoprofen (1), starting from cis-2-[(4-methylphenyl)carbonyl]cyclohexanecarboxylic acid (3) by treatment with primary arylamines.
Introduction
Recently we described the preparation, behaviour and structural studies of several substituted or fused isoindole derivatives. Their important and remarkable pharmacological properties were also reported.1a-f In continuation and extension of our synthetic and structural studies of saturated and partially saturated isoindoles, we have designed and prepared Indoprofen analogues.2a-d Indoprofen (1) is a nonsteroidal anti-inflammatory and analgesic drug, which was withdrawn worldwide in the 1980s after post-marketing reports of severe gastrointestinal bleeding. However, in a more recent study Indoprofen is reported to increase production of the survival of motor neurone protein, suggesting it may provide insight into treatments for spinal muscular atrophy.3a Moreover, Ellies et al. have found that Indoprofen and its derivatives promote bone growth.3b Based on the structural features of Indoprofen, a series of isoindol-1-one derivatives were designed and synthesized, and all showed very good binding affinities, with Ki values in the subnanomolar (nM) range against aggregated Aβ42 fibrils; thus these compounds could serve as scaffolds for potential Alzheimer's disease (AD) diagnostic probes to monitor Aβ fibrils.3c Indoprofen derivatives also possess prostanoid EP4 receptor agonist properties and regulate inflammatory cytokines after an inflammatory stimulus.3d These results prompted us to prepare, as optimized drug candidates, 3-aryl-substituted and partially saturated isoindolone derivatives, developing new potential anti-inflammatory drugs associated with lower gastrointestinal (GI) side-effects (e.g. gastritis, peptic ulceration, gastric bleeding). Earlier, several 1,3-diarylisoindole4a and isoindole-1,3-dione (phthalimide)4b,c derivatives were developed, evaluated and studied in detail as potent anti-inflammatory agents and selective COX-2 inhibitors. According to a comparison of the structures of the non-selective Indoprofen (1) and the selective COX-2 inhibitor Celecoxib (2) below, some modifications were required to reduce the side-effects (with gastric tolerance and without cardiovascular risks) while keeping the potent anti-inflammatory activity of the compounds (Figure 1). In the course of structural modification, Indoprofen (1) as the base molecule was substituted with an aryl group (red region) at position C3 of the isoindole ring, a moiety important in Celecoxib (2) for COX-2 selectivity. Furthermore, the aryl group increases the lipophilicity (giving higher log P values), and hence could improve the anti-inflammatory activity of the target molecules. Replacement of the pyrazole core of Celecoxib by 2-pyrrolidone resulted in the hybrid molecules 5. Introduction of several substituents (e.g. R = OH, CO2H, CONH2) into the N-phenyl group also influences the biophysical and pharmacological properties. Molecular docking studies support the presumption that our designed compounds, after physicochemical screening, may act on the same enzyme target as the COX-2 inhibitors.
Results and Discussion
In continuation of our research program to develop new, potentially pharmacologically useful partially saturated isoindole derivatives, the γ-ketocarboxylic acid 3 was reacted with different bifunctional alkyl or aryl amines, either in a refluxing non-polar solvent (toluene, xylene) or neat in a solvent-free fusion reaction at 180-200 °C (Scheme 1), to obtain functionalized N-aryl-3-oxo-1,3,4,5,6,7-hexahydro-2H-isoindoles (5a-h) in a one-step reaction. In this way, novel N-aryl-substituted isoindolones were synthesized first of all by the reaction of substituted aniline derivatives with the oxocarboxylic acid 3 under the above conditions. Thus, besides carboxylic and sulfonic acid derivatives, ortho- and para-hydroxy derivatives (5b,c) were also prepared, which can be of interest as paracetamol analogues. Recently, compounds 5a and 5c were synthesized and characterized, but were not investigated as possible anti-inflammatory candidates.1a,b The physicochemical and structural characterization of the title compounds is described in detail in the Experimental section. The proposed and main mechanism of action of these nonsteroidal compounds (e.g. phenylacetic or propionic acid derivatives and sulfonamides, coxibs) is based on the inhibition of cyclooxygenase (COX), and recent findings suggest high selectivity for the COX-2 enzyme.6
Scheme 1. Synthesis of 3-(4-methylphenyl)-N-arylhexahydro-1H-isoindol-1-ones (5a-h).
Superimposition of Indoprofen (1), Celecoxib (2), and compound 5e showed remarkable similarities (Figure 2). We therefore supposed that 5e (R1 = CH2CO2H, R2 = H) may be a COX-2 selective agent, and found that overlapping of the investigated compounds could lead to a novel series of potential anti-inflammatory compounds with little modification of the lead compound Indoprofen (1). A very useful docking study of NSAID/COX-2 isozyme complexes was described earlier.7 Following this work, we modeled and docked Celecoxib (2) and the isoindole derivative 5e with the schematic COX-2 isozyme (Figure 3). The traditional docking procedure was used, identifying optimized conformations of the molecules, the binding site, and the structure of the protein-ligand complexes. The ligands were drawn using the ChemBioDraw Ultra 11.0 program (CambridgeSoft; 100 CambridgePark Drive, Cambridge, MA 02140, USA) and converted to the most favourable three-dimensional format using the ACD/3D Viewer Freeware (Version 12.01) software (Advanced Chemistry Development, Inc.; ACD Labs, Toronto, Canada; available at www.acdlabs.com). The top-scoring ligand-receptor docking was demonstrated by a 2D representation of the complex interactions. The structural basis and conformational changes of the cyclooxygenase-2 (COX-2) enzyme and its complexes with some anti-inflammatory agents have been described in detail earlier.8 The carboxylate group of 5e can be located in cavity B and cavity C. The delocalized electrons of the phenylacetic group interacting with the Arg120 residue give traditional NSAIDs the same anchor point in the COX-1 and COX-2 enzymes and thus limit their selectivity; however, a strong hydrogen bond between the valine (Val523) residue and the carboxy group could increase COX-2 selectivity. On the other hand, the 4-methylphenyl substituent increases the lipophilicity of the molecule and is located at the end of cavity A, at the lipophilic site of the aromatic tryptophan residue (Trp387). Unfortunately, according to the modeling study, in cavity C compound 5e does not interact with the residues His90, Arg513 and Gln192 by hydrogen bonding, which is an important key to COX-2 selectivity, while a bond with Tyr355 reinforces the interaction of the inhibitor with the enzyme. Finally, molecule 5h (R1 = SO2NH2), obtained by replacing the para substituent of 5e (R1 = CH2CO2H) with a sulfonamide group, was found to be a more selective COX-2 inhibitor, closely analogous to Celecoxib (2), with the Phe518, His90 and Arg513 residues activated in cavity C of the enzyme; however, the in silico test showed less bioactivity. Using the semi-empirical quantum-mechanical method AM1, the molecular geometries of the benzofuran analogues of anti-inflammatory arylalkanoic acids were calculated and optimized, and their frontier orbital charge distributions evaluated.9a The physicochemical properties (conformation, protonation energy, lipophilicity) of some arylpropionic acids were also determined theoretically using quantum-mechanical calculations and correlated with their anti-inflammatory activity.9b In our case, the structures of compounds 1, 2 and the isoindolones 5a-5h were also studied by molecular modeling and a conformational protocol using the ACD/ChemSketch Freeware (Version 12.01) software (Advanced Chemistry Development, Inc.; ACD Labs, Toronto, Canada). The physicochemical properties, such as polar surface area (PSA), calculated lipophilicity (c.logP), acidity (pKa) and molecular volume (Å3), as well as the drug-likeness and bioactivity scores, were calculated using the freely accessible Molinspiration Cheminformatics software (www.molinspiration.com; Slovensky Grob, Slovak Republic). The main calculated or predicted physicochemical properties of the above compounds are summarized in Table 1. The drug-likeness and bioactivity scores of the investigated compounds as NSAID candidate agents were also predicted computationally using the Molinspiration Cheminformatics software. This method is based on sophisticated Bayesian statistics comparing the structures of representative ligands active on the particular target with structures of inactive molecules, to identify substructural features typical of active molecules.
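The paper uses ACD/Labs and Molinspiration for these descriptors; purely as an open-source illustration, similar quantities (logP, topological PSA) can be estimated with RDKit, which is not the software used by the authors. The SMILES string below is our own reconstruction of Indoprofen (1) and should be checked against the original structure.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Assumed SMILES for Indoprofen (1):
# 2-[4-(1-oxoisoindolin-2-yl)phenyl]propanoic acid.
indoprofen = Chem.MolFromSmiles("CC(C(=O)O)c1ccc(cc1)N2Cc3ccccc3C2=O")

print("c.logP :", round(Descriptors.MolLogP(indoprofen), 2))  # Crippen logP estimate
print("TPSA   :", round(Descriptors.TPSA(indoprofen), 1))     # topological polar surface area
print("MW     :", round(Descriptors.MolWt(indoprofen), 1))    # molecular weight
```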
The activity scores towards the six most important drug target classes were compared with those of average drug-like molecules. The drug-likeness scores towards GPCR ligands, ion channel modulators, kinase inhibitors, nuclear receptor ligands and other enzyme targets were calculated with the Molinspiration Cheminformatics software via its on-line test. The larger the score, the higher the probability that the particular molecule will be active. These drug-likeness and bioactivity results are summarized in Table 2, and the scores allow adequate identification of active and inactive molecules. Values depicted in red may indicate considerable biological activity (bioactivity scores ≥ 0.00), while green values (bioactivity scores ≥ -0.28) display fewer similarities to known drugs and lower efficiency as drug-like molecules. The black values indicate presumed inactivity of the compounds investigated.
Conclusions
The comparison of the well-known NSAID Indoprofen (1), the COX-2 selective Celecoxib (2) and the novel aryl-substituted isoindolone derivatives (5a-h) proved the importance of the design, synthesis and further search for new anti-inflammatory drugs free from the undesirable side-effects. The present results showed that variations of the substitution and the heterocyclic moieties led to remarkable changes in the pharmacological, physical and biochemical properties. As expected, compound 5e had the most remarkable calculated biological activity, similar to Indoprofen (1) (Table 2); however, application of these results requires care, and further modifications of the substituents on the isoindoline ring would be desirable. The pharmacological investigation of these compounds is in progress and will be reported in due course.
Experimental Section
General. Melting points were determined in open glass capillaries using an Electrothermal melting point apparatus and are uncorrected. Infrared spectra were recorded for KBr discs with a Perkin-Elmer 177 instrument. 1H NMR and 13C NMR spectra were recorded on a Bruker Avance DRX 400 MHz spectrometer with CDCl3 or DMSO-d6 as solvent. Chemical shifts (δ) are in ppm from tetramethylsilane (TMS) as internal standard; coupling constants (J values) are in Hz. Ascending TLC for the retention factor (Rf) was performed on precoated plates of silica gel 60 F254 (Merck); the mobile phase was a mixture of benzene-EtOH-n-hexane (4:1:3), and spots were visualized using a UV lamp or iodine vapor.
Figure 1. Structural rationalization and optimization strategy of the designed compounds 5.
Figure 3. Schematic proposed representation and comparison of optimal positions of the ligand/COX-2 key, as inferred from molecular modeling for Celecoxib (2) and compound 5e.
Table 1. Some calculated physicochemical properties of the investigated compounds. a Calculated logP using ACD/Labs software; b calculated logP using Molinspiration Cheminformatics software; c PSA = polar surface area; d MV = molecular volume.
Table 2. Drug-likeness and bioactivity scores of compounds 1, 2 and 5a-h according to the Molinspiration Cheminformatics software. Column headings: Comp., GPCR lig.,a Ion c. m.,b Kinase inh.,c Nucl. r. l.,d Prot. inh.,e Enzyme inh.f — a GPCR ligand; b ion channel modulator; c kinase inhibitor; d nuclear receptor ligand; e protease inhibitor; f enzyme inhibitor.
Building the Character of Pre-Service Teachers Through the Learning Model of Problem-Based Analytical Chemistry Lab Work
This research aims to apply and characterize a Problem-Based Instrumental Analytical Chemistry Lab Work Learning Model (IACLLM) that builds character, improves conceptual mastery and develops problem-solving ability. The quasi-experimental research, with 2 groups of pre-service chemistry teacher students as subjects, applied problem-based IACLLM in the experimental class and lab work with standard procedures in the control class. Conceptual mastery was measured using an essay test; problem-solving skills were measured by assessing problem-solving reports, presentations of results, and kit-making products; the emerging characters were observed during the learning process. The results showed that problem-based IACLLM has an open-ended problem characteristic and produced kits from local materials, with characters observed at every stage of the problem-based learning model. Implementing the model improved spectrometric and electrometric conceptual mastery and raised problem-solving skills to a very good level, and several characters developed during learning, including being religious, disciplined, honest, curious, creative, critical, cooperative, communicative, independent, able to appreciate other people's opinions and achievements, democratic, possessing leadership, thorough and careful, and hardworking.
INTRODUCTION
Lab work in universities is usually conducted during or after theory is given, to support and validate students' knowledge in a given course. Verification lab work manuals with step-by-step directions do not invite students to solve problems; therefore, students' abilities to actually obtain facts and concepts through their own findings cannot be realized (Urena et al., 2012; Adani, 2006; Jalil, 2006; Haryani, 2011). Besides that, verification of working procedures in the lab manual gives students little opportunity to process information thoroughly, and students' main concern becomes merely finishing lab assignments and writing reports (Hicks & Bevsek, 2012; McDonnell et al., 2007; Cooper & Urena, 2008). Even twenty years ago, Nakhleh (1996) reminded us that chemistry departments in many parts of the world had invested large amounts of money to give students lab work experiences, yet rarely evaluated what lab work should achieve. Meanwhile, Haryani (2011) recommended that lab work activities should generate learning motivation, support conceptual mastery, develop basic experimental skills, and improve problem-solving skills.
It is important for students to be trained in problem-solving skills, and pre-service chemistry teacher students need them to face the assignments and challenges of the working world. In daily life, students also often face complicated (ill-structured/unstructured) problems. Some realities describing students' weak problem handling are the brawls between schools and students, drug abuse, and abortion. To overcome this moral crisis, problem solving, as a higher-order thinking skill, needs to be trained through well-planned learning. A problem-based lab work learning model is strongly assumed to provide a good learning environment for improving problem-solving skills (Haryani, 2011; Urena et al., 2012; Ferreira & Trud, 2012).
The design of a lab work learning program should pay attention not only to conceptual mastery (cognitive) and basic experimental skills (psychomotor), but also to students' affective aspects and problem-solving skills. The affective domain relates to attitudes or characters such as being responsible, cooperative, disciplined, committed, confident and honest, as well as respecting other people's opinions and having self-control (Aisyah, 2014; Chan & Bauer, 2016; Popham, 1995). Those values are closely connected to the development of cultural and character education. The formation of strong and solid student character is strategically important for the nation's sustainability and future excellence, and students need it to face future challenges. Serious efforts are required, especially now that UNNES has established itself as a University of Conservation. Conservation here means that UNNES and all its academic activities show concern for the environment, socio-culture, and the conservation of knowledge (science). To enable the problem-based analytical chemistry lab work learning model to support conservation and develop students' characters, it is necessary to implement Green Chemistry principles.
Based on the arguments described above and on various research results, instrumental analytical chemistry lab work should be conducted so that students are trained to solve problems and grow scientific attitudes (character education), by providing laboratory experience based on challenging and meaningful research as in PBL. The Problem-Based Instrumental Analytical Chemistry Lab Work Learning Model (IACLLM) offers a very accommodating environment for this purpose, since the essence of analytical chemistry as a science is to solve problems (Adani, 2006); it is also a process-oriented subject with various variables and several measuring methods (Mataka & Kowalske, 2015; Tosun & Senocak, 2013). Furthermore, to support UNNES as a university of conservation, the problem-based instrumental analytical chemistry lab work learning model developed here uses local materials in accordance with Green Chemistry principles. Limited equipment, in both number and kind, as well as expensive materials, is a common obstacle faced by teachers (Haryani et al., 2010). Therefore, pre-service teachers need modeling of how to overcome equipment limitations. An appropriate preparation for pre-service teachers in this subject is making simple, portable measurement tools (kits) whose observation data are nevertheless reliable; this reliability is established by comparing the measurement results with those of available laboratory instruments.
From the descriptions above, the main problem that is the focus of this research is: "How do we develop a conservation-based character education model through the implementation of problem-based IACLLM using local materials?" To realize this idea, the instrumental analytical chemistry lab work course was conducted with a problem-based instructional learning strategy, which is shown to improve problem solving and conceptual mastery, as well as to build scientific attitude/character.
METHODS
This research is an experimental study in which quantitative and qualitative data collection were carried out simultaneously. The experimental class was given the problem-based IACLLM treatment, whereas the control class used the standard lab work procedure. The research was conducted in the Analytical Chemistry Laboratory of the Chemistry Department, FMIPA UNNES, with one study group as the control group and one study group as the experimental group, all students of the Chemistry Education Department taking the Instrumental Analytical Chemistry (IAC) course.
The problem-based IACLLM implemented had four stages, adapted from Arends (2004). In the first stage, students were oriented to the problems; in the second stage, students were organized to study; in the third stage, the investigation groups were guided; and finally, in the fourth stage, the results of problem solving were presented. Before the lab work, training was given to three lab assistants and one technician. The lab assistants helped the researcher conduct the lab work, observed during the lab work process, and assisted in correcting pre-tests and lab reports.
Quantitative data were collected using an essay test measuring conceptual mastery of spectrophotometry and potentiometry. Qualitative data were collected using assessment rubrics, through observation during the learning process to capture the emerging characters. In addition, interviews were conducted to explore students' knowledge of the characters built at every step of problem-based lab work learning. The measurement of problem solving, adapted from Fogarty (1997), covered the assessment of the problem-solving report, the presentation of results, and the kit-making products, all using rubrics.
Quantitative data, in the form of pre-service chemistry teachers' conceptual mastery of spectrophotometry and potentiometry, were analyzed using the normalized gain formula, while qualitative data were analyzed using descriptive percentages. After the N-gains of the two groups were obtained, they were compared to see the difference in the improvement of conceptual mastery. The observed characters and performance during the learning process were analyzed descriptively; supporting interviews were also used to see the characters built at every problem-based learning step.
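For reference, the normalized gain formula is not written out in the text; the sketch below assumes the standard Hake formulation, ⟨g⟩ = (post − pre)/(max − pre), with the usual categories (high ≥ 0.7, medium 0.3-0.7, low < 0.3).

```python
def n_gain(pre, post, max_score=100.0):
    """Normalized gain per Hake: fraction of the possible improvement achieved."""
    return (post - pre) / (max_score - pre)

def category(g):
    return "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"

g = n_gain(pre=40.0, post=70.0)  # -> 0.5
print(f"%N-gain = {100 * g:.0f}% ({category(g)})")
```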
RESULTS AND DISCUSSION
The problem-based IACLLM in this research was designed to improve conceptual mastery of the spectrophotometry and potentiometry materials, problem-solving skills, and the characters of students and teachers. The initial step of the problem-based IACLLM was orienting students to the problems. Problems were categorized into groups according to the available instruments/tools and could come from students or teachers. Next, in groups, students decided on the titles of their research projects; seven projects resulted, for example (7) Qualitative Test of Formalin and Borax Content in Foods (Meatballs and Dumplings).

Figure 1 shows the percentage N-gain of the conceptual mastery of spectrophotometry and potentiometry as a whole in the control and experimental groups. The data of both groups were normally distributed, with homogeneous variances of %N-gain between groups. Although three of the four data sets fell into the medium category, the %N-gain achievement was quite meaningful: a difference test showed that the %N-gain from problem-based instrumental analytical chemistry lab work differed significantly (p < 0.05).

Table 1 shows the average %N-gain for every concept of the spectrophotometry and potentiometry materials in the control and experimental groups. The average %N-gain for spectrophotometry in the experimental group was in the medium category, with two concepts in the high category, while in the control group the average was medium, with individual concepts in the low and medium categories. In contrast to spectrophotometry, the average %N-gain for potentiometry in the control group was low, whereas in the experimental group it was medium. Table 1 also shows that the highest %N-gain in conceptual mastery occurred for the basic principles of spectrophotometry and the lowest for the difference between molecular and atomic spectrophotometry; for potentiometry, the highest was the quantitative aspect/Faraday's law and the lowest was potentiometric titration.

The highest improvement, for the basic principles of spectrophotometry, occurred because for this concept students wrote the theoretical study in both their proposals and research reports, so they gained direct learning experience, producing a memory of the event with a more lasting effect (Hackathorn et al., 2011). This result revises the finding of Haryani (2011), where this concept had the lowest %N-gain; the success here is strongly attributed to students being asked, during their presentations, to state the basic principles of the measurement. The low %N-gain for understanding the difference between molecular and atomic spectrophotometry is suspected to arise because, at the start of the problem-orientation stage, students focused more on searching for procedures related to the problems to be solved; besides that, this concept tended to be written into the literature review of the proposal together with the basic principles of spectrophotometry.

The highest achievement in potentiometry was the quantitative aspect/Faraday's law, a concept learned repeatedly from Basic Chemistry and Basics of Analytical Chemistry to other courses such as Physical Chemistry. On the contrary, the low score for potentiometric titration is attributed to students lacking the skill to convert the initial data into first- and second-derivative data for constructing curves; besides that, students were generally weak in volumetric titration, which is a prerequisite for this material.

Based on these findings, spectrophotometry and potentiometry taught through problem-based IACLLM provided a good learning environment for improving the conceptual mastery of pre-service teachers. The learning was initiated by the problem-orientation stage: students in groups were asked to solve open-ended problems in a laboratory research project, ending with a presentation of results and a display of research posters. The improvement of conceptual mastery varied per concept, but the overall averages, in the medium category for both the experimental and control classes, differed significantly (Table 1). Comparing pre-test and post-test results, no student's conceptual mastery decreased or remained unchanged. Despite the variation, the data obtained showed a successful improvement (medium category).
For both the spectrophotometry and potentiometry materials, the concepts directly connected to the research procedures showed relatively good results, corresponding with previous research findings (Haryani, 2011). The scores obtained for spectrophotometry were higher than for potentiometry, possibly because analysis using spectrophotometric methods is also encountered in organic chemistry lab work and organic chemistry itself; besides that, students also met spectroscopy material in the Physical Chemistry course.
On the contrary, in the control group the improvement in %N-gain for concepts directly connected with the implementation of the lab work was relatively low compared with basic concepts not directly related to the lab work. The highest %N-gain improvement in the control group occurred for the Lambert-Beer law, and the lowest for the concept of concentration determination. The low result for concentration determination is possibly because, during report writing, students adopted their seniors' work, even though they were required to present their own results. Besides concentration determination, another concept whose %N-gain improvement was relatively low in the control group was the preparation of standard solutions. In every lab session, students were given group tasks to prepare the pre-reactions before the lab work; however, the standard solution for spectrophotometry was prepared by one group only, with the other groups merely measuring its absorbance, so the poor improvement is unsurprising. Assigning the preparation of the standard solution to certain groups was meant to save time and to economize on the frequently used titrisol standard solution.
The findings of this research show that problem-based IACLLM provided a good learning environment for improving students' mastery of the spectrophotometry and potentiometry materials, in accordance with earlier reports (Tandogan & Tandogan, 2007; Hicks & Bevsek, 2012). In the problem orientation, groups of students are given open-ended problems that stimulate curiosity and motivate them to solve the problems (Urena et al., 2012). According to Tan (2003), the evidence suggests that problem-based learning improves students' knowledge construction and reasoning ability compared with traditional teaching approaches. Akcay (2009), on the other hand, noted that problem-based learning derives from constructivism: learners construct knowledge actively.
The problem-solving data of the experimental group students were obtained from the problem-solving reports, the presentations of results, and the kit products, with averages of 85, 86.12, and 86.11 respectively, and an overall average of 85.75. These results show that the total problem-solving score reached the highest criterion, each aspect exceeding 85%, against a minimum success indicator of 80% in this research. The indicators for the problem-solving report followed the problem-solving pattern developed by Fogarty (1997). The problem-solving skills were measured as a whole through performance assessment using rubrics. Table 2 shows the score summary of the problem-solving results.

To solve the unstructured, contextual, open-ended problems in PBL, students must dig up and understand much information, and must design and carry out research for the problem solving; students must become the "architects" of their own learning process. However, students were used to the learning pattern of "listen and take notes, and act only when instructed by the lecturer". Through the implementation of problem-based IACLLM together with these problem-solving measurement tools, students experienced the lab work learning model directly, which will be very useful for them to apply in the future (Hicks & Bevsek, 2012; McDonnell et al., 2007).

The observations by the observers (research members) of the learning conducted by the lecturer (head of research) showed that the relevance of the problems presented to the competencies of the lecture, the accuracy of lecture time management, and student cooperation were going well. Meanwhile, students' motivation to discuss, ask questions, communicate, argue, facilitate, lead discussions, and take responsibility for learning still needed improvement.

The use of unstructured, contextual, open-ended problems did improve students' problem-solving skills. Such problems trigger students to be actively involved in group discussion to find and determine the best problem solving for their groups. This learning requires students to use their intelligence on real issues, starting with defining problems, collecting useful information, restating problems, producing alternatives, suggesting solutions, and determining recommendations (Urena et al., 2012). Besides that, these problems train students to solve contextual problems, giving them experience in solving problems they face in real life. This finding is in accordance with previous findings (Gunter & Alpat, 2017; Ferreira & Trud, 2012; Akcay, 2009; Demirel & Dagyar, 2015; Downing, 2010; Bilgin et al., 2009).

The students' characters developed through problem-based lab work were obtained from observations during the learning process in every meeting, using a student observation form, complemented by interviews at every learning stage. Based on these observations, the emerging/developing characters were analyzed, and the percentage of their appearance was counted for every PBL step.

The character of discipline was observed and built through punctuality in attending lectures, arranging the lab work timetable, and collecting the lab reports; this aspect emerged from the introduction through stage four, with a total average of 90%. The religious aspect also emerged from the introduction through stage four, built by greetings at the beginning and end of lectures as well as praying, with a total average of 95%. Curiosity was detected from the introduction stage (curiosity about how an unknown thing works), in stages one and two when the problems were given (from the questions proposed, especially about how to find and choose the proper procedure among all procedures obtained), and also in stage three during the consultation on observation data; the average percentage of the curiosity aspect was 60%. Next, the honest character was observed and built in stages three and four. Honesty was built through how students measured materials and borrowed equipment; students had to be honest whenever they made mistakes in the laboratory, such as telling the truth when they broke glassware, reporting truthfully, and presenting results based on data. For this honesty aspect, the total average obtained was 90%.

Thinking critically and creatively occurred in stages one and two. Students were required to think critically while doing exercises/pre-tests and while thinking about the problems and how to solve them. Students also had to think creatively when choosing effective and efficient types of lab work/research to obtain maximal results, and when creatively designing products to make the kits. The total averages for critical and creative thinking were 60% and 80% respectively.

Cooperation appeared from stage two through stage four: students had to cooperate in their groups to find working procedures and to do the lab work to solve the problems. The leadership character was also built from stage two through stage four, starting from the division of tasks for finding information and designing the proposal up to the arrangement of the lab work. The total averages for the cooperative and leadership characters were 90% and 60% respectively.

The characters of hard work, independence, thoroughness and carefulness were built predominantly in stage three. Lab work to solve problems really demands hard work to achieve the experimental purpose, starting from preparation/sample preparation in the experimental activities. Students had to prepare solutions independently, based on the task division within their groups. In preparing the equipment, they had to be careful, since glassware can endanger themselves and the people around them if handled carelessly; the same applied to chemicals, some of which are corrosive, poisonous, itch-inducing on contact, or even flammable. Carefulness was also built when the practitioners prepared materials, weighed substances, measured solution volumes, and observed results. The total averages for hard work, independence, thoroughness and carefulness were 90%, 80%, 90%, and 90% respectively.

In stage three, students communicated their observation results in both tables and figures. This communicative character was also built by communicating the research results through report writing, slide making, and the oral presentations of stage four; the total average for communicating was 80%. Other characters built in stage four were being democratic, respecting friends' opinions, and respecting other people's achievements. While presenting their experimental results, students practiced receiving input from other groups; during the discussions for paper or slide making, students learned democracy and respect for their group-mates' opinions. Respecting friends' achievements occurred especially for products produced by other groups, by granting them higher scores. The total averages for the democratic character, respecting friends' opinions, and respecting other people's achievements were 80%, 40%, and 60% respectively.
The improvement in conceptual mastery in this research was accompanied by high problem-solving scores, and at least 16 characters were built through the PBL steps. This learning success in the cognitive and psychomotor domains was influenced by the students' scientific attitudes, which also determine one's success in learning (Popham, 1995). Popham further states that, according to some experts, shifts in someone's attitudes or characters can be predicted once high cognitive mastery has been achieved. This result is in accordance with other research (Kelly & Finlayson, 2009): besides improving conceptual mastery, PBL also improves social skills such as teamwork, confidence, interaction with other people, and communication. Moreover, problem-based lab work learning also improved students' care in handling chemicals, careful observation, and initiative in finding information related to the lab work. In general, students' responses to the learning implementation were very positive: (a) it improved their involvement; (b) it gave direct experience through modeling; (c) it provided practice in doing good experiments; and (d) they hoped it could be applied to other lab work.
CONCLUSION
Based on the research results and discussion, it can be concluded as follows. First, the instrumental analytical chemistry lab work learning model developed here adapts the problem-based learning steps and has these characteristics: (a) open-ended problems related to spectrophotometry and potentiometry materials; (b) kits produced from problem solving using 7 local materials; (c) characters observed, and interviews conducted, at every problem-based learning step; (d) problem solving measured through problem-solving reports, presentations of problem-solving results, and products of problem solving. Second, the implementation of the problem-based IACLLM model using local materials improved both conceptual mastery and the problem-solving skills of pre-service teachers, to a very good category. Third, the characters developed in problem-based IACLLM using local materials were: religious, disciplined, curious, creative, critical, cooperative, respectful of other people's opinions and achievements, democratic, thorough, careful, and hardworking. Students gave a positive response to the implementation of IACLLM.
Based on the results achieved in this research, the following recommendations can be made. The implementation of problem-based lab work learning should be extended to other lab work subjects, considering that around 50% of the skill subjects are accompanied by lab work; this offers good potential for creating an academic atmosphere that supports the competency of pre-service chemistry teachers through lab work. Lecturers of lab work subjects must keep innovating to shift the paradigm from verification-based to problem-based lab work, by exploring more ideas with students to find open-ended and contextual problems, in the hope that this will shape students' character both as individuals and as future teachers.
|
v3-fos-license
|
2022-10-11T01:16:15.964Z
|
2022-10-09T00:00:00.000
|
252780782
|
{
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1112/jlms.12920",
"pdf_hash": "5ead8cc71be8561eda6aeae933049f9f7e5e3c9b",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43365",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "bc37a4240de48d34b45a4f4fb2b74d2c7232d991",
"year": 2022
}
|
pes2o/s2orc
|
Rational cross-sections, bounded generation and orders on groups
We provide new examples of groups without rational cross-sections (also called regular normal forms), using connections with bounded generation and rational orders on groups. Specifically, our examples are extensions of infinite torsion groups, groups of Grigorchuk type, wreath products similar to $C_2\wr(C_2\wr \mathbb Z)$ and $\mathbb Z\wr F_2$, a group of permutations of $\mathbb Z$, and a finitely presented HNN extension of the first Grigorchuk group. This last group is the first example of a finitely presented group with solvable word problem and without rational cross-sections. It is also not autostackable, and has no left-regular complete rewriting system.
A rational cross-section for a group G is a regular language L of unique representatives for the elements of G. This notion can be traced back to Eilenberg and Schützenberger [ES69], and was mainly explored by Gilman [Gil87]. A related "Markov property" was introduced by Gromov [Gro87] and explored by Ghys and de la Harpe [GdlH90].
Rational cross-sections are linked to several subjects in group theory. For instance, constructing a geodesic rational cross-section (i.e., a rational cross-section with minimal-length representatives) over a given generating set is sufficient to prove the rationality of the corresponding growth series. In particular, the following groups admit geodesic rational cross-sections for all generating sets: • Finitely generated abelian groups [NS95] • Hyperbolic groups [Can84; Gro87; GdlH90]. If you don't require the section to be geodesic (and we won't), the property of having a rational cross-section is independent of the choice of a generating set. The following groups have been shown to admit rational cross-sections: • Automatic groups [Eps+92, Theorem 2.5.1] • Finitely generated Coxeter groups [BH93]. (The author acknowledges support of the Swiss NSF grants 200020-178828 and 200020-200400.)
On the other side, very few groups are known not to have rational cross-sections: • Infinite torsion groups (see [Gil87; GdlH90]) • Recursively presented groups with undecidable word problem (see e.g. [OKK98]).
For completeness, we can also construct new non-examples from old ones: Theorem ([GS91], see also Theorem 2.1). Consider a product A *_C B with C finite, such that either A or B doesn't have a rational cross-section. Then neither does A *_C B.
Our main objective in the present paper is to add to the list of known non-examples. We start with the observation (Corollary 3.5) that groups with rational cross-sections either have bounded generation (see §3 for a definition) or contain free non-abelian submonoids. Some new non-examples follow, specifically • some extensions of infinite torsion groups (§3.1) • some Grigorchuk-type groups (including the Fabrykowski-Gupta group) (§3.2). We then turn our attention to wreath products. We define property (R+LO), a strengthening of left-orderability (LO), which means that some positive cone admits a rational cross-section. We derive the following (positive) result: Theorem 1 (Theorem 4.3). Let L and Q be two groups such that • L has a rational cross-section; • Q is virtually (R+LO). Then the restricted wreath product L ≀ Q has a rational cross-section.
In the other direction, we prove that Q admitting "large" rational partial orders is actually a necessary condition for L ≀ Q to admit a rational cross-section (under extra conditions on L). Such rational orders have received attention recently (see [HS17; Su20; ABR22; ARS21]). We build on these results to prove Theorem 2. Let Q be a group satisfying either (a) Q has arbitrarily large/infinite torsion subgroups (Theorem 5.5), or (b) Q has infinitely many ends (Theorem 5.8). Then L ≀ Q has no rational cross-section, for any non-trivial group L without free non-abelian submonoids.

1 Background information
Regular languages
Regular languages form the lowest class of languages in the Chomsky hierarchy of complexities. Informally, a language is regular if its membership problem can be decided by some computer with finite memory. Our model of computer is the following: Definition 1.1. An automaton is a 5-tuple M = (V, A, δ, *, T) where • V is a set of states / vertices.
• A is an alphabet.
• δ ⊆ V × A × V is the transition relation. An element (v_1, a, v_2) ∈ δ should be seen as an oriented edge from v_1 to v_2, labeled by a.
• * ∈ V is the initial vertex.
• T ⊆ V is the set of "accept" / terminal vertices.
An automaton is finite if both V and δ are finite.
Figures 1-3 show some examples of finite automata (with terminal vertices in green). A finite automaton M recognizes a word w ∈ A* if w can be read along some oriented path from the initial state * to some terminal state v ∈ T. A language L ⊆ A* is regular if it can be written as L = {w ∈ A* | w is recognized by M} for some finite automaton M = (V, A, δ, *, T).
Remark. Whenever * ∈ T, the automaton M recognizes the empty word (noted ε). Some additional terminology around automata will be needed:

Definition 1.3.
• An automaton is deterministic whenever, for all v ∈ V and a ∈ A, there exists at most one edge exiting v and labeled by a (i.e., |({v} × {a} × V) ∩ δ| ≤ 1).
• For v_1, v_2 ∈ V, we say that v_2 is accessible from v_1 if there exists an oriented path from v_1 to v_2 in M. An automaton is trimmed if, for all v ∈ V, there exists t ∈ T such that t is accessible from v, and v is accessible from *.
• A strongly connected component is a maximal subset C ⊆ V such that, for every v_1, v_2 ∈ C, the state v_2 is accessible from v_1 (and reciprocally).
For instance, the automata in Figures 1 and 2 are deterministic, but the one in Figure 3 is not. The automaton in Figure 2 is not trimmed.
A remarkable result due to Rabin and Scott [RS59] is that any regular language is the set of words recognized by some finite, deterministic, trimmed automaton. Another fundamental result in the theory of regular languages is Kleene's theorem. (We will use it without even mentioning it, usually to construct regular languages efficiently.)

Theorem 1.4 (Kleene's theorem). The class of regular languages (over a finite alphabet A) is the smallest class of languages containing finite languages (over A), and closed under the following three operations:
• finite union,
• set-theoretic product (concatenation),
• Kleene's star L ↦ L* = {w_1 w_2 ⋯ w_n | n ≥ 0, w_i ∈ L}.
The class of regular languages is also closed under complementation (in A*), hence finite intersection, set difference, and in general all Boolean operations.
Finally, we introduce some notations for subwords (in general languages).
Notation 1.5. Consider a word w ∈ A*. We denote by ℓ = ℓ(w) its length. For 0 ≤ i < j ≤ ℓ, we denote by w(i : j] the subword consisting of all letters from the (i + 1)-th to the j-th one (included). If i = 0 we abbreviate the notation to w(: j].
Rational subsets and cross-sections
We now export those definitions to groups.

Definition 1.6. Let G be a group.
• Given a generating set A and a word w ∈ A*, this word can be evaluated in G. The corresponding element of G will be denoted ev(w) or w̄.
• A subset R ⊆ G is rational if there exist a generating set A and a regular language L ⊆ A* such that ev(L) = R. We denote the class of rational subsets of G by Rat(G). For example, finitely generated subgroups H ≤ G are rational.
• If moreover the evaluation map ev : L → R is bijective, we say that L is a rational cross-section for R.
Our primary interest will be rational cross-sections for the entire R = G.
For instance, the automaton in Figure 1 recognizes a rational cross-section for Z = ⟨t⟩ relative to the generating set A = {t, T = t⁻¹}, while the automaton in Figure 3 recognizes a rational cross-section for [...]. It is interesting to know which operations preserve rationality of subsets of a given group G. Several results directly translate from regular languages to rational subsets: the class of rational subsets of a group G is closed under finite union, set-theoretic product, and Kleene star. However, contrary to intersections of regular languages, intersections of rational subsets are not rational in general. That being said, things behave better in special cases. Let us for instance recall the following classical result (used in §5).
Lemma 1.7 (Benois' lemma [BS86], see also [BS21]). Consider F_2 the free group of rank 2, and let R ⊆ F_2 be a rational subset. Then the set of reduced words representing elements of R is regular. As a corollary, Rat(F_2) is closed under all Boolean operations.
A natural question is how the properties "rationality" and "existence of a rational cross-section" for R depend on the choice of a monoid generating set A and on the ambient group G. It is answered by Gilman [Gil87]:

Proposition 1.8. Let G be a group and R ⊆ G a rational subset.
(a) Let A be any set generating G as a monoid. There exists a regular language L ⊆ A* such that ev(L) = R. If furthermore R has a rational cross-section, we can ensure that ev : L → R is bijective (possibly for another L ⊆ A*).
(b) The subgroup ⟨R⟩ is finitely generated, and R is rational in ⟨R⟩. It follows that R is rational in any ambient group H ⊇ ⟨R⟩. Moreover, the same holds with "rational" replaced by "admitting a rational cross-section" everywhere.
Left-invariant orders
Let G be a group. A left-invariant order on G is a (partial) order ≺ on G satisfying g ≺ h ⟹ fg ≺ fh for all f, g, h ∈ G. Given a left-invariant order on G, we can define its positive cone P_≺ = {g ∈ G | g ≻ e}. Note that g ≺ h if and only if g⁻¹h ∈ P_≺. Each property of the relation ≺ translates into a property of its positive cone:
(a) Anti-symmetry translates into P_≺ ∩ P_≺⁻¹ = ∅.
(b) Transitivity translates into the fact that P_≺ is a sub-semigroup, i.e., P_≺ P_≺ ⊆ P_≺.
(c) The order ≺ is total if and only if G = P_≺ ∪ {e} ∪ P_≺⁻¹.
The other way around, given a subset P ⊆ G satisfying both (a) and (b), we can define a left-invariant (partial) order by g ≺_P h ⟺ g⁻¹h ∈ P. Hence we can think interchangeably about orders and their positive cones. This allows us to define a rational order (sometimes regular order) on G as a left-invariant order on G whose positive cone is a rational subset of G.
Remark. Most of the recent literature deals with total left-invariant orders, so that this adjective is usually dropped. We will work with both, hence keep the adjective.
Intersections of rational subsets
In the spirit of Benois' lemma, we study when rationality is preserved when taking intersections of subsets. First, we have:

Proposition 2.1 (Compare with [GS91, Proposition 4.5]). Let G be a group, H a subgroup, and R ⊆ G a rational subset. If either (a) G = H *_C B over a finite subgroup C, or (b) G = H*_C is an HNN extension over a finite subgroup C, then H ∩ R is rational. Moreover, if R had a rational cross-section, so will H ∩ R.
Remark.By induction, the same result holds for G the fundamental group of a graph of groups with finite edge groups, and H any vertex group.
Proof. We will only prove part (a), as part (b) is similar.
After first disposing of two small degenerate cases, we can suppose A ⊆ (H ∪ B) \ C. Let L ⊆ A* be a regular language for R, and M = (V, A, δ, *, T) an automaton recognizing L. We construct a new automaton M′ = (V, (A \ B) ⊔ C, δ′, *, T) as follows: • For each pair of states p, q ∈ V, if there exists an (oriented) path from p to q starting with an edge labeled by s ∈ B, with associated word evaluating to some c ∈ C and no proper prefix evaluating in C, then add a c-edge from p to q.
• Remove all edges labeled by letters s ∈ B \ C.
It should be clear that the language L′ recognized by M′ evaluates to H ∩ R. Let us now suppose L was a rational cross-section and prove that the same holds for L′.
Each word w ∈ L′ can be written as w = u_0 c_1 u_1 ⋯ c_m u_m with u_i ∈ (A \ B)* and c_i ∈ C.
Those words correspond to words w_old = u_0 v_1 u_1 ⋯ v_m u_m ∈ L, with v_i ∈ A* evaluating to c_i (in particular w̄ = w̄_old), starting with a letter in B and without proper prefix evaluating in C. Suppose we have two words w, w̃ ∈ L′ evaluating to the same element g ∈ H ∩ R. It follows that both words w_old, w̃_old evaluate to g, hence they are actually the same word: u_0 v_1 u_1 ⋯ v_m u_m = ũ_0 ṽ_1 ũ_1 ⋯ ṽ_m̃ ũ_m̃. Comparing both expressions, we get u_0 = ũ_0 as those are the longest prefixes without letters in B. As we keep reading, the shorter word between v_1 and ṽ_1 is a prefix of the other, but they both evaluate in C, hence v_1 = ṽ_1 and c_1 = c̃_1. Iterating, we get w = w̃ as wanted: L′ is a rational cross-section.
Surprisingly, the converse "if A and B have rational cross-sections (and C is finite) then A *_C B has a rational cross-section" seems open. Note that, using Proposition 3.3 from [BHS18], this reduces to the following question: Question. Suppose that A has a rational cross-section, and C ≤ A is a finite subgroup. Is it true that C has a regular language of coset representatives?
Next we prove the following result. Note that Proposition 2.1 and Theorem 2.2 lie at opposite ends of a spectrum: the subgroup H in Proposition 2.1 is a free factor, while here H intersects each free factor (and each conjugate) trivially.
Theorem 2.2. Let G be a finitely generated group and R a rational subset. Suppose that (a) G = A *_C B with C finite, or (b) G = A*_C is an HNN extension over a finite subgroup C. Suppose that H ≤ G is a finitely generated subgroup acting freely on the associated Bass-Serre tree (by left multiplication). Then H ∩ R is rational.
Remark. These results can be made effective as soon as the rational subset membership problem is decidable in G (i.e., decidable in both factors, see [KSS07]).
We start with several lemmas, following the scheme in §3-4 of [Su20]. Definition 2.3. Let G be a group, A a symmetric generating set, and K ≥ 0 a constant. Consider two words v, w ∈ A*. We say w asymmetrically K-fellow travels with v if there exists a weakly increasing sequence (i_n)_{n=1,...,ℓ(w)} such that (using Notation 1.5) dist_A(w(: n], v(: i_n]) ≤ K for all n.

Lemma 2.4 (Compare with [Su20, Lemma 3.5]). Let G be a f.g. group, A a finite symmetric generating set, and K ≥ 0. Consider a regular language L ⊆ A*. We define L̃ = {w ∈ A* | ∃v ∈ L, w̄ = v̄ and w asymmetrically K-fellow travels with v}.
Then L̃ is regular and ev(L̃) = ev(L).
Proof. Let M = (V, A, δ, *, T) be a deterministic automaton recognizing L. We construct a new automaton M̃ recognizing L̃ as follows:
• The vertex set is Ṽ = V × B_K, where B_K is the ball of radius K (around e) in the Cayley graph of G, relative to the generating set A.
• Add an a-edge (a ∈ A) from (p, g) to (q, h) if any of the following conditions hold: 1. g = h = e and there was an a-edge from p to q in M; 2. p = q and h = ga (i.e., there was an a-edge from g to h in B_K); or 3. using only edges of type 1 and 2, there already exists in M̃ an oriented path from (p, g) to (q, h) with associated word evaluating to a.
Note that M can be seen as the sub-automaton consisting of all type 1 edges (see Figure 5: type 1 edges in black, type 2 in purple, some type 3 in pink).
We show that L̃ coincides with the language recognized by M̃:
• By construction, each edge e of type 3 (from (p, g) to (q, h), labeled by a ∈ A) can be associated to a word v_e ∈ L_{p→q} satisfying g⁻¹ v̄_e h = ā.
Given a word w recognized by M̃, we construct v recognized by M as follows: pick an accepted path for w in M̃; replace each edge of type 1 by its label, forget about edges of type 2, and replace each edge e of type 3 by v_e. It's easy to check that w̄ = v̄ and w asymmetrically K-fellow travels with v, hence w ∈ L̃.
• Reciprocally, let w ∈ A* be a word asymmetrically K-fellow traveling with some word v ∈ L evaluating to the same element ev(w) = ev(v). Pick an accepted path for v in M. Each prefix v(: i] leads to a state p_i in M. Moreover, for each 0 ≤ n ≤ ℓ(w), there exists g_n ∈ B_K such that w(: n] g_n = v(: i_n] (as elements of G). We argue that w is recognized by the path going through all (p_{i_n}, g_n) in order (and only those), using the appropriate type 3 edges.
It follows that L̃ is the language recognized by M̃: L̃ is regular.
Proposition 2.5 (Compare with [Su20, Proposition 4.2]). Let G be a f.g. group, A a finite generating set, and 𝓡, 𝓢 ⊆ A* regular languages evaluating to R, S ⊆ G respectively. Suppose there exists K ≥ 0 such that, for all g ∈ R ∩ S, there exist v ∈ 𝓡 and w ∈ 𝓢 evaluating to g such that w asymmetrically K-fellow travels with v. Then R ∩ S is rational.

Proof. By Lemma 2.4, the language 𝓡̃ ∩ 𝓢 is regular, and it evaluates to R ∩ S.
Lemma 2.6 (Nielsen basis). Consider H a group acting freely on a simplicial tree T, and fix a vertex p ∈ T. Then H is free, and admits a basis N such that, for any reduced word w over N^± and any 1 ≤ n ≤ ℓ(w), the geodesic path from p to w • p is not covered by the geodesic paths from p to w(: n − 1] • p and from w(: n] • p to w • p.

Proof of Theorem 2.2. We will only prove part (a), as part (b) is similar.
Fix a finite symmetric generating set 𝒜 ⊆ A ∪ B. Take 𝓡 ⊆ 𝒜* a language evaluating to R. Consider T the Bass-Serre tree for G = A *_C B and p = A. Take N a Nielsen basis for H as defined above, and consider 𝓢 the language of reduced words over N^±.
We check that, for every pair (v, w) ∈ 𝓡 × 𝓢 evaluating to the same point, w asymmetrically K-fellow travels with v, for a suitable constant K (depending only on C, N and 𝒜). The previous result says that, for each 1 ≤ n ≤ ℓ(w), there exists an edge g_n C such that all w(: j]A with j < n lie in one component of T \ {g_n C}, and all w(: j]A with j ≥ n lie in the other. Looking at the Cayley graph of G (w.r.t. 𝒜), this means that the (coarse) path w "crosses" the cutset g_n C exactly once, between w(: n − 1] and w(: n]. On the other side, the (genuine) path v also has to cross this cutset (as it evaluates to the same endpoint as w). Let i_n be the smallest index such that v(: i_n] ∈ g_n C. It's easy to check that the sequence (i_n)_{1 ≤ n ≤ ℓ(w)} is (strictly) increasing and that dist_𝒜(w(: n], v(: i_n]) is uniformly bounded. Everything is in place for Proposition 2.5, except that 𝓢 is not a language over 𝒜. This can be fixed by replacing each letter of N^± by a geodesic representative over 𝒜. (We get a new language 𝓢_𝒜 ⊂ 𝒜*, which is regular and evaluates to H.) We conclude that R ∩ H is rational.
Monsters without rational cross-sections
Let us first recall a classical lemma for regular languages. Lemma 3.1 (Pumping lemma). Let L be a regular language (over A); then either L is finite, or there exist words u, v, w ∈ A* with v ≠ ε such that uv^n w ∈ L for all n ≥ 0. A direct corollary (proven in both [Gil87; GdlH90]) is the following: if L is a rational cross-section for a group G, then either G is finite, or G contains an element of infinite order. In particular, infinite torsion groups don't have rational cross-sections.
There exist other dichotomies for regular languages that can be used in a similar fashion. We first need a notion of "smallness" for languages: Definition 3.2 (Bounded language). A language L ⊆ A* is bounded if there exist (not necessarily distinct) words w_1, ..., w_n ∈ A* such that L ⊆ w_1* w_2* ⋯ w_n*. A folklore dichotomy is the following:

Lemma 3.3 (See for instance [Tro81]). Let L be a regular language (over A); then either
• L has polynomial growth and is bounded, or
• L has exponential growth and there exist v_1, v_2, w_1, w_2 ∈ A* with w_1w_2 ≠ w_2w_1 and v_1{w_1, w_2}*v_2 ⊆ L.
The corresponding notion of "smallness" for groups is the following: for k = 1, 2, let M_k denote the class of groups containing no free submonoid of rank k; thus M_1 is the class of torsion groups, and M_2 is the class of groups without free non-abelian submonoids. A group G is boundedly generated if there exist g_1, ..., g_m ∈ G with G = ⟨g_1⟩⟨g_2⟩⋯⟨g_m⟩. A direct translation of Lemma 3.3 gives:

Corollary 3.5. Let G be a group having some rational cross-section L; then at least one of the following must hold:
• G is boundedly generated, or
• G contains a free non-abelian submonoid (G ∉ M_2).
Remark. Note that Corollary 3.5 is no longer a dichotomy. For instance, the Baumslag-Solitar group BS(1, 2) has rational cross-sections in both regimes of Lemma 3.3, and indeed it is boundedly generated and contains free submonoids.
Our goal in the next two subsections will be to construct concrete examples of groups without either property, hence without rational cross-sections. In both cases, one condition will be easily discarded, some work being needed to reject the other.
Extensions of infinite torsion groups
We first look at groups mapping onto infinite torsion groups. Note that bounded generation passes to homomorphic images, so if G has some quotient T which is not boundedly generated (e.g. infinite torsion), then neither is G. The absence of free submonoids doesn't behave as well under extensions; however, things can be done under conditions:

Theorem 3.6. Let G be a group given by a short exact sequence N ↪ G ↠ T, with quotient map π. Suppose that N ∈ M_2 and T is torsion; then G ∈ M_2. As a corollary, if T is infinite, then G doesn't have any rational cross-section.
Proof. Let g_1, g_2 ∈ G. Let n_1, n_2 be the (finite) orders of π(g_1) and π(g_2) respectively, so that g_1^{n_1}, g_2^{n_2} ∈ N. As N doesn't contain any free submonoid, there exists some non-trivial positive relation between g_1^{n_1} and g_2^{n_2} (in N). Obviously this relation can also be seen as a non-trivial positive relation between g_1 and g_2 (in G), so that no pair of elements g_1, g_2 ∈ G generates a free submonoid in G.

Examples include:
(a) Z × T for any infinite torsion group T, for instance Burnside groups B(p, n) with odd exponent p ≥ 665 and n ≥ 2 generators, or the first Grigorchuk group G_{(012)^∞}.
(b) Z ≀ T for any infinite torsion group T. (Compare with Theorem 5.5.)
(c) Free groups in the variety [x^p, y^p] = e, for odd p ≥ 665 and n ≥ 2 generators.
Groups acting on regular rooted trees
Another class of non-examples is provided by groups of intermediate growth. Note that the growth of any cross-section gives a lower bound on the growth of the group, hence any hypothetical rational cross-section for a group of intermediate growth would have polynomial growth, hence be bounded. Known groups of intermediate growth mainly come from two constructions: groups acting on trees following [Gri85], and groups "of dynamical origin" following [Nek18]. We will focus on Grigorchuk-type groups (not all of which are torsion).
Let T_d be the d-ary rooted tree, and let G ↷ T_d act by automorphisms. We denote by St(L_n) the pointwise stabilizer of the n-th level L_n of the tree, and by G_n = G/St(L_n) the corresponding congruence quotients. We first need a notation: Definition 3.9. Let G be a group. We define its exponent exp(G) as the least common multiple of the orders of its elements (possibly infinite).

Proposition 3.8. Let G ↷ T_d act faithfully by automorphisms, and suppose that lim sup_n log|G_n| / log|W_{d,n}| > 0, where W_{d,n} denotes the automorphism group of the first n levels. Then G is not boundedly generated.

Proof of Proposition 3.8. By construction, G_n acts faithfully on the n first levels of T_d; in particular it can be seen as a subgroup of the automorphism group of the first levels, i.e., a subgroup of the n-fold iterated permutational wreath product W_{d,n}. It follows that elements of G_n have relatively small orders, namely bounded by exp(W_{d,n}), which grows at most exponentially in n. Now suppose that G is boundedly generated, and fix elements g_1, g_2, ..., g_m ∈ G such that G = ⟨g_1⟩⟨g_2⟩⋯⟨g_m⟩. Factoring by St(L_n), we get |G_n| ≤ exp(W_{d,n})^m, as wanted. This might seem weak, but |G_n| typically grows as a double exponential. This is the case as soon as the Hausdorff dimension of the closure Ḡ inside W_d = Aut(T_d) is (strictly) positive; recall that the Hausdorff dimension is given by the analogous lim inf. This includes spinal p-groups with p ≥ 3 [FAZR11] and weakly regular branch groups [Bar06; FA23]. For spinal 2-groups G_ω, the orders |G_n| have been computed in [Sie08], and the analogous lim sup is positive as soon as ω is not eventually constant, that is, as soon as G_ω is not virtually abelian. (The Hausdorff dimension itself is positive only when ω doesn't contain arbitrarily long sequences of identical symbols.) We deduce that none of these groups has bounded generation. Combining this result with known results on intermediate growth among spinal groups (see [Gri85; Fra20]), we get new examples of groups without rational cross-sections:

Corollary 3.10. The spinal groups of intermediate growth just discussed don't admit any rational cross-section.

Remark. The same argument works for context-free cross-sections. Indeed, just as regular languages, context-free languages are either bounded or have exponential growth [Tro81], so that groups of sub-exponential growth with context-free cross-sections would have bounded generation. This is not the case for the spinal groups covered previously.
Orders and wreath products: Positive results
Let us first recall a known result that started our investigation: Proposition 4.1 (See [Gil87]). The lamplighter group C_2 ≀ Z admits a rational cross-section.
Proof. We construct a rational cross-section over A = {a, t, T = t⁻¹}: 𝓛 = 𝒯 ∪ 𝒯a(t⁺a)*𝒯, where t⁺ = tt* = {t^n : n ≥ 1}, similarly T⁺ = TT*, and 𝒯 = t* ∪ T⁺ is a cross-section for the subgroup ⟨t⟩. Let us be a bit informal. Except for the "all lamps off" elements (translations), covered by the first term 𝒯, these normal forms consist in going through the support from left to right, changing the state of each lamp to match the element represented, and never touching those lamps ever after.

Similarly, for general wreath products L ≀ Q, we would like to have some order on Q. Of course, this ordering should be "encodable" in a finite automaton, which forces the order to be left-invariant, at least under the action of some finite-index subgroup (as the only recognizable subgroups of Q are the finite-index ones, by a theorem of Anissimov and Seifert). This naturally leads to the following definition:

Definition 4.2. A finitely generated group G has property (R+LO) if there exists a total left-invariant order ≺ on G such that the associated positive cone G⁺ = {g ∈ G | g ≻ e} admits a rational cross-section 𝒢⁺.
Remark. Property (R+LO) implies that G admits a rational cross-section. Indeed, if G⁺ admits a rational cross-section 𝒢⁺, then we can define 𝒢⁻ as the language obtained from 𝒢⁺ by reversing each word and replacing each letter by its inverse; 𝒢⁻ is also regular (as regularity is preserved under substitution/morphism and mirror image) and is a cross-section of the negative cone. It follows that G admits 𝒢⁻ ∪ {ε} ∪ 𝒢⁺ as a rational cross-section.
Once this (R+LO) condition is defined, Gilman's construction generalizes easily.
Theorem 4.3. Let L and Q be two groups such that • L has a rational cross-section; • Q is virtually (R+LO).
Then L ≀ Q has a rational cross-section.
Proof. Let us first suppose that Q is (R+LO). Let 𝒬, 𝒬⁺ and 𝓛 be rational cross-sections for Q, Q⁺ and L respectively. We define 𝓛₀ = 𝓛 \ ev⁻¹(e_L). We claim that 𝒬 ∪ 𝒬𝓛₀(𝒬⁺𝓛₀)*𝒬 is a rational cross-section for L ≀ Q. Note that the lamplighter only moves "in the positive direction" from the first to the last state switch, so no switch can be undone. It follows that every word from 𝒬𝓛₀(𝒬⁺𝓛₀)*𝒬 switches at least one lamp to a non-trivial state.
⋆ If g = ((e_L)_{s∈Q}, q) is a translation, then g has a unique representative in 𝒬, and no representative in 𝒬𝓛₀(𝒬⁺𝓛₀)*𝒬.
⋆ Suppose now that g = ((l_s)_{s∈Q}, q) has non-empty support. Let supp g = {s_0 ≺ s_1 ≺ ... ≺ s_n}; then the word v_0 u_0 v_1 u_1 ⋯ v_n u_n v_{n+1}, where v_0 ∈ 𝒬 represents s_0, v_i ∈ 𝒬⁺ represents s_{i−1}⁻¹s_i (1 ≤ i ≤ n), u_i ∈ 𝓛₀ represents l_{s_i}, and v_{n+1} ∈ 𝒬 represents s_n⁻¹q, is the only word of the language mapping to g.
If Q is only virtually (R+LO), denote by H ≤ Q a finite-index subgroup (say of index n) with property (R+LO). Note that L^n admits a rational cross-section, so the first part implies that L^n ≀ H has a rational cross-section. Moreover, L^n ≀ H embeds as a finite-index subgroup of L ≀ Q, and admitting a rational cross-section passes to finite-index overgroups. We conclude in turn that L ≀ Q admits a rational cross-section.
Examples of Q's satisfying the (R+LO) condition are:
(a) Q = Z with the usual order.
(b) Q = Z^d with the lexicographic order.
(c) More generally, consider an extension A ↪ B ↠ C (with maps ι and π). If both A and C have property (R+LO), then B has (R+LO) too. Indeed, the usual construction for a positive cone B⁺ := π⁻¹(C⁺) ∪ ι(A⁺) has a rational cross-section 𝓑⁺ := 𝓒⁺𝓐 ∪ 𝓐⁺. As a corollary, poly-Z groups (i.e., Z-by-Z-by-...-by-Z groups) have (R+LO), and all finitely generated virtually nilpotent groups are virtually (R+LO).
(d) It is shown in [ARS21, Section 3.2.1] that, if both L and Q admit total left-invariant rational orders, then the same is true for G = L ≀ Q. Their argument adapts to property (R+LO): if both L and Q have (R+LO), then G = L ≀ Q has (R+LO). The positive cone is formed of the elements g = ((l_s)_{s∈Q}, q) ∈ L ≀ Q such that
• either (l_s)_{s∈Q} ≡ e_L and q ∈ Q⁺,
• or l_m ∈ L⁺, where m = min{s ∈ Q : l_s ≠ e}.
(f) For braid groups, the Relaxation Normal Form is regular and compatible with the Dehornoy order [Jug17, Theorem 5.5], hence B n is (R+LO).
(g) A closer look at the arguments of [ARS21] gives the following: given a finite family of groups (G_i) with (R+LO), the group (*_i G_i) × Z has property (R+LO).
Orders and wreath products: Negative results
In this section we explore our intuition that the only way to build a rational cross-section for a wreath product L ≀ Q using an automaton is to switch on lamps monotonously w.r.t. some rational left-invariant order on Q. This insight translates into Lemma 5.1. We deduce from this lemma a criterion (Proposition 5.4) ensuring that some wreath products do not admit rational cross-sections, and apply it to groups similar to C_2 ≀ (C_2 ≀ Z). In the last subsection, we apply the criterion to wreath products L ≀ Q with Q infinite-ended.
Main lemma
Lemma 5.1. Let N ↪ G ↠ Q be a short exact sequence, with quotient map π_Q, and suppose R ⊆ G has a rational cross-section 𝓛. Let M = (V, A, δ, *, T) be a trimmed automaton accepting 𝓛. For each state v ∈ V, we define the language L_{v→v} of words we can read along paths from v to v. Finally, we define P_v = π_Q(ev(L_{v→v})). Then P_v is a submonoid and
(a) if N is torsion (i.e., N ∈ M_1), then P_v ∩ P_v⁻¹ = {e};
(b) if N ∈ M_2, then P_v ∩ P_v⁻¹ is a cyclic subgroup.
Moreover, in either case, p ≺_v q :⟺ p⁻¹q ∈ P_v \ (P_v ∩ P_v⁻¹) defines a left-invariant rational partial order on Q.
Building a word w ∈ 𝓛 corresponds to following a path in the automaton M. Morally, what our lemma says is that, as long as we stay in the strongly connected component (or "communication class", using Markov chain terminology) of a vertex v ∈ V, the projection of w(: i] in Q will move along a chain for ≺_v. The main idea is that, as soon as P_v ∩ P_v⁻¹ (or rather T in what follows) is big enough, the language will recognize a bunch of words projecting onto the same element q ∈ Q, and embedding all this mess (aka a free submonoid) inside a single lateral class qN isn't possible unless N itself contains a free submonoid. (The "trimmed" condition is there to make sure no part of the automaton remains invisible in the recognized language.) Examples.
• Let us first look at the rational cross-section for F_2 formed by all reduced words on a, b and their inverses A, B. The group F_2 can be seen as a trivial extension {e} ↪ F_2 ↠ F_2; we have N = {e} torsion, and the cones P_v can be read off the automaton in Figure 8. Note that the condition N ∈ M_2 is indeed necessary, as F_2 can alternatively be seen as the extension [F_2, F_2] ↪ F_2 ↠ Z², in which case N = [F_2, F_2] ∉ M_2 and the conclusion of the lemma fails.
• Examples with P_v ∩ P_v⁻¹ non-trivial are quite easy to come up with. For instance, the automaton in Figure 9 recognizes a cross-section for C_3 × Z, which can be seen as an extension Z ↪ C_3 × Z ↠ C_3, in which case P_v ∩ P_v⁻¹ is non-trivial for some state v. It is notable that, even though the cones obtained for different choices of v are quite close in some sense (for instance, we always have w_{u→v} P_v w_{v→u} ⊆ P_u for suitable w_{u→v}, w_{v→u} ∈ Q), they can have drastically different behavior (P_v gives rise to a total order ≺_v while ≺_u has chain density → 0 w.r.t. balls in Z²).
Proof of Lemma 5.1. Let us define T = {u ∈ L_{v→v} | π_Q(ū) ∈ P_v ∩ P_v⁻¹}. Both cases are pretty similar.
(a) Pick any g ∈ P_v ∩ P_v⁻¹; there are words w_+, w_- ∈ L_{v→v} with π_Q(w̄_+) = g and π_Q(w̄_-) = g⁻¹, and define w = w_+w_-. Note that w ∈ L_{v→v} and w̄ ∈ N. Since the automaton is trimmed, there exist paths from * to v, and from v to a terminal state. Let v_0, v_1 ∈ A* be words we can read along such paths; we get v_0 w^n v_1 ∈ 𝓛 for all n ≥ 0. If w ≠ ε, these words are pairwise distinct. As ev : 𝓛 → G is injective, we would get an infinite-order element w̄ in N, absurd! So the only possibility is w = w_+ = w_- = ε: we get T = {ε} and P_v ∩ P_v⁻¹ = {e}.
(b) Define g, w_+, w_-, v_0 and v_1 as in part (a), with the extra assumption that g has infinite order. In particular, there does not exist another element h ∈ Q such that both g and g⁻¹ are positive powers of h, hence the same holds for w̄_+ and w̄_-. It follows that w_+w_- ≠ w_-w_+ (as words), hence {w_+w_-, w_-w_+}* ⊆ L_{v→v} is a free monoid. Since all its elements project to the identity of Q and ev : 𝓛 → G is injective, we would get a free submonoid ev({w_+w_-, w_-w_+}*) ⊆ N, absurd! We conclude that all elements of P_v ∩ P_v⁻¹ have finite order. Suppose now there exist w_1, w_2 ∈ T such that w_1w_2 ≠ w_2w_1. Let n_i be the (finite) order of g_i = π_Q(w̄_i). We get a free monoid {w_1^{n_1}, w_2^{n_2}}* ⊆ L_{v→v} evaluating to a free submonoid ev({w_1^{n_1}, w_2^{n_2}}*) ⊆ N, absurd! Hence T is a commutative submonoid of A*, i.e., there exists w_0 ∈ A* such that T ⊆ {w_0}*. It follows that P_v ∩ P_v⁻¹ is a cyclic subgroup (generated by some power of π_Q(w̄_0)).
Note that, in both cases, T is a regular language (submonoids of {w_0}* ≅ (ℕ, +) are finitely generated, hence regular), so L_{v→v} \ T is regular too, and evaluates to a rational set projecting onto P_v \ (P_v ∩ P_v⁻¹): the order ≺_v is rational.
A criterion for wreath products
Our interest will be directed towards wreath products G = L ≀ Q, in which case N = ⊕_Q L. We first prove that N contains a free submonoid if and only if L does. Proposition 5.2. Let (G_i)_{i∈I} be monoids. Suppose none of the G_i contains a free (non-abelian) submonoid; then ⊕_{i∈I} G_i doesn't contain free submonoids either.
Proof of Proposition 5.2. We first deal with the case |I| = 2, so the direct sum can be written as G × H. Fix x_1, x_2 ∈ G × H, say x_i = (g_i, h_i). Let us construct a non-trivial positive relation between x_1 and x_2.
• As G contains no free submonoid, there is a non-trivial positive relation u_1(g_1, g_2) = u_2(g_1, g_2) in G. Up to replacing (u_1, u_2) by (u_1u_2, u_2u_1), we may assume ℓ(u_1) = ℓ(u_2) =: l with u_1 ≠ u_2 as words.
• As H contains no free submonoid, there is a non-trivial positive relation v_1(a, b) = v_2(a, b) in H between a = u_1(h_1, h_2) and b = u_2(h_1, h_2); similarly we may assume ℓ(v_1) = ℓ(v_2) and v_1 ≠ v_2.
• Consider w_i(x, y) = v_i(u_1(x, y), u_2(x, y)) for i = 1, 2. The announced relation is w_1(x_1, x_2) = w_2(x_1, x_2).
This equality clearly holds in the second component. In the first component it reads as v_1(g, g) = v_2(g, g), where g = u_1(g_1, g_2) = u_2(g_1, g_2), which follows from ℓ(v_1) = ℓ(v_2). Moreover, this relation is non-trivial. Indeed, v_1 and v_2 differ on some letter, wlog the j-th letter is x in v_1 and y in v_2; then w_i((j − 1)l : jl] = u_i for i = 1, 2, but u_1 ≠ u_2, so that w_1 ≠ w_2.
The more general case where I is finite comes by induction from the case |I| = 2. Finally, the result extends to arbitrary sums using that "not containing a free submonoid" is a local property, hence passes to direct limits.
We are now ready to prove our criterion.

Proposition 5.4. Let Q be a finitely generated group. Suppose that, for any finite sequence of left-invariant rational partial orders ≺_1, ≺_2, ..., ≺_n on Q, there exists an arbitrarily big set S ⊆ Q which is an antichain w.r.t. all the orders ≺_i. Then L ≀ Q doesn't have any rational cross-section, for any non-trivial group L ∈ M_2.

Recall that a set S ⊂ Q is an antichain w.r.t. an order ≺ if and only if it doesn't contain p, q ∈ S such that p ≺ q (i.e., distinct elements of S are always incomparable).
Proof. For the sake of contradiction, let 𝓛 be a rational cross-section for L ≀ Q. WLOG assume A ⊂ L ∪ Q. Let M = (V, A, δ, *, T) be a deterministic trimmed automaton accepting 𝓛. Consider all the orders ≺_v for v ∈ V, and let S ⊆ Q with |S| > |V| be a common antichain. We consider a non-trivial element h ∈ L, and define g = (h · 1_S, e_Q) ∈ L ≀ Q. We show that no element of 𝓛 represents g.
By contradiction, suppose that g = w̄ for some w ∈ 𝓛. For all s ∈ S, there exists 1 ≤ i_s ≤ ℓ(w) such that the state of the lamp on the site s changes between w(: i_s − 1] and w(: i_s]. Recall that A ⊂ L ∪ Q, hence the lamplighter can only change the state of the lamp he is standing next to; in other words, π_Q(w(: i_s − 1]) = s. By the pigeonhole principle, there exist distinct s, t ∈ S such that following both prefixes w(: i_s] and w(: i_t] in the automaton leads to the same state v ∈ V; say i_s < i_t. The subword of w read between these two prefixes lies in L_{v→v} and projects to s⁻¹t, so that s ≺_v t, contradicting that S is a common antichain.

Theorem 5.5. Suppose Q has arbitrarily large or infinite torsion subgroups. Then L ≀ Q doesn't have any rational cross-section, for any non-trivial group L ∈ M_2.
Proof. Distinct elements s, t of a given torsion subgroup can never be comparable w.r.t. any left-invariant order, as s⁻¹t has finite order. Put another way, large torsion subgroups form large common antichains.
Remark. It is natural to ask whether Proposition 5.4 can be improved all the way to a genuine converse of Theorem 4.3, i.e., is the following statement true?
Conjecture A: The group L ≀ Q (with L ≠ {e}) has a rational cross-section if and only if Q is virtually (R+LO) and L has a rational cross-section.
Getting back information on L from "L ≀ Q has a rational cross-section" seems quite difficult. For instance, even for Q = C_2, it reduces (up to known results) to "L × L has a rational cross-section only if L does", which is open. For this reason, we propose the weaker Conjecture B: if L ≀ Q (with L ≠ {e}) has a rational cross-section, then Q is virtually (R+LO). Indeed, getting back information on Q seems more doable. A further argument toward the conjecture is the following strengthening of Proposition 5.4: let ≺ be a partial order on Q and S ⋐ Q a finite subset. We define the chain density of ≺ on S as δ_≺(S) = max{|C| : C ⊆ S is a chain for ≺} / |S|.

Proposition 5.6. Suppose L ≀ Q has a rational cross-section 𝓛, with L ∈ M_2 non-trivial. Let M = (V, A, δ, *, T) be a finite automaton recognizing 𝓛. There exists ε = ε(M) > 0 such that, for all S ⋐ Q, there exists v ∈ V with δ_{≺_v}(S) ≥ ε.

Do these inequalities imply that one of these left-invariant orders restricts to a total order on a finite-index subgroup of Q? This is true whenever Q = Z^d, for instance. Some equivariant version of Dilworth's theorem might be useful.
Rational cones in free and infinite-ended groups
In this section, we prove the following strengthening of a result by Hermiller and Sunic from [HS17]. We need a slight adaptation of their argument to deal with partial orders.
Proposition 5.7. Let F_2 = ⟨a, b⟩ be a free group. For any rational order ≺ on F_2, there exists S ≤ F_2 of rank 2 such that S is an antichain w.r.t. ≺.
Proof. We suppose on the contrary that any subgroup S ≤ F_2 of rank 2 intersects the positive cone P, aiming for a contradiction.
Let A = {a, b, a⁻¹, b⁻¹}, and let 𝓟 ⊆ A* be a regular language for P. By Benois' lemma, we may assume 𝓟 consists only of reduced words over A. Let |V| be the number of states in a corresponding automaton.
⋆ Claim: for every g ∈ F_2, there exists g′ ∈ P such that gg′ ∈ B(e, |V|). Let w ∈ A* be the reduced word for g⁻¹. Suppose wlog that w ends with b^{±1}, so that the relevant concatenations are all reduced words, and S = ⟨g⁻¹aba⁻¹g, g⁻¹a²ba⁻²g⟩ is a rank 2 subgroup of F_2. By our assumption there exists k ∈ P ∩ S, with corresponding reduced word wa . . . a⁻¹w⁻¹ ∈ 𝓟.
As a corollary, w is a prefix of a word in 𝓟, hence there exists another word v ∈ A* of length ℓ(v) < |V| such that wv ∈ 𝓟. Finally g′ = ev(wv) has the announced properties: g′ ∈ P and gg′ = ev(v) ∈ B(e, |V|).
⋆ The sequence defined by g_0 = e and g_{n+1} = g_n g′_n for all n ≥ 0 is then an infinite strictly increasing chain (w.r.t. ≺) contained in the ball B(e, |V|) (which is finite), contradiction!

Theorem 5.8. Let Q be a group with infinitely many ends. Then L ≀ Q doesn't have any rational cross-section, for any non-trivial group L ∈ M_2.
Proof. We use once again Proposition 5.4; we just have to provide large antichains. We first reduce to the case Q = F_2, and then provide an infinite antichain in F_2.
Using Stallings' classification of groups with infinitely many ends, we know Q is either an amalgamated free product A *_C B over a finite subgroup C, or an HNN extension A*_C over embeddings ι, ι′ : C ↪ A of a finite subgroup. In either case, standard Bass-Serre theory gives a free subgroup F_2 ≤ Q of rank 2 acting freely on the corresponding Bass-Serre tree. Using Theorem 2.2, we get that all intersections P_i ∩ F_2 are rational: all these orders restrict to rational orders on F_2. Applying Proposition 5.7 repeatedly then yields a rank 2 subgroup of F_2, hence an infinite set, which is a common antichain for all the orders, as required.
It follows that bounded generation, that is, the existence of g_1, g_2, ..., g_m ∈ R such that R = ⟨g_1⟩⟨g_2⟩⋯⟨g_m⟩, implies the existence of a finite S ⊆ N such that R ⊆ ((St)*(St⁻¹)*)^m. Our next result shows that admitting a rational cross-section forces the same kind of structure.

Proposition 6.1. Let N ↪ G ↠ Z be a short exact sequence with N torsion and quotient map π, and fix t ∈ G with π(t) = 1. If R ⊆ G admits a rational cross-section, then there exist m ∈ ℕ and a finite subset S ⊆ N such that R ⊆ ((St)*(St⁻¹)*)^m.
Remark. As a reality check, let us consider G = C_2 ≀ Z: the entire R = G admits a rational cross-section, and can indeed be written as R = (({e, a}t)*({e, a}t⁻¹)*)².
Proof of Proposition 6.1. Let 𝓛 be a rational cross-section for R and M = (V, A, δ, *, T) be a trimmed automaton accepting 𝓛 (with A finite). Fix m = 2|V| − 1. Let J := max_{s∈A} |π(s̄)| be the largest jump π(·) can make in one step in the automaton. For each v ∈ V, we denote by K_v its strongly connected component, and by L_{K_v→K_v} the language of words we can read from any vertex in K_v to any other vertex in K_v. For a word u, write h(u) = ū t^{−π(ū)} ∈ N for its N-part, and let S consist of e together with all h(u) for u ∈ L_{K_v→K_v} (v ∈ V) with 1 ≤ |π(ū)| ≤ J. The remainder of the proof goes as follows: (a) we prove that S is finite, through an upper bound on the length of the possible u's; (b) we decompose words of 𝓛 into pieces with N-part in S.
(a) Consider a strongly connected component K, and w ∈ L_{K→K} satisfying |π(w̄)| ≤ J. Suppose w.l.o.g. P_v ⊆ Z_{≥0} (for any, hence all, v ∈ K). Using some loop-erasure algorithm, we decompose any path recognizing w as a union of a simple path labeled w_0, together with a bunch of (non-empty, simple) loops labeled u_1, ..., u_r.
Figure 11: A path inside a component K of the automaton, and its decomposition. For the decomposition, follow the path; each time you come back to an already-visited vertex, cut the simple loop formed between the two visits and "forget about the loop", then keep going.
Even though we cannot reconstruct w from w_0 and the u_i's, we at least have π(w̄) = π(w̄_0) + Σ_{i=1}^r π(ū_i). Note that ℓ(w_0) ≤ |K| − 1, hence π(w̄_0) ≥ J(1 − |K|). Recall π(ū_i) ≥ 1 by Lemma 5.1.
Putting everything together, J(1 − |K|) + r ≤ π(w̄) ≤ J, hence r ≤ J|K|, and finally ℓ(w) is bounded in terms of J and |K| only: S is finite.
(b) Consider now w ∈ L_{K→K}. If π(w̄) = 0 this is trivial. Otherwise, Lemma 5.1 gives π(w̄) ≠ 0, say π(w̄) ≥ 1. Recall the notation w(i : j] for the subword of w consisting of all letters from the (i + 1)-th to the j-th, included. Let i_0 = 0. We define recursively i_j as the largest integer such that u_j = w(i_{j−1} : i_j] satisfies π(ū_j) ≤ J. By definition of J, all u_j are non-empty, so we eventually have w = u_1u_2 ⋯ u_r. By maximality of the i_j's (or as π(w̄) ≥ 1 whenever r = 1), we have 1 ≤ π(ū_j) ≤ J, so that h(u_j) ∈ S; hence w̄ ∈ (St)*(St⁻¹)*.
As 𝓛 is a rational cross-section for R, for every g ∈ R there exists w ∈ 𝓛 such that w̄ = g. Using another loop-erasure algorithm, we can rewrite w = w_1s_1w_2s_2 ⋯ s_{n−1}w_n, with w_i ∈ L_{v_i→v_i} labeling a (possibly empty) loop at v_i, s_i labeling an edge from v_i to v_{i+1}, and n ≤ |V|. We've shown w̄_i, s̄_i ∈ (St)*(St⁻¹)*, which concludes the proof.
Houghton's groups form a family of groups with many interesting properties. The first member is the group H_1 = FSym(N) of finitely supported permutations of N, which is not finitely generated. The second is defined as H_2 = {σ ∈ Sym(Z) | ∃π ∈ Z such that σ(x) = x + π for all but finitely many x ∈ Z}.
It is finitely generated but not finitely presented. Higher groups in the family are finitely presented. Brown proved that H_n has property FP_{n−1}, but not FP_n (see [Bro87]). In particular, the H_n's are examples of groups without finite complete rewriting systems, and therefore good candidates not to have any rational cross-section. We show that H_2 doesn't have any rational cross-section. As a byproduct, it is not boundedly generated.
First note that H_2 is indeed a torsion-by-Z group, with the short exact sequence FSym(Z) ↪ H_2 ↠ Z (the quotient map π sending σ to its eventual translation part). We show that subsets of H_2 of the form ((St)*(St⁻¹)*)^m are "nicer" than H_2 as a whole, hence H_2 cannot be of this form. In order to formalize this idea we define a notion of complexity for elements of H_2:

Definition 6.2. Let h ∈ FSym(Z). We define its crossing number as c(h) = max_p #{x ∈ Z | x < p < h(x)}, the maximum being taken over all half-integers p. More generally, if g ∈ H_2, we define c(g) = c(gt^{−π(g)}).

Lemma 6.3. For g, g_1, g_2 ∈ H_2 and m, n ∈ Z:
(a) c(t^m g t^n) = c(g);
(b) c(g⁻¹) = c(g);
(c) c(g_1g_2) ≤ c(g_1) + c(g_2);
(d) if g = σ_0 t σ_1 t ⋯ σ_{n−1} t with σ_0, ..., σ_{n−1} ∈ Sym[a, b], then c(g) ≤ b − a.

Proof. Let us take a deep breath, and prove them one by one:
(a) c(g) = c(gt^n) is clear (for all g ∈ H_2). Moreover, for all h ∈ FSym(Z), we have c(t^m h t^{−m}) = c(h), as those are "translated" permutations. It follows that c(t^m g t^n) = c(g).
(b) For h ∈ FSym(Z) we have c(h⁻¹) = c(h), due to some "conservation of mass": at every gap p, as many points cross upwards as downwards. For generic g ∈ H_2, we have c(g⁻¹) = c((gt^{−π(g)})⁻¹) = c(gt^{−π(g)}) = c(g).
(c) This is clear for h_1, h_2 ∈ FSym(Z), as x < p < h_1h_2(x) implies that either h_2(x) < p < h_1h_2(x) or x < p < h_2(x). For general g_1, g_2 ∈ H_2, we reduce to finitely supported permutations using (a).
(d) The associated h = gt^{−n} ∈ FSym(Z) can be written as h = σ_0 (tσ_1t⁻¹)(t²σ_2t⁻²) ⋯ (t^{n−1}σ_{n−1}t^{−(n−1)}) for some σ_0, σ_1, ..., σ_{n−1} ∈ Sym[a, b]. Note that t^iσ_it^{−i} ∈ FSym(Z) and satisfies supp(t^iσ_it^{−i}) ⊆ [a + i, b + i]. The situation is illustrated in Figure 14. Now observe that h(x) ≤ x + (b − a) for all x ∈ Z, which concludes: at most b − a points can cross any given gap.
Now that everything is in place, we can proceed and prove this section's main result.

Theorem 6.4. H_2 does not admit any rational cross-section.
Proof. Consider a subset R ⊆ H_2 admitting a rational cross-section, so that R ⊆ ((St)*(St⁻¹)*)^m for some finite S ⊆ FSym(Z), by the previous proposition. Fix [a, b] a finite interval containing the support of each s ∈ S. It follows that R ⊆ ((Sym[a, b]t)*(Sym[a, b]t⁻¹)*)^m, hence c(g) is uniformly bounded by 2m(b − a) on R (using Lemma 6.3 repeatedly). On the other side, crossing numbers are not uniformly bounded on H_2, so R ≠ H_2.

Remark. In the previous section we used that, in order for a word w ∈ A* to evaluate to a well-chosen g ∈ L ≀ Q, the corresponding path in Q should pass through a large set of elements of Q, which can be complicated. In this section, we showed that the path corresponding to any word w evaluating to a well-chosen g ∈ H_2 should do many back-and-forths in Q = Z, which turns out to be just as complicated.
A finitely presented extension of Grigorchuk's group
The goal of this section is to prove that the finitely presented HNN extension Ĝ of the first Grigorchuk group G defined in [Gri98] doesn't have a rational cross-section. This group was introduced as the first example of a finitely presented group which is amenable but not elementary amenable. In the first subsection, we recall the construction of Ĝ, and exhibit an action of Ĝ on the (unrooted) 3-regular tree. The actual proof that Ĝ does not admit a rational cross-section, using Proposition 6.1, comes in §7.2.
An action of Ĝ on the 3-regular tree
Let us first recall the definition of the first Grigorchuk group. Definition 7.1. The first Grigorchuk group is G = ⟨a, b, c, d⟩ ≤ Aut({0, 1}*), where a permutes the two maximal subtrees, and b, c, d are defined by the wreath recursion b = (a, c), c = (a, d), d = (e, b) (see Figure 15). We will also need the notion of section of an automorphism at a vertex:

Definition 7.2. Let g ∈ Aut({0, 1}*) and v ∈ {0, 1}*. The section of g at v is the unique element g_v ∈ Aut({0, 1}*) such that ∀w ∈ {0, 1}*, g(vw) = g(v)g_v(w).

One checks that the substitution φ : a ↦ aca, b ↦ d, c ↦ b, d ↦ c extends to an injective endomorphism of G, so that φ(g) ∈ St(0) for all g ∈ G, and φ(g)_1 = g. (This last equality proves injectivity, and will be central in order to define an action of Ĝ.) Using this presentation, Grigorchuk [Gri98] constructed the following finitely presented group:

Definition 7.3. The group Ĝ is defined as the following (ascending) HNN extension: Ĝ = ⟨G, t | t⁻¹gt = φ(g) for g ∈ G⟩.

Proposition 7.4. The group Ĝ acts faithfully, by automorphisms, on the 3-regular tree.
Proof. We let t shift levels and G act on the subtree below the vertex (0, ε) via its usual action on {0, 1}*. This defines an action of the free group F(a, b, c, d, t); it remains to check that each relation is satisfied:
• The first line of relations holds in G (so holds in the subtree below (0, ε)); obviously a², b², c², d² act as the identity on each branch, and x⁴ = e is a law in D_8.
• The relations t⁻¹bt = d, t⁻¹ct = b and t⁻¹dt = c are trivial.
• t⁻¹at = aca is equivalent to a bunch of conditions on the actions ψ^{−i}(a). First, we should have ψ^{−1}(a) = d (compare with the left diagram of Figure 16); the remaining conditions are checked in the same way.

Figure 17: Action of a, c and a random g ∈ G on the tree. The highlighted vertex is (0, ε).

So we get a genuine Ĝ-action. Moreover, the action restricted to t^nGt^{−n} is faithful (look under (−n, ε)), and any g ∉ N = ⋃_n t^nGt^{−n} shifts levels, so acts non-trivially.
Remark/Definition 7.5. Our tree has slightly more structure, as it is graduated. Indeed, we can define the level of a vertex (n, v) as n + ℓ(v), where ℓ(v) is the length of v. In particular we can define a relation "is a descendant of", which is preserved by the action. Therefore we can still define the section of an automorphism g at a vertex (n, v) as the unique element g_{(n,v)} ∈ Aut({0, 1}*) satisfying g · (n, vw) = (ñ, ṽ g_{(n,v)}(w)), where (ñ, ṽ) = g · (n, v).
Remark. The boundary of the tree can be identified with the set of doubly infinite sequences of 0's and 1's starting with infinitely many 1's (together with a globally fixed end −∞, which can be considered as the root at infinity). The induced action can easily be described. For v ∈ {0, 1}*, s ∈ {0, 1} and w ∈ {0, 1}^∞, we have
• t shifts to the left: t · (...11v|sw) = ...11vs|w;
• elements g ∈ G act on the main subtree as g · (...11|w) = ...11|(gw), where gw is defined by the usual action G ↷ {0, 1}^∞.
Remark. The existence of an action can also be seen abstractly. Consider a sequence of nested groups G_n acting on nested sets/graphs Ω_n, with maps φ_n : G_n ↪ G_{n+1} and ι_n : Ω_n ↪ Ω_{n+1}. If all of this is equivariant, in the sense that ι_n(g · ω) = φ_n(g) · ι_n(ω) for all g ∈ G_n, ω ∈ Ω_n, then we get an action lim→ G_n ↷ lim→ Ω_n. If moreover all the G_n, Ω_n, φ_n and ι_n are equal to some fixed G, Ω, φ and ι respectively, then we get an action of the corresponding ascending HNN extension. Here, we just take G = G (the Grigorchuk group), Ω = {0, 1}*, φ = φ and ι : w ↦ 1w. The direct limit of all the G's is ⋃_n t^nGt^{−n}, while the direct limit of {0, 1}* is our tree. (If it wasn't clear, this shows that the graph acted on is indeed a tree, since direct limits of trees are trees.)
No rational cross-section for Ĝ
We apply Proposition 6.1 (rather its contrapositive), hence proving the following result.
Theorem 7.6. The extension Ĝ doesn't have any rational cross-section.

Proof. Let N = ⋃_n t^nGt^{−n}. Fix a finite set S ⊂ N and m ∈ ℕ. We're aiming to show Ĝ ≠ ((St)*(St⁻¹)*)^m.
Up to conjugation by some power of t, we may suppose S ⊂ G. We also add e to S.
The strategy is the following: for X ⊆ N, we consider sec_n(X) = {x_v St(L_n) | x ∈ X, v ∈ L_0} ⊆ Aut({0, 1}*)/St(L_n), the set of sections below vertices at level 0, where we only care about the action down to level n. (Recall that St(L_n) is the pointwise stabilizer of L_n.) On the one hand, sec_n(N) contains the usual congruence quotient sec_n(G) = G_n, which has size 2^{5·2^{n−3}+2} as soon as n ≥ 3 (see [Gri00]); it follows that |sec_n(N)| grows as a double exponential. On the other hand, we show that the cardinality of sec_n(N ∩ ((St)*(St⁻¹)*)^m) grows at most exponentially in n, so that ((St)*(St⁻¹)*)^m cannot fully cover N, nor Ĝ. Observation: the chain rule (xy)_v = x_{y(v)} y_v gives sec_n(XY) ⊆ sec_n(X) sec_n(Y) for X, Y ⊆ N. This allows us to decompose the problem into simpler parts. Indeed, each element h ∈ N ∩ ((St)*(St⁻¹)*)^m can be written as a product h = a_0a_1 ⋯ a_ℓ with a_i ∈ t^{j(i)}St^{−j(i)}, for some height function j : [[1, ℓ]] → Z satisfying j(i + 1) − j(i) = ±1, with the sign of this difference changing at most 2m − 1 times (and some additional boundary conditions). If we bunch together all contiguous a_i with j(i) > 0 (red pikes), and all contiguous a_i with j(i) ≤ 0 (green valleys), we get elements of U := ⋃_{J≥1} t(St)^{J−1}(St⁻¹)^J and of t^{−n}Gt^n respectively, so that h is a product of at most 2m factors from U ∪ t^{−n}Gt^n.
(b) Let s ∈ S and denote h = t^{−j}st^j. If v = (0, ε), then h_v is fully determined by s (so |S| choices). Otherwise, h_v is a subsection of an element ψ^*(g) ∈ ⟨a, d⟩, i.e., belongs to {e, a, b, c, d, ad, da, ada, dad, adad}. We get |S| + 10 sections.
(c) We'll need some more artillery this time. Let us start with a definition and some properties inspired by [BR10].

Definition 7.7. Let h ∈ N. We define the complexity level k(h) as the smallest integer k such that all vertices v ∈ L_k have sections h_v belonging to {e, a, b, c, d} (with the convention k(e) = k(b) = k(c) = k(d) = −∞).

Lemma 7.8. For g, h ∈ N, we have (i) k(a) = 0; (ii) k(tgt⁻¹) = k(g) − 1; (iii) k(gh) ≤ max{k(g), k(h)} + 1.

As a corollary, k is finite on N \ {e, b, c, d}.
Let us go back to estimate (c). Let K = max_{s∈S} k(s). Each h ∈ U can be written as h = t s_1 t s_2 ⋯ s_{J−1} t s_J t⁻¹ s_{J+1} t⁻¹ ⋯ s_{2J−1} t⁻¹, with J ≥ 1 and s_1, s_2, ..., s_{2J−1} ∈ S. Using Lemma 7.8 (ii-iii) repeatedly gives k(h) ≤ K + 1. It follows that, for any vertex v ∈ L_0, the section h_v is given by some automorphism of the K + 1 first levels together with a choice from {e, a, b, c, d} for the section at each vertex of the (K + 1)-th level. In total, this gives a bound of L = 2^{2^{K+1}−1} · 5^{2^{K+1}} possible sections. Combining the estimates (a), (b) and (c), the cardinality of sec_n(N ∩ ((St)*(St⁻¹)*)^m) grows at most exponentially in n, which concludes the proof.
Figure 4: w asymmetrically K-fellow travels with v. We see that w is strongly restrained by v, while the conditions imposed on v by w are much weaker.
Figure 5: Part of M̃ for G = ⟨a, b | [a, b], b²⟩, A = {a, A = a⁻¹, b} and K = 1. Type 1 edges are shown in black, type 2 in purple, and some type 3 in pink.
Figure 6: The Cayley graph of A *_C B, and two words v, w.
Figure 12: A path in the automaton and its decomposition. For the decomposition, start at the starting vertex * and skip directly to the last visit of *, hence bypassing a (possibly empty) loop * → *, then go to the next vertex. Each time you enter a new vertex, skip directly to its last visit (bypassing another loop), then keep going.
Figure 13: A permutation h ∈ FSym(Z) and a p reaching the bound c(h) = 3.
Figure 14: A "braid" diagram for Lemma 6.3(d).
Figure 15: From left to right: a, b, c and d. Black triangles denote id sections.
Figure 10: A strongly connected component of the automaton constructed in §4 for C_2 ≀ Z² = ⟨a⟩ ≀ ⟨s, t⟩, and some corresponding cones.
|
v3-fos-license
|
2022-12-24T16:24:29.355Z
|
2022-12-01T00:00:00.000
|
255052690
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.cureus.com/articles/128369-a-practical-tool-for-risk-management-in-clinical-laboratories.pdf",
"pdf_hash": "11a308aeb52f51c756592cf1f204bfefc634ec42",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43368",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b41479c44b6c9440b0e559157d903928f6fc5a75",
"year": 2022
}
|
pes2o/s2orc
|
A Practical Tool for Risk Management in Clinical Laboratories
Risk management constitutes an essential component of the Quality Management System (QMS) of medical laboratories. The international medical laboratory standard for quality and competence, International Standards Organization (ISO) 15189, in its 2012 version, specified risk management for the first time. Since then, there has been much focus on this subject. We aimed to develop a practical tool for risk management in a clinical laboratory that contains five major cyclical steps: risk identification, quantification, prioritization, mitigation, and surveillance. The method for risk identification was based on a questionnaire formulated by evaluating five major components of laboratory processes, namely i) Specimen, ii) Test system, iii) Reagent, iv) Environment, and v) Testing personnel. All risks identified using the questionnaire can be quantified by calculating the risk priority number (RPN) with the failure modes and effects analysis (FMEA) tool. Based on the calculated RPN, identified risks shall then be prioritized and mitigated. Based on our collective laboratory management experience, we also list and schedule a few process-specific quality assurance (QA) activities. The listed QA activities are intended to monitor the emergence of new risks and the re-emergence of previously mitigated ones. We believe that the templates for risk identification, risk quantification, and risk surveillance presented in this article will serve as ready references for supervisors of clinical laboratories.
Introduction
The test results generated by clinical laboratories aid both in diagnosing patients' medical conditions and in continuously monitoring their treatment. Since laboratory test results form an integral part of medical decisions, it is an absolute necessity that the results generated by the lab be highly reliable and accurate. In recent years the field of laboratory medicine has shown tremendous technological advancement and automation across all of its major processes, which include the pre-examination (pre-analytical), examination (analytical), and post-examination (post-analytical) processes. Despite this automation, some risks persist, and if not controlled adequately they could result in a wrong diagnosis, wrong treatment, and ultimately morbidity and mortality. So the identification and mitigation of potential risks associated with laboratory processes shall always be given prime importance. Risk management in laboratories, as in any industry, follows similar strategies: creating a process map that gives staff a better understanding of the process flow; identifying potential sources of error and assessing their impact based on severity, likelihood of occurrence, and detectability; and implementing controls and checks to prevent and detect errors before they harm the patient. This technical report emphasizes that patients can be harmed not only by the issuance of erroneous results but also by the failure of certain processes, such as delayed result generation or delayed communication of critical results.
Risk management and international standards
The International Standards Organization (ISO) 31000:2018 standard on principles and generic guidelines for managing risks faced by organizations defines risk as 'the effect of uncertainty on objectives'. Though this definition can be interpreted in different ways, in simple terms it is the probability of an unfortunate occurrence. The standard ISO 15189:2012, which specifies the requirements for quality and competence in medical laboratories, states: 'The laboratory shall evaluate the impact of work processes and potential failures on examination results as they affect patient safety, and shall modify processes to reduce or eliminate the identified risks and document decisions and actions taken'. It is evident from the above statement that the emphasis is on patient safety rather than on risks arising from lab safety issues, i.e., biological or chemical hazards. However, ignoring these risks could also have serious consequences for testing personnel and the testing environment, even to the extent of causing a temporary or permanent shutdown of the laboratory. So, as good laboratory practice and irrespective of accreditation status, risk management assessing all laboratory processes should be carried out to ensure both patient and personnel safety. Risk management is a cyclic preventive action that comprises risk identification, risk quantification, risk prioritization, risk mitigation, and surveillance through a set of recurrent quality assurance activities [1].
Risk identification
A typical risk management activity starts with the identification of potential risks in current processes. There are three major processes within a clinical laboratory: pre-examination (pre-analytical), examination (analytical), and post-examination (post-analytical). A logical approach to risk identification is to map the steps within existing processes and evaluate them for associated risks. By evaluating five components, namely i) Specimen, ii) Test system, iii) Reagent, iv) Environment, and v) Testing personnel, we developed a comprehensive seventy-point risk identification questionnaire, which is listed in Table 1.
Risk quantification
Risk quantification is the process of evaluating identified risks and is the critical step in deciding what must be done to remove or reduce the chance of their occurrence. Identified risks shall be quantified using the FMEA tool, which is based on calculation of the RPN. The RPN is the product of the scores assigned for severity (S), likelihood of occurrence (O), and likelihood of detection (D); Table 2 details the scoring system [2,3].
Notes on score assignment (Table 2):
1. Severity (S): If a risk leads to an error, how severely could it harm the patient? Scores are assigned between 1 and 10: risks scored 1 are very unlikely to harm the patient, and those scored 10 are very likely to cause severe harm (morbidity or mortality).
2. Likelihood of occurrence (O): How likely is the identified risk to lead to an error? Scores are assigned between 1 and 10: for risks scored 1, errors are rare, and for those scored 10, errors occur very frequently.
3. Likelihood of detection (D): How likely is it that the error will be detected? Scores are assigned between 1 and 10: errors scored 1 are easily detectable/obvious, and those scored 10 are completely undetectable [3].
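To make the arithmetic concrete, the short sketch below (ours, not part of the original tool; the Risk class, field names, and example scores are hypothetical) computes the RPN as the product S × O × D and flags a risk for mitigation using the RPN ≥ 100 threshold and the high-severity caveat discussed later in this report:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One identified risk, scored per the FMEA scheme in Table 2 (all scores 1-10)."""
    description: str
    severity: int    # S: 1 = negligible harm, 10 = morbidity/mortality
    occurrence: int  # O: 1 = rare, 10 = very frequent
    detection: int   # D: 1 = easily detected, 10 = undetectable

    @property
    def rpn(self) -> int:
        # RPN is the product of the three scores (range 1-1000).
        return self.severity * self.occurrence * self.detection

def needs_mitigation(risk: Risk, rpn_threshold: int = 100, severity_floor: int = 9) -> bool:
    """Flag a risk if RPN >= threshold, or if severity alone is 9-10
    (high-severity risks should not be dismissed on the basis of a low RPN)."""
    return risk.rpn >= rpn_threshold or risk.severity >= severity_floor

# Hypothetical example: a hemolyzed-specimen risk scored S=7, O=4, D=3.
specimen_risk = Risk("Hemolyzed specimen accepted for potassium testing", 7, 4, 3)
print(specimen_risk.rpn)               # 84
print(needs_mitigation(specimen_risk)) # False: RPN < 100 and severity < 9
```

With the example scores S=7, O=4, D=3, the RPN is 84, so this risk would not be flagged on RPN alone.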
Risk prioritization
In simple terms, this is the process of determining the sequence in which identified risks will be acted upon. At first glance it may appear that priority can be assigned simply by ranking RPNs, but the task becomes difficult when clusters of risks share the same RPN. In general, the most rational approaches are based on answering a few questions: How critical is the risk in compromising patient safety? How imminent is the error? What impact could it have, immediately and in the future, on the reputation of the laboratory? What are the expected financial losses to the laboratory? Could the risk recur during a period of non-attention if it does not receive priority? [4]. All identified risks shall be listed using the template shown in Table 3; a minimal ranking sketch follows the table notes below.
(Table 3 template columns: S. No.; Risk; Quantification (RPN); Mitigation; Re-quantification (RPN after mitigation).)
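Continuing the sketch above (it reuses the Risk class defined there), prioritization can be expressed as a sort. Breaking ties by severity is our assumption, reflecting a patient-safety-first reading of the questions in [4], not a rule prescribed by the FMEA literature:

```python
def prioritize(risks: list[Risk]) -> list[Risk]:
    # Highest RPN first; among equal RPNs, act first on the risk with the
    # greater potential patient harm (higher severity score).
    return sorted(risks, key=lambda r: (r.rpn, r.severity), reverse=True)
```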
Risk mitigation
This step of risk management involves planning and developing methods and options focused on reducing the risks, and the errors that might occur as a consequence of those risks. Three risk mitigation strategies are specifically applicable to healthcare settings: 1) Avoid: this strategy is adopted when an identified risk must be completely eliminated because it could directly affect the reputation of the organization. An example is the suspension of routinely requested tests due to non-availability of reagents or consumables: sudden suspension of cardiac markers or similar critical testing can have a major negative impact on a patient's clinical management, and laboratories are expected to have contingency plans in place for such testing services. 2) Control: this strategy applies to risks that are repetitive and can never be completely eliminated because they are inherent to the existing system of operation. A classical laboratory example is analyzing quality control materials before analyzing patient samples; it is the laboratory's responsibility to decide the frequency of such analysis in order to control the risk of generating and releasing erroneous test results. 3) Watch/Monitor: this strategy involves watching for any changes that can affect the impact of a risk. For example, during the COVID-19 pandemic many healthcare organizations took the risky decision to invest in setting up molecular diagnostic laboratories; the decision was risky because governments had capped the price of COVID-19 testing, and poorly planned resource allocation could have led to large financial losses. Similarly, at the start of the pandemic many rapid card-based test methods with limited or unknown sensitivity and specificity were widely used; hospitals that could not afford state-of-the-art molecular methods opted for these rapid tests, and in such a scenario watchful surveillance of the predictive values of such card-based tests becomes essential [5].
Surveillance
Risk surveillance is a set of quality assurance activities carried out on a schedule, defined either through brainstorming sessions with the risk management team members or by adopting the recommendations of regulatory bodies or published guidelines. The quality assurance activities listed in Table 4 range from tasks as small as logging laboratory housekeeping to hospital server backups. Surveillance helps to continuously evaluate the effectiveness of completed risk management activities.
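As an illustration only, the daily/monthly/annual grouping of QA activities used in this report can be held in a simple schedule structure. The activity names below are hypothetical examples drawn loosely from those mentioned in the text, not the actual contents of Table 4:

```python
# Hypothetical surveillance schedule grouping QA activities by recurrence.
QA_SCHEDULE: dict[str, list[str]] = {
    "daily": [
        "Review internal quality control results",
        "Log laboratory housekeeping",
    ],
    "monthly": [
        "Review quality indicator performance",
        "Review equipment breakdown logs",
    ],
    "annual": [
        "Review proficiency testing performance",
        "Evaluate vendors",
        "Repeat the risk identification questionnaire",
    ],
}

def due_activities(frequency: str) -> list[str]:
    """Return the QA activities scheduled at the given recurrence."""
    return QA_SCHEDULE.get(frequency, [])

print(due_activities("monthly"))
```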
Discussion
The procedure typically starts by forming a risk management committee consisting of key laboratory personnel and a few ad hoc members at the discretion of the main members. The committee shall perform a process flow analysis of all laboratory technical processes (pre-examination, examination, and post-examination) and evaluate the five major components of the testing process, namely Specimen, Test system, Reagent, Testing environment, and Testing personnel, to identify potential sources of error. To this end, the committee shall meet regularly for brainstorming sessions to review the laboratory's established policies, standard operating procedures, and work instructions; feedback about the front office and phlebotomy services; annual feedback from internal customers (hospital physicians and nurses); performance of established quality indicators; equipment records, including major and minor breakdown logs; records of callbacks of supplied reagent, calibrator, or quality control lots; performance in proficiency testing programs; vendor evaluation records; adverse occurrence/incident reports; and personnel training records and training needs assessments.
The current paper presents the seventy-point questionnaire listed in Table 1, which shall serve as a ready-made template for laboratorians to complete the risk identification step. It also presents Table 2, which aids in assigning scores to identified risks, and Table 3, which provides a template for initial RPN calculation and recalculation following the implementation of corrective actions. Table 4 presents a list of scheduled quality assurance activities that helps not only in controlling the re-emergence of already mitigated risks but also in detecting the emergence of new ones. Figure 1 demonstrates a typical flow of the described risk management procedure.
FIGURE 1: Risk Management Process Flow
The literature quotes RPN action targets between 300 and 1000, but in a healthcare setting any risk with an RPN greater than or equal to 100 has the potential to cause an error and should be eliminated [6]. On the other hand, some identified risks may have an RPN below 100 but a severity score of 9 or 10. Such risks should not be ignored on the basis of a low RPN alone, and control measures should be devised and implemented to eliminate them from the process.
Limitations
Though FMEA-based risk management is practical and widely used, it has certain subjective components. For example, the assignment of severity scores to identified risks is subjective and directly affects the RPN calculation [7], so both under- and overestimation of the RPN are possible. As the flow of risk management described in Figure 1 makes evident, the RPN is the deciding factor for risk mitigation and should therefore be accurate. Table 2, containing the scores and criteria, helps to a great extent to limit the subjective component in score assignments.
Conclusions
Risk management, though it originated in the manufacturing industries, is not a new concept in healthcare settings. In the era of evidence-based medical practice, the topic has received considerable attention, and there is a substantial body of published literature on risk management in health information protection, healthcare process management, in-vitro diagnostics production, pharmaceutical production, drug dispensing, and so on. In developed nations, risk management forms an essential component of patient safety programs. Risk management in clinical laboratories is an essential quality improvement activity that must evaluate all processes involved in testing. It is a repetitive preventive action that laboratories need to carry out at least annually. The risk identification questionnaire demonstrated in this article is easy to understand and implement and can be assigned to any supervisory staff member for completion without additional training. Similarly, the quality assurance (QA) activities listed are comprehensive and include both technical and general management elements. The schedules for QA activities, classified broadly as daily, monthly, and annual, are clearly indicated in the text. These QA activities serve as tools both for risk surveillance and for capturing quality indicator data on laboratory processes.
As a continuous improvement initiative, these QA activities shall be assessed, and appropriate actions shall be planned, implemented, and documented. Through this report, we have tried to provide laboratory supervisors with a sustainable solution for carrying out risk management exercises.
Additional Information
Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Biology and pathophysiology of the amyloid precursor protein
The amyloid precursor protein (APP) plays a central role in the pathophysiology of Alzheimer's disease in large part due to the sequential proteolytic cleavages that result in the generation of β-amyloid peptides (Aβ). Not surprisingly, the biological properties of APP have also been the subject of great interest and intense investigations. Since our 2006 review, the body of literature on APP continues to expand, thereby offering further insights into the biochemical, cellular and functional properties of this interesting molecule. Sophisticated mouse models have been created to allow in vivo examination of cell type-specific functions of APP together with the many functional domains. This review provides an overview and update on our current understanding of the pathobiology of APP.
Introduction
Alzheimer's disease (AD) is the most common cause of dementia and neurodegenerative disorder in the elderly. It is characterized by two pathological hallmarks, senile plaques and neurofibrillary tangles, as well as loss of neurons and synapses in selected areas of the brain. Senile plaques are extracellular deposits composed primarily of amyloid β-protein (Aβ), a 40-42 amino acid long peptide derived by proteolytic cleavages of the amyloid precursor protein (APP), with surrounding neuritic alterations and reactive glial cells. Aβ has taken a central role in Alzheimer's disease research for the past two decades, in large part because of the amyloid cascade hypothesis, which posits that Aβ is the common initiating factor in AD pathogenesis. Because of this, the processing of APP and the generation of Aβ from APP have been areas of substantial research focus by a large number of laboratories. By comparison, whether full-length APP or other non-Aβ APP processing products play a significant role in AD or contribute to other neurological disorders has received somewhat less consideration. For example, it is unclear if the mutations in the APP gene found in hereditary familial AD and the related hereditary amyloid angiopathy with cerebral hemorrhage (http://www.molgen.ua.ac.be/ADMutations/) are pathogenic solely because of perturbed Aβ properties. However, increasing evidence supports a role of APP in various aspects of nervous system function and, in view of the recent negative outcome of clinical trials targeting Aβ production or clearance, there is renewed interest in investigating the physiological roles of APP in the central nervous system (CNS) and whether perturbation of these activities can contribute to AD pathogenesis. This review will update some of the recent findings on the physiological properties of APP. We start with a general overview of APP. Because APP consists of multiple structural and functional domains, we focus our review on the properties of full-length APP as well as the APP extracellular and intracellular domains. Finally, we provide an update on the current knowledge concerning APP function in vivo, especially recent findings from APP conditional knockout mice and knock-in alleles expressing various APP domains. The pathophysiology of Aβ is summarized in detail by many excellent reviews and is otherwise beyond the scope of this article.
A. APP Overview
a) The APP Family
APP is a member of a family of conserved type I membrane proteins. APP orthologs have been identified in, among others, C. elegans [1], Drosophila [2,3], zebrafish [4] and Xenopus laevis [5,6]. Three APP homologs, namely APP [7,8], APP like protein 1 (APLP1) [9] and 2 (APLP2) [10,11], have been identified in mammals (Figure 1). These proteins share a conserved structure with a large extracellular domain and a short cytoplasmic domain. There are several conserved motifs, including the E1 and E2 domains in the extracellular region and the intracellular domain, the latter exhibiting the highest sequence identity between APP, APLP1 and APLP2. Of interest, the Aβ sequence is not conserved and is unique to APP. Additionally, the APP and APLP2 genes, but not APLP1, were identified in Xenopus laevis, suggesting that the first gene duplication in the evolution of the APP superfamily resulted in APP and pre-APLP, prior to the separation of mammals and amphibians [12]. Thus, APLP1 diverged from the APLP2 gene such that APLP1 does not contain two additional exons present in both APP and APLP2, one of which encodes a Kunitz-type protease inhibitor domain. With this history, it is not surprising that APLP1 is found only in mammals and, unlike APP and APLP2, is expressed only in brain. However, given the sequence identity between the three genes, it is also not unexpected that the mammalian APP homologs play redundant roles in vivo (discussed in "The in vivo Function of APP"). The functional conservation of APP across species is also documented by the partial rescue of the Drosophila Appl null behavioral phenotype by human APP [3]. These observations indicate that the conserved motifs, rather than the non-conserved Aβ sequence, likely underlie the physiological functions shared among the APP family members.
b) APP Expression
The mammalian APP family of proteins is abundantly expressed in the brain. Similar to Drosophila Appl [13], APLP1 expression is restricted to neurons. APP and APLP2, although highly enriched in the brain, are ubiquitously expressed outside of it. The human APP gene, located on the long arm of chromosome 21, contains at least 18 exons [14,15]. Alternative splicing generates APP mRNAs encoding several isoforms that range from 365 to 770 amino acid residues. The major Aβ peptide encoding proteins are 695, 751, and 770 amino acids long (referred to as APP695, APP751 and APP770). APP751 and APP770 contain a domain homologous to the Kunitz-type serine protease inhibitors (KPI) in the extracellular sequences, and these isoforms are expressed in most tissues examined. The APP695 isoform lacks the KPI domain, is predominantly or even exclusively expressed in neurons, and is the primary source of APP in brain [16]. For example, there is a burst of increased APP695 expression during neuronal differentiation. However, following brain injury, expression of the APP751/770 isoforms is substantially increased in astrocytes and microglial cells [17,18]. The reason for and functional significance of this apparent tissue-specific alternative splicing are poorly understood.
c) APP Processing
APP is processed in the constitutive secretory pathway and is post-translationally modified by N- and O-glycosylation, phosphorylation and tyrosine sulfation (reviewed in [19]). Full-length APP is sequentially processed by at least three proteases termed α-, β- and γ-secretases (Figure 2). Cleavage by α-secretase or β-secretase within the luminal/extracellular domain results in the shedding of nearly the entire ectodomain to yield large soluble APP derivatives (called APPsα and APPsβ, respectively) and generation of membrane-tethered α- or β-carboxyl-terminal fragments (APP-CTFα and APP-CTFβ). The APP-CTFs are subsequently cleaved by γ-secretase to generate either a 3 kDa product (p3, from APP-CTFα) or Aβ (from APP-CTFβ), and the APP intracellular domain (AICD).
The major neuronal β-secretase is a transmembrane aspartyl protease, termed BACE1 (β-site APP cleaving enzyme; also called Asp-2 and memapsin-2) [20][21][22][23][24], and cleavage by BACE1 generates the N-terminus of Aβ. There is an alternative BACE (β') cleavage site following Glu at position +11 of the Aβ peptide [25]. In addition, there is a BACE2 homolog which is expressed widely but does not appear to play a role in Aβ generation, as it appears to cleave near the α-secretase site [26,27]. Of note, cathepsin B has also been proposed to act as a β-secretase [28,29]; whether generation of Aβ in brain requires the coordinated action of both BACE1 and cathepsin B is not known, but this seems unlikely given the near total loss of Aβ in BACE1-deficient mice [23,24,30].
While cleavage at the β-site is specific to BACE1 and possibly cathepsin B, it was initially believed that a number of proteases, specifically members of the ADAM (a disintegrin and metalloprotease) family including ADAM9, ADAM10 and ADAM17, are candidates for the α-secretase (reviewed in [31]). It was reported that APP α-secretase cleavage can be stimulated by a number of molecules, such as phorbol ester, or via protein kinase C activation, in which case this so-called regulated cleavage is mediated by ADAM17, also called TACE (tumor necrosis factor α-converting enzyme) [32,33]. However, recent studies indicated that constitutive α-secretase activity is likely to be mediated by ADAM10 [34]. Interestingly, ADAM10 is transcriptionally regulated by sirtuins [35], providing a mechanism whereby augmentation of α-secretase activity competes with β-secretase cleavage to lower generation of the full-length Aβ peptide. It should be noted, however, that α-secretase processing of APP only precludes the formation of an intact full-length Aβ peptide. Although this latter event is commonly called the non-amyloidogenic pathway, the term is unfortunately a bit of a misnomer, because truncated Aβ (the p3 peptide, residues 17-42) is also deposited in brains of AD and Down Syndrome patients [36][37][38], indicating that shorter Aβ peptides starting at the α-secretase site may contribute to some aspects of AD-associated amyloid pathology [39,40]. As for γ-secretase cleavage, which releases Aβ from the membrane, this activity is executed by a high molecular weight complex consisting of presenilin (PS), nicastrin, anterior pharynx defective (APH1) and presenilin enhancer (PEN2) (reviewed in [41,42]). Although these four proteins form the mature γ-secretase complex, the core γ-secretase activity appears to reside within presenilin itself, functioning as an aspartyl protease [43,44]. In addition to generating Aβ peptides of different lengths, γ-secretase appears to cleave APP in multiple sequential steps [45][46][47]. An initial cleavage, termed ε-cleavage, taking place 3-4 residues from the cytoplasmic membrane face begins this process [48,49]. Elegant studies by Ihara and colleagues [50][51][52][53] have led to a model whereby sequential cleavages taking place every three residues along the α-helical face of the transmembrane domain of APP shorten the C-terminus to ultimately result in the release of Aβ.
It is worth mentioning that none of the secretases have unique substrate specificity towards APP. Besides APP, several transmembrane proteins such as growth factors, cytokines and cell surface receptors and their ligands, undergo ectodomain shedding by enzymes with α-secretase activity (see [54] for an overview). The relatively low affinity of BACE1 toward APP led to the suggestion that APP is not its sole physiological substrate. Indeed, neuregulin-1 (NRG1) now appears to be a bona fide substrate of BACE1 such that the shedding of NRG1 initiated by BACE1 cleavage would direct Schwann cells to myelinate peripheral nerves during development [55,56]. Similarly, γ-secretase has been reported to cleave more than 50 type I membrane proteins in addition to APP (reviewed by [57]), an event that requires an initial ectodomain shedding event, usually by α-secretase-mediated cleavage. While this cleavage in some cases has been demonstrated to initiate intracellular cell signaling, as exemplified by the γ-secretase dependent Notch activation, whether this also applies to APP and other γ-secretase substrates remains unconfirmed (see below and discussed in [58]).
B. The Full-length APP
a) Cell Surface Receptor
Ever since the cloning of APP cDNA, APP has been proposed to function as a cell surface receptor. Further, the analogy between the secondary structures and proteolytic processing profiles of the Notch receptor and APP also suggests that APP could function as a cell surface receptor similar to Notch (reviewed in [59]). In support of this hypothesis, Yankner and colleagues reported that Aβ could bind to APP and thus could be a candidate ligand for APP [60], a finding that has been replicated by others [61]. Another piece of evidence came from Ho and Sudhof (2004), who showed that the APP extracellular domain binds to F-spondin, a neuronally secreted glycoprotein, and that this interaction regulates Aβ production and downstream signaling [62]. Similarly, the Nogo-66 receptor has been shown to interact with the APP ectodomain and thereby affect Aβ production [63]. Another recently reported interacting protein is Netrin-1, a soluble molecule with multiple properties including axonal guidance through chemoattraction and tumorigenesis [64]. In this instance, addition of netrin-1 to neuronal cultures led to a reduction in Aβ levels but also increased APP-Fe65 complex formation, suggesting a role in cell signaling (see below). Recently, work from the D'Adamio group showed that BRI2 could function as a putative ligand or co-receptor for APP and modulate APP processing [65,66]. Finally, the fact that the extracellular domains of the APP family of proteins can potentially interact in trans (discussed below) suggests that APP molecules can interact in a homophilic or heterophilic manner between two cells. Overall, although a number of APP interacting proteins have been identified, it is unclear whether any of the candidates are bona fide ligands, and definitive evidence that APP functions physiologically as a cell surface receptor is still lacking.
b) Cell and Synaptic Adhesion
The E1 and E2 regions in the extracellular domain of APP have been shown to interact with extracellular matrix proteins and heparan sulfate proteoglycans (reviewed in [67]), supporting a role in cell-substratum adhesion. The same sequences have also been implicated in cell-cell interactions. Specifically, X-ray analysis revealed that the E2 domain of APP can form parallel or antiparallel dimers [68]; the latter structure would imply a potential to function in transcellular adhesion. Indeed, cell culture studies support homo- or hetero-dimer formation of the APP family members, and trans-dimerization was shown to promote cell-cell adhesion [69]. It was further shown that heparin binding to the E1 or E2 region induces APP dimerization [70]. Besides the E1 and E2 regions, recent studies suggest that homodimerization can be promoted by the GxxxG motif near the luminal face of the membrane [71,72]. Interestingly, mutagenesis of the glycine residues in this motif resulted in production of truncated Aβ peptides of 34, 35, and 38 amino acids in length [71]. On the other hand, it is unclear whether these changes in Aβ generation are strictly related to APP dimerization, because forced dimerization of APP with a bifunctional cross-linking agent did not lead to the same changes in Aβ profile [73]. In addition, while trans-dimerization would be expected to play a role in cell-cell interactions or adhesion, it is less clear what the cellular consequences of cis-homodimerization of APP are, aside from the alterations in Aβ peptides noted earlier. One possible role of dimerization is through downstream activity of the AICD peptide that is released after ε-cleavage, but support for this idea remains controversial. Interestingly though, using various reporter constructs, the subcellular localization of dimerized APP and APLP2 was reported to be different from that of APLP1 [74], suggesting subtle functional roles in homo- or heterodimerization of the APP gene family that remain to be elucidated. Lastly, near the beginning of the Aβ sequence (and near the C-terminus of APPs) is an "RHDS" tetra-peptide motif that also appears to promote cell adhesion. This region is believed to act in an integrin-like manner by virtue of its homology to the "RGD" sequence [75]. In this regard, it is interesting that APP colocalizes with integrins on the surface of axons and at sites of adhesion [76,77]. In support of these earlier observations, it was recently shown that APP and integrin-β1 do interact [78] and that siRNA-mediated silencing of APP during development led to defects in neuronal migration that may be related to cell adhesion [79], potentially to extracellular matrix proteins, with or without participation by integrins.
More compelling evidence of trans-APP dimerization was recently obtained in a primary neuron/HEK293 mixed culture assay. In this culture system, trans-cellular APP/APP interaction was reported to induce presynaptic specializations in co-cultured neurons [80]. These studies identified the APP proteins as a novel class of synaptic adhesion molecules (SAM) with biochemical properties shared with neurexins (NX)/neuroligins (NL), SynCAMs, and leucine-rich repeat transmembrane neuronal proteins (LRRTM) [81][82][83][84][85][86]. As with NX/NL- and SynCAM-mediated synaptic adhesion, in which extracellular sequences engage in trans-synaptic interactions and the intracellular domains recruit pre- or postsynaptic complexes (reviewed in [87]), both the extracellular and intracellular domains of APP are required to mediate the synaptogenic activity. Interestingly, using an affinity-tagged APP molecule expressed in transgenic mice, the identified "APP-interactome" consisted of many proteins, such as Bassoon and neurexin, that are synaptic in localization [88]. Whether APP trans-synaptic interaction is involved in the recruitment of these synaptic molecules, and whether APP coordinates with other synaptic adhesion complexes such as neurexin, are interesting questions that warrant further investigation.
C. The APP Ectodomain
Various subdomains can be assigned to the APP extracellular sequences based on its primary sequence and structural studies (Figure 1) (reviewed in [89,90]). These include the E1 domain, which consists of the N-terminal growth factor-like domain (GFLD) and the metal (copper and zinc) binding motif, the KPI domain present in the APP751 and APP770 isoforms, and the E2 domain, which includes the RERMS sequence and interacts with extracellular matrix components. Below we address the functional studies associated with the APP extracellular domain.
a) Synaptotrophic and Neuroprotective Functions
A number of publications have pointed to a neurotrophic role of the APP extracellular domain in both physiological and pathological settings, and this function may be linked to its adhesive properties described above, exerted either by the full-length form or by the secreted molecule (i.e. APPs) generated by ectodomain shedding. Thus, APP may exert these activities in both autocrine and paracrine fashions. Of note, APP undergoes rapid anterograde transport and is targeted to synaptic sites [16,[91][92][93], where levels of secreted APP coincide with synaptogenesis [94]. APP expression is upregulated during neuronal maturation and differentiation [95,96]. Its expression is also induced following traumatic brain injury, both in the mammalian system and in Drosophila [18,[97][98][99].
The crystal structure of the E1 domain shows similarities to known cysteine-rich growth factors, and this domain in the N-terminus of APP has accordingly been likened to the growth factor-like domain (GFLD) seen in the epidermal growth factor receptor [100]. One of the earliest indications of APP function came from the observation that fibroblasts treated with an antisense APP construct grew more slowly, and that the growth retardation could be reversed by treatment with secreted APPs [101]. The active domain was subsequently mapped to a pentapeptide sequence, "RERMS", in the E2 domain [102]. The activity is not limited to fibroblasts, as infusion of this pentapeptide or APPsα into the brain resulted in increased synaptic density and better memory retention, while injection of APP antibodies directly into the brain led to impairment in behavioral tasks in adult rats [103]. Application of APPsα resulted in reduced neuronal apoptosis and improved functional recovery following traumatic brain injury (TBI) [103][104][105]; it also antagonized dendritic degeneration and neuronal death triggered by proteasomal stress [106]. These findings are corroborated by additional reports showing that reduction or loss of APP is associated with impaired neurite outgrowth and neuronal viability in vitro and synaptic activity in vivo [107][108][109]. Recent studies have further substantiated these early findings, showing for example that APPs regulates NMDA receptor function, synaptic plasticity and spatial memory [110], and that the growth promoting property may be mediated by the down-regulation of CDK5 and inhibition of tau hyperphosphorylation by APPsα [111]. Finally, a number of studies have reported the effects of APPsα on stem cells. Caille et al. first demonstrated the presence of binding sites for APPs in epidermal growth factor (EGF)-responsive neural stem cells in the subventricular zone of the adult rodent brain [112]. In this context, APPsα acts as a co-factor with EGF to stimulate the proliferation of these cells both in neurospheres in culture and in vivo. Subsequently, it was reported that APPs promoted neurite outgrowth in neural stem cells, with APLP2 but not APLP1 being redundant to APP [113]. However, and intriguingly, stem cells from APP/APLP1/APLP2 triple knockout embryos did not show any defects in neuronal differentiation in vitro [114]. Furthermore, in APP transgenic mice, overexpression of wild type APP resulted in decreased neurogenesis but promoted survival of newly generated cells [115]. At the moment, it is unclear how all these findings can be reconciled into a parsimonious picture of APP trophic functions.
Li et al. recently uncovered a novel role for APPs to regulate gene expression likely through binding to an unknown receptor [116]. In particular, they identified transthyretin (TTR) and Klotho as downstream targets of APP that are mediated by APPsβ. These targets are of direct relevance to AD as TTR has been shown to bind and sequester Aβ [117][118][119], and Klotho has been extensively implicated in the aging process [120][121][122]. The regulation of TTR and Klotho expression by APPsβ offers the intriguing possibility for a self-protective mechanism in the APP processing pathway to counter the production and toxicity of Aβ during aging. Because APPs levels have been reported to be reduced in individuals with AD [123][124][125][126], the results support the view that the loss of trophic activity or the defence mechanism of APPs may contribute at least in part to the neurodegeneration in AD.
Lastly, and perhaps related to the growth promoting property of APP, an area that has come to light concerning APP function involves carcinogenesis, coinciding with the recent observation of an inverse association between cancer and AD [127]. Previous studies have reported an up-regulation of APP in various solid tumors. The reason for this is unclear, but a recent study demonstrated that APP plays a role in the growth of cancer cells [128]. Whether this potential tumorigenic activity involves adhesion, trophic properties of APPs, or cell signaling remains to be established.
b) Axonal Pruning and Degeneration
Whereas ample evidence supports a role of APPsα in synaptotrophic and neuroprotective activities, APPsβ is known to be much less active or even toxic (reviewed in [129]). The differential activities of APPsα and APPsβ are difficult to comprehend considering that there are only 17 amino acid differences between the two isoforms, and that the sequences implicated in trophic activities map outside this region and are common to both isoforms. The most striking finding related to differences between APPsα and APPsβ came from Nikolaev et al., who reported that, under trophic withdrawal conditions, APPsβ but not APPsα undergoes further cleavage to produce an N-terminal ~35 kDa derivative (N-APP), which binds to the DR6 death receptor and mediates axon pruning and degeneration [130]. The authors attempted to link this pathway to both axonal pruning during normal neurodevelopment and the neurodegeneration occurring in AD. However, using recombinant APPsβ in vitro and APPsβ knockin mice in vivo [116], Li et al. demonstrated that APPsβ is highly stable and fails to correct the nerve sprouting phenotype of the APP/APLP2 null neuromuscular synapses (discussed in detail under "APP knockin mice"). Therefore, the biological and pathogenic relevance of the APPsβ/DR6 pathway outside of the trophic withdrawal paradigm requires further examination.
D. The APP Intracellular Domain
The high degree of sequence conservation between the intracellular domains of the APP proteins predicts that this is a critical domain mediating APP function. Indeed, this relatively short cytoplasmic domain of 47 amino acid residues contains one well-described phosphorylation site as well as multiple functional motifs and binding partners that contribute to the trafficking, metabolism, and possibly cell signaling functions of APP.
a) Phosphorylation and Protein-Protein Interaction
APP can be phosphorylated at multiple sites in both the extracellular and intracellular domains (reviewed by [131]). Among these, phosphorylation of the threonine residue within the VT668PEER motif (Thr668) in the APP intracellular domain (Figure 1) has received most of the attention. Several kinases have been implicated in this phosphorylation event, including cyclin-dependent kinase 5 (CDK5), c-Jun N-terminal kinase 1 (JNK1) and JNK3, CDK1/CDC2 kinase and GSK3β [132][133][134][135]. Phosphorylation at this residue has been reported to result in several outcomes. First, it has been implicated in regulating APP localization to growth cones and neurites [134,136], a finding consistent with the preferential transport of Thr668-phosphorylated APP to nerve terminals [137]. Second, phosphorylation at Thr668 has been reported to contribute to Aβ generation, a finding consistent with an increase of Thr668-phosphorylated APP fragments in brains of AD individuals [138]. Third, Thr668 phosphorylation leads to resistance of APP to caspase cleavage between the Asp664 and Ala665 residues, an event that has been proposed to result in increased vulnerability to neuronal death (see below). Fourth, phosphorylation at Thr668 leads to a conformational change in the APP cytoplasmic domain such that interaction with the cytoplasmic adaptor Fe65 through the distal YENPTY motif [139] is altered, thereby affecting the proposed nuclear signaling activity of the APP-Fe65 complex [140]. As the YENPTY motif has been shown to bind several other cytosolic adaptor proteins, it is not surprising that Thr668 phosphorylation has also been reported to modulate APP interaction with Mint-1/X11a [141]. Lastly, following phosphorylation, the peptidyl-prolyl cis/trans isomerase Pin1 has been shown to catalyze the cis to trans isomerization of the Thr668-Pro669 bond, which is predicted to alter APP conformation [142], possibly related to the Fe65 or Mint-1/X11a interaction with APP. In support of this idea, loss of Pin1 in mice resulted in accumulation of hyperphosphorylated tau and increased Aβ levels [142,143], two features that should accelerate AD pathology in the brain. Nevertheless, knockin mice replacing Thr668 with a non-phosphorylatable Ala residue did not show substantive changes in either APP localization or the levels of Aβ in brain [144], raising the question of whether Thr668 phosphorylation plays a significant role in regulating APP trafficking and Aβ generation in vivo.
In addition to Thr668 phosphorylation, the highly conserved APP intracellular domain has been shown to bind numerous proteins (reviewed in [145,146]). Of particular interest and relevance to this review, the Y682ENPTY motif is required for interaction with various adaptor proteins, including Mint-1/X11a (and the family members Mint-2 and Mint-3, so named for their ability to interact with Munc18), Fe65 (as well as the Fe65-like proteins Fe65L1 and Fe65L2) and c-Jun N-terminal kinase (JNK)-interacting protein (JIP), through the phosphotyrosine-binding (PTB) domain. Y682 has been shown to modulate APP processing in vivo [147]. Of interest is the finding that Fe65 acts as a functional linker between APP and LRP (another type I membrane protein containing two NPXY endocytosis motifs) in modulating endocytic APP trafficking and amyloidogenic processing [148].
b) Apoptosis
In contrast to the trophic activities of the soluble APP ectodomain, a number of papers have demonstrated cytotoxic properties of the β-secretase-cleaved APP CTF (or C99), especially following overexpression [149][150][151]. The mechanism by which APP CTF is cytotoxic is unclear, but one pathway may be through AICD released from APP CTF following ε-cleavage. Normally, AICD exists at very low levels in vivo but can be stabilized when Fe65 is overexpressed [152][153][154]. In cultured cells, overexpression of AICD led to cell death [154][155][156]. In transgenic mice overexpressing an AICD construct, there was activation of GSK-3β but no overt neuronal death [157,158], although these findings were not replicated in a subsequent study [159]. Interestingly, in aged mice expressing both AICD and Fe65, neuronal degeneration was observed together with tau hyperphosphorylation. Furthermore, behavioral abnormalities seen in these animals could be rescued by treatment with lithium, a GSK-3β inhibitor, in line with the earlier evidence of GSK-3β activation [160].
Another aspect of APP CTF-mediated cytotoxicity concerns a caspase cleavage site within the cytosolic tail between positions Asp664 and Ala665 [161]. In cell culture systems, loss of this caspase site by mutating Asp664 to Ala (D664A) resulted in an attenuation of APP C99-associated cytotoxicity. It has been proposed that release of the smaller fragments (C31 and Jcasp) from AICD after cleavage at position 664 results in the generation of new cytotoxic APP-related peptides [162]. Thus, overexpression of either C31 or Jcasp, both derived from AICD, has resulted in cytotoxicity. Consistent with these in vitro findings, in an APP transgenic mouse line in which the caspase site is mutated to render APP noncleavable, the predicted Aβ-related phenotypes in brain (synaptic, behavioral, and electrophysiological abnormalities) were absent in spite of abundant amyloid deposits [163,164]. Therefore, these initial observations indicated that the release of the smaller fragments (C31 or Jcasp) after caspase cleavage of C99 may result in cell death in a manner independent of γ-secretase [165]. However, analysis of another line of APP D664A transgenic mice with substantially higher APP expression failed to replicate the earlier findings [166], though the wide differences in transgene expression and resultant Aβ levels between the two transgenic mouse lines may make the comparison invalid [167]. In sum, there are at present several potential mechanisms whereby APP may contribute to neurotoxicity: via γ-secretase cleavage to release AICD, or via alternative cleavage of the APP C-terminus to release other cytotoxic peptides. Whether these APP fragments contribute to in vivo neuronal death in AD pathogenesis remains to be established.
c) Cell Signaling
As mentioned previously, in addition to the γ-secretase cleavage that yields Aβ40 and Aβ42, presenilin-dependent proteolysis appears to begin at the ε-site (Aβ49), close to the membrane-intracellular boundary [46,48,49]. Thus the ε-cleavage of APP may represent the primary or initial presenilin-dependent processing event. This is important because this cleavage releases AICD in a manner highly reminiscent of the release of the Notch intracellular domain (NICD) after γ-secretase processing, the latter being an obligatory step in Notch-mediated signaling (reviewed in [59]). The predominant ε-cleavage releases an AICD of 50 amino acids in length (CTF50-99), beginning with a Val residue. APP mutations that shift Aβ production in favor of Aβ42 would lengthen the AICD by one amino acid (CTF49-99), now beginning with a Leu residue. This is of some interest because the N-end rule guiding protein stability through ubiquitination states that Val is a stabilizing residue while Leu is destabilizing (reviewed in [168]). NICD, the intracellular domain derived from the Notch receptor, appears to follow this principle experimentally. If this situation applies to AICD, then a distinct regulatory mechanism could be at play in AICD-mediated cell signaling or cell death. Furthermore, recent studies have suggested that AICD generation is in part dependent on whether APP was previously cleaved by α- or β-secretase, indicating yet another layer of regulation [169,170]. Nonetheless, AICD is indeed very labile and, as mentioned previously, can be stabilized by Fe65 [153], a finding seen in both in vitro and in vivo settings. A good deal of excitement followed the first report in which, using a heterologous reporter system, AICD was shown to form a transcriptionally active complex together with Fe65 and Tip60 [157,171]. This finding appeared to validate the notion that AICD is transcriptionally active, much like NICD. Scheinfeld et al. proposed a JIP-1-dependent transcriptional activity of AICD [172]. However, subsequent analyses have suggested that this earlier view may be too simplistic and incomplete. First, follow-up studies by Cao et al. showed that AICD facilitates the recruitment of Fe65 but that its nuclear translocation per se is not required [173]. Second, PS-dependent AICD production is not a prerequisite for the APP signaling activity, as it proceeds normally in PS null cells and under PS inhibitor treatment [174]. Instead, the authors propose an alternative pathway for this activity that involves Tip60 phosphorylation. Third, a later report documented that the proposed signaling activity is actually executed by Fe65 and that APP is not required at all [175]. Lastly, Giliberto et al. reported that mice transgenic for AICD in neuronal cells are more susceptible to apoptosis. However, analysis of basal transcription showed little change in mice expressing AICD in the absence of Fe65 overexpression, leaving open the possibility that transcription may be influenced in a regulated fashion [176].
Regardless of the mechanism by which AICD may activate signaling pathways, a trans-activating role of the APP/Fe65/Tip60 complex has been consistently documented, at least in overexpression systems. However, efforts to identify target genes have led to decidedly mixed results. A number of genes have been proposed to date, including KAI [177], GSK3β [158,178], neprilysin [179], EGFR [180], p53 [181], LRP [182], APP itself [183], and genes involved in calcium regulation [184] and cytoskeletal dynamics [185]. However, the validity of these proposed targets has been either questioned or disputed [175,176,[186][187][188][189][190]. Thus, at present, a conservative view is that these target genes are indirectly or only weakly influenced by AICD-mediated transcriptional regulation.
E. In vivo Function of APP
The in vivo gain- and loss-of-function phenotypes associated with the APP family of proteins in model systems (C. elegans, Drosophila and mice) are consistent with a role of APP in neuronal and synaptic function in both the central and peripheral nervous systems. These functions may be mediated by the APP ectodomain or may require the APP intracellular domain. The findings are discussed next for the respective animal models.
a) C. elegans
The C. elegans homolog of APP, APL-1, resembles the neuronal isoform APP695, as no splice variants have been detected. Similar to APLP1 and APLP2, APL-1 does not contain the Aβ sequence. Nematode development includes four larval stages (L1-L4), after each of which is a molt where a new, larger exoskeleton is formed to accommodate the growth of the larva. Inactivation of the single apl-1 gene leads to developmental arrest and lethality at the L1 stage, likely due to a molting defect [191,192]. In addition, apl-1 knockdown leads to hypersensitivity to the acetylcholinesterase inhibitor aldicarb, signifying a defect in neurotransmission [192]. The aldicarb hypersensitivity phenotype and the molting defect were found to be independent of one another, suggesting that apl-1 contributes to multiple functions within the worm. Surprisingly, both phenotypes were rescued either by a membrane-anchored C-terminal truncation of APL-1 or by the soluble N-terminal fragments, showing that the highly conserved C-terminus is not required to support the viability of the worm [191,192]. This differs from the mammalian system, in which the APP C-terminus is essential for viability on a non-redundant background (see discussion under "APP knock-in mice") [116,193]. Although the reason for the distinct domain requirements for C. elegans and mouse viability is not clear, it is worth reiterating that the lethality of the apl-1 null worm is likely caused by a molting defect not relevant to mammals. Consistent with this interpretation, it is interesting that expression of mammalian APP or its homologs is not able to rescue the apl-1 null lethality [191,192], indicating that this worm-specific molting activity was lost during mammalian evolution and that extrapolation of APP function from apl-1 may not be very informative.
b) Drosophila
The Drosophila APP homolog, APPL, like the worm homolog, does not contain the Aβ sequence and does not undergo alternative splicing. However, in contrast to the apl-1 null worm, Appl-deficient flies are viable with only subtle behavioral defects such as fast phototaxis impairment [3]. While human APP is not able to rescue the C. elegans apl-1 lethality, the behavioral phenotype present in the Appl null fly can be partially rescued by transgenic expression of either fly APPL or human APP [3]. Subsequent loss and gain-of-function studies revealed that APPL plays an important role in axonal transport, since either Appl deletion or overexpression caused axonal trafficking defects similar to kinesin and dynein mutants [194,195]. Although a similar role for APP in axonal transport of selected cargos has been reported [196][197][198], the findings have since been challenged by several laboratories [199].
APPL is required for the development of neuromuscular junctions (NMJs), since Appl deletion leads to decreased bouton number of NMJs, whereas Appl overexpression dramatically increases the satellite bouton number [200]. This activity can be explained by the formation of a potential complex including APPL, the APPL-binding protein dX11/Mint, and the cell adhesion molecule FasII, which together regulate synapse formation [201]. Overexpression of human APP homologs in Drosophila revealed a spectrum of other phenotypes, ranging from 1) a blistered wing phenotype that may involve cell adhesion [202], 2) a Notch gain-of-function phenotype in mechano-sensory organs, which reveals a possible genetic interaction of APP and Notch through Numb [203], and 3) a neurite outgrowth phenotype that is linked to the Abelson tyrosine kinase and JNK stress kinase [99]. Although the pathways implicated in each of the phenotypes are distinct, they all seem to require the APP intracellular domain via protein-protein interactions mediated through the conserved YENPTY sequence. These ectopic overexpression studies should be interpreted with caution because APP interacts with numerous adaptor proteins and many of the APP binding partners also interact with other proteins. Therefore, the phenotypes observed by overexpressing APP or APPL could be caused by the disturbance of a global protein-protein interaction network.
Interestingly, similar to the mammalian system, APPL is found to be upregulated in traumatic brain injury and Appl-deficient flies suffer a higher mortality rate compared to controls [99], supporting an important activity of APP family of proteins in nerve injury response and repair.
c) Mice
i. APP single knockout mice
Three mouse APP alleles, one carrying a hypomorphic mutation and two with complete deficiencies of APP, have been generated [204][205][206]. The APP null mice are viable and fertile but exhibit reduced body weight and brain weight. Loss of APP results in a wide spectrum of central and peripheral neuronal phenotypes including reduced locomotor activity [204,205,207], reactive gliosis [205], strain-dependent agenesis of the corpus callosum [205,208], and hypersensitivity to kainate-induced seizures [209]. Although these phenotypes indicate a functional role of APP in the CNS, the molecular mechanisms mediating these effects remain to be established. Unbiased stereology analysis failed to reveal any loss of neurons or synapses in the hippocampus of aged APP null mice [210]. Attempts to examine spine density in APP KO mice have yielded mixed results. Using hippocampal autaptic cultures, Priller et al. reported an enhanced excitatory synaptic response in the absence of APP, and the authors attributed this effect to the lack of Aβ production [211]. Follow-up studies by the same group reported that APP deletion led to a two-fold higher dendritic spine density in layers III and V of the somatosensory cortex of 4-6 month-old mice [212]. However, Lee et al. found a significant reduction in spine density in cortical layer II/III and hippocampal CA1 pyramidal neurons of one-year old APP KO mice compared with WT controls [213]. It is not clear whether differences in age or brain region may contribute to the discrepancy.
The APP null mice show impaired performance in Morris water maze and passive avoidance tasks, and the behavioral deficits are associated with a defect in long term potentiation (LTP) [207,210,214,215], the latter possibly attributable to an abnormal GABAergic paired-pulse depression [215]. Follow-up work demonstrated that APP modulates GABAergic synaptic strength by regulating Cav1.2 L-type calcium channel (LTCC) expression and function in striatal and hippocampal GABAergic neurons [216]. APP deficiency leads to an increase in the levels of α1C, the pore-forming subunit of Cav1.2 LTCCs, and an enhanced Ca2+ current, which in turn results in reduced GABAergic-mediated paired-pulse inhibition and increased GABAergic post-tetanic potentiation [216]. A role of APP in calcium regulation is further documented by APP overexpression and knockdown studies in hippocampal neurons, which support an Aβ-independent role of APP in the regulation of calcium oscillations [217].
Outside of the CNS, APP deficient mice display reduced grip strength [205,207]. This is likely due to impaired Ca2+ handling at the neuromuscular junction (NMJ), as functional recordings revealed that APP null mice show abnormal paired-pulse responses and enhanced asynchronous release at the NMJ resulting from aberrant activation of voltage-gated N- and L-type calcium channels at motor neuron terminals [218]. Taken together, these studies provide strong support for the notion that APP plays an important role in Ca2+ homeostasis and calcium-mediated synaptic responses in a variety of neurons, including GABAergic and cholinergic neurons and possibly others, through which it may regulate neuronal networks and cognitive function.
ii. APP, APLP1, APLP2 compound knockout mice
The relatively subtle phenotypes of APP deficient mice are likely due to genetic redundancies, as evidenced by gene knockout studies. While mice with individual deletions of APP, APLP1 and APLP2 are viable, APP/APLP2 and APLP1/APLP2 double knockout mice, and mice deficient in all three APP family members, die in the early postnatal period [219,220]. Intriguingly, and for reasons not well understood, the APP/APLP1 double null mice are viable [220]. Although the NMJs of APP or APLP2 single null mice do not show overt structural abnormalities, the APP/APLP2 double knockout animals exhibit poorly formed neuromuscular synapses with reduced apposition of presynaptic proteins with postsynaptic acetylcholine receptors and excessive nerve terminal sprouting [221]. The number of synaptic vesicles at the presynaptic terminals is reduced, a finding consistent with defective neurotransmitter release. Examination of the parasympathetic submandibular ganglia of the double deficient animals also showed a reduction in active zone size, synaptic vesicle density, and number of docked vesicles per active zone [222].
Interestingly, tissue-specific deletion of APP either in neurons or in muscle on APLP2 knockout background resulted in neuromuscular defects similar to those seen in global APP/APLP2 double null mice, demonstrating that APP is required in both motoneurons and muscle cells for proper formation and function of neuromuscular synapses [80]. The authors propose that this is mediated by a trans-synaptic interaction of APP, a model that gained support by hippocampal and HEK293 mixed culture assays described above [80]. Interestingly, muscle APP expression is required for proper presynaptic localization of CHT and synaptic transmission, suggesting that trans-synaptic APP interaction is necessary in recruiting presynaptic APP/CHT complex [80,223].
Analysis of APP/APLP1/APLP2 triple knockout mice revealed that the majority of the animals show cortical dysplasia suggestive of neuronal migration abnormalities and partial loss of cortical Cajal-Retzius cells [224]. Interestingly, this defect is phenocopied in mice doubly deficient in the APP binding proteins Fe65 and Fe65L1 [225]. It should be pointed out, however, that morphological similarity does not necessarily implicate functional interaction. Indeed, cortical dysplasia with variable penetrance also exists in mice deficient in various other proteins, including PS1, β1 and α6 integrins, focal adhesion kinase, α-dystroglycan and laminin α2 (reviewed in [226]).
In sum, the loss-of-function studies present a convincing picture that members of the APP gene family play essential roles in the development of the peripheral and central nervous systems, in particular in synapse structure and function, as well as in neuronal migration or adhesion. These functions may be mediated either by the full-length protein or by its various proteolytic processing products, and may operate through mechanical (adhesive) properties, through activation of signaling pathways, or both. The creation of knock-in alleles expressing defined proteolytic fragments of APP offers a powerful system to delineate the APP functional domains in vivo; these are discussed in the following section.
iii. APP knock-in mice
To date, four APP domain knock-in alleles have been reported. These express the α-secretase-processed (APPsα [227]) or β-secretase-processed (APPsβ [116]) soluble APP, or the membrane-anchored protein with deletions of either the last 15 aa (APPΔCT15 [227]) or the last 39 aa (APP/hAβ/mutC [193]) of the highly conserved C-terminal sequences of APP; the latter allele also replaced mouse Aβ with the human sequence and introduced three FAD mutations (Swedish, Arctic, and London) to facilitate Aβ production. The APPsα and APPΔCT15 knock-in alleles appeared to rescue a variety of phenotypes observed in APP KO mice [227]. For instance, the reduced body and brain weight of APP null animals was largely rescued. Behaviorally, the knock-in mice do not exhibit any defects in grip strength or in the Morris water maze test. Field recordings of hippocampal slices showed that the LTP deficits observed in 9-12-month-old APP KO mice were also absent in both knock-in lines. These findings are in agreement with the large body of literature documenting the synaptotrophic activity of APPsα (refer to "Synaptotrophic and Neuroprotective Functions" above) and suggest that perhaps the predominant function of APP is mediated by APPsα.
Similar to the APPsα and APPΔCT15 knock-in lines, the APPsβ and APP/hAβ/mutC mice did not show any overt growth or anatomical deficits. However, and in stark contrast to the aforementioned two knock-in lines, crossing these two alleles (APPsβ or membrane-anchored APP/hAβ/mutC) onto an APLP2-/- background failed to rescue the early postnatal lethality and neuromuscular synapse defects of the APP/APLP2 KO mice [116,193], suggesting a critical and indispensable role of the conserved C-terminal region of APP in early postnatal development. An essential role of the APP C-terminal domain, specifically the YENPTY motif, in development was demonstrated by the creation of APP knock-in mice in which the Tyr682 residue of the Y682ENPTY sequence was changed to Gly (APPYG). Crossing the homozygous knock-in mice onto the APLP2 null background showed that APPYG/YG/APLP2-/- mice exhibit neuromuscular synapse deficits and early lethality similar to APP/APLP2 double KO mice [228]. The differences in outcome among these experiments are difficult to explain but may be related to a more severe phenotype on the APLP2-deficient background. Nevertheless, the inability of APP mutants lacking the intracellular domain, or carrying the Tyr682-to-Gly mutation, to rescue the NMJ defects is compatible with the concept that APP functions as a synaptic adhesion protein. Furthermore, the fact that amyloid deposition can develop in the absence of the APP C-terminal sequences indicates that APP developmental function and amyloidogenesis are differentially regulated and require distinct APP domains [193].
Concluding Remarks
We hope this review has provided a timely update on what is known and what lies ahead in the field of APP biology. Since the first identification of the APP gene in 1987, the scientific community has worked together to obtain significant insights into the biochemical, cellular and functional properties of APP. It is clear that APP undergoes tightly regulated trafficking and processing and, through the full-length protein and/or its cleavage products, mediates synaptogenic and synaptotrophic activities in development and during aging. As such, it is reasonable to speculate that misregulation of APP could contribute to the neuronal and synaptic impairment occurring in AD. Many key questions remain to be addressed. These include determining whether APP is a receptor or a ligand and, accordingly, the identity of its respective ligand or receptor. Does APP directly mediate cell signaling, or does it play only a secondary role in gene expression? How is APP function coordinated between its full-length form and the various processing products, and how is it facilitated through its binding partners? Elucidating these questions will undoubtedly reveal novel insights into disease pathogenesis.
APPLICABILITY OF RISKY DECISION-MAKING MODELS FOR UNDERSTANDING DRIVER BEHAVIOUR DURING TAKE-OVERS OF CONTROL IN VEHICLE AUTOMATION
This paper presents a theoretical appraisal of the applicability of risky decision-making models as a tool for understanding driver behaviour during take-overs of control from an automated vehicle. The article focuses on the relationship between the concept of "out of the loop" and situation awareness. A methodological discussion is presented, together with its implications for product design. The article concludes that the process of evidence accumulation in decision-making models has strong parallels with the concept of situation awareness recovery. Accordingly, evidence-accumulation models can be used as tools for understanding how drivers use information to make safe decisions, and this information can be reinforced in the design of in-vehicle interfaces. Finally, a conceptual model is presented as a suggestion for the practical application of the proposed theory to experimental data.
INTRODUCTION
Among the human factors challenges of implementing vehicle automation is ensuring safe responses from users during transitions of control. Recent research into this issue forms part of a larger body of work on the better design of human-machine interfaces, spanning multiple domains and decades. These challenges highlight an old irony of automation: the more reliable the automation, the less prepared the human is to react in a time of need (Bainbridge, 1985). This is especially true for higher levels of vehicle automation, which do not require continuous monitoring of the driving task but still rely on users to resume control, for example, when a system limitation is reached (Level 3; see SAE, 2018 for a complete description of the levels of vehicle automation).
Many recent driving simulator studies have identified that drivers at higher levels of vehicle automation (SAE L2+) are removed from the decision-making and control loops of the driving task, placing them "out of the loop" (OotL). This disengagement from the loops is thought to reduce drivers' capacity to react in dangerous situations, increasing the likelihood of collisions.
Many researchers have tried to understand what constitutes a safe transition of control from automation, investigating which factors influence the success of a transition. For example, Gold et al. (2013) demonstrated that drivers' response to an impending collision, following a request for a transition of control, depends on the amount of time given to drivers for this response. These authors report that when drivers were given less time to react, they reacted faster but more erratically, as shown by the vehicle's lateral and longitudinal accelerations. In contrast, when given more time to respond to an impending collision, drivers reacted more slowly but had a more stable response profile. Zeeb et al. (2015, 2016) have shown that drivers' take-over time and the quality of this take-over (measured as vehicle lateral deviation) are linked to their attention to the road environment during automated driving, with higher levels of distraction by other, non-driving-related tasks leading to a deterioration of take-over quality. However, Louw et al. (2018) suggest that take-over time and vehicle controllability alone are not good predictors of a safe transition of control, but rather the early mitigation of a threat, with earlier transitions of control leading to fewer collisions.
A common limitation of studies attempting to correlate drivers' visual attention with their performance on non-driving-related tasks during automation is that most investigate the location of drivers' gaze, rather than attempting to understand how visual information, acquired from different sources during automation engagement, affects drivers' resumption of control. While there have been efforts to model the factors that influence drivers' capability to take over control, and how they use the physical and mental resources they need to perform such an action, most have not managed to generate a predictive model based on gaze patterns during take-overs (Happee et al., 2018). For example, Victor et al. (2018) reported that some drivers, even though looking at the road centre, still failed to avoid crashes during a transition of control. Studies in other domains have considered how visual information sampling affects decision making in humans (see Orquin & Loose, 2013 for a complete literature review of these studies). For instance, Fiedler & Glöckner (2012) identified that gamblers shift their gaze towards the gamble they are willing to make before their decision, and used this information as a predictor of their choice selection. This paper proposes that the application of decision-making theories, and related models, can be used to address some of the gaps in research on user resumption of control from vehicle automation, by providing a quantifiable method of linking the acquisition of specific information from the environment to the probability of a particular response (Orquin & Loose, 2013). Currently, only a few studies highlight the possibility of such a link (cf. Markkula et al., 2018). In this work, we consider how theoretical models of risky decision-making can be used to study drivers' transitions of control in automation, by observing their visual sampling behaviour during different stages of the take-over process.
We begin by outlining the two theoretical bases of this work: decision-making theory and the human factors of transitions of control. Thereafter, the two theories will be compared, especially regarding their analogous processes of Situation Awareness acquisition and evidence accumulation. Finally, this paper considers how such an approach can generate outputs that may be applied by system designers to enhance driver performance and create safer systems.
TRANSITIONS OF CONTROL FROM VEHICLE AUTOMATION
This section of the paper aims to define key concepts in the field of human factors of transitions of control, such as the decision-action loop, Situation Awareness, and the issues related to this process. With a clear definition of these concepts in hand, it will be possible to compare them to the concepts of decision-making theory, understanding how they might interact and complement each other.
The term transition of control was described by Louw (2017) as: "the process and period of transferring responsibility of, and control over, some or all aspects of a driving task, between a human driver and an automated driving system." SAE (2018) complements this definition with a taxonomy, outlining how a driver's responsibility varies across the different levels of automation, and distinguishing between system- and driver-initiated transitions. The need for such transitions of control is partly based on current system limitations, in terms of the technology's operational design domain (see NHTSA, 2016 for a more descriptive definition of the problem), where vehicles cannot operate in all scenarios and human drivers are expected to supervise the automation and resume control whenever a system limitation is reached. However, the inherent problem with such supervisory roles is the diminished driving capability associated with relinquishing control, which is associated with several challenges when drivers are requested to resume control, especially in time-critical scenarios (Louw, 2017). Some of these issues are discussed below.
The decision-action loop
According to many authors (e.g. Young, 2012), manual driving is a task which requires the driver always to be in the information-processing "loop", with regard to their interactions with the surrounding road environment, as well as their ability to control and coordinate vehicle manoeuvres involving steering, acceleration and braking. Thomas (2001) states that the operation of a vehicle is closely associated with a constant feedback and feed-forward cycle of human interaction with the task. Here, humans' decisions and actions affect the situation, and this change is perceived once more by the individuals, who orient and adjust their behaviour accordingly. Others further complement this logic for the context of vehicle automation (based on the model proposed by Michon, 1985), stating that there are two distinct loops in manual driving which can be affected by ceding control to automation: one for motor-control coordination, and another for the several decision-making processes that must be performed while driving. They suggest "(…) that "being in the loop" can be understood in terms of (1) the driver's physical control of the vehicle, and (2) monitoring the current driving situation (…)". It must be noted that both loops continually interact with each other, and drivers must be aware of both their visual-motor coordination (see Wilkie et al., 2008 for a more descriptive definition of the term) and the surrounding environment to safely maintain control of the task.
Situation Awareness Recovery
Using driving simulator experiments, Louw et al. (2016), supported by previous evidence from Damböck et al. (2013), argue that by removing drivers from the decision-making and control loops, vehicle automation reduces drivers' Situation Awareness (SA; Endsley, 1995), which needs to be re-acquired in order to safely resume control and avoid potentially dangerous situations on the road (Damböck et al., 2013). The definition of Situation Awareness used in this research, initially given by Endsley (1988), is: "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future." In short, SA can be divided into three levels (perception, comprehension and projection), which allow humans to orient their decisions in a particular context and volume of time (Fig. 2).
[Figure 2: Endsley's model of SA, a synthesis of versions given in several sources, notably Endsley (1995) and Endsley et al. (2000), as presented in Wickens (2008).]
The loss of Situation Awareness and its relation to being "out of the loop" have been reported by a number of studies on vehicle automation (Carsten et al., 2012; Ohn-Bar & Trivedi, 2016; Morando et al., 2019), some of which have considered how these concepts are affected by drivers' engagement in non-driving-related tasks (NDRTs). It is argued that, upon a request to resume control from automation, drivers have to move their visual attention from the NDRT to other sources of information related to the driving task, to acquire enough SA to take back control of the vehicle. Gartenberg et al. (2014) refer to this process (which is not only relevant to vehicle automation) as Situation Awareness Recovery, or SAR. This is described as a visual scanning process with a considerable number of short fixations in different areas, a significant lag in task resumption, and a high probability of re-fixating on the same information source more than once. An example of such a process was observed by Louw et al. (2019), who reported in their driving simulator experiments that drivers who were engaged in a visual non-driving-related task during automation (assumed to induce an OotL state) had a more scattered gaze pattern after resumption of control from a silent automation failure, compared to those who were required to monitor the road environment during automation.
One of the challenges for the human factors community in addressing this problem is that the process of SAR is accompanied by several barriers, called SA challenges (Endsley, 2006). Endsley & Kiris (1995) named several challenges for Situation Awareness acquisition, such as attentional tunnelling, change blindness, stress on operators' (drivers') working memory, and the division of the required information across multiple sources, making it difficult for operators to gather all the information they might need in a reasonable amount of time (e.g. see Parasuraman & Riley, 1997). For driving automation, it has been demonstrated that time pressure, or information overload, might affect the quality of drivers' performance. This is thought to be because drivers' attentional resources are continuously stretched by the high demands of the driving task itself, which is aggravated by automation (Goodrich & Boer, 2003). The dispersion of drivers' gaze also competes between focused attention on the vehicle's heading (due to visual-motor coordination; Wilkie et al., 2008) and hazard perception routines, which are generally characterised by increased lateral gaze dispersion (Crundall et al., 1999). Therefore, drivers not only have to acquire information about the situation in the environment and the current status of the system (an issue also reported by Endsley, 2006), but also have to recover their visual-motor coordination, which is degraded once control of the vehicle is relinquished (Mole et al., 2019). Many empirical studies show that this need to disperse visual attention across different sources affects drivers' performance, increasing the risk of crashes (see Russel et al., 2016; Zeeb et al., 2015; Blommer et al., 2016; Merat et al., 2014; Gold et al., 2013; Damböck et al., 2013).
DECISION-MAKING THEORY PRINCIPLES AND MODELS
The definition of decision-making adopted in this work was proposed by Edwards (1954): "(…) given two states, A and B, into either one of which an individual may put himself, the individual chooses A in preference to B (or vice versa)". This definition was further developed by Simon (1959), who organised the process into four main stages: 1) definition of the problem; 2) identification of possible solutions; 3) objective assessment of the value of each solution to the problem; 4) choice of the best solution. As human beings, we are continuously making decisions based on our internal representation of what we should do in every situation, given certain parameters (stage 3). In a driving task, many actions involve a decision-making process. Examples include deciding: a comfortable car-following distance (Boer, 1999), what gaps to accept when changing lanes (Gipps, 1986), how to respond to a potential forward collision (Blommer et al., 2017), and whether to disengage from automation (see Markkula et al., 2018 for more examples).
In the context of this paper, decision-making can be defined as drivers' choice of whether or not to take over control of the vehicle, together with their take-over modality (how they take over). When constructing a model of such decision-making, to account for a good or bad decision in terms of safety, the observable output variables are the decision-making time (how long drivers took to decide to take over), the decision choice (how they reacted to the given scenario) and the outcome (given the objectives established for the situation, were they able to achieve the goal?). Yet there are several kinds of decision-making models, which account for different aspects of human behaviour and may be useful in certain situations and not others. Edwards (1954) divided decision-making models into two main families, whose most recent and developed definitions are explained in the later sections of this paper: rational and risky decision-making models.
Rational decision-making models
The concept of rational decision-making (see Simon (1979) and March (1978) for a more descriptive definition of the term) is based on a metaphorical "thinking man" as the decision-maker. According to Simon (1979) and March (1978), the thinking man is characterized by two main conditions: 1) being capable of acquiring and distinguishing all information relevant to the decision at hand; and 2) being capable of assigning the correct value to a specific choice, based on the established goal in each decision-making scenario. Under these assumptions, two individuals would always arrive at the same conclusion when making a rational decision about the same problem. Good examples of rational decision-making models can be seen in game theory (Nash, 1950), which posits that every choice made by an individual has a counterpart by a "hostile" opponent (as in a chess game). The opponent will focus their actions on maximising their chances of achieving their goal, which is the opposite of the individual's goal. Another example of rational decision-making can be seen in utilitarianism, developed by Jeremy Bentham and John Stuart Mill in the early 19th century. This theory holds that there are "greater goods" in life, that every moral action can be quantified in terms of the "happiness" it produces, and that it is always right to maximise happiness in our life choices for the "greater good" (for a more complete description, see Mill, 1868). Indeed, rational decision-making processes are utopian in most cases, and their scope of applicability is limited, as everything needs to be quantifiable, as in mathematical logic problems (for examples, see Bell et al., 1988).
Risky decision-making models
According to decision-making theory, whenever the decision-maker is forced to make a decision without a clear notion of the possible outcomes of their choice, the process is considered a risky decision (Edwards, 1954). Models of risky decision-making are based on the assumptions: 1) that not all variables can be accurately, or even completely, quantified; 2) that humans are not certain how their actions will affect the environment of the task at hand; and 3) that humans are not aware of all the variables they should consider in making their decision. Humans in such situations can estimate, based on their mental models (see Nielsen, 2010 for a description of the term), the probable outcomes of each possible action, and use that information to guide their decision-making. In situations where the outcome of an individual's decision is not predictable, they need to account for a level of uncertainty as part of their decision-making process. Uncertainty is defined by Shaw (1983) as the inability of the decision-maker to assign the correct value to an option, or to predict the outcomes of their decision in the given environment. This uncertainty concept is a key assumption underlying risky decision-making models and is discussed later in this paper. As humans' mental processing is not directly observable, risky decision-making models can be used to explain human behaviour based on certain assumptions. The most relevant model families are described below.
Evidence accumulation models assume that the decision-maker does not, a priori, have sufficient information about the situation to make a decision, and will seek evidence that moves their decision towards one of the outcomes known to them. Furthermore, every individual has a personal threshold of accumulated evidence that, once reached, causes them to opt for one possible choice over another (Ratcliff & Smith, 2004). This threshold varies based on a number of factors, including experience, gender, personal attitudes and many others. It must be noted that the rate of evidence accumulation, or "drift", also differs between individuals and is likewise influenced by a number of factors. In the field of vehicle automation, Markkula et al. (2018) have demonstrated how to apply decision-making models based on evidence accumulation to explain, for example, what information drivers use to decide how to resume control from vehicle automation to avoid an impending forward collision. A minimal simulation of this mechanism is sketched below.
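To make the evidence-accumulation mechanism concrete, the sketch below simulates a minimal two-alternative drift-diffusion-style process in the spirit of the models surveyed by Ratcliff & Smith (2004). The drift rate, threshold, and noise values are illustrative assumptions only, not parameters estimated from any of the driving studies cited here.

```python
import random

def simulate_evidence_accumulation(drift=0.3, threshold=1.0, noise=0.5,
                                   dt=0.01, max_time=10.0, seed=None):
    """Minimal drift-diffusion sketch: evidence drifts toward one of two
    boundaries (+threshold = take over, -threshold = stay in automation)
    while perturbed by Gaussian noise; the loop ends at a boundary or at
    the deadline, whichever comes first."""
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_time:
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    choice = "take over" if evidence >= threshold else "remain in automation"
    return choice, t

# A stronger drift (clearer evidence) usually produces earlier decisions.
for drift in (0.1, 0.5, 1.0):
    choice, rt = simulate_evidence_accumulation(drift=drift, seed=42)
    print(f"drift={drift:.1f}: {choice} after {rt:.2f} s")
```

Fitting such a model to take-over data would replace the fixed drift rate with a quantity driven by the visual evidence the driver samples, which is precisely the link exploited by Markkula et al. (2018).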
Bounded rationality models, first defined by Simon (1972), hold that humans make decisions based on the information available to them. These share assumptions with rational decision-making models but differ in assuming that humans are not capable of considering all the relevant information when making a decision. This can be caused by a lack of cognitive resources, time pressure, or simple lack of knowledge about the existence of a particular source of information. Under this paradigm, bounded rationality models assume that the decision-maker prioritises certain information over other information (randomly or selectively). This prioritised information will most likely bias the decision towards a particular choice, depending on the information sampled, and not only on individual preferences. This kind of model is especially relevant for transitions of control in vehicle automation, as it is assumed that drivers in such situations can be overloaded with large volumes of spatially dispersed visual information and may not be able to process all the information they would need. Examples of such overload can be found in Gold et al. (2013) and Blommer et al. (2017), who show that drivers change their decisions about when to resume control from automation based on the amount of time they have to react before the automated system reaches its limit. It is worth noting, though, that those authors have only considered visual information, so other factors might also have affected the observed results.
Satisficing decision-making models assume that the decision-maker will not seek the most optimal solution to their problem, but will instead make the first decision whose outcome satisfies their needs or goals in the given situation (Wierzbicki, 1982; Parke et al., 2007). This approach was used in studies by Boer (1999), Boer & Hoedemaeker (1998), and Goodrich & Boer (2003), in different scenarios. For example, Boer (1999) demonstrated that drivers tend to have not one specific "ideal car-following distance" but rather a satisficing margin, floating closer to or further from the lead vehicle, within which drivers consider themselves safe and close enough to be satisfied, and so refocus on other demands of the car-following task (such as lateral control of the vehicle) instead of actively re-adjusting their following distance to a point they would consider ideal. A simple sketch of such a rule follows this paragraph.
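As a toy illustration of satisficing, the following sketch encodes car-following as a band rule: the driver acts only when the gap leaves an acceptable margin. The band limits are hypothetical stand-ins for the satisficing margin Boer (1999) describes, not values taken from that study.

```python
def satisficing_following_action(gap_m, lower_ok=20.0, upper_ok=40.0):
    """Return an action only when the gap leaves the satisficing band;
    inside the band the driver is 'satisfied' and can reallocate
    attention to other sub-tasks such as lateral control."""
    if gap_m < lower_ok:
        return "ease off / brake"   # too close: act to restore the margin
    if gap_m > upper_ok:
        return "close the gap"      # too far: act to stay in the band
    return "no gap adjustment"      # satisficed: attend to other demands

for gap in (15.0, 30.0, 55.0):
    print(f"gap={gap:4.0f} m -> {satisficing_following_action(gap)}")
```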
Most concepts in these models are somewhat interchangeable and can be combined in a descriptive or mechanistic analysis. Their relationship with the field of automation will be discussed in the subsequent sections of this work.
RELATIONSHIP BETWEEN HUMAN FACTORS CHALLENGES AND RISKY DECISION-MAKING
Based on the two types of decision-making theory models described above, it is evident that the process of Situation Awareness recovery during the transition of control from vehicle automation presents several similarities to risky decision-making theory, as discussed in the following sections. Prior work, discussed above, states that drivers re-enter the cognitive loop of the driving task by acquiring sufficient levels of Situation Awareness. In the same way, Ratcliff & Smith (2004) claim that whenever an individual is presented with an opportunity to make a decision, they will need to accumulate evidence that supports the choice they eventually make. This direct comparison shows the parallel applicability of the concepts of evidence accumulation and SA in theories with the same purpose: to understand how humans use information to react to a given environmental condition and achieve their desired goal. Fig. 3 presents a schematic representation of the proposed relationship between the two theories.
[Figure 3: Representation of the relationship between SA and decision-making theory.]
As mentioned above, decision-making theory holds that the decision-making process is composed of four steps: 1) define the problem and understand its characteristics; 2) formulate/generate possible solutions to the given problem; 3) estimate the value of the possible outcomes; 4) select the outcome with the highest value for the given problem (see Simon, 1959 for a fuller description). Endsley (1995) divided SA into levels, whereby the individual needs to 1) identify the elements in the environment, 2) comprehend their meaning and how they shape the situation at hand, and 3) understand how those elements can be interacted with, so that it is possible to predict the outcomes of potential actions. According to Simon (1957) and Edwards (1954), a decision can only be made if there is a clear notion of the value of each solution to the upcoming problem; to achieve this, the decision-maker accumulates evidence that assigns the correct value to a particular option, reducing the decision-maker's level of uncertainty (Shaw, 1983). Observing the same phenomenon through the lens of SA theory, we can treat the comprehension of the problem (in the case of this work, a request to transition control) and its possible solutions as level-two SA. The process of assigning value, or an expected outcome, to a possible action in order to make the appropriate decision can be directly linked to level-three Situation Awareness, or the projection of future states. In this framework, the process of moving from level-two to level-three SA can be directly compared to the process of evidence accumulation, which is simply the reduction of uncertainty about the outcomes of a possible action in a given scenario.
The arguments presented in the previous section showed that barriers, called SA challenges (Endsley, 2006), impede an individual's ability to acquire the levels of SA they need to make an optimal resumption of control from automation (see Parasuraman & Riley (1997) for an example of this phenomenon). Analysing, through the lens of decision-making theory, the challenges imposed on an individual resuming control from automation, a similar problem is reported by Edwards (1954) and Simon (1957), who argue that an entirely rational decision is unattainable under real-world constraints. Blommer et al. (2017) and Gold et al. (2013) showed that drivers have an increased probability of "just braking", instead of both braking and steering, whenever they had limited time to respond to the scenario. The authors noted that the scenario exceeded drivers' ability to cope with the situation and to perform the ideal action. These two examples can be described, in risky decision-making terms, as satisficing decisions: even if the response was not perfect, it was the best the drivers could do with the information they had, opting for a simple reaction to the scenario. Based on the arguments presented above, we believe that risky decision-making theory is a suitable candidate for modelling the process of taking over control from vehicle automation. The application of decision-making theory can complement existing studies of the transition, as it can be used to understand the relationship between the information sampled by drivers and their subsequent behaviour. Practically speaking, this approach complements current studies in the field by providing robust mathematical models that assign causality between evidence accumulation and decision (see Orquin & Loose, 2013), which are not commonly linked to Situation Awareness theory. It is now essential to evaluate how this theory can be applied and implemented to better describe driver behaviour during transitions of control. Sivak (1996) stated that vision is the most important of the five human senses for driving, yet it is not suited to dealing with multiple demands at the same time. For this reason, drivers need to prioritise certain visual information over other information to perform a transition of control (for more details about this process, see Goodrich & Boer, 2003).
USING DECISION-MAKING MODELS TO ORIENT DRIVERS' DECISION-MAKING
According to Orquin & Loose (2013), visual attention and decision-making are tightly coupled, since a driver's risky decision-making is continuously biased by whether or not they attended to the relevant visual information available to them. In their literature review, the authors found a co-causal relationship between visual attendance to information and the occurrence of specific choices in discrete decision-making scenarios. As part of a meta-analysis, the authors analysed several decision-making tasks that used eye-tracking data as a dependent variable. They concluded that an individual's gaze fixations on certain essential information could predict their upcoming choice in a discrete scenario, suggesting that drivers' selective attention may bias their decision-making. Such an approach may also be applied to analyse drivers' response capabilities in a take-over scenario, since a take-over reaction is nothing more than a selective response to a particular scenario condition. A sketch of such a model follows.
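A minimal way to operationalise this link is a logistic model mapping fixation counts per area of interest (AOI) onto the probability of a particular take-over response. The AOI names, weights, and fixation counts below are invented for illustration; in practice the weights would be fitted to eye-tracking data.

```python
import math

def takeover_probability(fixations, weights, bias=-1.0):
    """Logistic sketch of P(full avoidance response, e.g. brake-and-steer)
    given fixation counts per AOI: attention to threat-relevant AOIs
    raises the predicted probability, attention elsewhere lowers it."""
    score = bias + sum(weights[aoi] * n for aoi, n in fixations.items())
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical weights: road centre and mirrors are threat-relevant,
# an in-vehicle display competes with them.
weights = {"road_centre": 0.6, "mirror": 0.4, "display": -0.5}
attentive = {"road_centre": 4, "mirror": 2, "display": 0}
distracted = {"road_centre": 1, "mirror": 0, "display": 5}

print(f"attentive : P = {takeover_probability(attentive, weights):.2f}")
print(f"distracted: P = {takeover_probability(distracted, weights):.2f}")
```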
The arguments above support the possibility of modelling the relationship between different gaze allocation strategies and the probability of yielding specific responses in a take-over scenario (based on the studies reported by Orquin & Loose, 2013). This approach would inform system designers about which information should be scanned with higher priority to yield a higher probability of safe and timely responses to different take-over scenarios. This information could be used to create HMIs that guide drivers towards making decisions that result in safe outcomes. For example, indicating where drivers should focus their attention for a
Robust Discrete-Time Nonlinear Attitude Stabilization of a Quadrotor UAV Subject to Time-Varying Disturbances
A discrete-time improved input/output linearization controller based on a nonlinear disturbance observer is considered to secure the attitude stability of a four-rotor unmanned aerial vehicle under constant and time-varying disturbances, as well as uncertain system parameters. Owing to the nature of the quadrotor system, its dynamics are highly nonlinear, its system parameters are uncertain (perturbed), and it has to cope with external disturbances that change over time. In this context, offset-less tracking for the quadrotor system is provided by the input/output linearization controller together with a discrete-time pre-controller. In addition, the robustness of the system against time-varying disturbances is increased with a discrete-time nonlinear disturbance observer. The main contribution of this study is to cancel the strong nonlinearities so as to guarantee the aircraft's attitude stability, and to propose a robust control structure in discrete time that considers all uncertainties. Various simulation studies have been carried out to illustrate the robustness and effectiveness of the proposed controller structure.
I. INTRODUCTION
Within the last decade, unmanned aerial vehicles (UAVs) have been deployed for tasks where human involvement is dangerous. Today, there are many applications of UAVs in sectors such as the military, transportation, and entertainment. Among UAVs, multi-rotors are mostly chosen for their agility and ability to hover. Within the literature, many control methods have been deployed to characterize the attitude and altitude problem of quadrotors, the most commonly chosen multi-rotor framework [1], [2].
When the autonomous flight of the quadrotor is considered, the main effort lies in the attitude control and stabilization of the vehicle. For this purpose, the nonlinear control methods applied to quadrotors include nonlinear back-stepping [3]-[5], which can take into account matched and unmatched uncertainties, and sliding mode control [6], [7], which is robust and able to represent the system with lower-order dynamics, but suffers from the chattering problem.
Since the real-life flights of these vehicles take place mainly outdoors, controllers that do not take external disturbances into account become fragile. Thus, studies dealing with disturbances on quadrotor systems have been carried out [15]-[17]. The system control problem remains an active research topic [18]-[23].
In practice, the control of UAV systems requires discrete-time signals, and due to the differences between a design in continuous time and the implementation of the control signal in discrete time, the tuning of the design parameters may become difficult. For this reason, several discrete-time controllers have been implemented directly on the quadrotor [24]-[27]. The input-output linearization controller can be emphasized here as a nonlinear controller in discrete time [28]-[30].
To overcome system perturbations and their undesired consequences, time-varying disturbances, and the stabilization problem of the quadrotor UAV attitude system, a discrete-time robust nonlinear controller with a nonlinear disturbance observer is proposed in this study.
Most previous works consider all the steps of the controller design in the continuous-time domain, followed by discretization of the controller. However, for the direct discrete-time controller approach, which is the focus of this paper, much effort is still needed to solve this up-to-date problem. In the light of previous studies, the differences from existing results and the significant contributions of this study can be summarized as follows.
The constructed controller assures offset-less, high-precision tracking under the effects of system perturbations and subject to constant disturbances. In addition, the discretization approximation error is rejected with the aid of a digital filter encompassing the effect of a digital PI controller. The attenuation of time-varying disturbances, which change over time but more slowly than the system dynamics, is handled by nonlinear disturbance estimation.
The solution of all the mentioned problems is implemented in the discrete-time domain. Finally, the improved robust control structure is tested via numerical simulations that include input saturation difficulties and limitations.
II. SYSTEM DYNAMICS AND MODELING
The general quadrotor UAV aircraft used in this study is presented in Fig. 1. Here, B is the body frame and E is the Earth frame; the airframe is rigid with uniformly distributed mass, and the propellers are rigid. The centre of gravity of the system coincides with the B-frame origin. Moreover, the square of each propeller speed is considered proportional to the thrust and drag forces.
Here, the lift coefficient and the scaling factor d for the force moment are defined, and Fj denotes the control signals applied to the quadrotor. After the continuous-time overall quadrotor UAV system dynamics are presented, the approximated system difference equations are needed for the discrete-time design steps for the attitude behaviour. In this paper, the quadrotor UAV attitude system difference equations are obtained with the linear extrapolation method adopted from [23]. With the Euler angles defined, and utilizing (11) and (12), the attitude dynamics of the quadrotor ((5)-(7)) can be written as in (8) and (9); the mathematical arrangements are given in Appendix A. A generic sketch of such a discretization step is given below.
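As a rough sketch of how such difference equations arise, the snippet below applies a forward-difference (linear extrapolation) step to a simplified single-axis attitude model of the form angle'' = a*u + d. It stands in for the full coupled dynamics (5)-(7), which are not reproduced here; the coefficient and sampling-time values are placeholders.

```python
def attitude_step(angle, rate, u, d, a=1.0, Ts=0.001):
    """One discrete step of the simplified single-axis model
    angle'' = a*u + d, discretized by linear extrapolation
    (forward Euler): x[k+1] = x[k] + Ts * x_dot[k]."""
    angle_next = angle + Ts * rate
    rate_next = rate + Ts * (a * u + d)
    return angle_next, rate_next

# Propagate a few samples from rest under a constant input.
angle, rate = 0.0, 0.0
for k in range(5):
    angle, rate = attitude_step(angle, rate, u=1.0, d=0.0)
    print(f"k={k+1}: angle={angle:.3e} rad, rate={rate:.3e} rad/s")
```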
III. DISTURBANCE OBSERVER DESIGN
In this section, the time-varying disturbances that affect the quadrotor UAV system are estimated by a nonlinear disturbance observer (NDO) in the discrete-time setting. The NDO is designed under the assumption that the disturbances are either unknown constants or time-varying disturbances that vary much more slowly than the quadrotor UAV dynamics. The advantage of designing the observer in the discrete-time domain is that the system measurements are obtained via digital sensor devices.
Proposition: Consider the discretized attitude dynamics (8), (9). The estimation dynamics of the external disturbance are asymptotically stable under the designed discrete-time NDO structure given by (13), (14), provided the stated gain condition is satisfied; the relevant quantities stand for an augmented state variable, the estimate of the external disturbance, an auxiliary nonlinear function depending on the state variables, and the NDO design gain, respectively. Considering the natural motion of the external disturbance, if the observer design gain L2 is assigned an appropriate value such that the error dynamics remain within the unit circle, then the observer error converges asymptotically to zero, which completes the proof. A simplified numerical sketch is given below.
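The following sketch illustrates the flavour of such a discrete-time disturbance observer on the simplified single-axis model used above, under the proposition's assumption that the disturbance varies much more slowly than the dynamics. It is not the paper's exact structure (13), (14); the plant model and the reuse of the gain value 0.35 are assumptions made for illustration.

```python
def ndo_step(d_hat, rate, rate_prev, u, a=1.0, Ts=0.001, L2=0.35):
    """Disturbance-observer update for the simplified rate dynamics
    rate[k+1] = rate[k] + Ts*(a*u + d[k]): the estimate is corrected by
    the mismatch between the measured and the model-predicted rate
    increment. For a constant d the error obeys
    e[k+1] = (1 - L2*Ts)*e[k], which stays inside the unit circle
    whenever 0 < L2*Ts < 2."""
    innovation = (rate - rate_prev) - Ts * (a * u + d_hat)
    return d_hat + L2 * innovation

# Track a constant disturbance d = 0.8 from a zero initial estimate.
Ts, a, d_true = 0.001, 1.0, 0.8
rate, d_hat = 0.0, 0.0
for _ in range(10000):                  # 10 s of samples
    rate_prev = rate
    rate += Ts * (a * 0.0 + d_true)     # plant with zero input
    d_hat = ndo_step(d_hat, rate, rate_prev, u=0.0, a=a, Ts=Ts, L2=0.35)
print(f"estimate after 10 s: {d_hat:.3f} (true value {d_true})")
```

Convergence is slow here because L2*Ts is small; in the paper the gain is chosen with respect to the speed of the quadrotor's dynamics.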
IV. PRELIMINARIES
Before the design procedure of the input-output linearization controller is given directly, some preconditions must be emphasized, namely the relative degree, the zero dynamics, and the minimum-phase property [28]. The relative degree calculation follows the standard mathematical formulation.
The formulation means that the relative degree is the integer r for the quadrotor attitude dynamics given by (8) and (9); in particular, the relative degree of the discretized system (8), (9) is 2, and the pitch and yaw Euler angle information can easily be obtained with the equations given in Appendix A. Another critical issue is the zero dynamics: if the zero dynamics of the considered system are locally asymptotically stable (LAS), then a control law ui(k) exists and can be designed. Checking the LAS of (8), (9) can easily be done by determining whether the eigenvalues of the Jacobian matrix remain within the interior of the unit circle; the Jacobian matrix eigenvalues locally remain within the unit circle for the considered sampling time Ts. As a result, an input-output linearizing control law ui(k) that locally asymptotically stabilizes the system (8), (9) exists. A numerical sketch of this eigenvalue test is given below.
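The eigenvalue test itself is easy to automate. The sketch below checks local asymptotic stability for a placeholder Jacobian of a lightly damped, discretized double integrator; the matrix is illustrative and is not the actual linearisation of (8), (9).

```python
import numpy as np

def locally_asymptotically_stable(J):
    """A discrete-time equilibrium is locally asymptotically stable when
    every eigenvalue of the Jacobian lies strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(J)) < 1.0))

Ts = 0.001  # sampling time, as in the simulations
# Placeholder Jacobian: discretized double integrator with weak rate damping.
J = np.array([[1.0 - 0.5 * Ts, Ts],
              [0.0,            1.0 - 0.5 * Ts]])
print("eigenvalues:", np.linalg.eigvals(J))
print("LAS:", locally_asymptotically_stable(J))
```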
V. CONTROLLER STRUCTURE
In this section, a discrete-time nonlinear input-output linearization controller is presented for stabilization of the quadrotor attitude. After the controller design, the quadrotor attitude subsystem is transformed into a linearized system through cancellation of the strong nonlinear effects.
However, the linearized attitude system is still affected by parametric uncertainties, external disturbances, and discretization errors. To attenuate these errors, a digital PI-like filter is used; the resulting closed-loop attitude control structure is shown in Fig. 2. Taking into account the discretized quadrotor dynamics of the previous section, the input-output linearization controller is applied, whereby the linear closed-loop input-output dynamics with relative degree 2 are obtained.
Hereby, by shaping the coefficients of the characteristic equation of the transfer function, the closed loop can be remodeled to be locally asymptotically stable. Note that stability, performance, and robustness depend on the roots of the characteristic equation for each axis. However, it is unlikely that the root locations can be designed directly within the proposed input-output controller structure for a nonlinear system subject to unmodeled dynamics, uncertainties, external disturbances, and discretization errors. To obtain offset-less output trajectory tracking, another controller, called the "pre-controller" in this paper, is needed.
The pre-controller transfer function is thus obtained. The minimal-order pre-controller structure is possible with its degree being 2, so the proposed pre-controller is causal; the denominator of the pre-controller is designed in Section VI.
Note that, although the proposed controller rejects many of the aforementioned undesired effects, the dynamics are still distorted by the time-varying disturbances. For the solution of this problem, the NDO designed in Section III is proposed. Under the assumption that the time-varying disturbances change much more slowly than the quadrotor UAV dynamics, the controller (24) is applied to the dynamics of (8), (9) together with the disturbance estimate. A generic sketch of the linearizing control law is given below.
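For orientation, the following sketch shows the generic shape of a relative-degree-2 input-output linearizing law, written in continuous-time style for readability: the input cancels the nonlinearity and imposes chosen linear error dynamics. The functions f and g and the gains are hypothetical stand-ins, not the paper's controller (24).

```python
def io_linearizing_control(x, ref, f, g, k1=4.0, k2=4.0):
    """For y'' = f(x) + g(x)*u, the law u = (v - f(x)) / g(x) with
    v = -k1*(y - ref) - k2*y_dot cancels the nonlinearity and leaves
    the linear error dynamics e'' + k2*e' + k1*e = 0."""
    y, y_dot = x
    v = -k1 * (y - ref) - k2 * y_dot
    return (v - f(x)) / g(x)

# Illustrative nonlinear axis: f couples angle and rate, g is constant.
f = lambda x: 0.2 * x[0] * x[1]   # hypothetical coupling term
g = lambda x: 1.0                 # hypothetical input gain

u = io_linearizing_control(x=(0.1, -0.05), ref=0.0, f=f, g=g)
print(f"control input u = {u:.4f}")
```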
VI. SIMULATION RESULTS
To examine the effectiveness and robustness of the designed controller structure for the stabilization problem of the nonlinear uncertain quadrotor UAV, several numerical simulation results are presented. The results are obtained in MATLAB with a fixed sampling time of 1 ms [24]. The step size of the solver of the quadrotor UAV dynamics is set to 1 s. The maximum input signal to each Euler angle of the system dynamics is saturated at 24 N. The values of the parameters utilized in the simulation are given in Table I. All parameters of the quadrotor are considered uncertain, and the simulation results are obtained assuming that the controller knows 80 % of the actual values of all the parameters. The initial states of the quadrotor UAV are set accordingly.
The proposed discrete nonlinear controller matches the discretized quadrotor UAV dynamics to a second-order linear system. In this paper, the linear model is designed with the characteristic polynomial z^2 - 0.99z + 0.0098, and a numerator of 0.0198 is selected, which makes the static gain unity; this choice is verified numerically below. The denominator of the pre-controller is then designed as z^2 - 0.9z + 0.02.
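The unity-static-gain choice can be checked directly: evaluating the denominator z^2 - 0.99z + 0.0098 at z = 1 gives 0.0198, so a numerator of 0.0198 yields a DC gain of exactly one. A quick numerical check (the polynomial coefficients are as read from the design above):

```python
def dc_gain(num, den):
    """Static (DC) gain of a discrete transfer function: evaluate the
    numerator and denominator polynomials at z = 1, i.e. sum the
    coefficients."""
    return sum(num) / sum(den)

# Closed-loop model: 0.0198 / (z^2 - 0.99 z + 0.0098)
print(dc_gain([0.0198], [1.0, -0.99, 0.0098]))  # -> 1.0 (up to rounding)
# Pre-controller denominator z^2 - 0.9 z + 0.02 evaluated at z = 1:
print(sum([1.0, -0.9, 0.02]))                   # -> 0.12
```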
However, the quadrotor dynamics are affected by time-varying disturbances. To add robustness against this problem, the NDO is combined with the two controller parts mentioned above. The observer design gain L2 in the auxiliary nonlinear function is set to 0.35, considering the speed of the quadrotor UAV.
In the uncertain quadrotor system, the matched and unmatched uncertainties directly degrade the performance of the system, i.e., its transient and steady-state responses. The input-output linearization controller with the designed pre-filter in the discrete-time domain has been evaluated under all constant system parameter uncertainties, discretization errors, and constant/time-varying disturbances. Note that the aforesaid constant uncertainties and constant disturbances cause a steady-state error at the system output. The devised pre-filter is activated during the simulation run. Thus, the effect of the parameter uncertainties is not observed in the simulation results during 0 s-5 s (see (a) in Figs. 3-5); it can be seen from the simulation results during 5 s-8 s (see (b) in Figs. 3-5) that this effect is completely rejected. When constant disturbances are applied to each Euler angle at the fifth second, the disturbance influences are suppressed even without the disturbance estimation.
The simulation studies have been carried out under constant and random disturbances, considering the fully uncertain attitude quadrotor dynamics. In this context, the results for each Euler angle can be seen in Fig. 3, Fig. 4, and Fig. 5. These figures cover three different scenarios: the control structure with the disturbance observer, the control structure without the disturbance observer, and the assumption in which the controller knows the disturbance signal exactly. The results for each scenario are presented in the order of Euler angle positions (see a(1), b(1), c(1) in Figs. 3-5), Euler angle velocities (see a(2), b(2), c(2) in Figs. 3-5), real and estimated disturbance values (see a(3), b(3), c(3) in Figs. 3-5), and the control signal for each angle (see a(4), b(4), c(4) in Figs. 3-5). In Fig. 3(a), the uncertain attitude quadrotor dynamics exhibit asymptotic stability. At the end of the fifth second, it is seen that the effect of the applied constant disturbance is successfully suppressed in all three scenarios (see Fig. 3(b)); even without the NDO, the system remains asymptotically stable and behaves robustly. On the other hand, when a time-varying arbitrary disturbance is applied to the system at the eighth second, the results no longer show the same performance (see Fig. 3(c)). Here, without the NDO, the peak value of the disturbance response of the closed-loop system is roughly four times larger than the response of the system with the NDO. Besides, there is almost no difference between the controller that utilizes the estimated disturbance signal and the assumption in which the controller knows the disturbance signal exactly. This result illustrates the superior performance of the NDO, as can be seen in Fig. 4(a)-(c) and Fig. 5(a)-(c), respectively. All the aforementioned comments on Fig. 3 are also valid for the other two figures. As a result, the proposed control structure solves the local asymptotic stability and tracking problem of the quadrotor attitude subsystem in discrete time. Moreover, the effect of time-varying disturbance signals, such as wind gusts, is suppressed as intended. To better express the contribution of the proposed control structure, some comparisons with studies in the literature are added here. In [21], a robust attitude controller based on a nonlinear disturbance observer (NDOB) is presented. In the cited paper, the peak-to-peak value of the attitude disturbance response of the quadrotor system without the NDOB is approximately 20 deg; this value is 4 deg when the NDOB is included. Namely, the method proposed in [21] is capable of suppressing the amplitude of a sinusoidal disturbance to approximately 0.2 of its value. In this paper, the proposed attitude controller with the NDO suppressed the amplitude of the considered time-varying disturbance signal with a ratio of 0.1923. On the other hand, a nonlinear feedback controller with a nonlinear extended state observer has been proposed for the attitude control of a quadrotor [22], and a robust sliding mode controller is given in [23], which was added as a comparison to [22]. In those papers, the time durations for the rejection of a constant disturbance are approximately 1 s and 2.5 s, respectively; in this paper, the duration is approximately 1.5 s.
Note that these figures vary according to the operating points considered case by case. Consequently, the proposed control method successfully achieved its aim in view of the comparison values.
VII. DISCUSSION
Comprehensive simulation studies have been carried out to evaluate the effect of different types of uncertainty (constant system parameters, constant and time-varying disturbances) on the attitude control of a quadrotor UAV under the robust discrete-time I/O feedback linearizing controller with the NDO. It can be clearly seen that the stabilization problem is overcome by the I/O feedback linearizing controller, which transforms the nonlinear attitude dynamics into a second-order linear system, and by a pre-controller, which acts as a PI-like digital filter, under any constant uncertainties. Hence, a combined linear and nonlinear robust controller is constructed with an offset-less response, but one still affected by time-varying disturbances such as wind gusts. The NDO in the discrete-time setting is proposed to reduce these time-varying external disturbance effects, and the estimated disturbance values are utilized directly in the controller input. Applying the proposed controller without the NDO, the attitude tracking error is asymptotically stabilized in the absence of time-varying disturbances, but this is no longer the case when such disturbances act on the system. It can be clearly shown that the proposed controller with the NDO attenuates the external time-varying disturbance by a factor of roughly four. The applicability of the established controller structure has been successfully validated through simulation studies.
VIII. CONCLUSIONS
A discrete-time robust controller with an NDO is proposed for attitude stabilization of the nonlinear quadrotor UAV. The main conclusions are summarized as follows.
In the discrete-time setting, the attitude stabilization of a quadrotor system is performed considering internal and external system uncertainties and linearization errors. The attitude performance is strengthened with an NDO design, supported by a stability analysis in discrete time.
The stability of the attitude closed-loop system is evaluated by means of the Jury criterion. To test the performance of the devised controller structure, detailed simulation studies are executed. The effectiveness and robustness of the proposed discrete-time control structure have been demonstrated, and the presented results, with comparisons, are promising for attitude tracking and stabilization control of aircraft systems.