added: 2018-06-21T14:10:37.613Z
created: 2018-06-28T00:00:00.000
id: 46963063
metadata:
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CC0",
"oa_status": "GREEN",
"oa_url": "https://jyx.jyu.fi/bitstream/123456789/59995/1/keklinenetal2018scandinavianjournalofmedicine26scienceinsports.pdf",
"pdf_hash": "b92f877602188c3514ddd3d25f637ff121a4cfcd",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2912",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"sha1": "b92f877602188c3514ddd3d25f637ff121a4cfcd",
"year": 2018
}
source: pes2o/s2orc
text:
Motivational characteristics and resistance training in older adults: A randomized controlled trial and 1‐year follow‐up
The aim of this study was to investigate the effects of a 9‐month supervised resistance training intervention on motivational and volitional characteristics related to exercise, and whether the absolute level and/or intervention‐induced change in these characteristics predict self‐directed continuation of resistance training 1 year after the intervention. Community dwelling older adults aged 65‐75, who did not fulfill physical activity recommendations, were randomized into resistance training intervention groups: training once‐ (n = 26), twice‐ (n = 27), three‐times‐a‐week (n = 28) or non‐training control group (n = 25). Training groups participated in supervised resistance training for 9 months: during months 1‐3 all groups trained twice‐a‐week and then with allocated frequencies during months 4‐9. Exercise‐related motivation, self‐efficacy, and planning were measured with questionnaires at baseline, month‐3 and month‐9. The continuance of resistance training was determined by interviews 6 and 12 months after the end of the intervention. The intervention improved action and coping planning as well as intrinsic motivation (group × time P < .05). During 1‐year follow‐up, 54% of participants did not continue self‐directed regular resistance training, 22% continued regular resistance training once‐a‐week, and 24% twice‐a‐week. Increases in exercise self‐efficacy and intrinsic motivation related to training during the intervention predicted continuation of resistance training twice‐a‐week. Resistance training improved exercise‐related motivational and volitional characteristics in older adults. These improvements were linked to continuing resistance training 1 year after the supervised intervention. The role of these characteristics should be taken into account when promoting long‐term resistance training participation among older adults.
| INTRODUCTION
Regular and progressive resistance training (RT) offers major benefits to older adults' physical functioning and health, such as maintaining and increasing muscle mass, strength and power. [1][2][3] According to both the American College of Sports Medicine 4 and Finnish national physical activity recommendations, 5 older adults should participate in muscle-strengthening activities at least twice-a-week. However, in Finland, only 11% of older adults aged 65-74 report meeting these recommendations. 6 Hence, understanding the possible reasons for non-participation and participation in RT is important.
There are many theories explaining the potential psychological reasons for participating in exercise, and motivation is one of the key characteristics behind a behavior. 7,8 According to self-determination theory by Ryan and Deci, 8 motivation can arise from the satisfaction and pleasure of the activity itself (intrinsic motivation) or from extrinsic outcomes achieved via the activity (extrinsic motivation). Motivation is seen as a continuum from more extrinsic motivation to more intrinsic and autonomous motivation. 8 In the Health Action Process Approach (HAPA) by Schwarzer, motivation is distinguished from volition: the motivational phase leads to intention and, subsequently, the volitional phase leads to actual behavior. 9 In the volitional phase, implementing intentions is closely related to action planning, which includes planning when, where, and how to exercise, as well as coping planning, which describes how difficult situations will be confronted. 9,10 Both of these have been shown to be predictors of exercise adherence. 11 In addition, self-efficacy is needed in both the motivational and volitional phases. 9,12 Based on Bandura's social cognitive theory, self-efficacy describes how confident an individual is about their abilities to succeed in different situations or tasks. 13 High self-efficacy is linked to commitment and intrinsic motivation. 13 Related to health-enhancing behavior, self-efficacy reflects, for instance, capabilities to perform an action and to overcome different barriers (coping self-efficacy). 9,12 Although exercise-related motivational and volitional characteristics have been shown to be determinants of exercise participation, 11,14,15 much less is known specifically regarding RT. Winett et al. 16,17 suggest that RT may require a specific set of beliefs compared to other physical activities because of its nature. For example, confidence to lift external loads and to increase the training load regularly is a specific requirement in RT. Nevertheless, there is some evidence that exercise self-efficacy is also closely related to RT; exercise self-efficacy beliefs have been associated with the continuance of RT after intervention in older adults, 18 but not in all studies. 19 Interestingly, there is some evidence that the association between motivational and volitional characteristics and RT may be bidirectional; these characteristics are related to the adoption and continuance of physical activity, but RT in itself may also be able to improve these characteristics. For example, RT has had a positive effect on confidence to lift external loads in older adults 20 and on exercise self-efficacy in younger adults. 21 In addition, there is evidence that RT interventions lasting 12 weeks (three times-a-week) can improve introjected, identified, and intrinsic motivation for exercise. 22,23 However, it is currently unclear whether RT training frequency influences these findings in older adults.
Therefore, the purpose of this study was to: (i) investigate the effects of a 9-month RT intervention with different training frequencies on exercise-related motivational and volitional characteristics, assessed here by motivation, self-efficacy, and action and coping planning, and (ii) investigate whether their absolute level and/or intervention-induced change predicted self-directed RT continuance 1 year after the intervention. Based on the review of the literature, it was hypothesized that RT would improve motivational and volitional characteristics in older adults and that higher levels of exercise motivation and self-efficacy would predict RT continuance after the intervention.
The present study was a secondary analysis of a randomized controlled trial "Get in Shape in the Team Research" (clinicaltrials.gov register number NCT02413112). Previous studies that have published the physical outcomes of the trial showed that maximum strength 24 and cardiorespiratory fitness 25 improved after 3 months of training, and that a higher training frequency provided greater benefit for maximum dynamic strength, but not for functional capacity, over the 9-month intervention. 26

| METHODS
| Study design and participants
This study was a secondary analysis of a parallel-group randomized controlled trial. Ethical approval for the study was obtained from the Ethical Committee of the University of Jyväskylä and all participants signed written informed consent prior to participation. The study design, participants, and intervention have been described in detail in previous studies. 24,26 Participants were randomly assigned to 4 groups: RT once-a-week (RT1, n = 26), RT twice-a-week (RT2, n = 27), RT 3 times-a-week (RT3, n = 28), and a non-training control group (CG, n = 25). The pre-trial power analysis for the primary outcomes of the study (maximum strength and functional capacity) was based on the effect sizes reported in a meta-analysis by Liu and Latham. 1 With a 75:25 intervention-to-control ratio, a sample size of 60 for strength and 88 for functional capacity was sufficient to reach an 80% probability of observing a treatment difference at a 5% level of significance.
The flow diagram of the study is shown in Figure 1. Participants were selected by random sampling from a population register of community-dwelling older adults aged 65-75 living in the Jyväskylä area. Recruitment occurred in October-December 2014. Inclusion criteria were: (a) leisure-time aerobic exercise less than 3 h/wk, (b) no previous regular RT experience, (c) BMI < 37, (d) no previous testosterone-altering treatment, (e) no serious cardiovascular disease that might affect participating in RT, (f) no medication related to the neuromuscular or endocrine systems, (g) capability to walk without walking aids, and (h) no smoking. Two thousand invitation letters were sent, with a response rate of 23% (n = 454). Those who met the inclusion criteria (n = 148) were invited to an information session, and 116 participants attended a physician's examination: 8 persons were excluded because of medical reasons and 2 persons were not interested in participating. Hence, 106 participants remained for randomization. Randomization and allocation were made by the principal investigator of the study, who was not involved in data collection. Randomization was performed with an online random number generator in a block of 100 participants for the 4 groups, 25 participants for each. Because it was assumed that a higher training frequency is likely to increase non-compliance and dropout rates, 3 of the remaining 6 participants were randomized to RT3, 2 to RT2, and 1 to RT1. Two participants dropped out from CG because of the randomization result.
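For illustration, the allocation logic described above can be re-implemented as a short block-randomization routine. This is a minimal sketch under our own assumptions (the original allocation used an online random number generator; the function, seed, and data layout here are not the authors' code):

```python
# Illustrative sketch (not the authors' code): block randomization of 106
# participants into 4 groups of 25, with the remaining 6 allocated 3:2:1:0
# to RT3, RT2, RT1 and CG, as described in the text.
import random

def randomize(participant_ids, seed=None):
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    groups = {"RT1": [], "RT2": [], "RT3": [], "CG": []}
    # First block of 100: 25 participants per group.
    block, remainder = ids[:100], ids[100:]
    for name, chunk in zip(groups, [block[i:i + 25] for i in range(0, 100, 25)]):
        groups[name].extend(chunk)
    # Remaining 6: weighted toward higher frequencies to offset expected dropout.
    extra_plan = ["RT3"] * 3 + ["RT2"] * 2 + ["RT1"] * 1
    for pid, grp in zip(remainder, extra_plan):
        groups[grp].append(pid)
    return groups

allocation = randomize(range(1, 107), seed=42)
print({g: len(members) for g, members in allocation.items()})
# -> {'RT1': 26, 'RT2': 27, 'RT3': 28, 'CG': 25}
```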
| Intervention
Training groups (RT1, RT2, and RT3) participated in a 9-month supervised whole-body RT intervention at the Faculty of Sport and Health Sciences gym, University of Jyväskylä. The exact training program has been previously reported, 24,26 but briefly, each 1-hour training session included a 10-minute warm-up and 8-9 exercises for different muscle groups (eg leg press, knee extension and flexion, chest press, pulldown, pushdown, pec deck, ab crunch, and back extension) and was supervised by trained personnel. All training groups followed an alternating 2-session training program throughout the intervention. Months 1-3: To become familiar with RT and to build capacity for high-load RT, all training groups (RT1, RT2, and RT3) participated in supervised RT twice-a-week for the first 3 months. The focus of this initial period was on muscular endurance using low loads, with participants performing 2-3 sets of 14-20 repetitions with 0-1-minute rest between sets for each exercise. Months 4-9: The training groups started to exercise with different frequencies: completing the 2-session program took 2 weeks for RT1 and 1 week for RT2, while RT3 completed 3 cycles in 2 weeks. The frequency-specific 6-month training period focused on developing muscle hypertrophy and maximum strength. The participants performed 4-5 sets of 4-12 repetitions with 1-3-minute rest between sets. CG was instructed not to change their lifestyle during the intervention, and after the month-9 measurements they had the opportunity to participate in supervised RT twice-a-week for 6 months.

FIGURE 1 Flowchart of the study.
After the 9-month intervention, participants of the training groups could no longer train in the same gym as during the intervention and had to arrange possible continuance of training by themselves. To lower the threshold to continue training independently, participants were given a list of possible low-cost gyms in the city area by the principal investigator upon completion of the month-9 tests.
| Measurements
Motivational and volitional characteristics were measured with computer-based questionnaires related to exercise or physical activity in general (not specifically RT).
Exercise self-efficacy was measured by 10 questions developed by Schwarzer: 5 questions evaluated action and coping self-efficacy 12 and 5 questions barrier self-efficacy. 27 These scales have satisfactory psychometric properties 12,27 and have been used among Finnish adults and older adults. 28,29 Participants were asked to evaluate how confident they were to maintain exercise in different situations related to adoption (eg "even if I have to make a detailed plan to exercise") and to overcome barriers (eg "even when I am busy"). The answers were given on a scale from 1 = very certain I cannot to 4 = very certain I can, and aggregated. Cronbach's alphas were 0.91, 0.92, and 0.92 at baseline, month-3, and month-9, respectively.
Action planning and coping planning to exercise were both assessed with 4 questions. 10 Participants were asked whether they had a detailed plan when, where, how, and how often to exercise (action planning) and how to combat the different barriers and obstacles to exercise (coping planning). The scale was from 1 = not at all true to 4 = very true. The scale has good psychometric properties in a wide population age range. 10 Cronbach's alphas were 0.89, 0.90, and 0.89 for action planning and 0.90, 0.91, and 0.86 for coping planning at baseline, month-3, and month-9, respectively.
Motivation for physical activity and training was measured with the Exercise Self-Regulation Questionnaire (SRQ-E). 30 The questionnaire has shown adequate psychometric properties among different populations. [31][32][33] The questionnaire has two parts: 16 statements measure motivation regarding physical activity (eg "I try, or would like to try, to be physically active regularly because I enjoy physical activities") and 12 statements regarding training (eg "I exercise/work out (or would like to work out) because I feel pressure to work out"). The scale was from 1 = not at all true to 7 = very true. The mean scores for each motivational regulation (external, introjected, identified, and intrinsic) were calculated separately for physical activity and training. Cronbach's alphas showed an adequate fit 34 (range 0.70-0.92) for all 4 regulation styles for physical activity and for 3 regulation styles for training. The exception was introjected regulation for training: Cronbach's alphas were 0.47 at baseline, 0.57 at month-3, and 0.39 at month-9. The 3 items measuring introjected regulation for training were "I exercise/work out because I would feel bad about myself if I didn't do it", …"I'd be afraid of falling too far out of shape if I didn't", and …"I feel pressured to work out." Deleting any of these items would not have improved alpha levels.
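For readers who want to check internal-consistency figures of this kind, the sketch below computes Cronbach's alpha and the "alpha if item deleted" values for a small item set. The data layout, function names, and example responses are our own assumptions, not the authors' SPSS procedure:

```python
# Minimal sketch (assumed data layout, not the authors' SPSS output): Cronbach's
# alpha and "alpha if item deleted" for a small item set such as the 3
# introjected-regulation items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: responses with shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def alpha_if_deleted(items: np.ndarray) -> list:
    """Alpha recomputed after dropping each item in turn."""
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(items.shape[1])]

# Hypothetical responses (1-7 scale) from 5 participants on 3 items.
responses = np.array([[5, 4, 2], [6, 5, 3], [3, 3, 4], [4, 5, 2], [6, 6, 5]])
print(round(cronbach_alpha(responses), 2),
      [round(a, 2) for a in alpha_if_deleted(responses)])
```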
Leisure-time aerobic exercise was estimated at baseline by a single question specifying the average weekly minutes participants engaged in leisure-time aerobic exercise. During the intervention, participants kept diaries in which daily activities, as well as their duration and intensity, were recorded. 35 The average weekly minutes of leisure-time aerobic exercise for months 1-3 and 4-9 were calculated from the diaries.
Continuance of RT: In interviews 6 months after the intervention ("follow-up 1") and 12 months after the intervention ("follow-up 2"), participants were asked whether they had continued RT. At follow-up 1, the interviews were conducted face-to-face with all training group participants who participated in the follow-up strength and functional capacity measurements (n = 66). At follow-up 2, interviews were conducted by telephone with all training group participants except the 2 dropouts and 1 participant (from RT3) who was not reachable (n = 78). In both interviews, those who had continued RT were asked about the regularity and average number of RT sessions per ordinary week (common short breaks, eg because of flu or travel, were not taken into account). Training either individually or in a supervised group was accepted. The average weekly frequency of participation in RT during the year was calculated from the values reported at follow-up 1 and follow-up 2. Those who reported that they had continued RT regularly during the year, training twice-a-week for at least 6 months and once to twice-a-week during the other 6 months (average for the year 1.75-2.5 times-a-week), were categorized as twice-a-week continuers. The cutoff value of 1.75 was used because these participants intended to participate in RT twice-a-week (and mostly achieved their intention) but there were some weeks when they trained only once-a-week due to holiday, illness, or some other reason. Those who reported that they had continued RT regularly during the year at least once-a-week but not twice-a-week throughout the year (average for the year 1-1.5 times-a-week) were classified as once-a-week continuers. All others, including both participants who had not continued RT at all and participants who had continued less than once-a-week on average (eg some participants reported training approximately once-a-month), were classified as non-continuers.
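The categorization above reduces to a threshold check on the averaged weekly frequency. The following is a minimal sketch of that rule; the thresholds come directly from the text, while the function name and the handling of frequencies the text does not mention (eg averages between 1.5 and 1.75) are our own assumptions:

```python
# Sketch of the continuer categorization described above. Thresholds come from
# the text; the treatment of frequencies the text does not cover (eg 1.6/wk)
# is an assumption.
def classify_continuer(avg_weekly_rt: float) -> str:
    """avg_weekly_rt: mean RT sessions per week over the 1-year follow-up,
    averaged from the follow-up 1 and follow-up 2 interview reports."""
    if 1.75 <= avg_weekly_rt <= 2.5:
        return "twice-a-week continuer"
    if 1.0 <= avg_weekly_rt <= 1.5:
        return "once-a-week continuer"
    return "non-continuer"  # no RT, or less than once-a-week on average

for freq in (0.0, 0.25, 1.0, 1.5, 1.75, 2.5):
    print(freq, "->", classify_continuer(freq))
```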
| Statistical analysis
Analyses were performed using IBM SPSS Statistics 24.0. A significance level of P < .05 was used in all analyses. The first study aim, to investigate the effects of the intervention on motivational and volitional characteristics, was analyzed following an intention-to-treat principle, with the exception of the 2 participants who dropped out immediately after randomization, before the baseline measurements of motivational and volitional characteristics. Differences between groups at baseline were analyzed by one-way ANOVA for continuous variables and the Chi-Square test for categorical variables. The generalized estimating equations (GEE) method with an unstructured working correlation matrix was used to analyze group × time differences between groups. Based on the structure of the intervention, 3 sets of GEE analyses were performed: (a) baseline to month-3, between CG and the combined training group (RT1, RT2, and RT3); (b) month-3 to month-9; and (c) baseline to month-9, between CG, RT1, RT2, and RT3. Because the amount of leisure-time aerobic exercise (min/wk) was not stable in all groups during the intervention (assessed by physical activity diaries), the GEE analyses were repeated and adjusted for the amount of leisure-time aerobic exercise. The diaries were missing for 7 participants (1 from each RT group and 4 from CG), hence those participants were excluded from the adjusted analyses. The results of the analyses with and without aerobic exercise adjustment were similar, therefore only the adjusted GEE results are shown in the Results section. In addition to the GEE analyses, standardized effect sizes for differences in change scores between groups were calculated with Cohen's d formula. 36 The changes in motivational and volitional characteristics during the intervention were calculated by subtracting the baseline value from the intervention completion (month-9) value. Within-group differences were analyzed by paired sample t-tests.
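As an illustration of this analysis pipeline, the sketch below fits a group × time GEE on long-format data and computes Cohen's d for between-group differences in change scores. It is an approximation rather than the authors' SPSS syntax: the column names are hypothetical, and an exchangeable working correlation is used here in place of the paper's unstructured matrix.

```python
# Sketch of a group x time GEE analysis on long-format data, broadly analogous
# to the SPSS analysis described above. Column names are assumptions, and an
# exchangeable working correlation stands in for the unstructured matrix.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_gee(df: pd.DataFrame):
    """df columns: subject, group (CG/RT1/RT2/RT3), time (0, 3, 9 months),
    aerobic_min_wk (adjustment covariate), score (eg action planning)."""
    model = smf.gee(
        "score ~ C(group) * C(time) + aerobic_min_wk",
        groups="subject",
        data=df,
        cov_struct=sm.cov_struct.Exchangeable(),
        family=sm.families.Gaussian(),
    )
    return model.fit()

def cohens_d(change_a, change_b) -> float:
    """Standardized effect size for the difference in change scores."""
    a, b = np.asarray(change_a, float), np.asarray(change_b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd
```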
The second study aim was to investigate predictors of continuing RT after the intervention. Both the absolute level at post-intervention (month-9) and the change during the intervention (month-9 value minus baseline value) of the motivational and volitional characteristics were analyzed as possible predictors. Differences in possible predictors between non-continuers, once-a-week continuers, and twice-a-week continuers were analyzed by one-way ANOVA and Chi-Square tests.
| Descriptive statistics
The flow of the participants throughout the study is shown in Figure 1. Three participants dropped out from CG, 1 from RT1, and 1 from RT2 (Figure 1). The dropout from RT1 was due to back pain induced by the strength testing in month-6, and the other dropouts occurred due to health (acid reflux events, stress-related high blood pressure, cancer recurrence) or personal issues unrelated to the study.
There were some adverse events, as judged by the investigators, in the study. Adverse effects were collected by self-reports: participants reported more serious ones directly to the principal investigator and documented minor ones in their training diaries. The most serious adverse event was the back pain induced by the strength testing. One participant strained a hamstring muscle walking home after the strength measurements at month-3 and could continue the training program but could no longer participate in maximum muscle strength measurements. Based on information collected from the training diaries, about one-third of the participants in the training groups had some temporary pain or soreness, for example, in the knee joint, shoulder, or back. For these participants, the training program was adjusted (eg use of lighter loads, modified exercise technique) for a maximum of 2-3 weeks. These discomforts were judged to be part of the adaptation process, and indeed these feelings were short-term and did not adversely affect participation rates during the 9-month intervention period.
Participants' demographic characteristics and baseline values of the motivational and volitional characteristics are presented in Table 1. The only significant difference between groups at baseline was in coping planning (one-way ANOVA, F(3) = 3.30, P = .024), which had a higher value in RT1 compared to RT3.
| Changes in exercise-related motivational and volitional characteristics during the intervention
The effects of the intervention on motivational and volitional characteristics are presented in Table 2. The mean values at each measurement point and effect sizes for mean change between the groups can be found in the supporting information. After 3 months of intervention, there were significant group × time effects for exercise self-efficacy, action and coping planning, and intrinsic motivation related to training. From month-3 to month-9, there were no statistically significant between-group differences. Throughout the 9-month intervention, changes occurred in several measured motivational and volitional characteristics. Action planning improved in all training groups compared to CG. Coping planning and intrinsic motivation related to physical activity improved in RT2 and RT3 compared to CG. Both RT2 and RT3 increased their intrinsic motivation related to training, and RT2 also increased introjected regulation related to physical activity compared to CG and RT1. The changes in these variables are shown in Figure 2.

TABLE 1 Participants' demographic, motivational, and volitional characteristics at baseline. Mean values with standard deviations in parentheses, or frequencies, are presented.

TABLE 2 Effect of the intervention on motivational and volitional characteristics, analyzed by generalized estimating equations (GEE); p-values presented. Models adjusted by mean aerobic exercise (min/wk) from (a) months 1-3, (b) months 3-9, and (c) months 1-9. PA, motivational regulation related to physical activity; TR, motivational regulation related to training. Significant differences: (d) CG vs RT1 P = .019, CG vs RT2 P < .001, CG vs RT3 P < .001; (e) CG vs RT2 P = .010, CG vs RT3 P = .001; (f) CG vs RT2 P = .026, RT1 vs RT2 P = .004; (g) CG vs RT2 P = .013, CG vs RT3 P = .009; (h) CG vs RT2 P = .008, CG vs RT3 P < .001, RT1 vs RT2 P = .047, RT1 vs RT3 P = .003.
| Predictors of RT continuance
Out of 78 participants, 31 (40%) reported that they did not continue RT at all after the intervention. Five persons (6%) continued RT immediately after the intervention but quit prior to follow-up 1 (6 months after the intervention), and 6 persons continued RT occasionally (less than once-a-week on average) throughout the follow-up (12 months). Therefore, 42 persons (54%) were counted as non-continuers. Seventeen persons (22%) reported having trained regularly once-a-week on average during the whole year and 19 persons (24%) twice-a-week on average. The average weekly frequency of participation in RT varied between 1.0 and 2.5 (mean 1.58, SD 0.48). There were no significant differences between intervention groups in continuance rates (Table 3). The level of the motivational and volitional characteristics at post-intervention (month-9) and their change during the intervention (baseline to month-9) were analyzed as possible predictors of continuance of RT (Table 3). Twice-a-week continuers had a greater increase in exercise self-efficacy during the intervention than non-continuers and a greater increase in intrinsic motivation related to training compared to both non-continuers and once-a-week continuers.
| DISCUSSION
The purpose of this study was to (a) investigate the effects of an RT intervention with different training frequencies on exercise-related motivational and volitional characteristics, and (b) investigate whether the absolute level and/or change in these characteristics predict self-directed continuance of RT 1 year after the intervention. RT led to positive changes in exercise self-efficacy, planning, and autonomous motivation. Twenty-two percent of participants continued regular RT once-a-week and 24% twice-a-week during the year after the intervention. A greater increase in exercise self-efficacy and intrinsic motivation related to training during the intervention predicted continuation of RT twice-a-week during follow-up.
The effect of RT on exercise self-efficacy and motivation is in line with previous studies, [20][21][22][23]37 indicating that participation in an RT intervention can increase self-confidence to maintain exercise behavior and increase intrinsic motivation to exercise, in addition to physical outcomes. However, the improvements in exercise self-efficacy diminished after 3 months in the present study. It is plausible that at the month-3 measurements participants were more confident because the intervention was going to continue, whereas at the post-measurements they had started to think about training independently. 38 Action and coping planning in relation to RT have not been studied previously, but the results of the present study suggest that participating in an RT intervention can also increase exercise planning. In relation to motivation, a limitation of this study was that the scale for introjected regulation for training had low internal consistency (alpha values 0.39-0.57). Low alpha reliability implies that the items intended to measure introjected regulation did not measure it in a consistent way in this study sample, even though the scale has had good psychometric properties in previous studies. [31][32][33] Therefore, the results for introjected regulation for training should be treated with extreme caution.
A possible source for exercise motivation could be the improvements in strength and functional capacity during the intervention. [24][25][26] However, in the present study, improvements in strength and functional capacity did not influence changes in motivational and volitional characteristics (tested in GEE analyses adjusted by change in strength and functional capacity, the results not included in the paper). In addition, other characteristics of the intervention may have affected the results. For instance, training in small groups (8-10 persons) and having support/encouragement from the research staff may have affected motivational and volitional characteristics. Hence, even though RT interventions seem to be an effective way to increase intrinsic motivation and planning, it is not clear whether performing RT itself (ie without these potential confounding factors) led to these findings. To examine this, the relationship between RT and these characteristics should be studied external to a structured scientific, group-based intervention.
When comparing the results between the different training frequencies, there were no differences between groups from month-3 to month-9. It should be noted, however, that when comparing month-9 results to baseline, RT1 improved only their action planning compared to CG, whereas RT2 and RT3 also improved coping planning and intrinsic motivation related to both physical activity and training. Furthermore, RT2 and RT3 improved their intrinsic motivation related to training also compared to RT1. In support of these findings, effect sizes suggest that training twice- or 3 times-a-week was related to greater changes in these characteristics over the entire intervention period. It is likely that the structure of the intervention influenced these findings, which should be remembered when interpreting the results: all training groups participated in RT twice-a-week during the first 3 months and then split into different frequencies. This meant that the once-a-week group was forced to reduce their training frequency. Therefore, it is not possible to separate the effects of training frequency per se from the effects of reducing training frequency. Future studies should investigate whether the recommended RT frequency of at least twice-a-week 4 is also needed to derive the most motivational benefits. However, our results suggest that greater resistance training frequency contributes little to improved motivational and volitional characteristics, since we did not observe differences from the month-3 to month-9 measurements.
Almost half of the participants (46%) continued RT during the 1-year follow-up at least once-a-week. The continuation rate is similar to the studies by Geirsdottir et al., 39 in which ~18% of participants continued RT once- or twice-a-week and ~24% ≥three times-a-week, and Inaba et al., 40 in which 43% continued RT at least once-a-week. In both studies the continuation was measured by self-reports. A limitation of the present study is that participants were not able to continue RT in the same gym used during the intervention. Finding a new place to continue RT could have been a significant barrier for some participants and reduced the continuance rates. In any case, it is a positive finding that almost half of the participants continued training at least once-a-week, since even participating in RT only once-a-week leads to important physical improvements for older individuals compared to not participating at all. 26,41 However, another limitation of this study is that the continuance frequencies are based on self-reports and information about participants' actual gym visits was not available. For example, Van Roie et al. 19 obtained information regarding gym visits from the fitness center and found that only 20% of participants continued RT during a 4-month follow-up and none of them met the recommended twice-a-week frequency. However, even though self-reports may both over- and underestimate the actual amount of physical activity, 42 even a single question can be a useful method to estimate physical activity. 43 The present study provided some interesting findings regarding motivational and volitional characteristics and RT continuation. Firstly, contrary to previous research, 15,18 the level of the characteristics was not related to RT continuation. A possible reason for this could be that the participants of this study had relatively high values in these characteristics already at baseline (eg compared to a sample of younger physically inactive Finnish adults 28 ). In addition, it is possible that participants may have had high motivation for exercise in general but preferred physical activities other than RT. This may not have been observable in the present study because the measurements used were related to non-specific physical activity or exercise. Indeed, when exercise preferences were asked of participants at follow-up 2, over half of them mentioned some type of aerobic exercise and only 16% mentioned RT as their favorite type of exercise. Unfortunately, information about the frequency of participation in other physical activities during the follow-up period was not gathered in the present study. Secondly, increases in exercise self-efficacy and intrinsic motivation related to training during the intervention were related to continuance of RT during follow-up. Individuals with high exercise self-efficacy have better strategies and are ready to put more effort into exercising and overcoming barriers. It is possible that during regular participation in RT for 9 months, some participants realized that they could overcome the potential barriers, and this contributed to continuing RT also after the intervention. This finding supports the results of previous studies in relation to self-efficacy change and exercise continuance. 18,44 Regarding intrinsic motivation, it is possible that simply those who enjoy RT continued training after the intervention. Thirdly, there were differences only between twice-a-week continuers and non-continuers, not between once-a-week continuers and non-continuers. Moreover, twice-a-week continuers had greater increases in intrinsic motivation also compared to once-a-week continuers. This is an important finding for future studies to investigate, because RT twice-a-week is superior to once-a-week for strength gains. 45 According to the results of the present study, it can be suggested that by improving motivational and volitional characteristics related to exercise it could be possible to also increase RT participation. It has been shown that RT programs with behavioral counseling are more effective for the adoption of RT and exercise self-efficacy than basic programs including only exercise. 46,47 Therefore, physical activity interventions should also aim to improve these characteristics in order to promote the continuance of behavior after the intervention. When generalizing the results of this study, it should be taken into account that, first, only 23% of the original random sample replied to the invitation letters. As mentioned above, it is possible that the study sample was already highly motivated to participate in exercise/RT. Secondly, the sample consisted of healthy older adults aged 65-75 who did not meet the physical activity guidelines for either leisure-time aerobic exercise or RT at baseline. Therefore, the results may not necessarily be generalizable to different patient groups, all ages, or more physically active individuals. It should be noted that the amount of leisure-time aerobic exercise and RT background at baseline were measured with single questions, thus information about specific exercise patterns before the intervention is not available. Thirdly, as a secondary analysis of a randomized controlled trial, this study may have been underpowered to detect differences in motivational and volitional characteristics with this sample size.

FIGURE 2 Key findings: relative changes (± standard error of estimation) in motivational characteristics between baseline and month-3 (A), between month-3 and month-9 (B), and between baseline and month-9 (C). TR, training; PA, physical activity.

TABLE 3 notes: (a) level at post-intervention, mean (standard deviation); (b) change during the intervention (from month-0 to month-9), mean (standard deviation); (c) significant difference between non-continuers and twice-a-week continuers analyzed by ANOVA (P = .021) and Bonferroni post hoc tests; (d) significant difference between non-continuers vs twice-a-week continuers and once-a-week continuers vs twice-a-week continuers (P = .002).
| PERSPECTIVES
As resistance training is beneficial and recommended for older adults, 1-4 but only a minority (~10%) regularly participate in it, 6 it is essential to identify ways to promote participation in resistance training activities. The present study showed that a resistance training intervention can improve exercise-related motivational and volitional characteristics, and these improvements were related to continuing self-directed resistance training after the intervention. Therefore, if interventions can target and improve exercise motivation and self-efficacy, it may be possible to increase participation rates in regular resistance training in healthy older populations.
version: v3-fos-license

added: 2021-08-02T00:05:40.102Z
created: 2021-05-13T00:00:00.000
id: 236552334
metadata:
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4395/11/5/974/pdf",
"pdf_hash": "2f5853e3d326b22b42859e7950940f748f16a4bf",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2913",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "aa2e246190f06f675644a061f2fea3d2d7d4a95c",
"year": 2021
}
|
pes2o/s2orc
|
Impact of Weed Control by Hand Tools on Soil Erosion under a No-Tillage System Cultivation
In conservation agriculture, the no-tillage cultivation system and the retention of permanent vegetal cover are crucial to the control of soil erosion by water. This paper analyses the cultivation of maize under no-tillage, with particular reference to the effect produced on soil erosion when weed control is performed by a hand tool (machete), which disturbs the surface of the soil, and to the behavior of the soil cover in these circumstances. The study area is located in the humid tropical mountains of northern Nicaragua (Peñas Blancas Massif Nature Reserve). The results obtained show that 59.2% of the soil surface was affected by appreciable levels of sheet and splash erosion, although the vegetal cover of the soil was relatively high (with average weed and litter cover of 33.9% and 33.8%, respectively). The use of machetes for weed control provoked considerable soil disturbance, which explained the high rates of erosion observed. Moreover, this form of soil management disturbs the litter layer, making it less effective in preventing erosion. The litter remains loose on the soil surface, and so an increase in soil cover does not achieve a proportionate reduction in the area affected by erosion; thus, even with 80–100% weed and litter cover, 42% of the cultivated area continued to present soil erosion.
Introduction
Maize (Zea mays L.) is considered a health-giving food [1] and is fundamental to the human diet in certain regions, such as Central America [2]. According to the Food and Agriculture Organization (FAO) [3], the land surface area dedicated to the cultivation of maize in Central America increased at an average annual rate of 0.35% between 1961 and 2017, to the current total area of 9.3 million ha. This expansion in the cultivated acreage has been accompanied by a parallel rise in productivity (annual average increase of 2.35%, with crop yields improving from 974 to 2426 kg ha⁻¹ between 1961 and 2017). These advances are among the achievements of the "Green Revolution", which has intensified the productivity of cultivation systems in this region since the 1950s [4].
The cultivation of maize in Central America has been developed not only in the areas most suitable for agriculture but also in mountainous regions, where the expansion of cultivated lands has often been to the detriment of wooded and forested areas [5]. The tropical mountains are especially vulnerable to degradation, mainly due to soil erosion by water [6]. This degradation process may have severe economic and social consequences in the region. Soil erosion causes the loss of the most fertile soil layers and, consequently, their productive capacity is reduced [7], which constitutes a threat to the sustainability of small and medium agricultural producers who live in mountainous areas [8]. With climate change, the problem is expected to worsen in the coming decades [9]. The traditional cultivation system that continues to be practiced in many upland areas of Central America is based on no-tillage and on the presence of high rates of vegetal cover on the soil [10]. These features constitute the most important measures of conservation agriculture (CA) to reduce soil erosion by water [11]. With no-till, the soil structure remains intact, which makes it more resistant to splash dispersion and runoff [12]. In this regard, ground vegetal cover is of fundamental importance, for the following reasons: (1) it intercepts rainfall and reduces its kinetic energy, thus preventing the dispersion of soil aggregates; in addition, it slows runoff, which increases infiltration and reduces soil losses [13,14]; (2) the enrichment of organic matter, from the decomposition of vegetal residues and associated biological activities, improves structural stability and, consequently, increases the resistance to soil erosion [15,16].
Analysis of the effectiveness of this culture system in reducing soil erosion is usually based on the joint study of these two key measures of conservation agriculture, because they are closely related; when no-tillage is practiced, a layer of vegetal residues is normally left on the soil surface [17]. The question addressed in this paper is: what happens when one of the conservation agriculture measures, namely no-tillage, is altered? This issue is of major importance, because the situation observed under the controlled conditions of experimental farmland may be quite different in the real-world context of private property, where crop management policies might not adhere to the guidelines established for conventional systems. It is essential to take into account all of the actions carried out during the crop cycle in order to ensure the effectiveness of conservation agriculture. In this respect, weed control is of special interest. This is one of the most important management practices in the agricultural cycle, with far-reaching implications for soil conservation. Therefore, to preserve the benefits of the CA system, weed control must be performed appropriately [18]. The traditional procedure, using manual tools such as hoes (with tillage systems) and machetes (with no-tillage systems), is common in Central America [19]. With a machete, the weeds are cut at ground level, but if this is not done correctly, i.e., if the tool makes contact with the ground, the resulting impact disturbs the structure of the soil surface. We believe this action may increase erosion, by modifying the favorable conditions provided by the no-tillage system.
In order to achieve a better understanding of the sustainable use of agricultural soils, we believe it necessary to examine the effectiveness of erosion control measures within conservation agriculture in situations altered by management practices. This is a key issue from the economic perspective. Crop productivity under conservation agriculture, in comparison with conventional tillage, has been studied. Different results have been obtained, but studies in which productivity has decreased under conservation agriculture, especially in tropical environments, have been predominant [20,21]. These results could even be aggravated if erosion increases as a result of poor agricultural management during the crop cycle. In view of these considerations, the present study has the following aims: (1) to analyze the influence on soil erosion by water when weed control is conducted by machete in no-tillage maize cultivation, disturbing the soil surface; (2) to determine the influence of vegetal cover in this situation.
Crop Management
The agricultural system employed consists of a rotational cycle with two crops, maize and beans (Phaseolus vulgaris L.) (Figure 2). In this cycle, the maize is sown at the beginning of the rainy season, in April-May, and harvested in October-November. Then, at the end of the rainy season, beans are sown in November-December and harvested in February-March. In both cases, the sowing is conducted using a digging stick (espeque), and the technique is classed as no-till [24]. Sowing was performed at 0.8-1 m intervals along contour lines (rows) spaced 1.0-1.1 m apart. Weed control is performed twice yearly during maize cultivation (Figure 2), in the early stages of plant growth, because this is when the crop is most sensitive to competition from weeds for resources (water and nutrients). Throughout the region, weed control is normally performed with machetes and herbicides. First, the weeds are removed using a machete, applied very superficially; in most cases, the tool makes contact with the soil surface, causing the first 2-3 cm of the soil structure to be disturbed. Smaller weeds may escape this mechanical action, and therefore the control process is continued with the application of herbicide, usually paraquat or glyphosate.
Weeds are not controlled during bean cultivation because their development is limited by the arid conditions of the dry season. The most frequent fertilizer treatments are the application of NPK (15:15:15 or 18:46:0), during the first weed control, or of urea, twice yearly.
For the purposes of this study, samples were obtained during August, within 15 days of the second weed control.
Figure 2. Maize-bean rotational cultivation system and sampling period.
Soil Erosion Analysis and Calculation of Ground Vegetal Cover
Sampling plots were selected by means of a preliminary reconnaissance phase in the study area. This phase consisted of a farm-by-farm survey of the area, in order to diagnose the erosive situations currently affecting the soil as a result of agricultural activity. The aim of this reconnaissance activity was to establish the research goals and to select representative plots in which to conduct the study. These plots were homogeneous in terms of the physical factors (mainly slope gradient and soil type) that may influence erosion processes. Afterwards, the soil analyses confirmed that the study plots had homogeneous characteristics. The sampling was carried out after a rainfall event, for which rain gauges were installed in the study area in order to plan the sampling work.
Soil erosion by water was analyzed using the method described by Blanco [25], who studied the erosive state of the soil (i.e., the percentage of land area affected by different types of erosion and by surface processes) and the soil loss (in m³ ha⁻¹). The erosive state was analyzed by an improved version of the Stocking and Murnaghan method [26], based on visual indicators of erosion. Soil loss was calculated by volumetric measurements of the rills and gullies [27]. Table 1 shows the types of erosion and the surface processes observed, coded by an index (capital letter) to indicate the type of process and by a subscript that provides complementary information, such as the type of erosion and the type of protective vegetal cover.

Table 1. Processes and types of erosion affecting soil in the cultivation of maize (adapted from Blanco [25]).

A line transect sampling method was used to analyze the area affected by erosion, to calculate the level of ground cover and, using grids, to determine the volume of soil loss. For each sampling plot, three transects and three grids were established, at random, as described in Figure 3 and its accompanying text.

Figure 3. Sampling procedure to establish soil erosion and ground cover (adapted from Blanco [25]). The transects and grid squares originated from point 0 in the sample plot. The distances along the x-y axes were randomly determined by the blind extraction of numbered balls. Each transect was 10 m long and samples were taken at 25 cm intervals (in total, there were 40 observation points of the erosion indicators and of ground cover per transect, i.e., 120 observations were made per plot, and 1920 in the total of the 16 plots studied). The grid squares measured 3 × 3 m and were used to measure the length, width and depth of all the rills and gullies present.
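To make the arithmetic behind these measurements concrete, the following is a minimal sketch (our own illustration, not the authors' code) of how the share of eroded surface can be derived from the transect observations and how rill or gully volumes measured in the 3 × 3 m grids scale to soil loss in m³ ha⁻¹; which codes count as eroded and the data layout are assumptions:

```python
# Sketch of the arithmetic implied by the sampling scheme above (illustrative
# assumptions, not the authors' procedure): share of transect points affected
# by erosion, and soil loss from rill/gully measurements scaled to m3/ha.
def erosion_share(point_codes, eroded_codes=("Ms", "Mi", "Ts")):
    """point_codes: erosion/process code recorded at each 25-cm transect point."""
    affected = sum(1 for c in point_codes if c in eroded_codes)
    return 100.0 * affected / len(point_codes)

def soil_loss_m3_per_ha(rills, sampled_area_m2=3 * 3 * 3):
    """rills: iterable of (length_m, width_m, depth_m), one per rill/gully
    measured in the three 3 x 3 m grid squares of a plot."""
    volume_m3 = sum(l * w * d for l, w, d in rills)
    return volume_m3 * 10_000.0 / sampled_area_m2  # scale to one hectare

# Example: 120 observation points per plot and a few hypothetical rills.
codes = ["Ms"] * 40 + ["Mi"] * 30 + ["Nw"] * 25 + ["Nl"] * 25
print(erosion_share(codes))                        # 58.3% of points affected
print(soil_loss_m3_per_ha([(1.2, 0.10, 0.05)]))    # ~2.2 m3/ha
```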
Research Design and Statistical Analysis
In the present study, a non-experimental explanatory sequential mixed method was used, following Creswell [28]. In this two-step approach, both quantitative and qualitative analyses were performed. In the first step, the quantitative analysis, the results were examined by tests of bivariate correlation (Spearman correlation coefficient, rho) and by a nonparametric analysis of variance (the Kruskal-Wallis test). These analyses were conducted using IBM SPSS 25.0 software (IBM Corp, Armonk, NY, USA). The subsequent, qualitative, phase involved field observation of the visual indicators of the study event, seeking a more detailed understanding of the results obtained in the quantitative analysis.
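The same two tests can also be run outside SPSS. The sketch below applies them with SciPy to a per-plot data frame; the column names and grouping variable are hypothetical, and the figures reported in this paper come from the SPSS analysis described above:

```python
# SciPy equivalent of the two quantitative tests described above (column names
# are assumptions; the published results were obtained with IBM SPSS 25.0).
import pandas as pd
from scipy import stats

def quantitative_step(df: pd.DataFrame) -> dict:
    """df: one row per plot, with columns 'erosion_pct', 'gc_total' (ground
    cover, %) and a categorical 'gc_interval' (cover interval class)."""
    rho, p_rho = stats.spearmanr(df["gc_total"], df["erosion_pct"])
    groups = [g["erosion_pct"].values for _, g in df.groupby("gc_interval")]
    h, p_kw = stats.kruskal(*groups)
    return {"spearman_rho": rho, "spearman_p": p_rho,
            "kruskal_H": h, "kruskal_p": p_kw}
```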
In this study, it was not possible to apply an experimental research design because the necessary circumstances for this did not exist: (a) on the one hand, the crop management system in the study area was homogeneous, and so there were no cases in which weed control was carried out differently from that described; (b) furthermore, the study area consisted of small farms, whose owners were very reluctant to cede part of their land for experimental research.
The State of Soil Erosion and the Ground Cover
In the maize crops studied, the mean area affected by erosion (Et) was 59.2 ± 5.0% (Table 2). The main types of erosion observed in soils that had previously been disturbed by the use of machetes for weed control were sheet erosion (Ms) (33.3 ± 4.4%) and splash erosion (Mi) (24.6 ± 5.9%). The other processes affecting the soil (trampling and deposition) had little perceptible influence on erosion. The crop area that remained unaffected by erosion (Ene) was 37.2 ± 5.3%. The two types of land cover (weeds and vegetal residue) are assumed to provide the same degree of protection against soil erosion, as the same rates of erosion were observed in both cases: 13.5 ± 2.5% in soils protected by weed cover (Nw) and 13.7 ± 2.7% in those protected by a layer of plant litter (Nl). No soil losses in the form of rills or gullies were observed. The mean level of ground cover in the maize crops studied was 67.7 ± 4.2%, divided equally between weeds (33.9 ± 5.7%) and vegetal residues (33.8 ± 4.7%) (Table 3). Weed control with machetes did not produce soil inversion, and so high rates of ground cover were maintained. However, the impact of the machete on the ground, as well as disturbing the soil, reduced the stability of the litter layer. These residues continued to be present on the soil surface, but in most cases were loosened, and thus became more susceptible to runoff (Figure 4).

Table 3. Ground cover in maize cultivation plots (n = 16).
Characteristics | Abbreviation | Mean ± Standard Error
Ground cover by weeds (%) | GCW | 33.9 ± 5.7
Ground cover by litter (%) | GCL | 33.8 ± 4.7
Total ground cover (weeds and litter) (%) | GCW-L | 67.7 ± 4.2

Figure 4. Visual indicators of the reduced stability of the litter layer after weed control: view of the whole plot (a); detail of the ground before weed control (b); weed control with machete (c); detail of the aspect after weed removal, with vegetal residues loose on the surface (d).
Influence of Ground Cover on Soil Erosion
The correlation analysis performed revealed no relationship between soil erosion and ground cover (GC). The surface area affected by erosion did not present any significant correlation with GC by weeds (rho = −0.20, p = 0.457), with GC by litter (rho = −0.20, p = 0.460) or with total GC (rho = −0.27, p = 0.304) (Table 4). Graphical analysis of the erosion behavior, according to GC intervals by weeds and litter, showed that erosion decreased slightly as GC increased (Figure 5). Thus, soil erosion fell from 65.5% to 54.7% when GC by weeds increased by 60%, and from 64.3% to 51.7% with a 60% increase in GC by litter. A larger decrease was only achieved when weed and litter cover were considered jointly. Then, erosion fell from 58.7% to 41.9% when total GC increased from 40% to 100%. However, the erosion reduction observed with increased GC was insufficient for statistically significant differences to become apparent. Analysis of variance showed there were no significant differences in erosion between the different intervals of ground cover, in any of the situations studied as a clustering variable: GC by weeds (chi-square = 1.35, p = 0.509), GC by litter (chi-square = 0.62, p = 0.735) and total GC (chi-square = 2.11, p = 0.348) (Table 5).

Table 5. Kruskal-Wallis test of the relation between erosion and ground cover in maize cultivation plots (n = 16).
Ground Cover Types | Cover Intervals (%) (n for Each Interval) | Results
Discussion
The study results obtained show that the sample plots were subject to the effects of sheet and splash erosion, but there was no loss of soil in the form of rills and gullies. When soil is disturbed it loses cohesion and this may favor the concentration of runoff water [29]. For this reason, we hypothesized that disturbing the soil with a machete during weed control might result in the formation of rills and gullies. In the study area, however, this disturbance was only superficial, affecting the first 2-3 cm, and did not invert the soil mass, and so the vegetal cover was not incorporated into the soil. In consequence, the level of GC remained relatively high, impeding the concentration of runoff and the formation of rills and gullies. Prosdocimi et al. [30] analyzed the role of litter cover on erosion and concluded that this factor is very effective in reducing surface runoff and in increasing water infiltration into the soil, which reduces its susceptibility to erosion via rills and gullies.
No correlation was observed between the ground cover and the water erosion. For this reason, even when the level of GC was relatively high, soil disturbance led to a considerable area of the soil surface being affected by splash and sheet erosion. Disturbed soil is less resistant to the impact of rainfall [31], which accounts for the large proportion of the soil surface affected by splash erosion (24.8%). Weed cover is known to protect the soil against this impact [32], but it is not as effective against runoff, in which case litter cover plays a more significant role [19]. Weed control by machete disturbed not only the soil, but also the litter layer, reducing its effectiveness in preventing sheet erosion. Vegetal residues must not only be present but also stable on the soil surface, and our study showed that the management of ground cover with the use of machetes disturbed and loosened the litter layer, thus reducing its effectiveness in preventing erosion. Under these circumstances, it can be understood why water erosion did not show a correlation with soil cover. A comparison of our results with those obtained in a previous study is illustrative of this consideration. In the sample plots considered in this study, with GC by litter of 33.8%, the surface area affected by sheet erosion (Ms and Ts) was 34.3%, whereas 13.7% of the area under plant litter (Nl) was not subject to erosion. This finding contrasts with that obtained by Blanco and Aguilar [33] in a study of bean cultivation with a no-till system in the same area as that addressed in the present study (hence, the environmental conditions were the same). Moreover, rainfall patterns were similar during the sampling periods of both investigations. In the earlier study, only 1.9% of the study area was affected by sheet erosion, whereas 36.2% of the area under plant litter was unaffected by erosion, 22.5 percentage points more than in our case, despite the similar level of litter cover (35.4%).
Our study results show that the ground vegetal cover arising from the type of soil management performed, i.e., the greater presence of weed cover and litter, did not reduce the area of soil affected by erosion in the same proportion. Weed and litter cover, considered individually, did not reduce the area affected by erosion to below 50%, despite GC levels of up to 60%. Even when GC (weed and litter) reached 100%, the area affected by erosion only decreased to 42%. These results are far from those obtained by Blanco and Aguilar [33], in whose study of no-tillage bean cultivation the presence of 65% of litter cover reduced the area affected by erosion to around 20%.
The findings presented in this paper clarify the role played by ground cover, both that of living plants and that of vegetal residues, in protecting the soil from erosion. Previous research in this field has shown that, apart from GC, erosion is subject to various other influences [30,34] that should be taken into account in order to achieve effective erosion control. Our results demonstrate that the litter layer in itself did not protect the soil from erosion; this contrasts with previous reports [33,[35][36][37][38]. In fact, the disturbance of the surface structure of the soil when weed control is carried out by machete was the factor that really prevented effective erosion control, even when the litter layer covered 60% of the soil surface, a value that rises to 100% when the weed cover is also taken into account.
Undoubtedly, the GC factor is important, but it seems that this is not enough. Our results show that the soil structure must remain undisturbed if effective erosion control is to be achieved. Arshad et al. [39] and Zhang et al. [40] corroborate this view, concluding that low rates of erosion are observed in crops with no-tillage systems due to their greater structural stability and the greater macroporosity of the surface when the soil structure remains intact. These conditions favor the hydrological properties of the soil, thus facilitating infiltration and reducing soil losses. In this regard, Lal [41] emphasized the importance of employing appropriate techniques of soil management in order to avoid disturbing the soil surface. For this study area, we suggest the use of very sharp machetes, with a cutting angle as parallel as possible to the ground surface. To facilitate accurate cutting, the farmer can use a stick that can serve as a guide to lift the weeds and open the spaces for cutting. In this way, disturbing the soil surface could be avoided. A video is attached (link: https://youtu.be/vogi8ktZs6o [42]) that illustrates the use of a machete for weed control in coffee crops with an adequate cutting angle and the support of a stick guide. This handling procedure can be perfectly adapted to maize cultivation.
Conclusions
Our study results show that weed control with machetes is the decisive factor explaining the large area of the sample plots affected by erosion (59.2%). The positive effects of no-tillage cultivation on erosion control are negated by the soil disturbance produced by the use of machetes, and the influence of the ground cover is insufficient to counteract this outcome. Thus, a high proportion of the land area continued to be affected by erosion: 57% with litter cover of up to 60%, and 42% with litter and weed cover of up to 100%. These figures are far from those reported in a previous study of no-tillage cultivation without soil disturbance in the weed control process, in which 20% of the total area was affected by erosion when the level of litter cover was 65%. When machetes are used in weed control, loose residues are deposited on the soil surface and their capacity to prevent erosion is reduced. For this reason, no correlation was observed between the increased proportion of ground cover and the reduction in erosion.
The study results presented in this paper show that the effectiveness of conservation agriculture in controlling erosion depends on the simultaneous maintenance of two conditions: not disturbing the soil structure and maintaining high levels of ground cover. When the first condition ceases to apply, the second, by itself, is unable to control erosion.
Application of Lecitase® Ultra-Catalyzed Hydrolysis to the Kinetic Resolution of (E)-4-phenylbut-3-en-2-yl Esters
The possibility of using Lecitase® Ultra as a novel alternative biocatalyst for the kinetic resolution of model racemic allyl esters of (E)-4-phenylbut-3-en-2-ol, the acetate (4a) and the propionate (4b), through their enantioselective hydrolysis was investigated. The reaction afforded the (+)-(R)-alcohol (3) and unreacted (−)-(S)-ester (4a or 4b). Hydrolysis of propionate 4b proceeded with higher enantioselectivity than that of acetate 4a. (R)-Alcohol (3) with the highest enantiomeric excess (93–99%) was obtained at 20–30 °C by hydrolysis of propionate 4b, while the highest optical purity of the unreacted substrate was observed for (S)-acetate 4a (ee = 34–56%). The highest enantioselectivity was found for the hydrolysis of propionate 4b catalyzed at 30 °C (E = 38). The reaction carried out at 40 °C significantly lowered the enantiomeric excess of the produced alcohol 3 and the enantioselectivity of the resolution. Lecitase® Ultra catalyzed the enantioselective hydrolysis of allyl esters 4a,b according to Kazlauskas' rule to produce (R)-alcohol 3 and can find application as a novel biocatalyst in processes of kinetic resolution of racemic allyl esters.
Introduction
Enantiomers of alcohols are widely used as building blocks in the synthesis of many biologically active compounds or directly as medicines [1], antifeedants [2], odorants [3], or pheromones [4]. The well-established method for the production of enantiomerically enriched alcohols is their enzyme-catalyzed transesterification or the hydrolysis of their racemic esters. For this purpose, commercially available lipases are commonly used to afford high enantioselectivity and enantiomeric purity of the products [5–8], but new biocatalysts are also being developed, among others phospholipases.
Development of new biocatalysts useful in the kinetic resolution of racemic mixtures is still a challenge in biotransformation. Enzymes belonging to the class of phospholipases seem to be good candidates as substitutes for lipases because of the similarity in their mechanism of action and interfacial activation, and their ability to hydrolyze ester bonds in a wide range of structurally different substrates. In nature, phospholipases catalyze the hydrolysis of phospholipids to their lyso form [9]. One of the enzymatic preparations with phospholipase A1 activity is the cheap and easily accessible Lecitase® Ultra, which is the product of the fusion of the genes of the lipase from Thermomyces lanuginosus and the lipase from Fusarium oxysporum. It combines the stability of the lipase with the activity of the phospholipase. Designed mainly for the industrial degumming of vegetable oils [10], Lecitase® Ultra has also been applied to the production of structured phospholipids, owing to its ability to catalyze acidolysis or interesterification [11,12]. Nevertheless, little effort has been made to exploit its lipase activity in asymmetric synthesis. Up to now, owing to its hydrolytic activity, Lecitase® Ultra has found application in the resolution of racemic esters, that is, esters of 2-hydroxy carboxylic acids [13–15], glycidate esters [15,16], and N-acetyl-α-amino acid methyl esters [15], as well as in the asymmetric hydrolysis of dimethyl 3-phenylglutarate [17]. This enzyme has also been used for the regioselective hydrolysis of peracetylated mono- and disaccharides [18].
Results and Discussion
The substrates of the enzymatic reactions, racemic (E)-4-phenylbut-3-en-2-yl acetate (4a) and propionate (4b), were obtained through a three-step synthesis. In the first step, benzaldehyde 1 was subjected to Claisen-Schmidt condensation to obtain the α,β-unsaturated ketone 2, which was reduced with sodium borohydride to the racemic allyl alcohol 3. The alcohol was treated with acetyl or propionyl chloride to afford the corresponding esters 4a,b (Scheme 1).

Scheme 1. Three-step synthesis of racemic acetate 4a and propionate 4b.
Because of the inseparability of the enantiomers of alcohol 3 during chiral gas chromatography (CGC), samples taken from the enzymatic reaction were directly treated with the corresponding acyl chloride before analysis. In the case of hydrolysis of acetate 4a, the produced alcohol 3 was derivatized into the propionate and the unreacted acetate was unchanged. A similar procedure was applied in the case of hydrolysis of propionate 4b, where alcohol 3 was transformed into the acetate. This method allowed us to determine the enantiomeric composition of the unreacted ester and the produced alcohol (as its ester derivative) in one CGC analysis. The described procedure was successfully applied in our earlier studies on the transesterification of allyl alcohols [21,27].
The first experiments on the hydrolysis of esters 4a and 4b with Lecitase® Ultra as a biocatalyst (Scheme 2) were carried out at room temperature (20 °C). In the case of resolution of acetate 4a, after 24 h the conversion was 14% and it successively increased in time to reach 40% after 168 h (Figure 1A). The enantiomeric excess (ee) of the resulting alcohol 3 after 24 h was 92% and it decreased in time to reach 85% after 168 h (Figure 2A). Significantly lower enantiomeric purity was observed for the unreacted acetate 4a: in the first 24 h of the process its enantiomeric excess was only 15%, while a significant increase (up to 56%) was noticed after 168 h. The enantioselectivity of the resolution was moderate (E = 22, Table 1, Entry 1). Footnotes to Table 1: (1) conversion, c = ee_s/(ee_s + ee_p); (2) the enantiomeric ratio, calculated at the highest conversion rate according to E = ln[(1 − ee_s)/(1 + ee_s/ee_p)] / ln[(1 + ee_s)/(1 + ee_s/ee_p)], where ee_s is the enantiomeric excess of the unreacted ester and ee_p that of the alcohol.
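For readers who want to check the Table 1 footnote expressions, the short Python sketch below (not part of the original paper) evaluates the conversion and the enantiomeric ratio from the measured enantiomeric excesses; the example values are those quoted in the text.

```python
import math

def conversion(ee_s, ee_p):
    """Conversion c from the ee of the unreacted ester (ee_s) and of the alcohol (ee_p)."""
    return ee_s / (ee_s + ee_p)

def enantiomeric_ratio(ee_s, ee_p):
    """Enantiomeric ratio E; equivalent to the Table 1 footnote expression
    after substituting c = ee_s/(ee_s + ee_p)."""
    c = conversion(ee_s, ee_p)
    return math.log((1.0 - c) * (1.0 - ee_s)) / math.log((1.0 - c) * (1.0 + ee_s))

# Acetate 4a at 20 degC after 168 h: ee_s = 56%, ee_p = 85%
print(round(conversion(0.56, 0.85), 2))        # ~0.4  -> ~40% conversion
print(round(enantiomeric_ratio(0.56, 0.85)))   # ~22   -> E = 22 (Table 1, Entry 1)

# Propionate 4b at 30 degC: ee_s = 32%, ee_p = 93%
print(round(enantiomeric_ratio(0.32, 0.93)))   # ~38   -> E = 38 (Table 1, Entry 5)
```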
Under the same conditions, propionate 4b was hydrolyzed with a lower conversion degree, which did not exceed 30% after 120 h (Figure 1B), but a higher ee of alcohol 3 was determined. During the first 8 h, enantiomerically pure alcohol was produced, and after 120 h its optical purity was reduced to 90%. A significantly lower ee of the unreacted propionate 4b (28%) (Figure 2B), compared to that of acetate 4a (Figure 2A), was determined after 72 h. In comparison to the hydrolysis of acetate 4a, a slightly higher enantioselectivity of the process was observed (E = 26, Table 1, Entry 4).
The results of the Lecitase® Ultra-catalyzed hydrolysis of both racemic esters 4a,b were compared with the results obtained in our working group under the same conditions using commercially available lipase B from Candida antarctica. In the case of the resolution of racemic acetate 4a, after 6 h of reaction, approximately 50% conversion of the ester was achieved, with 80% ee of alcohol 3 and 97% ee of the unreacted acetate 4a (E = 37). In the case of the hydrolysis of racemic propionate 4b, a good resolution was achieved after 2 h: an ee of alcohol 3 of 92%, an ee of unreacted propionate 4b of 85%, a conversion degree of 57%, and high enantioselectivity (E = 65). Comparable results for the resolution of the enantiomers of acetate 4a with lipase B from C. antarctica were previously obtained by Ghanem and Schurig [26] during hydrolysis carried out in phosphate buffer (pH 6) with toluene as the solvent for the substrate. After 24 h, at 45% conversion, the ee of alcohol 3 and of the unreacted acetate 4a were 99% and 80%, respectively, with excellent enantioselectivity (E > 200).
Based on the previous studies conducted by Mishra et al. [15], confirming the thermal stability of Lecitase® Ultra in the range of 30–50 °C, the effect of temperature on the hydrolytic activity towards esters 4a and 4b was determined. For this purpose, the reactions were carried out at 30 °C and 40 °C. At 30 °C, the hydrolysis of acetate 4a proceeded in a similar way to that described for the process at 20 °C, but a lower conversion degree (32%) and lower enantioselectivity (E = 13, Table 1, Entry 2) were observed after 168 h (Figure 1A). A significantly lower ee was observed for the unreacted acetate 4a, which, after 168 h, reached only 38%. The optical purity of alcohol 3 was slightly lower than that observed at 20 °C, and decreased from 88% after 24 h to 80% after 168 h (Figure 2A). The prolongation of the reaction time did not affect the enantiomeric excesses of either alcohol 3 or acetate 4a.
A significantly negative effect on the reaction rate and decreased enantioselectivity (E = 3, Table 1, Entry 3) were noticed when the hydrolysis of acetate 4a was carried out at 40 °C. In this case, similarly to the process catalyzed at 20 °C, the highest conversion (40%) was achieved only after 168 h (Figure 1A). The highest ee of alcohol 3 (67%) was observed after 48 h at 36% conversion, while the ee of acetate 4a was only 17%. From this moment, a significant drop of the enantiomeric excess of alcohol 3 to 41% was observed after 168 h, with a simultaneous slight increase of the ee of acetate 4a to 27% (Figure 2A).

For propionate 4b, raising the temperature of hydrolysis to 30 °C had a positive effect on the enantioselectivity of the resolution (E = 38, Table 1, Entry 5). After 24 h of reaction, the enantiomeric excess of the resulting alcohol and of the unreacted substrate was 97% and 5%, respectively, but the conversion of propionate 4b was only 5%. Continuation of the reaction caused a gradual decrease of the ee of the produced alcohol and, after 144 h, it amounted to 93%. Simultaneously, the ee of the unreacted propionate reached the level of 32%. The reaction carried out at 40 °C was characterized by a significantly lower enantioselectivity (E = 12, Table 1, Entry 6) and clearly impacted the optical purity of the produced alcohol, whose ee decreased from 88% after 8 h to 80% after 120 h of the process. The enantiomeric excesses of the recovered propionate 4b at the higher temperatures (30 °C and 40 °C; 32% and 29%, respectively) were comparable to those observed during the reaction carried out at 20 °C (Figure 2B), and the conversions were comparable for the reactions catalyzed at all temperatures studied.
The R configuration of the enantiomerically enriched, dextrorotatory alcohol 3 was confirmed by comparison of its rotation sign with the literature data [28]. The determined configuration indicates that, similarly to most lipases, Lecitase® Ultra catalyzes the hydrolysis of esters 4a,b according to the Kazlauskas rule [29], by which the configuration of the more rapidly hydrolyzed enantiomers of these compounds can be predicted as R.
In summary, the obtained results show that Lecitase® Ultra can be applied as a biocatalyst to obtain enantiomerically enriched (R,E)-4-phenylbut-3-en-2-ol (3) by enantioselective hydrolysis of its esters. For both acetate 4a and propionate 4b, 50% conversion was not achieved, but higher enantiomeric purities of alcohol 3 were observed in the case of hydrolysis of propionate 4b. From a practical point of view, the best results were obtained during the hydrolysis of 4b at 30 °C, where enantiomerically enriched alcohol (ee = 93–96%) was obtained at a 16–26% conversion degree and the highest enantioselectivity of the resolution was observed (E = 38, Table 1). Conducting the process at 40 °C decreased the enantioselectivity of the reaction, particularly in the case of hydrolysis of acetate 4a.
Our pioneering, preliminary results showed the ability of Lecitase® Ultra to catalyze the hydrolysis of allyl esters with moderate to good enantioselectivity. However, they also indicated the need for further studies to develop the Lecitase® Ultra preparation as a biocatalyst for the kinetic resolution of racemates. We are going to focus on the effect of various process parameters (the type of organic solvent added to dissolve the substrate, the pH of the reaction medium) as well as immobilization techniques on increasing the enantioselectivity and improving the kinetic resolution. The substrate scope must also be investigated for future applications, not only in the hydrolysis of esters, but also in the transesterification of alcohols. Considering the low price of Lecitase® Ultra in comparison to most lipases, we believe that these studies will finally allow the expansion of the practical applications of this enzyme.
Analysis
The composition of the reaction products was analyzed through gas chromatography (GC) on an Agilent Technologies 6890N instrument (Santa Clara, CA, USA), using a DB-5HT column (Agilent, Santa Clara, CA, USA; polyimide-coated fused silica tubing, 30 m × 0.25 mm × 0.10 µm) and hydrogen as the carrier gas. The following temperature program was applied: Injector 280. Synthesized compounds were purified through silica gel column chromatography (Kieselgel 60, 230–400 mesh, Merck, Darmstadt, Germany). Analytical thin layer chromatography (TLC) was carried out on silica gel-coated aluminium plates (DC-Alufolien Kieselgel 60 F254, Merck, Darmstadt, Germany) using a hexane/acetone 4:1 system. Compounds were visualized by spraying the plates with a solution of 1% Ce(SO4)2 and 2% H3[P(Mo3O10)4] in 10% H2SO4.

Benzaldehyde (1) (47 mmol) was dissolved in acetone (100 mL) and a 10% solution of NaOH (2 mL) was added dropwise to the reaction mixture, which was stirred in a water bath. After 24 h, the mixture was acidified with 1 M HCl and the crude product was extracted with methylene chloride (3 × 40 mL). The organic extract was washed with brine, dried over anhydrous MgSO4, and filtered. The solvent was removed by evaporation in vacuo and pure ketone (2) was obtained in 97% yield (6.7 g), with physical and spectral data as reported previously [30].
Rac-(E)-4-Phenylbut-3-en-2-ol (3)
A solution of ketone (2) in methanol (150 mL) was placed in an ice bath and NaBH4 (2.7 g) dissolved in water (6 mL) was added dropwise. When the ketone had reacted completely (GC, TLC, 24 h), the mixture was diluted with hot water and the product was extracted with methylene chloride (3 × 40 mL). Pooled extracts were washed with brine and dried. The solvent was evaporated in vacuo and pure racemic alcohol 3 was obtained in 98% yield (6.6 g). Its physical and spectroscopic data are consistent with those reported earlier [28].

Alcohol 3 (2.2 g) was dissolved in 100 mL of dry diethyl ether and 10 mL of pyridine. The mixture was stirred in an ice bath and 5 mL of acetyl or propionyl chloride was added dropwise. The reaction was continued at room temperature until the alcohol had reacted completely (24 h, TLC). The reaction mixture was acidified with 1 M HCl, the product was extracted with diethyl ether (3 × 40 mL) and separated through column chromatography (hexane/acetone, 10:1) to afford the known esters: acetate 4a (2.7 g, 96% yield) [26] and propionate 4b (2.9 g, 96% yield) [31]. Their spectroscopic data are consistent with those reported in the literature.
General Procedure for Enzymatic Hydrolysis of Allyl Esters (4a,b)
Two milliliters of Lecitase® Ultra (or 50 mg of lipase B from Candida antarctica, CALB) and 0.2 g of substrate (4a or 4b) dissolved in 0.5 mL of acetone were placed into 10 mL screw-cap glass vials containing 3.5 mL of phosphate buffer (pH 7.2). The vials were shaken at 750 rpm. Samples (0.6 mL) from the reaction with Lecitase® Ultra were taken at different time intervals and extracted with diethyl ether. The extracts were dried and the solvent was removed by evaporation in vacuo. Before analysis, samples were treated with 0.3 mL of the corresponding acyl chloride (propionyl chloride in the case of hydrolysis of 4a, or acetyl chloride in the case of hydrolysis of 4b) in 1 mL of dry diethyl ether and 0.5 mL of pyridine. The mixtures were stirred for 0.5 h at room temperature, then diluted with 1 mL of 1 M HCl and 1 mL of diethyl ether. The ethereal layer was separated, washed with saturated NaHCO3 and brine, and dried over anhydrous MgSO4. The extracts were filtered and the solvent was completely evaporated in vacuo. The residues were dissolved in 1 mL of hexane, transferred to vials, and analyzed by CGC. A similar procedure was followed for the reaction with CALB, but the enzyme was removed from the reaction mixture through filtration.
Figure 1. Effect of temperature on the conversion of acetate 4a (A) and propionate 4b (B) during Lecitase® Ultra-catalyzed hydrolysis.

Figure 2. Effect of temperature on the enantiomeric excess of the products during Lecitase® Ultra-catalyzed hydrolysis of acetate 4a (A) and propionate 4b (B).
Table 1. Kinetic resolution of allyl esters 4a,b by Lecitase® Ultra-catalyzed hydrolysis at different temperatures.
Acute gastric volvulus following rapid and incomplete chewing of vegetables: A case report
One of the rare but serious causes of acute abdomen is gastric volvulus. It is considered an emergent surgical condition when it occurs acutely, due to the risk of gastric strangulation, gangrene, and perforation. In this study, we present the case of a previously healthy young adult who developed sudden severe epigastric and left upper quadrant abdominal pain along with nausea and retching following insufficient mastication and rapid swallowing of a large amount of vegetables. Radiological studies with chest and abdominal X-rays were in favor of acute gastric outlet obstruction, and laparotomy ultimately confirmed the diagnosis of acute, primary mesenteroaxial gastric volvulus. We postulate a probable mechanism: the presence of a flaccid gastrocolic ligament (found at laparotomy), together with the rapid entry of large pieces of vegetables into the stomach, precipitated the sudden gastric rotation.
Introduction
Gastric volvulus is an uncommon but potentially life-threatening clinical condition, defined as a pathological rotation of the stomach along its long (organoaxial) or short (mesenteroaxial) axis, leading to gastric inlet and outlet obstruction at variable levels [1,2]. It is regarded as a surgical emergency when it occurs in an acute setting, causing obstruction, then ischemia (strangulation), necrosis, and eventually perforation when the stomach rotates by more than 180 degrees [2,3]. Gastric volvulus is classified as primary (30%) or secondary (70%). The primary form is considered when there is no defect in the diaphragm or pathology in adjacent organs and is probably attributable to weakness or laxity of the gastric supports (gastrocolic, gastrosplenic, gastrophrenic, and gastrohepatic ligaments); the secondary form occurs when there are structural or functional gastric disorders or defects in adjacent anatomy, including the diaphragm and spleen, the most common association being with paraesophageal hernia [1,3]. Herein, we provide an atypical presentation of an acute primary mesenteroaxial volvulus in a young adult patient and then discuss its possible pathophysiological mechanism with a brief review of the literature.
Case report
A 32-year-old man presented to the emergency department of the hospital with the complaint of severe acute abdominal pain. The patient had felt completely healthy until the afternoon of the day before presentation, when he ate two plates of uncooked vegetables, such as parsley, basil, and coriander, ingested without sufficient mastication; about ten minutes later, he developed sudden-onset, severe abdominal pain located in the epigastric and left upper quadrant (LUQ) region. Roughly ten minutes after that, he noticed abdominal distention, felt a mass in the LUQ area, and acute dyspnea ensued. His abdominal pain was sharp in quality and he rated its intensity at 10 of 10 (on a scale from 0 to 10, with 0 representing "no pain" and 10 "the worst pain"), with no radiation. It was partially alleviated by the knee-chest position and was exacerbated by the supine position. He also reported nausea and retching but no emesis. The patient's medical history included mild dyspepsia of 20 years' duration, treated with ranitidine as needed, and a surgical history of appendectomy 4 years earlier. There was no significant or relevant family, allergic, social, or habitual history. On physical examination, the patient appeared severely ill and was crying and desperately urging for help while holding a knee-chest position on the ground. His vital signs were a blood pressure of 110/85 mmHg, a pulse rate of 96 beats per minute, a temperature of 36.5 °C, and a respiratory rate of 28 breaths per minute. On abdominal examination, inspection revealed local distention in the LUQ and epigastric region. The patient refused deep palpation and showed voluntary guarding, but on light palpation a severely tender mass was found in the LUQ. Other systemic examinations were unremarkable. In the laboratory data, the white-cell count was 8400 per cubic millimeter, with 77.2% neutrophils and 16.4% lymphocytes. The hemoglobin level was 15 g per deciliter, and the platelet count was 169,000 per cubic millimeter. The serum sodium level was 139 mmol per liter, the potassium level 4.7 mmol per liter, the blood urea nitrogen level 23 mg per deciliter, the creatinine level 1 mg per deciliter, and the blood glucose 111 mg per deciliter. Venous blood gas analysis showed a pH of 7.33, a partial pressure of carbon dioxide of 43.9 mm Hg, and a bicarbonate level of 23.1 mmol per liter. The liver function tests, amylase, lipase, erythrocyte sedimentation rate, C-reactive protein, prothrombin time, partial thromboplastin time, and international normalized ratio were within the normal range.

Fig. 1 - Abdominal X-ray (supine) shows a massively dilated stomach in an expected intraabdominal location with collapsed small bowel loops.

Abdominal X-ray (supine and upright) showed a massively dilated stomach in an expected intraabdominal location with collapsed small bowel loops (Figs. 1 and 2). An abnormal position of the gastric cardia and double-bubble gastric air-fluid levels were also found (Fig. 2). Upright chest X-ray demonstrated no evidence of pneumoperitoneum, but the distended stomach was obvious (Fig. 3). These findings were in favor of gastric outlet obstruction (G.O.O.). Although there was no classic appearance of complete gastric volvulus, given the clinical history of acute onset of severe epigastric pain, these findings were suggestive of G.O.O. due to acute gastric volvulus. Accordingly, a nasogastric tube was tried to assist gastric decompression but failed to pass, which was further in favor of gastric volvulus. Ceftriaxone and metronidazole were administered; thereafter, the patient underwent laparotomy (Fig. 4). During the operation, the diagnosis of acute mesenteroaxial volvulus was confirmed and its reduction was performed. Subsequently, partial gastrectomy of the gangrenous part and then gastropexy were performed. The postoperative course was uncomplicated and the patient was discharged in generally good condition. At outpatient follow-up 2 weeks after discharge, he was doing well.
Fig. 4 - A picture of the stomach during laparotomy is shown; partial gastrectomy was performed due to evidence of gastric necrosis.
Discussion
Presented here was the case of a young adult patient who developed the classic symptoms of acute gastric volvulus: severe sudden epigastric pain, intractable retching without vomiting, and inability to pass a nasogastric tube (Borchardt's triad) [4]. Nevertheless, given these clinical and then radiological findings, we considered acute gastric outlet obstruction as the primary diagnosis and, in approaching it in the context of the patient's presentation, presumed acute gastric volvulus and then phytobezoar as the underlying causes. Eventually, laparotomy established the final diagnosis of acute mesenteroaxial volvulus and ruled out phytobezoar. In addition, further laparotomy findings showed an elongated and lax gastrocolic ligament but no evidence of other pathology in the stomach or its adjacent organs, which indicated that the volvulus was primary. As a result, we hypothesized that the presence of a flaccid gastrocolic ligament, on the one hand, and the sudden entry of large pieces of vegetables into the stomach, which increased gastric motility, on the other, paved the way for a sudden rotation of the stomach. To the best of our knowledge, this patient is the first reported case with this unusual presentation of gastric volvulus; accordingly, further observations and studies are required to evaluate our hypothesis.
As mentioned in the case presentation, some gangrenous parts of the stomach were noted, which led to partial gastrectomy; this fact underscores the importance of early diagnosis of this surgical emergency. The rarity of gastric volvulus and the numerous causes of sudden abdominal pain result in a low clinical index of suspicion for this entity [2,3]; therefore, consideration of this condition in the differential diagnosis of a patient with acute epigastric pain and early employment of a radiological modality are crucial. Upright abdominal X-ray may indicate double air-fluid levels in the antrum and fundus, or a single air bubble without further luminal gas, and supine abdominal X-ray shows a distended fluid-filled stomach with a scarcity of distal GI gas [1,3]. Chest X-ray can demonstrate a hiatus hernia, a retrocardiac air bubble, or a large air-fluid level in the chest [5,6]. As a result, although findings on plain radiography can be a useful guide for the diagnosis of gastric volvulus, they are neither highly specific nor sensitive, as was true in our case. According to the literature, abdominal CT scan is currently the modality of choice for the diagnosis of gastric volvulus, with high sensitivity and specificity, superior to the barium study, endoscopy, and other modalities; two signs on abdominal CT are used in its diagnosis: a gastropyloric transition zone and an abnormal location of the antrum. Other advantages of CT consist of the definition of the anatomical defect, which allows surgical planning; the assessment of gastric viability and perforation, which aids the decision regarding the urgency of operative intervention; and its 24-hour accessibility and rapidity [6–9]. Although a CT scan was accessible in our center, we decided on an emergent transfer of the patient to the operating room given his critical condition and the considerable suspicion of gastric volvulus based on the presentation and X-ray findings.
Based on the review of the literature, immediate surgical intervention comprising emergency laparotomy is recommended following a diagnosis of acute gastric volvulus [2]. The reduction procedure is followed by one of a wide combination of procedures, such as simple gastropexy, gastropexy with division of the gastrocolic omentum, partial gastrectomy, diaphragmatic crura reapproximation and hiatal repair, and fundoantral gastrogastrostomy (Opolzer's technique) [10]. Intraoperative assessment of gastric viability is crucial to assist in surgical planning and resection of the gangrenous portion, as was the case in our patient [11]. Conservative approaches such as endoscopic reduction are confined to elderly patients with less physiologic reserve and to chronic gastric volvulus [1].
Recently, laparoscopy has been gaining more attention in the treatment of gastric volvulus, and some studies have demonstrated satisfactory outcomes and lower downtime with this alternative; however, data comparing laparoscopy to open surgery are still scant [12].
Conclusion
In sum, this report aims to increase physicians' awareness of the various presentations of this rare disease and to draw their attention to the fact that, when a patient presents with severe acute upper abdominal pain and initial imaging is in favor of acute gastric outlet obstruction, gastric volvulus can be an emergent diagnosis requiring immediate surgical consultation and intervention.
Multicolor lasers using birefringent filters: experimental demonstration with Cr:Nd:GSGG and Cr:LiSAF
In this study, we numerically and experimentally investigate the application of birefringent filters (BRFs) as frequency-selective elements in multicolor lasers. A BRF plate made out of crystalline quartz with an arbitrarily oriented optical axis has been explored. Simulation results have shown that, compared to regular BRFs where the optical axis lies in the plane of the plate's surface, a BRF with an optical axis pointing out of its surface enables design flexibility in filter parameters, providing access to a wider set of free spectral range and bandwidth values. As a result, multicolor operation could be obtained in many wavelength pairs using a single BRF plate. In the experiments, a 3-mm-thick quartz BRF with an optical axis at 45° to the surface plane has been used. With Cr:Nd:GSGG as the laser medium, two-color and three-color cw laser operation has been demonstrated in 11 and 3 different transition combinations, respectively. Moreover, two-color laser operation has been demonstrated in 10 different wavelength pairs in Cr:LiSAF. To our knowledge, this study is the first detailed investigation and experimental demonstration of BRFs with a tilted optical axis for multicolor operation of solid-state lasers. Compared to other methods, BRFs enable a rich selection of transition pairs, and the ratio of the power in each line can be regulated by fine adjustment of the rotation angle of the BRF plate.
Regular birefringent filters, where the optical axis lies in the plane of the plate, have recently been used in the generation of dual-wavelength radiation from cw Yb:KGW lasers [27]. However, the free spectral range of regular birefringent filters does not vary much with the rotation angle of the plate, limiting the set of THz frequencies that could be achieved in [27]. For example, a frequency difference of 7.57 THz (1014.6 nm and 1041.3 nm) has been demonstrated using a 4-mm-thick BR filter in [27], and accessing other frequency values required the use of BR filters with different thicknesses. A multiple-plate BRF was also employed to obtain dual-wavelength cw operation in Ti:Sapphire [28,29]. In our experiments, a 3-mm-thick quartz crystal birefringent filter with an optical axis tilted 45° with respect to the plate's surface has been used for spectral selection. Compared to regular BRFs, where the optical axis lies in the plane of the plate, a BRF with an optical axis pointing out of its surface enables access to a wider set of free spectral ranges and filter bandwidths [30–35]. This fact eliminates the need to change the thickness of the BR filter to access different multicolor wavelength pairs. In the experiments we have investigated multicolor laser operation with our BRF using Cr:Nd:GSGG and Cr:LiSAF gain media. In continuous-wave laser experiments with Cr:Nd:GSGG, we have achieved quite stable lasing in 11 different line pairs in dual-wavelength operation, and in 3 different groups of triple lines in tri-wavelength operation. Obtaining stable multicolor laser operation with Cr:LiSAF was more challenging due to the broadband nature of the gain, but, as also recently demonstrated by Akbari et al. [27] with Yb, the method was still applicable and high-quality two-color cw output has been demonstrated in 10 different line pairs. Simple rotation of the BRF plate enabled adjustment of the power in each line. To our knowledge, this is the first report of multi-wavelength operation in a Cr:Nd:GSGG laser. Moreover, to our knowledge such diversity in wavelength selection has not been achieved before from any Nd-based system. The results with Cr:LiSAF confirm that the method can also be applied to broadband solid-state gain media. These experiments verify the advantages of BRFs with an optical axis pointing out of the surface as frequency-selective elements in multicolor lasers. We believe that multicolor laser systems based on birefringent filters may work quite well for other laser concepts as well.
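As a quick check of the quoted frequency separations, the short sketch below (not from the paper; it assumes vacuum wavelengths) converts a wavelength pair into a difference frequency and reproduces the 7.57 THz value cited above for the 1014.6 nm / 1041.3 nm pair.

```python
C = 2.998e8  # speed of light in vacuum, m/s

def difference_frequency_thz(lambda1_nm: float, lambda2_nm: float) -> float:
    """Difference frequency (in THz) between two lasing lines given in nm."""
    return abs(C / (lambda1_nm * 1e-9) - C / (lambda2_nm * 1e-9)) / 1e12

print(difference_frequency_thz(1014.6, 1041.3))  # ~7.57 THz, as quoted from [27]
```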
Theoretical background and simulation results
In this part, with our numerical simulation results, we would like to present the filtering properties of laser resonators that contain a uniaxial birefringent plate with an arbitrary optical axis. In particular, we show that the usually employed birefringent filters, which have their crystal axis oriented within the surface of the plate, can provide only a limited range of filter parameters (such as free spectral range and bandwidth). On the other hand, if the crystal axis is chosen correctly, a single birefringent plate could offer a rich range of filter parameters from a single-element filter. Basically, as the BRF is rotated about an axis normal to the surface, one can attain different values of free spectral range and bandwidth from the BRF. Accessing different filter parameters then only requires correct adjustment of the plate's rotation angle. As a result, from a single BRF one can obtain many different filter parameters, rendering these filters quite useful for many applications such as broadband cw and femtosecond tuning as well as multicolor laser operation, which is the focus of this paper. As a starting point, Fig. 1(a) shows the particular situation that we would like to investigate here. We assume that our cavity contains a birefringent tuning plate (BR plate) that is placed at Brewster's angle inside the cavity. The cavity also contains a laser crystal that is inserted at Brewster's angle as well (this is not necessary and a flat-flat crystal with antireflective coatings could also be used, but a Brewster-cut crystal helps to increase the selectivity of the birefringent filter by increasing the modulation depth). The birefringent filter will transform the incident TM-polarized light into some elliptical polarization that will contain both TM and TE modes. The Brewster surfaces of the laser crystal and the birefringent plate will create loss for the TE-polarized part of the beam. Figure 1(b) shows a detailed view of the birefringent filter, where we have followed the notation that was introduced in [32]. The incident ray, the light's path inside the crystal, and the orientation of the crystal axis (c) are all shown. Note that the axis of the birefringent plate (c) is arbitrarily oriented and does not lie in the plane of the plate's surface. In Fig. 1(b), t is the thickness of the plate, β is the internal Brewster's angle (around 33° in quartz around a wavelength of 1 μm), s is the direction of beam propagation in the plate, ρ is the rotation angle of the plate, γ is the angle between the crystal's axis and the beam propagation direction, and σ is the angle between the optical axis and the surface normal (when σ = 90°, the optical axis lies on the surface of the plate, which is what is typically employed in birefringent filters). As we have mentioned, the BR plate will change the polarization of the intracavity laser beam, creating a wavelength-dependent loss. Besides the wavelength (λ), this loss will also depend on the thickness of the birefringent plate (t), the rotation angle of the birefringent filter (ρ), as well as the angle between the optical axis and the surface normal (σ). Theoretical details of the transmission characteristics of such laser systems containing birefringent filters have been studied in detail earlier [30–35] and will not be repeated here. In the following, we will present numerical simulation results that are based on a Jones matrix analysis of the entire cavity.
We will mostly elaborate on the importance of the orientation of the plate's optical axis in determining the filtering characteristics of the system, especially for multicolor laser operation applications.

Fig. 2. Calculated transmission characteristics of a Cr:Nd:GSGG laser cavity around 1 μm, as a function of wavelength for different values of the birefringent plate's rotation angle (for ρ in the range from 15° to 115°, see differently colored graphs). The calculation has been performed for a 3-mm-thick crystal quartz birefringent plate with an optical axis tilted 45° with respect to the plate's surface (σ = 45°). The graph is separated into two for easier viewing.

Fig. 3. As Fig. 2, but the calculation has been performed for a 3-mm-thick crystal quartz birefringent plate with an optical axis that lies on the surface of the plate (σ = 90°).
As an example, Fig. 2(a) shows the calculated transmission properties of a Cr:Nd:GSGG laser operating around 1 μm that contains a 3-mm-thick crystal quartz birefringent plate with an optical axis tilted 45° with respect to the plate's surface (the angle is chosen as σ = 45° here, since such a BRF was available to us and therefore used in the experiments). Both the birefringent plate and the gain medium were assumed to be inserted at Brewster's angle. The transmission is calculated for several different rotation angles ρ of the birefringent plate in the range from 15° to 115°. In the experiments, changing ρ refers to rotation of the BRF about an axis normal to the surface, as shown in Fig. 1(b). Also, the results are symmetric for ρ values of 0°-180° and 180°-360°; hence, we will focus only on the 0°-180° range here. Note from Fig. 2 that, as the plate's rotation angle is varied, the filter properties such as modulation depth, free spectral range, and full-width-half-maximum (FWHM) of the transmission peaks change considerably. As an example, when ρ = 15°, the filter has a FWHM of 75 nm and a modulation depth of around 45%. On the other hand, ρ = 115° provides a similar modulation depth but has a FWHM of around 5 nm. Hence, when its optical axis is not oriented parallel to its surface, the same filter could provide quite different filter functions at different values of rotation. For comparison, Fig. 3 shows the calculated transmission in the same wavelength region for a regular birefringent plate with an optical axis on the surface of the plate (σ = 90°; all the other parameters are the same as in Fig. 2). Note that for this case, the free spectral range and the full-width-half-maximum of the filter stay nearly the same and are not strong functions of the rotation angle ρ. Hence, such a filter offers limited flexibility compared to a BRF with an optical axis pointing out of its surface.
As we have discussed above using Fig. 2 and Fig. 3, compared to a regular BRF plate, a BRF plate with an optical axis pointing out of the plate's surface provides a wider range of filter parameters. This observation can also be seen from Fig. 4, which plots the variation of free-spectral range (FSR), FWHM, and modulation depth as a function of the plate's rotation angle ρ around the 1 μm region for the plate with an optical axis tilted 45° with respect to the plate's surface (σ = 45°). First of all, consistent with the observations from Fig. 2, there are two regions where the modulation depth is high, which is necessary for the plate to be useful for many of the tuning applications (one narrow band region around ρ = 15° and another broadband region around ρ = 115°). From an alternative perspective, Fig. 5 shows the calculated transmission of the system at a fixed wavelength of 1061 nm (coinciding with the gain peak of Cr:Nd:GSGG) as a function of the plate's rotation angle. The transmission is calculated for two different σ angles: (a) 45° and (b) 90°. We would like to remind here that σ is the angle between the optical axis and the surface normal, and σ = 90° refers to a typical birefringent plate with an optical axis that lies on the surface of the plate. Note that, as the birefringent plate is rotated, the incident wave at 1061 nm sees several transmission maxima (where the birefringent plate does not change the incident polarization state). To elaborate on this issue further, Fig. 6 shows the calculated variation of the filter free spectral range and modulation depth as a function of the filter rotation angle ρ. The calculation has been performed for several different optical axis orientations σ in the range from 0° to 90°. When σ = 0°, the plate's crystal axis is perpendicular to the plate surface, and hence the BR plate does not change the polarization state of the incident beam at all, yielding a modulation depth of 0; the calculated free spectral range then does not have any physical significance. As mentioned, when σ = 90°, the plate's crystal axis lies on the surface of the plate, and as can be seen the free spectral range varies in a narrow range (30-45 nm). On the other hand, when σ = 45°, the FSR varies in a much wider range, between 35 and 725 nm, and this makes the filter useful for a broader array of applications. We note here that, as we mentioned earlier, not all of these FSR values might be accessible, since the modulation depth might be too low in some cases. Lastly, as σ gets closer to β (the internal Brewster's angle), the direction of beam propagation s gets closer to the direction of the crystal's c axis, γ approaches an angle of 0°, and one can attain very large free spectral range values. However, in such a case the FWHM of the filter is also very large, which might not be optimum for tuning applications.
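To make the geometry above concrete, the following Python sketch (not from the paper) estimates the single-pass filter function using the σ, ρ, β, γ notation of Fig. 1(b). It is only a simplified model: the quartz indices are assumed approximate values near 1 μm, the Brewster surfaces are treated as ideal TM polarizers, and the full intracavity Jones-matrix treatment used for the figures is omitted, so it reproduces only the qualitative behavior (the strong dependence of the filter period and width on ρ when the optical axis points out of the plate), not the exact curves in the figures.

```python
import numpy as np

# Approximate quartz indices near 1 um (assumed values), plate thickness and
# internal refraction angle at Brewster incidence as given in the text.
N_O, N_E = 1.5341, 1.5428
THICKNESS = 3e-3                 # m
BETA = np.radians(33.0)          # internal angle beta

def brf_transmission(wavelength, rho_deg, sigma_deg=45.0):
    """Single-pass transmission of the Brewster-mounted BRF, modeled as a
    waveplate placed between ideal TM polarizers (a simplifying assumption)."""
    rho, sigma = np.radians(rho_deg), np.radians(sigma_deg)
    s = np.array([np.sin(BETA), 0.0, np.cos(BETA)])      # beam direction in the plate
    c = np.array([np.sin(sigma) * np.cos(rho),           # optic axis: angle sigma from
                  np.sin(sigma) * np.sin(rho),           # the surface normal, azimuth rho
                  np.cos(sigma)])
    cos_g = float(np.clip(np.dot(s, c), -1.0, 1.0))      # cos(gamma)
    # effective extraordinary index for propagation at angle gamma to the axis
    n_eff = 1.0 / np.sqrt(cos_g**2 / N_O**2 + (1.0 - cos_g**2) / N_E**2)
    path = THICKNESS / np.cos(BETA)                      # geometric path in the plate
    delta = 2.0 * np.pi * (n_eff - N_O) * path / wavelength   # retardance
    # angle between the TM polarization and the optic-axis projection transverse to s
    p_hat = np.array([np.cos(BETA), 0.0, -np.sin(BETA)])
    c_perp = c - cos_g * s
    phi = np.arccos(np.clip(np.dot(c_perp, p_hat) / np.linalg.norm(c_perp), -1.0, 1.0))
    return 1.0 - np.sin(2.0 * phi)**2 * np.sin(delta / 2.0)**2

wavelengths = np.linspace(1.00e-6, 1.12e-6, 4000)
for rho in (15.0, 115.0):                                # two settings highlighted above
    T = brf_transmission(wavelengths, rho, sigma_deg=45.0)
    print(f"sigma = 45 deg, rho = {rho:5.1f} deg: min transmission = {T.min():.2f}")
```

Sweeping ρ in this toy model for σ = 90° versus σ = 45° also shows the much weaker dependence of the filter period on ρ for the conventional orientation, in line with the comparison of Fig. 3 and Fig. 2.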
In this work, we are interested in the use of birefringent filters in multicolor laser operation applications. For example, assume one wants to obtain a two-color Cr:Nd:GSGG laser operating at the wavelengths of 1058 nm and 1061 nm. Looking at the emission spectrum, the line of Cr:Nd:GSGG around 1061 nm is stronger than the line at 1058 nm. Hence, one needs to create some loss at 1061 nm to push the laser to operate at two colors simultaneously (to roughly equalize the net gain at both transitions). Moreover, one may want to fine tune the net gain at each wavelength to adjust the laser power in each lasing line. Hence, ideally one needs a filter that could enable a wide range of loss modulation. In that respect, since birefringent filters with an optical axis pointing out of the surface enable access to a wider selection of filter parameters such as free spectral range and filter bandwidth, they are better suited for multicolor laser operation applications. A wider set of filter parameters from a single BR plate: (i) enables multicolor operation in many different wavelength pairs, (ii) permits better adjustment of power at each wavelength in multicolor operation, and (iii) in some cases might improve the stability of the multicolor laser. In the following, we will present experimental data taken with Cr:Nd:GSGG and Cr:LiSAF lasers that confirm the expectations of our simulation results.

Multicolor laser operation with Cr:Nd:GSGG

In this section we present our multicolor laser operation results taken with the Cr:Nd:GSGG gain material. We start with the regular continuous-wave laser results, meaning laser data taken in the usual single-wavelength operation regime. Figure 8(a) shows sample cw laser efficiency curves taken using 1%, 3%, and 5.4% transmitting output couplers.
As an inset, the laser slope efficiencies obtained with respect to absorbed pump power are also indicated for each output coupler. The best laser performance was attained using the 5.4% output coupler, where we have obtained cw powers as high as 738 mW at an absorbed pump power of 4.6 W. On the other hand, to demonstrate tuning at more wavelengths or wavelength pairs, we have used the 3% transmitting output coupler in the tuning experiments. With this output coupler, we have measured a laser threshold of 62 mW and acquired a slope efficiency of 16.2%. The laser produced up to 496 mW of cw output at an absorbed pump power of 4.6 W. The free-running laser wavelength was 1061 nm (by free running we mean the cavity without the BR plate). When the BR plate was inserted into the cavity, the laser power at 1061 nm decreased to 455 mW due to the additional losses associated with the BR plate. By adjusting the rotation angle of the filter, we could attain lasing at 7 other wavelengths. Figure 8(b) shows sample spectra obtained from the single-wavelength cw Cr:Nd:GSGG laser. The cw output powers obtained at each transition are also indicated in the figure. Laser powers at the 1058 nm and 1061 nm lines are relatively high, as expected from the observed strength of these lines in the emission spectrum. Figure 9 summarizes the multicolor cw laser operation results obtained with Cr:Nd:GSGG. By adjusting the rotation angle of the BR plate to the desired position, dual-wavelength (two-color) cw operation has been obtained at 11 different transition pairs. In Fig. 9, the data are plotted starting from the spectrum with the shortest lasing wavelength, and the order of the graphs is not correlated with the rotation angle of the BRF plate. Output power levels are also indicated in the figure. In cw two-color laser operation, for most of the transition pairs the laser was quite stable, and by fine adjusting the rotation angle of the BRF it was possible to vary the power in each transition. The frequency differences between the pairs range roughly between 0.7 THz and 13.5 THz, which potentially facilitates efficient THz generation over a wide spectral range. An interesting point to note here is that, even though the minimum free spectral range that can be provided by the BRF was around 30-40 nm for our case [Fig. 4], we could still obtain two-color laser operation with wavelength separations as low as 3 nm. This finding shows that, as mentioned earlier, for multicolor laser operation one needs to equalize the net gain for the oscillating modes by introducing the correct filter function, and this does not necessitate the use of a BRF with an FSR value that matches the wavelength separation. Moreover, a birefringent filter with an optical axis on the plate's surface can also be used for multicolor operation, but it cannot provide the richness of wavelength pairs that we have obtained using an off-axis BRF in this study. With the BRF it was also possible to obtain three-color cw laser operation with Cr:Nd:GSGG [Fig. 9(b)]. However, as expected, unlike two-color laser operation, adjustment of the BRF angle does not allow free variation of the laser's power in each line in this setting, since this requires two independent inputs and we have only one. Still, as seen from Fig. 9(b), three-color operation was obtained in 3 different groups of lines. Earlier reports [55,56] describe multicolor operation in a few pairs of transitions, whereas in our study we have obtained multicolor operation in 14 different wavelength groups.
As indicated before, this is the advantage of using a BRF with the c-axis pointing out of the crystal's surface, which provides a rich range of filter parameters from a single device. Other approaches, such as specially coated cavity mirrors, coupled cavities or two different laser crystals, only provide multicolor operation at a few (if not just one) wavelength pairs. Moreover, the technique used in this paper allows easy adjustment of the laser power in each line by adjustment of the BRF angle. This option is not always available with other techniques. Lastly, our setup is robust against variations in pump power or output coupling, since again the fine adjustment of the BRF angle provides a feedback mechanism to cancel out any undesired fluctuations. The results presented here were taken with Cr:Nd:GSGG, which was available in the laboratory during the experiments. However, the method we present is general and can be applied to any Nd-based gain medium as well as to other rare-earth-doped gain media such as Er, Yb, Ho and Tm.
Multicolor laser operation with Cr:LiSAF
In the previous section, we presented laser operation results taken with the Nd-based material Cr:Nd:GSGG. In multicolor laser operation, rare-earth-doped gain media with sharp laser lines have an advantage, since the filter parameters only need to be optimized for two or three fixed, known wavelengths. On the other hand, for transition-metal-doped gain media such as Cr:LiSAF, laser operation over a broad wavelength range is feasible and multicolor operation requires a more subtle optimization. To test the feasibility of a BRF plate for multicolor operation in transition-metal-doped gain media, we have performed detailed experiments with Cr:LiSAF as well. Single-wavelength cw operation results of a multimode diode pumped Cr:LiSAF laser have recently been presented in [59], and here only the new results involving multicolor laser operation will be presented. In the experiments, the free-running Cr:LiSAF laser produced around 500 mW of cw output around 860 nm at an absorbed pump power of 1.5 W using a 1% transmitting output coupler (while pumping with only one of the multimode diodes). With the insertion of the BR plate, the output power around 860 nm decreased to the 450 mW level. In single-wavelength operation, the laser could be tuned continuously from 770 nm to 1110 nm; covering this entire region requires two different highly reflective mirror sets [59]. Figure 10 shows the dual-wavelength operation results obtained with the Cr:LiSAF laser in cw mode. The power levels obtained from the Cr:LiSAF laser are indicated in the figure as well (at an absorbed pump power of 1.5 W). In Fig. 10, the spectra are depicted starting with the lasing wavelength pairs with minimum separation, and the order of the plots is not correlated with the rotation angle of the BRF plate. Note that the center of the lasing lines is located around 860 nm, which is where the gain of Cr:LiSAF peaks. Besides the 10 wavelength pairs shown in Fig. 10, multicolor laser operation could also be obtained at many other pairs; the Cr:LiSAF laser, however, was not very stable at these other transition pairs and they will not be presented here. We believe this is due to the broad FWHM gain bandwidth of Cr:LiSAF of approximately 200 nm, which makes stabilization of two-color laser operation more challenging compared to narrow and discrete-line gain media such as Cr:Nd:GSGG. First of all, unlike with Cr:Nd:GSGG, it was not possible to adjust the wavelengths in multicolor operation: the laser optimizes itself and works at a wavelength pair that cannot be controlled. For example, we have obtained stable two-color laser operation at the 848 nm & 870 nm wavelength pair with an output power of 350 mW. As mentioned, the wavelengths are determined by the properties of the BRF, the specific optics used and the gain properties of the laser material; therefore, they cannot be adjusted freely. Moreover, with Cr:LiSAF it was harder to adjust the power level in each line during two-color operation, again due to the above-mentioned difficulty of broadband gain. However, as the results in Fig. 10 show, the method still works in Cr:LiSAF for many different wavelength pairs. Different BRF designs could potentially be used to attain two-color laser operation at other transition pairs in Cr:LiSAF.
Note from Fig. 10 that the wavelength differences between the operating wavelength pairs are in the 22 nm to 80 nm range. This corresponds to a frequency difference ranging from around 9 THz to 70 THz. Moreover, the average power levels indicated in Fig. 10 were obtained while pumping the Cr:LiSAF laser with only one multimode diode, and could be scaled up 3-4 times by applying all the available pump power from the diodes (i.e., to above the 1.5 W level) [59]. We also believe that, using this BRF plate, multicolor laser operation is feasible with ultrashort pulses as well, in either Kerr-lens mode-locked or saturable Bragg mirror mode-locked laser cavities [60,61]. Lastly, dual-wavelength cw operation has been demonstrated in Cr:LiSAF before using grating-controlled coupled cavities [23]; however, the power levels reported were only on the order of 10 mW.
Conclusions
In summary, we have explored the application of birefringent filters as frequency-selective elements in multicolor lasers. Our simulation results predicted that birefringent filters with the optical axis pointing out of the plate surface provide design flexibility and give access to a very wide set of filter parameters. In the experiments, we have obtained cw multicolor laser operation in many laser line pairs in Cr:Nd:GSGG and Cr:LiSAF, in accord with the simulation results. We believe that tilted-axis BRFs should provide efficient and low-cost multicolor laser operation with other laser gain media as well.
Twin-T Oscillator Containing Polymer Coated Parallel Plate Capacitor for Sea Water Salinity Sensing
This paper presents the development of a Twin-T oscillator comprising polymer-coated parallel plates as a sensor for ocean water salinity monitoring. The sensor employs a parallel plate capacitor design, with sea water serving as the medium between the plates. Novolac resin and a proprietary commercial polymer (Accuflo™) were investigated as corrosion-protective coatings for the copper electrodes of the capacitor. Electrochemical Impedance Spectroscopy (EIS) was employed to evaluate the corrosion inhibition of the polymer coatings in sea water. A detection circuit was designed and simulated using P-spice and then implemented on a printed circuit board (PCB). EIS results indicate that Accuflo exhibits better corrosion inhibition in ocean water than Novolac. Further, the use of a Twin-T oscillator based detection circuit resulted in enhanced sensitivity and a better detection limit. Experiments performed using ocean water samples resulted in an oscillator frequency shift of 410 Hz per practical salinity unit (Hz/PSU). Oscillator frequency drift was reduced using frequency-to-voltage converters and a sensitivity of 10 mV/PSU was achieved.
Introduction
Oceanic studies began in the 1930s with the search for petroleum, continued with the emphasis on improved naval warfare, and more recently have been driven by a need to understand and protect the ecosystem. No discussion of oceanography is complete without a mention of parameters such as salinity, temperature, pressure and density. Extensive research has been conducted to understand the role of these parameters in regulating oceanic processes [1,2], but there is still a lot of latent information that seems to remain elusive to oceanographers worldwide. Salt concentration measurement in sea water is very important as it affects the weight of surface waters. Fresh water is light and floats on the surface, while salty water is heavy and sinks. Together, salinity and temperature determine seawater density and buoyancy, driving the extent of ocean stratification, mixing, and water mass formation. The density of surface seawater ranges from about 1020 to 1029 kg·m⁻³, depending on the temperature and salinity. Salinity is commonly defined as the ratio between the weight of dissolved material in a sea water sample and the weight of the sample [3], and is generally expressed in parts per thousand. The dissolved material includes dissolved gases, but excludes fine suspended particles and other solids that are in contact with sea water. The majority of seawater has a salinity of between 3.1% and 3.8%, with an average of about 3.5% (35 g·L⁻¹).
Salinity varies from place to place: it is lower where mixing occurs with fresh water runoff from river mouths or near melting glaciers, and higher where there are high rates of evaporation, low precipitation and river inflow, and confined circulation, such as in the Red Sea. Chloride (55.3%), sodium (30.8%), sulfate (7.7%), magnesium (3.7%), calcium (1.2%) and potassium (1.1%) are the major constituents of sea water salts [4]. Salinity is also conveniently referred to as TDS, or Total Dissolved Salts. Greater salinity and colder temperatures result in an increase in ocean density with a corresponding depression of the sea surface height, whereas in fresher and warmer waters the density is lower, resulting in an elevation of the sea surface [3-7]. Further, the presence of high concentrations of salts in sea water makes it very harsh for any material to survive in it. Thus, salinity measurement is important to people and industries dealing with sea water, and it requires sensors which can withstand the harsh conditions of sea water for salinity monitoring. In ocean sciences, the electrical conductivity of seawater is used as an indication of salinity. Seawater salinity is calculated by the formulas of the Practical Salinity Scale of 1978 (PSS-78), based on the conductivity ratio measured by conductance sensors [4,8,9]. Conductivity measures how well the water sample conducts an electrical current, a property which is proportional to the concentration of ions in solution. The saltier the water, the higher the conductivity will be. The advantage of using conductivity over TDS is the ease with which measurements can be made. Thus, seawater salinity can be calculated from measured values of conductivity and temperature.
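As an illustration of the conductivity-to-salinity conversion mentioned above, the sketch below evaluates the PSS-78 polynomial at the 15 °C reference temperature. It is a simplified approximation: the temperature- and pressure-correction terms of the full scale are omitted, so the numbers are only indicative.

```python
def practical_salinity_15C(r15):
    """Approximate PSS-78 practical salinity from the conductivity ratio R15,
    i.e. the sample conductivity divided by that of standard seawater at 15 degC.

    Simplified sketch: the full PSS-78 definition adds temperature and pressure
    corrections that are left out here.
    """
    a = (0.0080, -0.1692, 25.3851, 14.0941, -7.0261, 2.7081)
    return sum(a_i * r15 ** (i / 2.0) for i, a_i in enumerate(a))

# By construction, a conductivity ratio of 1 corresponds to S = 35 (average seawater).
print(practical_salinity_15C(1.0))   # -> 35.0
print(practical_salinity_15C(0.95))  # slightly fresher water, roughly 33 PSU
```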
Recently, various principles and techniques for sensing sea surface salinity have been reviewed [10,11]. Menn et al. described advances in measuring ocean salinity using optical sensors. Such sensors usually measure the refractive index of seawater, which is related to density and can therefore be used to measure absolute salinity [12]. Malarde et al. described optical-refractometer-based salinity measurement via refractive index recording of seawater. Tests and calibrations were made in a temperature-stabilized seawater tank. Results indicated that the instrument is capable of measuring the seawater refractive index with a resolution of about ±4 × 10⁻⁷, equivalent to a salinity resolution of ±2 × 10⁻³ g·kg⁻¹ [13]. Among conductivity-based sensors, two types are available for salinity measurement: contact-type sensors and non-contact-type sensors. In contact-type sensors, there is direct contact between the measurement probes and the surrounding media, whereas there is no contact between the analyte and the sensing device for non-contact-type sensors. This gives non-contact-type sensors an apparent advantage over contact-type sensors, whose long-term stability is limited by polarization and fouling. The measuring principle for most non-contact conductivity sensors is inductive. However, inductive sensors suffer from the problem of interference from external fields. Apparently, the measuring field in this type of sensing is exposed to the surrounding media and does not couple entirely to the secondary coil of the transformer. Sea-Bird Electronics Inc., in their comparative study on conductivity cells, showed that most of the inductive cells currently in use have 11% to 20% of their field as an external field, which causes errors in measurement [14]. The external fields lead to errors commonly known as proximity errors, which can have major consequences for the data. Thus, any material that has a conductivity value other than that of sea water can influence the calibration of the system. Cables, sensor housings and marine growth close to the conductivity cell also contribute to a shift in the system's calibration.
To overcome the problem of external fields associated with inductive sensing, capacitive principles for conduction sensing can be employed. Parallel plate capacitors rely on trapping the electric field in the region between the plates [15,16], which provides an inherent confinement of the measuring field between the two plates, with little interference due to external fields or objects. Only a small portion of the field is exposed to the external media, causing fringing effects at the corners and edges of the plates. Thus, due to the fringing fields at the plate edges, the measured capacitance of a capacitor is generally higher than the calculated capacitance. Although it is difficult to calculate the fringe fields analytically, they are significant when the distance between the electrodes is comparable to the smallest dimension of the electrode [17]. However, their effect can be minimized by using Kelvin guard electrodes (rings) in the system [18].
In this work, attempts have been made to use a parallel-plate capacitor for salinity measurement. Two commercially available polymers have been investigated to solve the problem of electrode fouling in sea water. Simulations have been conducted using P-spice software, and a Twin-T oscillator design was employed to achieve higher sensitivity. Further, a reduction in frequency drift has been achieved using a frequency-to-voltage converter, and a Kelvin guard ring was employed to reduce the fringing field effects at the edges of the device.
Sensor Design and Simulation
Figure 1(A) shows the schematic of the sensor design, which has two parallel plates with a Kelvin guard ring. The Kelvin guard ring was used to minimize the effect of fringing fields and to improve the accuracy of capacitance measurements. Further, to reduce the effects of fringing fields in the proposed meso-scale capacitor, the distance between the electrodes was kept at ~600 microns, and the smallest dimension (the length of the top electrode) was kept at 1 cm. Additionally, the following design features ensured high fidelity in measurements: a) the guard ring and the electrode were fabricated in one step and were coplanar; b) the distance between the guard ring and the top electrode was kept at 400 µm, smaller than the 600 µm distance between the capacitor plates; c) the guard ring was held at a potential equal to that of the opposite plate.
Finite Element Modeling (FEM) using the tool COMSOL Multiphysics was performed to quantify the reduction in fringe field due to the incorporation of the Kelvin guard ring. Further, "Quasi-static", a sub-module of the "Electromagnetic" section, was used to simulate the appropriate operational frequency range of the sensor. Each shape in the geometry was assigned a sub-domain name to facilitate the specification of the materials used in the construction of the sensor. To simulate the seawater medium around the sensor, the capacitor was enclosed in a box with dielectric properties matching those of seawater. Next, the boundary conditions were specified by assigning a voltage to each of the interfaces. This is the most important step prior to simulation, as the nature of the boundary dictates the electromagnetic equation used in the calculation of the electric fields in the system. The boundaries of the top metal plate were assigned a sinusoidal voltage, 5 Vp-p in magnitude, while those of the lower plate were assigned to ground. The liquid-dielectric junction was assigned as a continuous interface and is hence governed by the continuity condition n·(J1 − J2) = 0, where n is the normal vector and J1, J2 are the current density vectors of the two adjoining materials. In reality, the space charge region at the liquid-dielectric junction, together with polarization, contributes to the overall impedance. However, for simplicity these contributions were neglected in this simulation. Further, a "normal" mesh size was used for the mesh elements in the simulation. The sensor was analyzed in the "time harmonic, small current" mode and solved using UMFPACK. The simulated results for the potential distribution and electric field are shown in Figure 1(B).
The top panel of Figure 1(B) shows the electric field distribution, represented by the streamlines, while the color gradient represents the potential distribution at various regions in the capacitor. The energy (E) stored by the capacitor was obtained using the post-processing operation of the quasi-static simulation in COMSOL and is given by Expression 2. Consequently, Equation (3) gives the capacitance from the stored energy. The simulated capacitance for the modeled capacitor was obtained as 4.608 pF. Further, the capacitance of the system obtained after the addition of guard rings (held at ground potential) around the capacitor plate (shown in the bottom panel of Figure 1(B)) was computed using Expression 3 and found to be 4.1026 pF. As expected, the capacitance for this system was lower than that of the sensor without the guard ring. The reduction in the computed capacitance was 10.9%.
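A small sketch of the post-processing step described above: the capacitance follows from the simulated field energy through the standard relation C = 2E/V², and the guard-ring effect can be checked against the two reported values. The energy and drive voltage in the example are placeholders chosen only for illustration.

```python
def capacitance_from_energy(energy_joules, applied_voltage):
    """Capacitance recovered from the stored electrostatic energy: C = 2*E/V**2."""
    return 2.0 * energy_joules / applied_voltage ** 2

# Placeholder energy/voltage consistent with a ~4.6 pF capacitor driven at 5 V.
print(capacitance_from_energy(5.76e-11, 5.0))  # -> ~4.6e-12 F

# Relative reduction of the simulated capacitance once the guard ring is added.
c_plain, c_guarded = 4.608e-12, 4.1026e-12   # farads, values quoted in the text
print(100.0 * (c_plain - c_guarded) / c_plain)  # ~11 % drop, matching the reported 10.9 %
```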
Employing Twin-T Oscillator Circuit
The capacitive sensor is part of a Twin-T oscillator detection circuit. The Twin-T circuit consists of two arms, a high-pass arm and a low-pass arm. The advantage of the Twin-T oscillator over other single-capacitor circuits is a lower-distortion sine wave output. This circuit functions as a notch filter by eliminating a particular frequency from the incoming signal [19], which can be achieved by adjusting the values of the constituent elements of the circuit. The schematic of the oscillator circuit is shown in Figure 2(A). It was observed that at ~64 kHz (set by the circuit components) the phase of the output undergoes a 180° shift. For sustained oscillations, this filter can be configured as an oscillator by introducing an additional 180 degrees of phase shift around a closed loop.
In the present work, the capacitors in the circuit were replaced with the capacitive sensors fabricated in-house, which act as the salinity sensor. Also, the resistors used have a tolerance of 2%, to minimize drift in the oscillator response. This kind of circuit, incorporating R and C components, can also be utilized in applications requiring low frequencies of operation.
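For orientation, the null frequency of an idealized, symmetric Twin-T network (resistors R, R, R/2 and capacitors C, C, 2C) is f₀ = 1/(2πRC). The component values below are hypothetical and chosen only to land near the ~64 kHz notch quoted above; they are not the actual values used in the circuit.

```python
import math

def twin_t_notch_frequency(resistance_ohms, capacitance_farads):
    """Null frequency of an ideal symmetric Twin-T network: f0 = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * resistance_ohms * capacitance_farads)

# Hypothetical R and C giving a notch close to the ~64 kHz mentioned in the text.
print(twin_t_notch_frequency(540e3, 4.6e-12))  # ~64 kHz
```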
Alternate Detection Circuit
To compare with the response of the system employing the Twin-T oscillator, an alternative circuit for capacitive salinity sensing was developed based on frequency-to-voltage (F-V) converters. This design employs only one capacitor and does not require matching of circuit components. A block diagram of the circuit, which includes a difference amplifier (LM6142), is shown in Figure 2(C).
In the schematic shown (Figure 2(C)), the square wave required for the operation of the circuit was generated using a 555 timer. This was done to ensure that the entire circuit could be driven from a single voltage source of 10 V. A 20-turn precision potentiometer R18 was used for calibration. The potentiometer adjusts the pulse width of the one-shot internal to the reference F-V converter U2 during the calibration of the circuit. The expression governing the output voltage of the F-V converters is given by Equation 4 [20].
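The drift-cancellation idea behind this topology can be sketched as follows. The linear conversion gain stands in for Equation 4, whose exact form and component values are not reproduced in this text, so every number below is a placeholder rather than a value from the actual circuit.

```python
def fv_output_voltage(frequency_hz, gain_volts_per_hz):
    """Idealized frequency-to-voltage converter: V_out proportional to f_in."""
    return gain_volts_per_hz * frequency_hz

def salinity_circuit_output(sensor_freq_hz, reference_freq_hz,
                            gain_volts_per_hz, amplifier_gain=1.0):
    """Difference-amplifier output of the two-converter scheme of Figure 2(C).

    Any common drift of the 555-timer drive frequency shifts both converter
    outputs equally and cancels in the subtraction.
    """
    v_sensor = fv_output_voltage(sensor_freq_hz, gain_volts_per_hz)
    v_reference = fv_output_voltage(reference_freq_hz, gain_volts_per_hz)
    return amplifier_gain * (v_sensor - v_reference)

# Placeholder numbers: a 410 Hz shift with a 1 mV/Hz conversion gain gives ~0.41 V.
print(salinity_circuit_output(64_410.0, 64_000.0, gain_volts_per_hz=1e-3))
```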
Fabrication of Conductivity Sensor
The sensor consists of two parallel copper-coated FR-4 plates. One of the plates has a dimension of 2 × 1 cm and the other plate has a slightly larger dimension (2.1 cm × 1.1 cm), for alignment tolerance and bonding. The sensor plates were fabricated using copper-clad FR-4 substrate boards. The process flow steps for the fabrication are depicted in Figure 3. The first step in the fabrication of the sensor involves lithography. Electrode patterns were transferred using photolithography onto positive-photoresist-coated FR-4. The pattern was developed in aqueous KOH (9 grams of KOH per liter of deionized water) at 50°C for 15 seconds. After development, the exposed copper was etched using ferric chloride solution. Subsequently, the photoresist over the copper was dissolved using acetone. Next, a thin layer of a dielectric material was coated on the plates. Two polymeric materials (Novolac and Honeywell's Accuflo™) were evaluated to determine their effectiveness as corrosion inhibitors in ocean water.
Novolac resin was spin cast at 1000 rpm for 60 seconds to yield a thickness of ~5 microns. The resin was then hard baked on a hotplate at 140°C for 90 seconds. The top and bottom plates, fabricated separately, were aligned face to face with a spacing of ~600 µm using spacers and bonded using photoresist. A similar methodology was adopted for the Accuflo coating on the electrodes.
Electrochemical Impedance Spectroscopy Analysis of Electrode Passivation
Electrochemical Impedance Spectroscopy (EIS) is a powerful technique with a wide range of applications, from material characterization to corrosion monitoring. Depending upon the type of EIS response and the prior knowledge of the system under test, one can arrive at plausible conclusions about the state of the system. Usually, for analysis, the real part of the impedance is plotted against the imaginary part (complex plane impedance plot). Such a plot usually gives a semicircular arc, which in the context of corrosion can be interpreted as the electrochemical response of a corroding metal in a conductive solution.
Further, the solution resistance is given by the high-frequency intercept of the semicircle (closest to the imaginary axis), and the charge transfer resistance is estimated by the diameter of the semicircle. Based on the estimation of these parameters, many physical quantities, such as solution conductivity and exchange current density, can be determined.
In the present work, the EIS technique was employed for the salinity sensor to understand the bulk and interfacial phenomena. An Agilent 4294A impedance analyzer was used to record the impedance spectra of the parallel plate sensor system. Complex nonlinear least squares (CNLS) fitting was used to analyze the generated experimental data.
Novolac AZ and Accuflo Coated Parallel Plate Salinity Sensor
Figure 4(A-a) shows the complex plane impedance diagram of a copper electrode coated with Novolac AZ in 34.471 PSU (practical salinity unit) sea water. In Figure 4(A), the generation of a semicircle in the complex impedance plane is attributed to the convolution of the coating and corrosion responses [21]. Figure 4(A-b) shows the equivalent circuit used for data fitting and parameter extraction; it includes the coating elements and the interfacial elements [22-24]. The model consists of the solution resistance in series with a parallel combination of the coating capacitance and the pore resistance. Each pore provides an ionic pathway; hence, electro-migration and diffusion proceed in the pores, leading to charge transfer at the electrode and double layer formation. The overall impedance of the equivalent circuit of Figure 4(A-b) follows from this nested series/parallel combination, where R pore is the pore resistance, R ct is the charge transfer resistance, C coat is the coating capacitance, C dl is the double layer capacitance and R sol is the solution resistance.
Figure 4(B-a) shows the complex plane impedance plot of the Accuflo-coated copper electrode. The shape of the curve indicates a low-frequency constant phase element (CPE) process in series with a parallel combination of pore capacitance and pore resistance. Figure 4(B-a) and the circuit (Figure 4(B-b)) reveal good coating properties, indicated by the absence of a charge transfer resistance, the element that represents the corrosion process. The circuit shown in Figure 4(B-b) was used to model the impedance data of Figure 4(B-a). The overall impedance of this circuit follows from the same type of series/parallel combination, with the capacitances replaced by constant phase elements, where ω is the angular frequency. The element CPE1 is a constant phase element that represents the pore capacitance. The impedance of a constant phase element is given by the expression 1/[A(jω)^n], where A is the magnitude of the element, ω = 2πf is the angular frequency, and f is the frequency in Hertz. The parameter n is such that if n = 1 the impedance of a CPE is that of an ideal capacitor, and when n = 0 the CPE is a pure resistor. The parameter CPE2 represents the electrical double layer effects.
Even though the use of CPEs yielded a better fit quality than discrete capacitances, the CPE exponents n are close to unity, indicating proximity to pure capacitor behavior. The overall impedance response might also be affected by polymer film relaxation [25,26], introducing a slight frequency dependence to the film and interfacial capacitances.
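The two equivalent circuits can be turned into explicit impedance functions for CNLS fitting. The exact expressions referenced above are not reproduced in this text, so the nested series/parallel forms below are inferred from the written descriptions and should be read as a hedged sketch rather than the authors' exact model; the parameter values are placeholders.

```python
import numpy as np

def z_cpe(omega, magnitude, exponent):
    """Constant phase element: Z = 1 / (A * (j*omega)**n)."""
    return 1.0 / (magnitude * (1j * omega) ** exponent)

def z_novolac_model(omega, r_sol, c_coat, r_pore, c_dl, r_ct):
    """Fig. 4(A-b): R_sol in series with [C_coat || (R_pore + (C_dl || R_ct))]."""
    z_interface = r_pore + 1.0 / (1j * omega * c_dl + 1.0 / r_ct)
    return r_sol + 1.0 / (1j * omega * c_coat + 1.0 / z_interface)

def z_accuflo_model(omega, r_sol, cpe1, r_pore, cpe2):
    """Fig. 4(B-b): R_sol in series with [CPE1 || R_pore] and a blocking CPE2."""
    z_pore = 1.0 / (1.0 / z_cpe(omega, *cpe1) + 1.0 / r_pore)
    return r_sol + z_pore + z_cpe(omega, *cpe2)

# Placeholder parameters, useful only to visualise the qualitative Nyquist shapes.
omega = 2.0 * np.pi * np.logspace(-1, 5, 5)
print(z_novolac_model(omega, r_sol=50.0, c_coat=1e-9, r_pore=1e4, c_dl=1e-6, r_ct=1e5))
print(z_accuflo_model(omega, r_sol=50.0, cpe1=(1e-8, 0.95), r_pore=1e7, cpe2=(1e-6, 0.9)))
```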
Optical Characterization of Novolac Resin and Accuflo Coated Sensors
Two sensors, one coated with Novolac and the other coated with Accuflo, were characterized using optical observations (data not shown). Observations after 48 hours of immersion in an ocean water sample at room temperature indicated that the Novolac coating was severely compromised, whereas the Accuflo coating maintained its structural integrity during the period of exposure. These observations are consistent with the EIS analysis of the two polymers, where it was found that the pore resistance of Novolac was three orders of magnitude lower than that of Accuflo. The pore resistance represents the cross-sectional dimension of the electrolytic pathway that connects the electrolyte to the electrode metal. A larger pore resistance means a smaller pathway and hence less corrosion. The shape of the Nyquist plot itself provides a good inferential tool to predict coating failure. For example, the low-frequency portion of the complex plane impedance plot of Accuflo is projected away from the real axis, as opposed to that of Novolac, which approaches the real axis. The approach to the real axis indicates a resistive mechanism at low frequencies, which is the corrosion mechanism. A straight-line-like low-frequency behavior parallel to the imaginary axis, such as the one seen for Accuflo, indicates capacitive blocking behavior at low frequencies, which indicates an absence of, or a very small amount of, corrosion. Thus, visual observation of these plots can be used as a corrosion diagnostic.
Response of F-V Converter Circuit
From the impedance analysis performed earlier, it was determined that Honeywell Inc.'s proprietary spin-on polymer (Accuflo) is more suitable as a dielectric layer for this application than Novolac. Hence, it was selected as the coating material of choice for the salinity sensor. Experiments were performed to determine its sensitivity using the frequency-to-voltage conversion scheme discussed in section 2.2.
From Figure 6, it is clear that the output voltage decreases with increasing salinity. This is because the output voltage is proportional to the sensor capacitance. For a change in salinity of 0.07 PSU, the change in output voltage was 0.7 mV. Thus, the sensitivity of this detection circuit is 10 mV/PSU. Further, the effect of temperature on the output voltage was studied in this experiment, at a constant salinity of 34.65 PSU. The output voltage was observed to follow a decreasing trend with temperature, due to the capacitance drop with increasing temperature. The sensitivity of this approach to variations in temperature was found to be 6.3 mV/°C.
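The quoted sensitivities are simply ratios of the measured shifts to the corresponding salinity changes; the two-line check below uses the numbers reported in this and the following results paragraphs.

```python
def sensitivity(delta_output, delta_salinity_psu):
    """Sensitivity as output change per unit salinity change."""
    return delta_output / delta_salinity_psu

print(sensitivity(0.7e-3, 0.07))  # F-V circuit: 0.7 mV per 0.07 PSU -> 0.01 V/PSU
print(sensitivity(58.0, 0.14))    # Twin-T with Accuflo: 58 Hz per 0.14 PSU -> ~414 Hz/PSU
```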
Conclusion
A polymer-coated parallel-plate capacitive salinity sensor was fabricated and tested for salinity monitoring in the marine environment. The use of a standard PCB process for sensor fabrication allows low-cost mass fabrication. The EIS investigation of two polymeric materials as possible electrode passivation coatings for corrosion inhibition, namely Novolac resin polymer (Novolac) and a proprietary commercial organic polymer (Accuflo™), reveals that Accuflo maintains its structural integrity, whereas Novolac is severely compromised and the underlying electrode undergoes corrosion, in agreement with optical observations of the sensors after 48 hours of immersion in ocean water. Optimization of the sensor geometry was performed using Finite Element Modeling for fringe field reduction and interference minimization. With the addition of a guard ring around one of the electrodes, the capacitance was observed to fall by about 11%. Among the salinity detection strategies, the Twin-T oscillator based detection circuit provided a sensitivity of 410 Hz/PSU and enhanced frequency stability of the oscillator, with Accuflo as the electrode barrier layer, whereas the adoption of the frequency-to-voltage conversion approach eliminated the effects of drift in the source frequency, with a sensitivity of 10 mV/PSU. The results indicate that an Accuflo coating combined with the Twin-T oscillator design can be used as an effective sensor for monitoring salinity under the harsh conditions of sea water.
Acknowledgement
The authors are thankful to the University of South Florida, Tampa, FL, USA, for providing the facilities and funding to carry out the experimental work for this study.
Figure 1. (A) Illustration of a capacitor incorporating a Kelvin guard ring; (B-1) potential and electric field distribution of the capacitive sensor in seawater, (B-2) with guard rings.
Figure 2. (A) Circuit schematic of the Twin-T oscillator; (B) oscillator response simulated using P-spice; (C) block diagram of the circuit employing two frequency-to-voltage converters (F-V).
Figure 3. Process flow steps for fabrication of the sensor.
Figure 4. Complex plane impedance response of the parallel plate conductivity sensor system with (A)(a) Novolac coating and (b) the equivalent circuit used to parameterize the impedance data; (B)(a) Accuflo and (b) the equivalent circuit used to parameterize the impedance data. Points represent measured data and the line shows the fitting curve.
Figure 5(A) shows the response of the Twin-T oscillator with the Novolac resin coating to variations in the salinity of sea water. The frequency drift in the output at constant solution salinity was found to be ~400 Hz. This large frequency drift indicates the poor dielectric characteristics of Novolac. The frequency of the oscillator was observed to increase with increasing salinity. The oscillator frequency is inversely proportional to the capacitance of the sensor; hence, as salinity increases, the capacitance decreases, resulting in increased frequencies. The change in frequency was observed to be ~500 Hz for a 0.0875 PSU change in salinity. It is observed from Figure 5(A) that, as temperature increases, the oscillator frequency increases. This is because the sensor's capacitance decreases with increasing temperature; hence, the frequency of the oscillator increases with temperature. The frequency was observed to shift by 0.12 kHz for every 5°C. Figure 5(B) shows the response of the Twin-T oscillator with Accuflo as the insulation layer for varying salinities at 22°C. The oscillator frequency was observed to increase by 0.058 kHz for every 0.14 PSU change in salinity, which implies a sensitivity of 0.41 kHz/PSU. Further, with Accuflo as the insulation layer, good stability was achieved in the output of the oscillator. The frequency was observed to drift by only ±10 Hz about a mean value at constant salinity and temperature.
Figure 5. Twin-T oscillator frequency vs salinity at 22°C and frequency vs temperature at 34.471 PSU salinity for (A) Novolac and (B) Accuflo.
Figure 6. Output voltage versus salinity at 22°C and output voltage versus temperature at 34.65 PSU.
Membranes in AdS4 orientifold vacua and their Weak Gravity Conjecture
We study type IIA orientifold compactifications with fluxes that give rise to perturbatively stable, non-supersymmetric AdS$_4$ vacua with D6-brane gauge sectors. Non-perturbative instabilities can be mediated by D8-branes wrapped on the six internal dimensions $X_6$, if they reduce to 4d membranes with a charge $Q$ larger than their tension $T$. The mismatch $Q \neq T$ arises due to i) curvature corrections and ii) the BIon backreaction of D6-branes wrapping 3-cycles of $X_6$. We give a simple expression for the second effect in toroidal orientifolds, and find that only pairs of 3-cycles at SU(2) angles contribute to it. They contribute either towards $Q>T$ or $Q<T$ depending on the 3-cycle separation, allowing one to engineer 4d $\mathcal{N}=0$ vacua in tension with the Weak Gravity Conjecture for membranes.
Introduction and summary
In order to properly describe the string Landscape one not only needs to provide the set of string vacua, but also specify some key properties like their stability. In this sense, the AdS Instability Conjecture [1,2], that proposes that all N = 0 AdS d vacua are at best metastable, is a very powerful statement. The proposal is partially motivated by a refinement of the Weak Gravity Conjecture (WGC) applied to (d − 2)-branes, stating that the WGC inequality is only saturated in supersymmetric setups [1]. In non-supersymmetric ones, and in particular in AdS d vacua supported by d-form background fluxes, there should be a superextremal (d − 2)-brane that nucleates and expands towards the AdS d boundary [3], mediating a non-perturbative decay.
This sort of nucleation has been observed in many different contexts, including type II string flux compactifications to AdS [4][5][6][7][8][9][10][11][12][13], but it is particularly meaningful in those setups where the AdS scale is much lower than the compactification scale, as it allows us to connect with our standard picture of the string Landscape. This highlights the massive type IIA orientifold compactifications to AdS 4 put forward in [14,15], known in the literature as DGKT-like vacua.
These models feature O6-planes and D6-branes wrapping three-cycles of the internal manifold X 6 , and a set of fluxes generating a 4d potential that is particularly simple in the large volume regime [16][17][18]. This simplicity allows one to perform a rather general analysis of the set of vacua even when X 6 corresponds to a general Calabi-Yau manifold [19][20][21]. From this analysis one encounters a family of supersymmetric vacua and two universal families of non-supersymmetric vacua. Finally, by computing the 4d flux induced spectrum and using scale separation one concludes that the latter are perturbatively stable [21]. It thus remains to check their nonperturbative stability.
Given that these are examples of AdS 4 vacua supported by 4d fluxes, the proposal of [1] gives clear candidates to mediate non-perturbative decays, namely 4d membranes coupled to such fluxes, with a charge Q and tension T such that Q > T. The most obvious case is D4-branes wrapping (anti)holomorphic two-cycles of X 6, in the family of non-supersymmetric vacua that are related to the supersymmetric ones by a sign flip of the internal four-form flux. The membrane charge and tension for such D4-branes were computed in the probe approximation in [22,23], for both supersymmetric vacua and the said non-supersymmetric cousins. It was found that in both cases the 4d membranes satisfy the BPS relation Q = T, which corresponds to a marginal decay and not to an actual instability. However, such a computation uses a Calabi-Yau metric for X 6, which is only an approximation of the actual background. A more precise background that takes into account the backreaction of localised sources can be found following [24,25]. Performing the computation in this new background can be understood as including one-loop corrections to the charge and tension of the membrane. This exercise was carried out in [26], finding that Q = T also at this level. Therefore, checking the proposal of [1] in this case would require a more accurate description of the background or perhaps alternative techniques as in [27], and as of today it remains an open problem.
Nevertheless, it was argued in [26] that a second kind of 4d membranes exist which mediates non-perturbative decays. These new membranes are made of D8-branes wrapping the internal manifold X 6 , and have space-time filling D6-branes attached to them. At leading order they satisfy the BPS equality Q = T , but at the level of one-loop corrections and for N = 0 vacua this is no longer true, there being two sources of correction to this equality. The first source is the correction to the D8-brane worldvolume action due to the curvature of X 6 , that induces a negative D4-brane charge and tension specified by the second Chern class of X 6 . For the N = 0 vacua of interest this correction is such that ∆ curv D8 (Q − T ) > 0, favouring the nucleation of the membrane towards the AdS 4 boundary. The second correction is harder to compute, as it involves the worldvolume flux induced by the BIon-like backreaction of localised objects, namely the D6-branes ending on the D8-brane. In spite of this, such a correction was computed for the geometry X 6 = T 6 /(Z 2 × Z 2 ) assuming a naive, symmetric distribution of D6-branes, finding that ∆ BIon D8 (Q − T ) > 0. As a result, in this particular case one finds that Q > T and the conjecture of [1] is verified for D8-branes. If it happened that ∆ BIon D8 (Q − T ) > 0 for arbitrary geometries and D6-brane configurations, one could extend this statement to all N = 0 vacua of this sort, or in other words verify the refined WGC for 4d membranes made out of D8-branes.
In this work we undertake a more general study of ∆ BIon D8 , considering orientifolds of the form X 6 = (T 2 × T 2 × T 2 )/Γ with different orbifold groups Γ and D6-brane configurations.
Remarkably, we find that for certain D6-brane configurations ∆ BIon D8 < 0, even in the simple geometry X 6 = T 6 /(Z 2 × Z 2 ). The key ingredient to achieve this negative sign seems to be the presence of localised sources that do not intersect, and in particular non-intersecting O6-planes.
Indeed, from our analysis one can derive some lessons that we expect to be valid in more general geometries. First, ∆ BIon D8 can be split into several contributions, one for each pair of localised sources, or in other words for each pair of three-cycles Π α and Π β wrapped by D6-branes, which can always be placed on top of O6-planes. Second, if these two three-cycles have a non-vanishing intersection number or lie in homology classes that are proportional to each other, then the pair does not contribute to ∆ BIon D8 . Non-vanishing contributions occur instead when Π α and Π β are related by an SU (2) rotation in X 6 . This includes cases where they intersect over a one-cycle, and cases where they do not intersect at all. In these cases the open-string spectrum between Π α and Π β typically arranges itself in N = 2 multiplets, and so we dub these kinds of pairs as N = 2 sectors of the compactification. Third, in the simple case where D6-branes are placed on top of O6-planes we obtain the simple formula (1.1). Here α, β run over the different smooth three-cycles that appear in the covering space T 6 , which in our setup are nothing but the O6-plane locations Π O6 = ∪ α Π α , and q̂ α ∈ N is the jump in the number of D6-branes wrapped on Π α when we cross a single D8-brane. The sum only selects those pairs that are related by an SU (2) rotation: #(Π α ∩ Π β ) a ≥ 0 is the 'number of intersections' on the (T 2 ) b × (T 2 ) c where this rotation happens and T a D4 is the tension of a D4-brane wrapping the remaining (T 2 ) a . In particular, if Π α and Π β intersect over one-cycles over (T 2 ) a , then #(Π α ∩ Π β ) a is the number of such intersections. Finally, N Γ is the order of the orbifold group and ε αβ is an integer that distinguishes between the two possibilities that we encounter in our examples. If Π α and Π β intersect over one-cycles then ε αβ = 2, and if they do not intersect at all then ε αβ = −1. In other words, D6-branes on top of O6-planes that intersect contribute towards ∆ BIon D8 > 0, and therefore towards the non-perturbative instability of the vacuum, while those on top of O6-planes that do not intersect have the opposite contribution.
It follows that if one considers D6-brane configurations with N = 2 pairs where none of them intersect, as is possible in blown-up orbifold geometries, one necessarily has that ∆ BIon D8 < 0. Presumably, the same could happen in more general Calabi-Yau geometries where O6-planes do not intersect. In those cases, one needs to ensure that ∆ curv D8 > |∆ BIon D8 | in order to satisfy the refined WGC for 4d membranes made out of D8-branes. While in most of the examples that we have analysed this is the case, one can also engineer vacua in which ∆ curv D8 + ∆ BIon D8 < 0. Notice that this does not determine the non-perturbative stability of the N = 0 vacuum, as nucleation of D4-branes wrapping internal two-cycles could still be favoured, but it nevertheless selects potential counterexamples to the WGC for 4d membranes, up to some caveats that we comment on. It would be important to establish whether the WGC is violated or not for a subfamily of type IIA flux vacua, as this could affect our picture of the string Landscape.
The rest of the paper is organised as follows. In section 2 we review the AdS 4 compactifications of interest and the computation of 4d membrane charges and tensions in them. In section 3 we summarise how to compute the BIonic excess charge ∆ BIon D8 in toroidal orientifolds, based on the explicit computations of section 4. Given this expression for ∆ BIon D8 we provide a simple example in which ∆ curv D8 + ∆ BIon D8 < 0. Due to flux quantisation conditions, such an example must be engineered in a blown-up T 6 /(Z 2 × Z 2 ) geometry, discussed in appendix A, and whose second Chern class is computed in Appendix B. We finally draw our conclusions in section 5.
AdS 4 orientifold vacua
To construct non-supersymmetric AdS 4 backgrounds let us consider type IIA string theory compactified on a Calabi-Yau three-fold X 6 . To this background we apply an orientifold quotient generated by Ω p (−1) F L R, with Ω p the worldsheet parity reversal operator, F L the left-movers space-time fermion number and R an anti-holomorphic involution of X 6 , that acts as RJ CY = −J CY and RΩ CY = −Ω CY on its Kähler 2-form and holomorphic 3-form, respectively. The fixed locus Π O6 of R is made of one or several smooth 3-cycles of X 6 , hosting O6-planes. The presence of O6-planes reduces the background supersymmetry to 4d N = 1, and induces an RR tadpole that can be cancelled by a combination of D6-branes wrapping special Lagrangian three-cycles [28][29][30][31], D8-branes wrapping coisotropic cycles with fluxes [32], and background fluxes including the Romans mass. For simplicity, in the following we will consider that the Dbrane content consists of D6-branes placed on top of the O6-planes or in another representative of the same homology class. The remaining RR tadpole is then cancelled by the presence of backgrounds fluxes, yielding either a 4d N = 1 or N = 0 vacuum.
In general, the effect of the background fluxes is two-fold. On the one hand they generate a potential that stabilises the moduli of the Calabi-Yau orientifold compactification, yielding families of supersymmetric and non-supersymmetric vacua. On the other hand they generate a warp factor and a varying dilaton, and deform the background away from the Calabi-Yau metric. The first effect was analysed from the four-dimensional viewpoint in [14,15] for toroidal geometries, and more recently in [21] for general Calabi-Yau geometries. The second one was addressed in [24,25] (see also [33,34]), where the equations of 10d massive type IIA supergravity were expanded in either g s (the average value of the 10d dilaton e φ ) or μ = ℓ s /R (the AdS 4 scale in the 10d string frame in units of the string length ℓ s = 2π √ α′ ). The solution for the first terms of this expansion was given quite explicitly in [25] for supersymmetric vacua, while the same degree of accuracy was extended to one family of non-supersymmetric vacua in [26]. Finally, it was pointed out in [26] that 4d membranes made up of D8-branes wrapped on X 6 are natural candidates to mediate a non-perturbative instability in such a family of 4d N = 0 vacua.
To properly understand this last point let us briefly review some of the results of the above references. One important ingredient is the flux background of these compactifications, that can be conveniently described using the democratic formulation of type IIA supergravity [35], in which all RR potentials are grouped in a polyform C = C 1 + C 3 + C 5 + C 7 + C 9 and so are their gauge invariant field strengths Here H is the three-form NS flux, d H ≡ (d − H∧) is the H-twisted differential andḠ a formal sum of closed p-forms on X 6 . The Bianchi identities read where Π α hosts a D-brane source with a quantised worldvolume flux F α , and δ(Π α ) is the bump δ-function form with support on Π α and indices transverse to it, such that p−9 s δ(Π α ) lies in the Poincaré dual class to [Π α ]. O6-planes contribute as D6-branes but with minus four times their charge and F α ≡ 0. Finally, λ is the operator that reverses the order of the indices of a p-form.
In the presence of D6-branes and O6-planes the Bianchi identities for the RR fluxes read where we have defined δ D6/O6 ≡ −2 s δ(Π D6/O6 ). This in particular implies that where we assume that the NS flux H is closed. Here m = s G 0 ∈ Z is the quantum of Romans mass, and N α the number D6-branes wrapping a three-cycle in the homology class [Π D6 α ]. The 4d vacua analysis yields the following conditions on the internal background fluxes: where ω a ,ω a are integral harmonic two-and four-forms of X 6 such that −6 s X 6 ω a ∧ω b = δ b a , and K a = X 6 J CY ∧ J CY ∧ ω a . Here = −1 describes supersymmetric backgrounds, while = 1 corresponds to non-supersymmetric vacua. Finally we have that relating the AdS 4 radius with the average 10d dilaton value, which is in turn fixed by the equations (2.5), and is a small parameter for those solutions with large Calabi-Yau volume V CY .
With these data and using pure spinor techniques an approximate solution to the 10d massive type IIA supersymmetry equations was found in [25], as the first terms of an expansion in g s .
Similarly, following the more general approach of [24], an approximate solution to the 10d equations of motion corresponding to the non-supersymmetric vacua with = 1 was found in [26]. Both of these 10d backgrounds display a warped metric of the form with A a function on X 6 . The metric on X 6 is not Calabi-Yau, but a deformation to a SU (3) × SU (3) structure metric. Such a deformation is described in terms of a (2,1) primitive current k and a real function ϕ that satisfies X 6 ϕ = 0. More precisely, one obtains the following metric background and dilaton profile , Indeed, let us express the RR two-form flux in terms of a three-form current K as G 2 = d † CY K, so that its Bianchi identity reads have a solution of the form [25,36] K α,η = ϕ α,η Re Ω CY + Re k α,η , (2.12) and by linearity of the equation (2.9) one can express K as and so the quantities ϕ and k that determine the background (2.8) are given by ϕ = α,η q α,η ϕ α,η and k = α,η q α,η k α,η , respectively. In particular we have that where δ α,η ≡ * CY (Im Ω CY ∧ δ(Π α,η )), V CY = − 1 6 −6 s X 6 J 3 CY is the Calabi-Yau volume and s Πα Im Ω CY . As a result ϕ ∼ − qα,η r in the vicinity of a Π O6 α,η . If the localised charge is negative it describes a small region where the 10d string coupling blows up, the warp factor becomes negative and, as expected, the supergravity approximation cannot be trusted.
Let us consider a simplified setup in which all localised sources wrap three-cycles determined by the O6-plane locus. We describe the O6-plane locus as a union of several smooth three-cycles (2.15) where the index α runs over different homology classes and η over the p α different representatives of the same homology class: Then we consider D6-branes that wrap three-cycles on the same homology classes, that is we take One may further assume that all D6-branes lie on top of O6-planes, so Π D6 α,η = Π O6 α,η . An advantage of this further simplification is that on top of the O6-planes one can always have a vanishing worldvolume flux for the D6-brane, which is a necessary condition for a vacuum. If we displace such a D6-brane away from the O6-plane location the presence of the H-flux will generically induce a B-field in its worldvolume, that will generate a dynamical tadpole. 1 Then, in an analogous fashion to [38], the WGC could be violated due to the lack of equilibrium. Our choice avoids such a possibility.
To sum up, we consider a setup in which the three-cycles Π α,η in (2.10) correspond to those in (2.15). As a result ] ∈ H 3 (X 6 , Z), we only need to require that hM, N M ∈ Z, as will happen in the toroidal orientifold geometries that we will analyse in the following sections. Additionally, the 4d analysis on vacua conditions requires that mh and N, N α,η are non-negative, so that there is a finite number of solutions to the tadpole equation. 1 In general there will be a discretum of other representatives within [Π O6 α ] besides the O6-plane locus where the D6-brane worldvolume flux can vanish, similarly to the open string landscape in [37]. Our discussion below can be easily extended to include those D6-brane locations as well.
The approximate flux background is also described in terms of ϕ and k. We have that where in the supersymmetric case 19) and in the non-supersymmetric case
4d membranes
In this background, one may consider branes that correspond to membranes in 4d. There are three different kinds of such membranes that are BPS objects in N = 1 vacua. D8-branes wrapping the whole internal manifold X 6 , NS5-branes wrapping special Lagrangian three-cycles of X 6 and D4-branes wrapping (anti)holomorphic two-cycles of X 6 .
Let us consider a D4-brane wrapping an (anti)holomorphic two-cycle Σ of X 6 . Crossing such a membrane in 4d induces a change in the quanta of the internal four-form flux, scanning over the infinite family of flux vacua found in [14]. To see if such a membrane induces a nonperturbative instability one can dimensionally reduce the DBI+CS action of the D4-brane in the probe approximation, as done in [22,23]. This can be interpreted as computing the 4d membrane charge Q and tension T , and if Q > T one expects an instability similar to the one of [3]. This computation was performed in [22,23] for D4-branes in both cases = ±1, in the smeared approximation. This corresponds to only consider the leading terms of the background expansion (2.8) and (2.18), which yield a Calabi-Yau metric, and more precisely to set ϕ = k = 0 in those expressions. The computation was extended to the corrected backgrounds in [26], which can be interpreted as a one-loop correction to the DBI+CS expressions of [22,23], and more precisely to the effect of a crosscap diagram between such D4-branes and the O6-planes. At this level of accuracy it was found in [26] that in 4d Planck units where η = sign G 0 , , a are as in (2.19) and (2.20) and K is the 4d Kähler potential. By appropriately choosing the orientation of Σ, or equivalently by considering D4-branes or anti-D4-branes on holomorphic cycles, one can get Q D4 = T D4 , which correspond to marginal domain walls, but not Q D4 > T D4 . Thus, in order to check the refinement of the Weak Gravity Conjecture made in [1] one should compute further terms in the background expansion given above.
In models with background D6-branes, that is with N > 0 in (2.17), there is second kind of 4d membranes obtained from D-branes that are BPS in N = 1 vacua. These are D8-branes wrapped on the whole of X 6 , whose description is more involved than those of D4-branes. First, they can host harmonic (1,1) primitive worldvolume fluxes F h , which together with the curvature corrections modify the DBI+CS action and induce D4-brane and tension. Taking these two effects into account one obtains a total tension of the form with T D8 = e K/2 V CY and T a D4 = e K/2 t a , where J CY = t a ω a defines the Kähler moduli. Also It is important to notice that in our conventions both K a T a D4 and K F a T a D4 are non-negative quantities. In addition, one can always set K F a = 0 via setting F h = 0. A second important feature of these D8-branes is that they have D6-branes ending on them, to cure the Freed-Witten anomaly induced by the H-flux [39]. In 4d terms, a membrane of this sort induces a jump in the flux quantum m when crossing its worldvolume, so there should be a corresponding jump in N in order to satisfy (2.17) at both sides of the membrane. For a single D8-brane we have the following transition 2 m → m + 1 =⇒ q α,η → q α,η +q α,η with ηq α,η = hp α , (2.25) whereq α,η ≥ 0, and the upper bound q α,η ≤ 4 should always be respected. At the level of accuracy with which we are describing the 10d background, this feature manifests itself as a BIon-like profile developed by the D8-brane [26]. This profile is slightly more involved than the simplest examples [40,41], but it contains similar features. We have a non-closed piece of the D8-brane worldvolume flux that reads 3 We also have a non-trivial profile for the D8-brane transverse coordinate This BIon-like profile also contributes to the D8-brane DBI+CS action, and therefore modifies the 4d membrane charge and tension. In terms of the latter, we have an extra term in (2.23) which resembles the term K F a T a D4 , except that it involves a different component of the worldvolume flux. In the supersymmetric background and for a BPS D8-brane, the three corrections to T D8 also appear in the 4d membrane charge, yielding as expected that T total D8 = Q total D8 . For the non-supersymmetric background with = 1 the same D8-brane develops these corrections but with opposite charge. That is As a result, the excess charge for such membranes reads If the term in brackets is positive for some 4d membrane the refined WGC of [1] is verified, signalling a non-perturbative instability of the non-supersymmetric vacuum. As mentioned before, the first term inside the bracket is always non-negative, and in fact it is positive away from the boundary of the Kähler cone. The second one is non-positive, but it can always be chosen to vanish by appropriate choice of worldvolume fluxes. It is thus the third one that remains to analyse, which will be the subject of the next section. For concreteness we define the quantity ∆ Bion D8 ≡ −e K/2 1 2 6 that we dub as the BIonic excess charge of the membrane. A priori this quantity is comparable to the effect of curvature corrections, and it is in fact larger for Calabi-Yau geometries near a toroidal orbifold limit. In the next sections we will analyse ∆ Bion D8 precisely for those geometries. Remarkably, we find a very simple expression, that suggests generalisation to arbitrary Calabi-Yau geometries of the form ∆ Bion D8 = D a T a D4 , where D a depend on discrete data.
Toroidal orientifolds
In this section we specify the above setup to toroidal Abelian orbifolds of the form T 6 /Z N or T 6 /(Z N × Z M ), where the six-torus factorises as T 6 = (T 2 ) 1 × (T 2 ) 2 × (T 2 ) 3 and the orbifold action respects the factorisation. As we show in the next section, for these geometries one can compute the quantity (2.31) explicitly, obtaining a simple general expression.
In the following we will summarise this expression and discuss its consequences for the stability of AdS 4 vacua with different D6-brane configurations.
The BIonic excess charge
In toroidal Abelian orbifolds of the form just described, the O6-plane content in the covering space T 6 is characterised by a set of factorisable three-cycles Π O6 α,η . Here α runs over different homology classes in the covering space, specified by the wrapping numbers (n i α , m i α ) ∈ Z 2 of each factorisable three-cycle on (T 2 ) i . The index η runs over different representatives in the same homology class, giving rise to a multiplicity p α . If we place the existing D6-branes on top of the O6-planes, more precisely N α,η of them on top of Π O6 α,η , the background RR two-form flux is determined by the charges q α,η = 4 − N α,η , with (x i , y i ) the period-one coordinates of (T 2 ) i . From here one can extract the quantities ϕ and k that appear in (2.13), and describe the full background (2.8) and (2.18).
Additionally, given a D8-brane-mediated flux jump of the form (2.25), the BIon-like solution that describes the D8/D6-brane system features a coexact worldvolume flux of the form (2.26).
As a consequence we have that (2.31) takes the form (3.4), namely a sum of contributions ∆ α,η;β,ζ over pairs of three-cycles. From our explicit computations in the next section we moreover obtain the following results: • The integral in (3.4) is non-zero only for pairs of three-cycles with the appropriate relative orientation, which in particular implies that ∆ α,η;α,ζ = 0. In practice, this means that non-vanishing contributions to (3.4) come from N = 2 sectors of the compactification, that is from pairs of D6-branes wrapping three-cycles related by an SU (2) rotation. In our setup, this translates into wrapping numbers (n i α , m i α ), (n i β , m i β ) that coincide in one two-torus (T 2 ) i and differ in the other two. We denote these pairs of three-cycles as N = 2 pairs, see figure 1 for examples. • Given an N = 2 pair (α, η; β, ζ), the integral in (3.4) depends separately on the indices α, β that describe the homology classes [Π O6 α ] and [Π O6 β ], and on the indices η, ζ that specify the representatives. The dependence on α, β corresponds to the number of regions of minimal separation between Π O6 α and Π O6 β , which we dub N = 2 subsectors. For instance, if Π O6 α and Π O6 β intersect over one-cycles, the number of N = 2 subsectors is the number of intersections. To measure this number we define the quantity (3.5), built from the wrapping numbers on the two two-tori (T 2 ) j and (T 2 ) k with i ≠ j ≠ k. When Π O6 α and Π O6 β have parallel one-cycles in (T 2 ) i but they do not coincide, (3.5) does not count intersections, but instead regions of minimal separation between the two three-cycles. In both cases, (3.5) amounts to the number of 'intersections' in the two two-tori where Π O6 α and Π O6 β are not parallel; it is non-vanishing for a single choice of i, and because each N = 2 subsector contributes equally to the integral in (3.4), ∆ α,η;β,ζ is proportional to this number.
• The dependence on the indices η, ζ arises because ∆ α,η;β,ζ is different depending on whether Π O6 α,η and Π O6 β,ζ intersect or not. In general, the contribution of each N = 2 subsector to the integral in (3.4) is proportional to t i , which is the area of the (T 2 ) i selected by (3.5), or in other words the two-torus where Π O6 α,η and Π O6 β,ζ are parallel. The coefficient of the contribution depends on whether these two three-cycles intersect or not. If they intersect over a one-cycle on (T 2 ) i one obtains the result (3.6); if instead Π O6 α,η and Π O6 β,ζ do not overlap, but are only parallel in (T 2 ) i , one obtains (3.7). Integrating over X 6 , we divide both results by the order of the orbifold group Γ, dubbed N Γ . In the toroidal orientifold geometries that we consider in the next section, an N = 2 pair of O6-planes that do not intersect are separated at mid-distance in their common transverse space in (T 2 ) i . When we consider D6-branes wrapped in the same homology classes [Π O6 α ] and [Π O6 β ] but not on top of orientifold planes in (T 2 ) i , their BPS locations form a discretum analogous to the ones in [37,42], because the presence of H-flux implies that only at certain discrete locations can the D6-brane worldvolume flux \mathcal{F} = B|_{Π_{D6}} + \frac{\ell_s^2}{2π} F vanish. In this case, the separation between three-cycles is of the form \frac{\ell_s^2 t_i}{L}\,\frac{k}{2P}, where L is the length of the one-cycle wrapped in (T 2 ) i , P ∈ N is determined by the quanta of H-flux, and 0 ≤ k ≤ 2P is an integer. Given this separation, the contribution of this N = 2 D6-brane pair to the integral −\ell_s^{-6}\int_{T^6} J_{CY} ∧ F_{α,η} ∧ F_{β,ζ} interpolates between the two previous cases, reducing to (3.6) for k = 0 and to (3.7) for k = P.
Adding all these results together, we end up with the expression (3.8) for the BIonic contribution to the 4d membrane excess charge. Here T i D4 = e^{K/2} t_i corresponds to the 4d membrane tension of a D4-brane wrapped on (T 2 ) i , while ε ηζ = 2 for intersecting N = 2 pairs and ε ηζ = −1 for those at mid-distance. Note that in (3.8) the factor of 2 associated with the exchange of F α,η and F β,ζ in (3.4) has already been accounted for, so that we sum over each N = 2 pair only once.
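To make the combinatorics entering (3.8) concrete, the following Python sketch evaluates a sum of this schematic form for a toy configuration of D6-brane stacks. The wrapping numbers, positions, charges q̂ and the overall normalisation are illustrative placeholders rather than data of a specific vacuum, and the numerical prefactors of (3.8) are not reproduced exactly; the sketch only illustrates how N = 2 pairs are identified, how the multiplicity of N = 2 subsectors is counted, and how the sign ε_{ηζ} enters.

```python
from itertools import combinations

# Toy D6-brane stacks: wrapping numbers (n_i, m_i) on each (T^2)_i, a position
# label eta_i in each two-torus, and a charge q_hat.  All values below are
# illustrative placeholders, not data from a concrete vacuum.
stacks = [
    {"w": [(1, 0), (1, 0), (1, 0)], "eta": (0.0, 0.0, 0.0), "q": 2},
    {"w": [(1, 0), (0, 1), (0, 1)], "eta": (0.0, 0.0, 0.0), "q": 2},
    {"w": [(0, 1), (1, 0), (0, 1)], "eta": (0.0, 0.0, 0.0), "q": 2},
    {"w": [(0, 1), (0, 1), (1, 0)], "eta": (0.0, 0.0, 0.5), "q": 2},
]

def n2_pair(a, b):
    """Return (i, multiplicity) if a, b form an N = 2 pair, i.e. their cycles
    are parallel in exactly one two-torus (T^2)_i; the multiplicity counts the
    N = 2 subsectors, in the spirit of (3.5).  Returns None otherwise."""
    I = [a["w"][i][0] * b["w"][i][1] - a["w"][i][1] * b["w"][i][0] for i in range(3)]
    parallel = [i for i in range(3) if I[i] == 0]
    if len(parallel) != 1:
        return None
    i = parallel[0]
    return i, abs(I[(i + 1) % 3] * I[(i + 2) % 3])

def bionic_sum(stacks, t, n_gamma):
    """Schematic (3.8)-like sum: for each N = 2 pair, eps * q_a * q_b times the
    number of subsectors and the Kahler modulus t_i of the common two-torus,
    divided by the orbifold order; overall numerical prefactors are omitted."""
    total = 0.0
    for a, b in combinations(stacks, 2):
        data = n2_pair(a, b)
        if data is None:
            continue
        i, mult = data
        # eps = 2 if the pair intersects over a one-cycle in (T^2)_i (same
        # position there), eps = -1 if it sits at mid-distance.
        eps = 2 if a["eta"][i] == b["eta"][i] else -1
        total += eps * a["q"] * b["q"] * mult * t[i]
    return total / n_gamma

print(bionic_sum(stacks, t=(1.0, 1.0, 1.0), n_gamma=4))
```

Changing the position labels eta so that more pairs sit at mid-distance flips individual contributions from +2 to −1 times the same combinatorial weight, which is the mechanism behind the sign ambiguity of the BIonic excess charge discussed below.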
Finally, we find that in general a D8-brane with a worldvolume flux is not invariant under the orientifold action, and therefore we need to consider two of them. This reflects the fact that in Calabi-Yau orientifolds the quantum of Romans mass must often be even. In fact, if we insist on working with a toroidal orbifold geometry the quantisation conditions for m and other background fluxes become even more restrictive, as we now turn to discuss.
Flux quantisation and blow-up modes
In the absence of localised sources the Bianchi identities (2.2) are quite trivial, in the sense that e −B ∧ C is globally well-defined. The quantisation conditions for the NS and RR fluxes then read as in (3.9). When we include localised sources like D-branes, we need to replace these conditions with Page charge quantisation [43]. Nevertheless, we can still make use of the quanta defined in (3.9), which are in fact the flux quanta used to describe the compactification in the smeared approximation.
Additionally, the presence of O-planes can affect the quantisation of those fluxes that are not sourced by any localised object. Indeed, as pointed out in [44], in type IIB orientifold compactifications that only contain O3-planes with negative charge and tension (dubbed O3 − ) the quanta of NS and RR background three-form fluxes must be even integers. This observation was applied to toroidal orbifold geometries in [45,46], where it was found that three-form flux quanta in the covering space should be multiples of 2M if no flux along collapsed three-cycles was to be involved, with M ∈ Z depending on the particular orbifold.
Clearly, these type IIB orientifold constraints must have a counterpart in our type IIA setup.
Let us for instance take the type IIB setup of [44], with 64 O3 − on a T 6 , together with an NS three-form flux. By performing three T-dualities along {x 1 , x 2 , x 3 } one recovers type IIA on T 6 with 8 O6 − that extend along such coordinates. Assuming a factorised metric, this T-duality does not affect the H-flux that we have considered, and so one concludes that a type IIA H-flux integrated over a three-cycle that intersects an even number of O6 − must be quantised in terms of even integers. The same reasoning can be applied by T-dualising the type IIB RR three-form flux along any three-cycle of T 6 . By doing so, we recover that G 0 , Ḡ 2 , Ḡ 4 , Ḡ 6 should also correspond to even integer quanta in the said type IIA background. In general, we expect a similar statement to apply in a smooth Calabi-Yau geometry X 6 , whenever a p-cycle intersects an even number of O6 − .
The orbifold geometries X 6 = T 6 /Γ that we consider in the next section do contain O6 − , but their homology classes are more involved than that of T 6 . The difference mostly resides in the orbifold twisted sector, which corresponds to a set of cycles that are collapsed in the orbifold limit of a smooth Calabi-Yau. Since they are collapsed, the approximation of diluted fluxes that leads to the solution (2.8) and (2.18) is justified as long as the background fluxes do not have components on the twisted sector. Here is where the logic of [45,46] applies, and as a result the flux quanta computed in the covering space T 6 must be multiples of 2M , for some M ∈ Z.
In the following we will discuss what these quantisation conditions look like in the case of the Z 2 × Z 2 orientifolds mirror dual to the ones considered in [45,46].
Let us consider a Z 2 × Z 2 orbifold over the factorisable six-torus (3.10). The complex coordinate describing each two-torus is written in terms of real coordinates x i and y i of unit periodicity, with u i ∈ R describing the complex structure. The generators of the orbifold group act as in (3.11). We can now apply the orientifold quotient Ω p (−1) F L R, with R an anti-holomorphic involution acting as z i → z̄ i . This generates four different kinds of O6-planes. Depending on the choices made in the quotient one obtains either (h 1,1 , h 2,1 ) orb = (51, 3) or (3, 51); in the second case some of the fixed loci turn out to correspond to O6 + -planes [49]. Thus, in this second case, by placing D6-branes on top of the O6-planes one will never be able to construct a model free of NS tadpoles, even in the presence of fluxes. For this reason in the following we will focus on the case where (h 1,1 , h 2,1 ) orb = (51, 3).
Let us now see what the appropriate flux quantisation is in the Z 2 × Z 2 orientifold with (h 1,1 , h 2,1 ) orb = (51, 3). In the absence of the orientifold projection one can use the results of [45], which show how the integral lattice of three-cycles is built from integer three-cycles of the covering space T 6 . If we now apply our criterion for flux quantisation in the presence of O6 − -planes, we obtain that the H-flux, and in particular a flux of the form considered before, must be quantised in units of 4 from the viewpoint of T 6 . This quantisation in units of four is quite reminiscent of a similar condition for D6-branes.
Indeed, for this choice of discrete torsion the minimal number of covering-space three-cycles needed to build a consistent boundary state is two [52,53]. Then, when introducing the orientifold projection and placing the D6-branes on top of an O6-plane one finds that the gauge group is USp(2N), which means that each D6-brane in the orientifolded theory corresponds to four D6-branes in the covering space [54]. In other words, the charges q α,η that appear in (2.16) are quantised in units of 4.
Let us finally turn to the quantisation of internal RR fluxes. In this case one can directly use the results of [46] on a type IIB mirror symmetric orientifold, because both the RR fluxes and the D-branes that generate them have a simple behaviour under T-duality. It was found in [46] that covering-space RR three-form fluxes must be quantised in units of 8 if one does not want to turn them on along twisted three-cycles. In our type IIA setup, this means that the quanta of Romans mass m and that of four-form flux must also be quantised in units of 8 if one wants to maintain the orbifold geometry T 6 /(Z 2 × Z 2 ). From the type IIA perspective the quantisation in units of 8 of the Romans mass may seem surprising, but one can understand it in terms of the D-brane object that generates G 0 = −1 s m, namely a D8-brane wrapped on the internal space. Such a D8-brane will have induced D4-brane charge in the twisted sector, due to the curvature corrections and the non-trivial B-field at the orbifold point. The results of [45,46] imply that, in order to construct a D8-brane boundary state with no induced twisted charges, one needs four of them in the covering space to form the regular representation of the orbifold group. The orientifold then doubles this number to eight D8-branes. In terms of fluxes, if one wants to have a non-vanishing Romans mass without inducing any four-form flux on the orientifold twisted sector one must impose that m is a multiple of 8.
Notice that these flux quantisation conditions are quite constraining when imposing the tadpole equation (2.17), as they only allow for a solution which contains no D6-branes at all. Thus, a domain-wall transition of the form (2.25) is not allowed starting from this orientifold vacuum, because the Romans mass quantum cannot be any larger, and this applies to both supersymmetric and non-supersymmetric vacua.
Nevertheless, one can apply the same philosophy of [14] and consider orientifold vacua in which the Kähler moduli of the twisted sector have been blown up due to the presence of a four-form flux along them, see Appendix A. In this case we no longer need to impose that m is a multiple of 8, but only impose the orientifold constraint that sets it as an even integer.
Therefore we have a richer set of solutions to the tadpole constraint (2.17), like the families of vacua (3.15) and (3.16), labelled by an integer k. Moreover, if as in [14] we make a choice of four-form flux such that the blow-up Kähler moduli are much smaller than the toroidal ones, then the result (3.8) should be a good approximation for the BIonic D8-brane excess charge in N = 0 vacua. Indeed, when the twisted Kähler moduli are blown up both J CY and F will be modified and so will be ∆ Bion D8 , but one expects an effect that is of the order of the size of the blown-up two-cycles. Therefore, if we blow up the twisted two-cycles but their size remains much smaller than the toroidal Kähler moduli, we expect (3.8) to give us a good approximation of the BIonic D8-brane excess charge.
Given the value of ∆ Bion D8 , one should finally compare it with the curvature contribution ∆ curv D8 , which one can again compute in the orbifold limit. For this computation the relevant intersection number is c 2 (X 6 ).R i , where R i is the sliding divisor defined in Appendix A. Using the results of [55] one finds that c 2 (X 6 ).R i = 24, and therefore the curvature correction takes the value quoted in (3.17). Moreover, each T i D4 is selected twice in the sum (3.8). Applying all these data we obtain (3.14). A transition between these two Z 2 × Z 2 orientifold vacua is not mediated by a 4d membrane arising from a BIonic D8-brane, but instead by a bound state of D8-brane, D4-brane and NS5-brane.
BIon configurations and the WGC
The 4d vacuum of larger energy is N = 1 Minkowski, and the membrane bound state is BPS and satisfies a no-force condition regardless of whether we jump to a N = 1 or N = 0 AdS 4 vacuum, as expected from the general results of [56,57].
Transitions mediated by a BIonic D8-brane for instance arise when increasing the value of k in the family of vacua (3.15) and (3.16) which, as explained, take us away from the orbifold limit.
If we are in a non-supersymmetric vacuum of the sort discussed in section 2, the BIon excess charge should be computed to a good approximation by (3.8), which will depend on how the D6-branes are arranged before and after the jump. In general we will have 8(4 − 2kh) D6-branes distributed in groups of 4 on the three-cycles Π O6 α,η within each homology class in (3.13). For simplicity, we may consider the case where for each value of α all 8(4 − 2kh) D6-branes are on a single three-cycle, that is on a given choice of η. For instance, one may consider the case where such D6-branes are on top of the four O6-planes that go through the origin, which corresponds to selecting η = (0, 0, 0) for each value of α, as represented in figure 2, and to the charge assignment (3.19). Then one can apply (3.8) to compute the BIon excess charge of a single D8-brane, without taking into account its orientifold image. In this case each two-torus is selected twice by the pairwise intersections, and one obtains the positive result (3.20), signalling an instability of the vacuum.
Figure 2: D6-brane configuration leading to (3.19). In red are the O6-planes with D6-branes on top of them.
One can also consider a configuration in which the D6-branes do not intersect among each other, like for instance in figure 3, with the corresponding non-vanishing charges given in (3.21) and all other q̂ α,η vanishing. Because there is no pair of BIon sources that intersect, ε η,ζ = −1 and the contributions to (3.8) are all negative; more precisely we recover the result (3.22). Taking into account the curvature correction effect (3.17), one concludes that, for h = 1, ∆ curv D8 + ∆ Bion D8 < 0. In this case there is an excess tension for the corresponding 4d membrane, which then does not satisfy the inequality of the Weak Gravity Conjecture. As far as D8/D6-systems are concerned, such a 4d non-supersymmetric vacuum seems non-perturbatively stable.
Figure 3: D6-brane configuration that leads to (3.21). In red are the O6-planes with D6-branes on top of them.
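As a consistency check of the two opposite conclusions just drawn, one can combine the coefficients D_i quoted in the next subsection (D_i = 8h²/3 for the configuration of figure 2 and D_i = −4h²/3 for that of figure 3) with the orbifold-limit curvature correction. Assuming, as the computation in Appendix B suggests, that the latter equals one half of the sum of the toroidal D4-brane tensions, one finds for h = 1

$$\Delta^{\rm curv}_{D8} + \Delta^{\rm Bion}_{D8} \,=\, \sum_{i}\Big(\tfrac{1}{2} + D_i\Big)\,T^{i}_{D4}\,, \qquad \tfrac{1}{2} + \tfrac{8}{3} \,=\, \tfrac{19}{6} \,>\,0\,, \qquad \tfrac{1}{2} - \tfrac{4}{3} \,=\, -\tfrac{5}{6}\,<\,0\,,$$

in agreement with the instability found for the intersecting configuration and the apparent stability of the non-intersecting one.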
Caveats
The result ∆ curv D8 + ∆ Bion D8 < 0 is surprising from the viewpoint of the WGC for 4d membranes. Indeed, the set of N = 0 vacua corresponding to (3.19) and (3.21) has several independent decay channels. One consists of decreasing the four-form flux quanta via nucleation of D4-branes on two-cycles. A second one is to increase k in (3.15) or (3.16), mediated by BIonic D8-branes. A third one would be to leave m fixed and increase the H-flux quantum h whenever the tadpole condition permits, mediated by an NS5-brane wrapping a special Lagrangian three-cycle. Out of these three possibilities, only the first one is available when k takes its maximal value in (3.15) or (3.16). In that case, from the intuition developed in [1] one would expect that at least some D4-brane nucleation is favoured, leading to a non-perturbative instability. If that is the case, all vacua of this sort, including those with space-time filling D6-branes, are likely to be unstable via D4-brane nucleation, and so the AdS Instability Conjecture would be verified for this setup. As mentioned before, at this level of approximation Q D4 = T D4 , and it remains an open problem to see whether or not Q D4 > T D4 after further corrections are taken into account.
Whenever we have several possible decay channels involving independent 4d membrane charges, we would expect that several 4d membranes satisfy the refined WGC Q > T , or more precisely a Convex Hull Condition [58] adapted to 4d membranes. For the vacua of the sort (3.19) and (3.21) this includes at least one 4d membrane with D8-brane charge. However, for h = 1 in (3.16) we find that, depending on the D6-brane positions, we have either Q total D8 > T total D8 or Q total D8 < T total D8 . This contradicts our WGC-based expectations, because in both cases the transition is very similar energetically. Indeed, the vacuum energy at tree level is given by (3.23). Given that ∆ Bion D8 = D i T i D4 , it seems reasonable to assume that the effect of the full flux jump on the vacuum energy is given by (3.24), where for simplicity we have set m a = 0 in (A.1), and again neglected fluxes along twisted cycles.
Our results above imply that D i = 8h 2 /3 for (3.19) and D i = −4h 2 /3 for (3.21), so in all cases |ê i | decreases except when ∆ curv D8 + ∆ Bion D8 < 0. While this effect increases the vacuum energy in such a case, for large values of |ê i | it is a subleading effect with respect to the increase in |m|.
So we always decrease the vacuum energy when we perform the jump k → k + 1, and so there is a priori no reason why in one vacuum D8-brane nucleation is favoured and not in the other.
In light of these considerations, let us discuss some possible loopholes in our derivation of (3.22), or in its interpretation as a violation of the WGC for 4d membranes: • As mentioned above, the results (3.20) and (3.22) are approximations, because they are computed in terms of an integral in the orbifold covering space T 6 . However, in order to have a transition that increases k in (3.15) it is necessary to consider Calabi-Yau geometries in which the twisted cycles have been blown up. This will modify the integral that leads to the general result (3.8), but one expects the correction to be suppressed as the quotient t tw /t untw , between the typical size of a blown-up two-cycle t tw and that of an untwisted two-cycle t untw . As follows from the analysis of Appendix A, this quotient can be arbitrarily small, and so it is consistent to neglect the corresponding correction to ∆ Bion D8 . Similarly, as we blow up the twisted cycles, the excess charge (2.30) will receive a different contribution from the term K (2) a T a D4 , as it follows from eq.(B.2). Again, this correction should be suppressed as t tw /t untw compared to (3.8), and can be neglected in the same way that they were neglected in (3.23). In particular, it is highly unlikely that any of these corrections will flip the sign of ∆ curv D8 + ∆ Bion D8 computed in the orbifold limit. • It could be that a more complicated bound state 4d membrane charges mediates the decay.
Adding harmonic worldvolume fluxes to the D8-brane would not help, as this would switch on K F a in (2.30) and render Q total D8 − T total D8 even more negative. A different option is to involve NS5-branes. It follows from (3.23) that in order to decrease the energy we need to increase the H-flux quantum h, which is not always an option. Indeed, if we increase k = 1 → 2 in (3.16) there is room to also increase h without violating the tadpole condition.
• The expression for the vacuum energy (3.23) is a tree-level result, and it is subject to oneloop corrections. In particular there will be corrections coming from open string states stretching between different D6-branes. The masses of these objects are the main difference between the two configurations (3.19) and (3.21). In the first case they include light modes that will appear in the effective theory, while in the second case they are all massive modes above the compactification scale that need to be integrated out. The resulting threshold corrections will therefore be different and this could imply a change in the vacuum energy such that the decay is no longer energetically favoured in the second case. While this is an exciting possibility, it could also be that such threshold corrections to the vacuum energy are captured by the different values of D i in (3.24). In that case for large values ofê i the effect on the vacuum energy would be significantly suppressed and nothing would change.
• Finally, 4d membranes made up from D8-branes belong to the set of EFT membranes defined in [57] (see also [59]), and so their domain wall solutions can be described in 4d EFT terms. However such solutions are a priori not captured by the thin wall approximation.
It could then be that, because of the significant variation of the scalar fields, the criterion Q > T is not the appropriate one to detect a non-perturbative instability, and that even when it is satisfied there is no membrane nucleation.
Examples
In this section we present several examples of toroidal orbifolds that illustrate how the different elements of the formula (3.8) describing the BIonic excess charge work together to provide the final result.
We mainly focus on the Z 2 × Z 2 , Z 4 and Z 3 × Z 3 orbifold groups, for which we perform the computations explicitly. We also consider, more schematically, the Z 6 and Z 2 × Z 4 orbifolds.
In order to compute the integral ∫_{X_6} F ∧ F ∧ J we need to find an explicit expression for the worldvolume flux. As a first step we identify the different O6-planes and perform a Fourier expansion of the bump δ-forms that describe them. The motivation for this is that the worldvolume flux is determined by a set of 3-form currents K α,η as in (2.26), and such 3-form currents are defined through the Laplace equation (3.2). Therefore, to find concrete expressions for K α,η we need to build currents whose Laplacian returns bump δ-forms. Expanding in Fourier modes will prove to be an extremely useful tool to carry out this construction while keeping control of the connection with the smeared limit of our solution. Once these aspects are known, it is immediate to compute F α,η and evaluate the BIonic corrections using (3.4).
T 6 /Z 2 × Z 2
We start by revisiting in greater detail the orbifold discussed in the previous section, that is a Z 2 × Z 2 orbifold with periodic coordinates given by (3.10) and orbifold action acting as (3.11).
The metric and the Kähler form are where we have defined the dimensionless radiiR i = R i / s and the Kähler moduli t i = 4π 2R2 i u i . It is worth noting that the choice of complex structure (3.10) is not the only one compatible with the Z 2 × Z 2 symmetry. For each of the two-tori we are free to choose the complex structure as τ i = iu i or τ i = 1/2 + iu i . From this point onward we will focus on the case where all the tori follow the former choice, as in (3. Table 1: O6-planes in T 6 /Z2 × Z2.
The above content of O6-planes can be expressed in terms of invariant bulk three-cycles. This is quite simple for the current case, but it will become more nuanced in the following examples.
Let π 2i−1 and π 2i constitute a basis of fundamental one-cycles on the torus (T 2 ) i (i = 1, 2, 3), i.e. one-cycles winded once around the directions used for the periodic identifications that parametrized the torus in (3.10). Then we define the following set of toroidal three-cycles: Then, the orientifold plane content can be expressed in terms of these invariant cycles as The next step will be to construct the δ-like bump functions living in the factorised orbifold structure. Taking the O6-plane positions from Table 1 a delta bump function can be expressed as a product of conventional Fourier expansions for each T 2 i with support on the fixed loci Π α .
At this stage, we can present the relation in cohomology between the flux H and the orientifold planes derived from (2.10), so that by using the equations of motion (2.5) we can fix the complex structure moduli u i . This implies where the β i are elements of the following basis of bulk 3-forms: Defining ρ = 8π 3R 1R2R3 and considering our choice of complex structure, the holomorphic (3,0)-form Ω is given by Then, a solution to the first equation in (2.5) can be accomplished if all the complex structure moduli are fixed to u i = 1, and µ = −1 s 4h/ρ. In light of all this, keeping the complex structure unfixed, we can construct F α,η = s d † K α,η .
We arrive at the expressions (4.11a)-(4.11d). Finally, we would like to compute ∫_{X_6} F α,η ∧ F β,ζ ∧ J CY . To perform this integral we regularise it by interchanging the order of summation and integration. The physical interpretation of this procedure is that of smearing the O6-plane over a region of radius ∼ ℓ_s , which is the region of X 6 where the supergravity approximation cannot be trusted. In practice this corresponds to a truncation of the summation over the Fourier modes labelled by n. For a finite sum we are able to swap summation and integration freely. We then take the limit in which the cut-off of the sum diverges, returning to our original system with a localised source.
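The interchange of summation and integration used here can be illustrated with a one-dimensional toy version of the same manipulation. The short Python check below (with illustrative grid size and Fourier cutoff) shows that a truncated Fourier representation of the periodic δ-function, integrated against a plane wave over one period, produces exactly the Kronecker deltas that make the swap harmless; it is meant only as a sanity check of the regularisation logic, not as a computation in the actual compactification.

```python
import numpy as np

# Truncated Fourier representation of the periodic delta function on [0, 1):
#   delta_N(x) = sum_{|n| <= N} exp(2*pi*i*n*x)
# Integrating delta_N(x) * exp(-2*pi*i*m*x) over one period gives 1 for |m| <= N
# and 0 otherwise, i.e. the Kronecker deltas obtained when swapping sum and integral.
def delta_N(x, N):
    n = np.arange(-N, N + 1)
    return np.exp(2j * np.pi * np.outer(x, n)).sum(axis=1)

grid, cutoff = 4096, 50             # grid points and Fourier cutoff (illustrative values)
x = (np.arange(grid) + 0.5) / grid  # midpoint grid on [0, 1)

for m in (0, 3, 75):
    integral = np.mean(delta_N(x, cutoff) * np.exp(-2j * np.pi * m * x))
    print(m, np.round(integral.real, 6))
# Expected output: approximately 1.0 for m = 0 and m = 3 (below the cutoff),
# and approximately 0.0 for m = 75 (above the cutoff).
```

Increasing the cutoff simply enlarges the range of modes for which the result is 1, which is the finite-sum statement behind taking the cut-off to infinity at the end of the computation.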
At this point we can check some of the statements made in the last section. First of all, we verify that ∆ α,η;α,ζ = 0. We focus on the simplest case and consider the contribution from two components of Π O6 R . In particular we choose α = 0 and η = ζ = (0, 0, 0) and compute Using (4.2) and (4.11a) we immediately see that the contribution vanishes, since there is always a wedge product of repeated one-forms. Note that this is independent on the value of η in Starting with the former we build the configuration from (3.19) and evaluate the contribution from the pair of D6-branes associated to Π O6 R and Π O6 Rθ . As depicted in figure 2, the branes intersect over (T 2 ) 3 . The associated BIon contribution is where we have defined Φ 6 = 6 s dx 1 ∧ dx 2 ∧ dx 3 ∧ dy 1 ∧ dy 2 ∧ dy 3 . To go from the second to the third line we have used the regularisation procedure stated above. It is easy to repeat the same computation for any ∆ R, η;Rθ, ζ such that η 3 = ζ 3 (in order to preserve the intersection along (T 2 ) 3 ). The new exponential factors arising from (4.11) vanish once the Kronecker deltas are considered. Similarly, the same result is obtained for intersections involving other cohomology classes. Hence, we verify that an N = 2 sector in which D6-branes intersect over a one-cycle in (3.4). Finally we test the case in which the D6-branes do not overlap but run parallel over the one two-torus. To do so, we build the configuration described in (3.21) (see figure 3) and evaluate the contribution from the D6-brane associated to the Π O6 R and Π O6 Rθ as before. We obtain and so we recover (3.7).
It is worth noting that even though (3.6) and (
T 6 /Z 4
Let us now consider the Z 4 orbifold over a factorisable six-torus as discussed in [60,61], see also [62]. The two-dimensional lattice that defines each 2-torus is generated by the basis of the complex plane e i1 = 2πR i and e i2 = 2πR i τ i , where R i are the radii of (T 2 ) i and τ i = a i + iu i is its complex structure. The complex coordinate for each 2-torus is The action of the Z 4 group over T 6 is generated by an element θ that acts as follows with v i = (1/4, 1/4, 1/2). The action of this group severely constrains the complex structure.
In fact, the complex structure of the first two T 2 's is fixed. For the third torus, in which the Z 4 action has an orbit of order 2, the constrains are less severe. There are two options available, commonly denoted by AAA and AAB [61,63], and both of them have u 3 free. The AAA case is characterised by the choice a 3 = 0, whereas the AAB has a 3 = 1/2. Therefore, in the Z 4 orbifold there is always one unconstrained complex structure modulus.
For concreteness let us consider the choice AAA. All the steps of the analysis can be replicated in the AAB scenario to arrive to the same results. In the present case, we have τ 1 = τ 2 = i and The basis of the lattice that generates the torus is orthogonal and gives the following identifications Up to the constraints on the complex structure, the covering space metric and the Kähler form are the same as in the Z 2 × Z 2 case.
where again we defined the dimensionless radiiR i = R i / s and the Kähler moduli t i = 4π 2R2 i u i . The orientifold planes are given by the fixed points of the orientifold involution σ(z) =z, up to orbifold action identifications. Consequently, we have the following orientifold planes, summarised in table 2 and represented in figure 4. Table 2: O6-planes in T 6 /Z4.
(4.22)
The factor of 2 in (4.22) is due to the fact that θ 2 acts trivially over π ijk . Hence, the O6-planes content can be expressed as As we have seen, due to the factorised structure of the orbifold, the orientifold three-cycles are also factorised as products of one-cycles in the covering space, each one defined in each of the two-tori. A δ-function supported on these one-dimensional objects can be expressed using the conventional Fourier expansion for the δ-function distribution: where w denotes the direction transverse to the cycle normalised to unit norm and S is the periodicity of the configuration along such a transverse direction. Therefore, in order to build the bump δ-functions for factorisable three-cycles, we need to find the transverse periodicity S of the respective one-cycles, which we define as the distance that separates two consecutive intersection points between the loci of the cycle (given by the linear equations of table 2) projected over the two-torus we are considering and the transverse direction to the cycle in that same two-torus.
As a general rule, if we have a minimal-length one-cycle of length L on a two-torus of area A, the dimensionless transverse period S that appears in (4.24) will be S = A/(ℓ_s L).
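As a quick illustration of this rule, the snippet below simply evaluates S = A/(ℓ_s L), as stated above, for two one-cycles on the same square two-torus; the numerical values are illustrative and not tied to any of the orbifolds discussed here.

```python
import math

def transverse_period(area, length, ls=1.0):
    """Dimensionless transverse period S = A / (ls * L) of a minimal-length
    one-cycle of length L on a two-torus of area A, as in the rule above."""
    return area / (ls * length)

# Illustrative square two-torus of dimensionless radius R (in string units):
R = 3.0
area = (2 * math.pi * R) ** 2
L_fundamental = 2 * math.pi * R                    # a (1, 0) one-cycle
L_diagonal = 2 * math.pi * R * math.sqrt(2)        # a (1, 1) one-cycle, longer

print(transverse_period(area, L_fundamental))      # larger transverse period
print(transverse_period(area, L_diagonal))         # smaller by a factor 1/sqrt(2)
```

Longer one-cycles on the same two-torus thus come with a smaller transverse period, which is the factor that must be tracked when the orientifold planes do not wrap fundamental directions.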
We did not have to worry about this factor in the Z 2 × Z 2 example, since all the cycles had periodicity one in the normalised coordinates. That will no longer be the case in general for the rest of our examples. We illustrate this reasoning by building the δ-like bump functions with support on to the loci Π i introduced in table 2. The factor S will be crucial to properly define the δ-bump function describing the orientifold planes that do not decompose as a single product of fundamental one-cycles, such as Π 1 .
2 (x i + y i ) and η has entries that are either 0 or 1. With all this information it is straightforward to build the three-form K satisfying (2.9) through the introduction of the following set of three-form currents defined in (3.2): where K α,η is the function associated to Π α,η and | n| 2 = n 2 1 /S 2 1R 2 , with S i the transverse period of the one-cycle obtained from projecting the three-cycle Π over (T 2 ) i .
Note that | n| changes for each function K α , since each one is describing a different three-cycle.
Also, as before, the relative signs between the different K α are chosen so that Im Ω calibrates all the orientifold planes.
At this point, we introduce the cohomology relation [ −2 (4.27) Now we can impose the equation of motion using (2.5). Defining ρ = 8π 3R 1R2R3 and taking into account our choice of complex structure, the holomorphic form Ω is Im In order to satisfy (2.5) the remaining complex structure modulus must be fixed to u 3 = 2, while µ is given by µ = −1 s 4h/ρu 3 . Along the lines of the Z 2 × Z 2 case, let us turn to the appropriate flux quantisation condition in the Z 4 orientifold. Taking the results from [61], the minimal integral lattice of three-cycles is defined as in (4.22). Applying the flux quantisation criterium for the H flux once we consider the presence of O6-planes we find that [ Next, we compute the different components of F in (2.26) as F α,η = s d † K α,η : (4.31d) Now we would like to compute J CY ∧ F α,η ∧ F β,ζ . To perform this integral we regularise it as before, interchanging the order between summation and integration. Similarly to the Z 2 × Z 2 orbifold, this allows us to obtain Kronecker deltas from the following relations: With all this information we can finally evaluate the different terms that contribute to (3.4).
Many of them will be exactly as in the Z 2 × Z 2 orbifold, but there are also some new kinds of contributions. First of all, we can consider pairs of three-cycles with non-vanishing intersection number. Let us for instance choose ∆ 0, 0;3, 0 . From figure 4 we see that the three-cycles intersect at a single point. Using (4.20), (4.31a) and (4.31d) we obtain where we defined Φ 6 = 6 s dx 1 ∧ dx 2 ∧ dx 3 ∧ dy 1 ∧ dy 2 ∧ dy 3 . Therefore, we observe once more that the only non-trivial contributions come from the N = 2 sector. For the case of Z 4 orbifold, the aforementioned sector is richer and more diverse than the Z 2 × Z 2 orbifold. In addition to pairs of branes of the form (4.13) we must also consider contributions involving cycles that do not run along the fundamental periodic directions. Let us focus on ∆ 1,0;3,0 . In figure 4 we can observe the involved three-cycles intersect over a one-cycle on (T 2 ) 3 . We find that The result again agrees with (3.6). Similarly, we can consider cycles that do not intersect, but run parallel along one of the two-torus. We take, for instance, F 1,0 and F 3,1/2 , obtaining Putting all the contributions together we conclude that with ε αβ defined as in (3.8). Taking as an example the family of solutions defined in (4.30) we can provide again a configuration of D6-branes with negative ∆ Bion D8 . For instance, let us consider a configuration such that for each value of α all the corresponding p α (4 − 2k) D6-branes are wrapping a single three-cycle. In particular, one can takê q 0,(0,0,0) =q 2,(0,0, 1 2 ) = 8,q 1,(0,0,0) =q 3,(0,0, 1 2 ) = 2. (4.42) With such a configuration we obtain Therefore, this result signals again an BIonic excess tension for the 4d membrane, which could imply a possible failure of the WGC inequality. Indeed, a naive computation 7 gives ∆ curv D8 = 1 2 T 3 D4 in the orbifold limit, which implies that ∆ curv D8 + ∆ Bion D8 < 0. Hence, this vacuum also seems to be in tension with the WGC for 4d membranes.
Repeating the analysis for the choice AAB provides the same results.
We consider now the case where the internal space is an orientifold of the orbifold T 6 /Z 2 3 described in [14,24,65]. In order to be consistent with our choice of orientifold involution, we will slightly change the notation of the aforementioned references.
The above symmetries, together with the orientifold involution, are more constraining that those introduced in the Z 2 × Z 2 or Z 4 orbifolds and they fully fix the complex structure to Hence, the factor (T 2 ) i can be described as a quotient of C by a lattice generated by e i1 = 2πR i i and e i2 = 2πR i ( √ 3/2 + i/2). They provide the following periodic identifications: It is worth noting that only the generator θ of the first Z 3 preserves the lattice generated by these vectors. The trick of this orbifold is that we are not taking the quotient simultaneously.
Q is not a symmetry of T 6 by itself, but it emerges as a symmetry of the quotient T 6 /Z θ 3 . This construction was described in detail in [65]. Using the periodic coordinates, the metric and the Kähler form are where we defined the dimensionless radiiR i = R i / s and the Kähler moduli Table 3: O6-planes in T 6 /Z3 × Z3.
The above O6-plane content can be expressed in terms of bulk three-cycles ρ i . Consider again the three-cycles inherited from the covering space T 6 . Let us define the basis of fundamental one-cycles π 2i−1 and π 2i of the tilted torus (T 2 ) i , i.e. cycles winded once along the periodic directions given by the identifications that defined our tori in (4.46).
where we have definedŷ i = (−x i + y i )/2,ȳ i = (x i + 2y i )/2 andỹ i = (2x i + y i )/2. Note again that the relative signs in the above expression have been chosen so that the volume of the orientifolds is calibrated by Im Ω.
With all this information we introduce the cohomology relation [ −2 Now, in a similar reasoning to the previous cases the flux quantisation condition for the Z 3 × Z 3 orientifold will be given applying the quantisation criterium for the H-flux. We can now provide the different components of F: The last step will be to compute X 6 J CY ∧ F α,η ∧ F β,ξ . To do so we will face six different families of integrals that we regularise by exchanging integration and summation following the same line of reasoning as in the previous cases. We also make use the following relations that allows us to obtain Kronecker deltas T 2 e 4πinȳ 1 e 4π i mȳ 1 dx 1 dy 1 = δ n+m , T 2 e 4πinỹ 1 e 4π i mỹ 1 dx 1 dy 1 = δ n+m , T 2 e 4πinŷ 1 e 4π i mŷ 1 dx 1 dy 1 = δ n+m , T 2 e 4πinȳ 1 e 4π i mỹ 1 dx 1 dy 1 = δ n δ m , (4.56) T 2 e 4πinȳ 1 e 4π i mŷ 1 dx 1 dy 1 = δ n δ m , T 2 e 4πinỹ 1 e 4π i mŷ 1 dx 1 dy 1 = δ n δ m .
It is worth noting that the different terms contributing to (3.4) always intersect along onecycles in contrast to earlier results where parallel cycles appear as in (3.21 Let us compute, for instance, ∆ 0,7 . In figure 5 and with the help of table 3 we can observe the preceding pair of branes. where again we have defined Φ 6 = 6 s dx 1 ∧ dx 2 ∧ dx 3 ∧ dy 1 ∧ dy 2 ∧ dy 3 . Iterating the previous procedure we can compute (3.4) for the most general configuration of D6-branes. We to arrive to ∆ BIon D8 = 9 12N Γ (q 0q4 +q 0q3 +q 3q4 +q 1q6 +q 1q7 +q 6q7 +q 2q5 +q 2q8 +q 5q8 ) T 1
Other orbifolds
We can extend the same analysis to other orbifolds. We briefly summarise our results below.
We work with the orbifold described in [60,66] adapted to our conventions. We start by introducing in a lattice generated by e i1 = 2πR i (a i + iu i ) and e i2 = 2πiR i , with a i = √ 3/2, u i = 1/2 ∀i. Hence, we have the same complex structure as in the Z 3 × Z 3 example with τ i = √ 3/2 + 1/2i. The action of Z 6 over T 6 is generated by an element θ that acts as where v i = (1/6, 1/6, −1/3). The orientifold planes associated to this symmetry are summarised in table 4. Following the same steps as in the previous computations we arrive to Lastly, we consider the Z 2 × Z 4 orbifold described in [66,67]. We work in a lattice generated by e i1 = 2πR i and e i2 = 2πiR i u i , with u i = (1, 1, u 3 ). Consequently we have the same complex structure as in the Z 4 example, with z i = 2πR i (x i + iu i y i ). The action of the Z 2 × Z 4 group over our T 6 is generated by an order four element θ and an order two element ω that act as where v i = (1/4, −1/4, 0) and w i = (0, 1/2, −1/2). With this action we find the orientifold planes summarised in
Conclusions
In this paper we have analysed type IIA AdS 4 flux vacua with O6-planes and D6-branes. These vacua can be either N = 1 or N = 0, and the latter can be subject to non-perturbative instabilities via membrane nucleation, in line with the AdS Instability Conjecture [1,2]. We have analysed those instabilities that correspond to 4d membranes made up from D8-branes wrapping the compact manifold X 6 , building on the previous work [26]. As pointed out therein, one should be able to determine whether Q > T or not for this class of membranes within our current, approximate description of a family of N = 0 vacua that are closely related to supersymmetric ones. In this work we have expanded on this observation by analysing the D8-brane charge and tension in several orientifold backgrounds with different space-time filling D6-brane configurations. We have considered D6-branes that lie on top of O6-planes, which always solve the vacuum conditions.
As pointed out in [26], at leading order Q D8 = T D8 , and then there are three corrections that can tip the scales to one side or the other, represented in (2.30). Out of these three corrections two are unavoidable, namely the curvature correction ∆ curv D8 and the BIonic correction ∆ Bion D8 . Requiring that Q > T in N = 0 vacua, as the refined WGC for membranes does, then translates into the non-trivial constraint ∆ curv D8 + ∆ Bion D8 > 0 for any D6-brane configuration. We have computed ∆ Bion D8 in toroidal orbifold geometries, finding the simple expression (3.8), which indeed shows that this correction can be either positive or negative. A negative value is favoured when we have pairs of D6-branes that do not intersect in the internal dimensions, so that open strings stretched between them lead to a spectrum with masses above the compactification scale. By choosing the D6-brane positions one can build configurations where ∆ Bion D8 < 0. In this way, we have been able to engineer vacua where ∆ curv D8 + ∆ Bion D8 < 0, therefore naively violating the WGC inequality for 4d membranes. They are however not necessarily in tension with the AdS Instability Conjecture, since there could be other channels, in particular D4-brane nucleation, that could mediate a non-perturbative decay to an N = 0 vacuum of lower energy.
We have pointed out some caveats that could reconcile our results with our expectations from the WGC for 4d membranes. From these, perhaps the most promising one is the one-loop threshold corrections to the vacuum energy, which as ∆ Bion D8 depends on the D6-brane positions, and could decrease the vacuum energy such that the controversial decay channels are no longer energetically favoured. If this was the case, computing ∆ Bion D8 could give valuable information about one-loop corrections in N = 0 vacua. If not, one should perhaps reconsider what the WGC statement should be for this particular class of 4d membranes. After all, they have a very special nature even from the 4d viewpoint: their 4d backreaction does not lead to a thin wall, they have space-time filling D-branes attached to them and their charges are bounded by the tadpole condition. This last point is particularly important, as it could modify the usual convex hull condition, that is typically formulated for an unbounded lattice of charges. In this respect the formalism of [59] to implement tadpole constraints in 4d EFTs could be of use.
Taken at face value, our results suggest that N = 0 AdS 4 vacua with a gauge sector without zero/light modes charged under it are more stable than those that contain charged light modes.
Showing whether or not this is true is an interesting challenge, as well as to unveil the would-be implications for our understanding of the string Landscape.
A Moduli stabilisation in T 6 /(Z 2 × Z 2 ) In this appendix we consider the moduli stabilisation of the Kähler sector in the T 6 /(Z 2 × Z 2 ) orientifold background with (h 1,1 , h 2,1 ) orb = (51, 3). As in [14,62], we look for vacua where the twisted two-and four-cycles are blown up due to the presence of background four-form fluxes.
As pointed out in [21], for the class of type IIA flux vacua analysed in the main text the Kähler moduli stabilisation conditions amount to K a = 10 3mê a ,ê a := e a − 1 2 2 ∧ω a ∈ Z , (A.2) and = ±1 distinguishes between supersymmetric and non-supersymmetric vacua, as in eq.(2.5).
The connection with this set of equations can be made by taking into account the dependence of G 4 onḠ 4 ,Ḡ 2 , G 0 and the B-field axions, something that it is usually done in the smeared approximation. In any event, in the following we will consider compactifications where m a = 0, so that these subtleties disappear and (A.1) simplifies.
To look for solutions to this equation we need to compute the quantity − 1 2 K a , that in our conventions measures the volume of holomorphic four-cycles or divisors. For this we need to parametrise the Kähler form in terms of such divisors, including the exceptional ones, and compute their triple intersection numbers. This exercise was done in [55] for the above orbifold background T 6 /(Z 2 ×Z 2 ) with a type IIB orientifold projection that leads to O3-and O7-planes.
Notice that the orientifold projection that we are interested in is different, as it leads to type IIA O6-planes. Therefore, we will take the approach of [21] and solve (A.1) for the unorientifolded orbifold geometry T 6 /(Z 2 × Z 2 ). Then, following the remarks in section 3.2, we will demand that e a ∈ 2Z for the four-form flux quanta defined in the covering space T 6 /(Z 2 × Z 2 ). The necessary topological data for this case can be extracted from the results of [64,68].
Similar relations hold for R 2 and R 3 . With these conventions and assuming the symmetric resolution of [55] one finds the intersection form of this divisor basis, and with it one can compute the quantity K a for each divisor R i and E iα,jβ . For simplicity we assume that all twisted moduli are equal among themselves, and likewise for the untwisted moduli, which yields explicit expressions for these quantities. What is important, though, is that the values for r and t correspond to the interior of the Kähler cone. From [55, eq.(6.11)] this amounts to requiring that r > 4t > 0. This is satisfied as long as t > 0 and x > 4, which is in general quite easy to achieve.
B Curvature corrections in T 6 /(Z 2 × Z 2 )
In order to check the WGC for 4d membranes one needs to compute the curvature correction ∆ curv D8 . In this appendix we perform its computation for the case of X 6 = T 6 /(Z 2 × Z 2 ), again assuming the symmetric resolution of [55]. For this, we use the result of this reference stating that the divisors D iα that appear in (A.4) have the topology of P 1 × P 1 , and the exceptional divisors E iα,jβ that of P 1 × P 1 with four blown-up points. Using toric geometry techniques one can compute the intrinsic topological data for each of these divisors. With these results it is easy to see that c 2 (X 6 ).R iα = 24, from where we obtain
$$\frac{1}{24}\, c_2(X_6)\cdot J \,=\, \sum_i r_i \,-\, \frac{1}{6}\sum_{\alpha,\beta,\gamma}\left(t_{1\alpha,2\beta} + t_{2\beta,3\gamma} + t_{3\gamma,1\alpha}\right). \qquad (B.2)$$
Going to the orbifold limit t iα,jβ → 0, one recovers (3.17) by using the dictionary T i D4 = e K/2 t i = 2e K/2 r i that can be deduced from (A.4).
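For reference, and assuming that the curvature contribution to the membrane excess charge is controlled by e^{K/2} c_2(X_6)·J/24 as the discussion above indicates, the orbifold limit of (B.2) collapses to a one-line expression:

$$\Delta^{\rm curv}_{D8}\Big|_{t_{i\alpha,j\beta}\to 0} \,=\, \frac{e^{K/2}}{24}\, c_2(X_6)\cdot J \,=\, e^{K/2}\sum_i r_i \,=\, \frac{1}{2}\sum_i T^{i}_{D4}\,,$$

consistent with the curvature correction (3.17) used in the main text, where the last step uses the dictionary T i D4 = e K/2 t i = 2e K/2 r i quoted above.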
Evaluating the Effect of Minimal Risk Natural Products for Control of the Tick, Ixodes scapularis
I evaluated the knock-down and residual activity of eleven minimal risk natural products (MRNP) against host-seeking nymphal stage blacklegged ticks (Ixodes scapularis Say) using a novel micro-plot product screening system in a landscape setting similar to a wooded residential property. The micro-plot system reduced variability between testing sites typically seen in larger field trials and provided the opportunity to compare results of studies conducted under the same environmental conditions, saving both time and money by confining product application and tick sampling to a 0.3 m diameter arena. By seeding the arenas with a known number of laboratory-raised blacklegged tick nymphs, I was able to further reduce the variability and improve product screening reproducibility across years. The products evaluated included CedarCide PCO Choice, EcoPCO® EC-X, Met52® EC, EcoEXEMPT® IC, EcoSMART® Organic® Insecticide, EssentriaTM IC, nootkatone, Progaea, Tick Guard, Tick Killz and Tick Stop. Five of the eleven products tested (EcoPCO® ECX, Met52® EC, EcoEXEMPT® IC, EssentriaTM IC and nootkatone) were found to have a statistically significant (P < 0.05) “knockdown” effect (meaning the product was applied while ticks were in the arenas), and only two of them, EcoPCO® EC-X and nootkatone, displayed significant “residual” tick-killing activity after weathering for 2 weeks. I found relatively inconsistent results with botanical oil-based products tested multiple times, indicating batch-to-batch variability, as well as variability between formulations. The results of my study suggest a need for better quality control and/or efficacy testing of botanical oil and other minimal risk natural products. Such MRNP screening can provide consumers with an improved ability to make more informed decisions about the level of tick encounter protection they might expect from products they may be purchasing because they believe them to be environmentally safer.
BACKGROUND
Ticks are obligate ectoparasites, relying on the blood meal from a host to complete each life stage. They are native to a variety of unique habitats, which affords them the opportunity to encounter and feed on a wide variety of hosts, in turn dictating the specific pathogens which they can later transmit. Worldwide, ticks are known to transmit over 20 different emerging or Category A-C pathogens of medical and veterinary importance including viruses, bacteria and parasites (Balashov 1972).
In southern New England, there are three common species of ticks which are known to bite humans and transmit disease-causing pathogens: American Dog ticks (Dermacentor variabilis), Lone Star ticks (Amblyomma americanum) and blacklegged ticks (Ixodes scapularis) which are more commonly known as deer ticks. All are found predominantly within their own distinct habitats and are known to transmit their own assortment of pathogens. Blacklegged ticks are vectors of the bacterial agent causing Lyme disease, the most commonly occurring tick-borne disease in North America with an estimated 300,000 human cases acquired annually in the United States (Kuehn 2013); less commonly, these same ticks also are capable of transmitting the pathogens that cause anaplasmosis, babesiosis, a tick-borne relapsing fever, and Powassan virus encephalitis.
Blacklegged ticks are predominantly found in landscapes containing deciduous forests, but due to demographic and wildlife population trends, human tick encounters have steadily increased and the spatial distribution of these ticks has expanded, even into residential backyards. This increase in tick encounter rates, along with advances in medical diagnostics, patient screening, and disease reporting, has also contributed to the steady increase of confirmed Lyme disease cases over the past few decades (Berardi et al. 1988, Pearson 2014, CDC 2016a, 2016b). To combat rising levels of tick-borne disease incidence among humans and pets, there is an increasing need to take measures that protect against tick bites. From a personal protection viewpoint, (1) identifying and avoiding tick habitat, (2) using repellents, especially long-lasting clothing-only repellents with the active ingredient permethrin, and (3) performing daily tick checks all can help reduce the risk of potential tick bites (Miller et al. 2011, Vaughn and Meshnick 2011, Eisen and Dolan 2016). Environmental measures also can be taken, including (1) landscape management to reduce tick habitat (e.g., cutting back low hanging branches to increase the amount of sunlight and reduce humid environments, clearing leaves and controlled burning to reduce tick habitat, stacking wood piles to reduce rodent (tick hosts) habitat), (2) use of host targeted strategies to kill ticks before they can feed (e.g., Max Force bait boxes and Damminix tick tubes), and (3) using a broadcast acaricide application often called a "perimeter spray", referring to targeted spraying of the habitat most frequented by these ticks in the residential landscape (Mount 1981, Mather et al. 1987, Deblinger and Rimmer 1991, Schulze et al. 1995, Hubálek et al. 2006, Piesman 2006, Stafford 2007, Ginsberg 2014). Some homeowners have become somewhat suspicious of possible side effects of spraying synthetic chemical pesticides on their property, which creates a potential barrier to effective tick bite protection. However, a growing trend favors a more natural tick treatment like botanical oils or biopesticides.
In 1996, the Environmental Protection Agency (EPA) amended the Federal Insecticide, Fungicide & Rodenticide Act to exclude from regulation a class of pesticides it termed "Minimum Risk Pesticides". These products are deemed to "pose little to no risk to human health or the environment" but must meet 6 conditions in order to qualify, one being that their active ingredients are listed as qualifying ingredients on EPA's minimal risk products list (40 CFR 152.25(f)(1)) (EPA 2016).
Once approved, makers of these products are exempted from registering with the EPA under clause 25(b), and 25(b) exempt products generally fall solely under the regulation of individual States where they are distributed. Accordingly, these products do not undergo the same rigorous testing and analysis required of most pesticides.
Using a novel microplot design in a field trial, this study was conducted to help amend this possible oversight by screening the tick-killing efficacy of commercially available and experimental minimal risk natural products (MRNPs) claiming to reduce tick abundance.
INTRODUCTION
Annual cases of Lyme disease, the most commonly reported tick-borne disease in the United States, have been increasing consistently over the past 20 years, especially in the northeastern United States (CDC 2016a). Since 2005, in Rhode Island alone, the Lyme disease incidence rate has increased from 3.6 to 54 cases per 100,000 residents compared to the national rate of 7.9 cases per 100,000 (CDC 2016b).
Furthermore, in Rhode Island, it is estimated that more than 300 additional cases go unreported every year (CDC 2016b).
The public generally understands that blacklegged ticks, Ixodes scapularis Say carry the Lyme disease-causing bacterium and transmits it to people and pets during blood feeding (Childs et al. 1998, Herrington Jr 2004. They also are familiar with the bull's eye rash that is characteristic of Lyme disease. Although public awareness regarding tick bite-associated health risks is increasing, a large gap in tick-bite prevention knowledge and action still exists. Despite being well versed in the consequences of tick exposure, the public is largely uneducated, inexperienced, and prone to foregoing the most effective tick bite prevention behaviors and activities (Herrington Jr 2004, Gould et al. 2008, Connally et al. 2009). Many factors likely contribute to this, including: 1) lack or improper use of protective measures such as repellents and wearing repellent-treated clothes, 2) difficulty in finding attached and feeding ticks and 3) failure to recognize and avoid tick habitat.
Along with host-targeted strategies and landscape manipulations, suppressing the tick population with an area-wide treatment using chemical pesticides is considered one of the most effective methods for reducing tick encounter risk on residential properties. For control of the blacklegged tick, a broadcast application method often called a "perimeter spray" is used, referring to targeted spraying of the habitat most frequented by these ticks in the residential landscape (Piesman 2006, Stafford 2007).
If applied correctly using effective products, perimeter sprays can significantly reduce tick encounter risks for family members, including pets, within their own yard (Stafford 2007). However, due to concerns about potential human toxicity/carcinogenicity, environmental contamination (including groundwater), and toxicity toward non-target organisms and pets (Childs et al. 1998), recent consumer trends suggest that homeowners are embracing newer, "greener" natural alternatives over industry standard synthetic chemical pesticides which have historically been proven effective. Though possibly less damaging to the environment, the natural pesticides, which may include various botanical oils, biopesticides, and abrasives, or a combination of these, have not been thoroughly tested. Also, due to their "natural" active ingredients, they do not fall under the same Environmental Protection Agency registration and testing requirements. Traditionally, field plots used for evaluating efficacy of acaricides to control blacklegged ticks using the "area-wide" method typically range from 100 m 2 to hectares in size and must be replicated extensively to support enough tick collection numbers for statistical analyses. Such studies are labor intensive and expensive, presenting a significant impediment to evaluating tick control products. Moreover, when conducted across residential sites, ecological variability often results in variances much larger than means. This study simultaneously evaluates an array of MRNPs in a novel micro-plot system that simulates ecological conditions found in typical residential sites in the northeast U.S. where blacklegged ticks are highly endemic. Using field-derived but laboratory-reared nymphal blacklegged ticks, I compared the tick-killing knockdown and residual activity of some of these products to highly effective formulations of bifenthrin, the current industry standard which has been proven effective against ticks (Stafford 2007, Elias et al. 2013). For knockdown evaluations, ticks were added to the arenas before applying the MRNPs to the arenas, to allow tick dispersal into the leaf litter. The residual arenas were sprayed at the same time as the knockdown arenas, but were allowed to weather for 2 weeks before ticks were added to them. Three humidity loggers were placed within the study site to record temperature and relative humidity for the duration of each study season in an attempt to detect any low moisture events which might negatively impact tick survival (Berger et al. 2014).
Field
Treatment Preparations and Applications. The materials evaluated at labeled field rates were commercially available and/or experimental materials (Table 1); in some cases, different concentrations of the same product (Talstar® Professional, Essentria™ IC 3) were also tested (Table 2).
Liquid formulations of MRNPs were prepared according to label specifications, mixed in 1-gallon plastic containers, and poured into Solo backpack sprayers (Solo Inc., Newport News, VA), which were then hand-pumped to 620.5 kPa.
A 0.91 m² piece of plastic was used to create a 0.3 m diameter cylindrical "spray shield," which was placed inside the arenas to prevent overspray beyond the arenas.
The sprayer wand was placed inside the spray shield, just above the leaf litter, and 30 milliliters of product was applied in a circular motion in an attempt to distribute the product evenly. Dry formulations were weighed into plastic portion cups (one per arena) prior to application. After product application, arenas were covered with 3.2 cm² hardware cloth secured with stakes until ready for sampling, to prevent disruption by wildlife.
Sampling. Arenas were evaluated for 2 weeks at 3-4 day intervals after treatment (Fig. 2) using a round, 0.3 m diameter pressboard wrapped in a flannel "bonnet." Each arena was sampled continuously by pressing the board into the leaf litter in 5-second increments to collect questing nymphs, until 3 consecutive presses yielded no attached ticks. Using fine-pointed tweezers, all ticks were placed into vials after each press and the results recorded. Care was taken to keep separate pressboards for each treatment and to launder the flannel bonnets between sample days to avoid cross-contamination.
Nymphal ticks. Ticks for these experiments were reared from wild-caught, host-seeking females that were fed on rabbits; the resulting larvae were fed on hamsters in the laboratory (Mather and Mather 1990). Percent control was calculated as:

Corrected percent (%) control = (1 − n in T after treatment / n in C after treatment) × 100

where n is nymphal tick density, T is treated plots, and C is water-only control plots.
The Henderson and Tilton (1955) formula was not needed for this study because all arenas, both treatment and control, contained the same number of ticks at the beginning of the study, making the two formulas equivalent.
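For readers who want to check the arithmetic, the following minimal Python sketch applies the corrected percent control formula above; the function name and the example counts are hypothetical and are not taken from the study data.

def corrected_percent_control(n_treated, n_control):
    # Abbott-style correction: percent control relative to the water-only control.
    # n_treated: nymphal tick density recovered from a treated arena (T)
    # n_control: nymphal tick density recovered from the water-only control arena (C)
    if n_control == 0:
        raise ValueError("control density must be non-zero")
    return (1 - n_treated / n_control) * 100

# Illustrative values only (not measured data):
print(corrected_percent_control(40, 200))  # 80.0% control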
Trials. None of the MRNPs tested (Tick Guard, Progaea, and EcoSMART® Organic® Insecticide) had a significant knockdown effect (253 nymphs recovered, 0% KD, P = 0.922; 139 nymphs recovered, 37.1% KD, P = 0.099; 257 nymphs recovered, 0% KD, P = 0.512, respectively) when compared to water-only control plots (Table 2, Figure 6), and two of them (Tick Guard and EcoSMART® Organic® Insecticide) yielded more recovered nymphs than the water-only control plots. All three products were also ineffective as residual treatments (286 nymphs recovered, 0% RESID, P = 0.998; 228 nymphs recovered, 17.9% RESID, P = 0.475; 297 nymphs recovered, 0% RESID, P = 0.954, respectively), again with more nymphs recovered from the Tick Guard and EcoSMART® Organic® Insecticide plots than from the water-only controls. A one-third lower concentration of Talstar® Professional than used in the previous trials was still highly effective as both a knockdown and a residual treatment (1 nymph recovered, 99.55% KD, P < 0.001; 0 nymphs recovered, 100% RESID, P < 0.001, respectively) when compared to the water-only control plots, and the full-strength application performed similarly to previous years (0 nymphs recovered, 100% KD, P < 0.001; 4 nymphs recovered, 98.55% RESID, P < 0.001, respectively).
DISCUSSION
A novel micro-plot system was developed for screening multiple acaricidal products under the same environmental conditions and location in the field as a means of evaluating minimal risk natural products for control of nymphal blacklegged ticks.
By compressing large field test sites into single 0.3 m arenas seeded with a known number of first-generation, lab-reared nymphs, my approach saved time by sampling small areas and reduced study costs through more efficient treatment application. Use of spray shields prevented cross-contamination of treatments and allowed me to use less product per treatment. An additional benefit of the novel micro-plot system was that a large, known number of ticks could be seeded within each test arena, decreasing variability between seasonal treatments and making the work easier to reproduce and compare across years.
Eleven MRNP materials and formulations were assessed for their efficacy as tick control products in comparison to bifenthrin and a water-only control. Results showed that, along with the synthetic pyrethroid bifenthrin, only one commercially available MRNP, EcoPCO® EC-X (pyrethrins), and one experimental formulation (nootkatone with d-Limonene) provided a high level of knockdown control of host-seeking I. scapularis nymphs; although their tick-killing efficacy may have degraded somewhat, the acaricidal activity of these MRNPs persisted enough to produce a statistically significant level of tick control in the two-week residual study.
For the purpose of standardization, in 2012 the sampling timeline for Met52® EC was kept the same as for all the other products, with sampling beginning 3 days post-application, which was contrary to the label instructions. Under these conditions, Met52® EC did not have a significant impact as either a knockdown or a residual application, contrary to what had previously been published (Stafford and Allan 2010). In 2013, the sampling timeline for Met52® EC was adjusted to the label instructions, allowing the fungal spores to establish for a full week prior to tick sampling; under this longer incubation scenario, the biopesticide did exhibit a statistically significant knockdown effect on questing nymphs compared with water-only controls, but it did not significantly suppress nymphs in the residual study. The use of Metarhizium sp. as a biological control agent has been widely studied against several arthropod pests, including blowflies in England (Wright et al. 2004), grasshoppers and locusts in Australia (Hunter 2005), mosquitoes in Mexico and Korea (Garza-Hernandez et al. 2015, Lee et al. 2015), and several species of ticks world-wide (Benjamin et al. 2002, Kirkland et al. 2004, Leemon et al. 2008, Bharadwaj and Stafford 2010, Wassermann et al. 2016), with mixed results. One potential reason for this may be variation among strains of the fungus; each may have a different effect depending on pest species, pest life stage, environmental conditions, spore concentration and formulation. Another reason may be the sampling technique used in the respective study designs. The results from this study were indicative of fungal growth and tick-killing effect under natural field conditions, whereas in some previously published M. anisopliae studies, ticks were sampled out of plots and returned to the lab to be maintained under ideal conditions for fungal growth (Benjamin et al. 2002, Bharadwaj and Stafford 2010, Stafford and Allan 2010).
Two additional minimal risk natural products exhibited a significant knockdown effect; in 2013, both the EcoEXEMPT® IC 2 and Essentria™ IC 3 knockdown treatments had significantly fewer ticks recovered than the water-only control. Both of these products contain rosemary and peppermint oils. The newer Essentria™ IC 3 is the replacement formulation for EcoEXEMPT® IC 2, which previously had been shown to be effective against blacklegged ticks (Rand et al. 2010). The EcoEXEMPT® IC 2 required adding an emulsifier prior to dilution and application; in re-formulating the product, an adjuvant was added to Essentria™ IC 3 so that the emulsifier was no longer required to keep the oils in suspension. Although still effective, the original IC 2 formulation had a greater impact on host-seeking nymphs than the newer Essentria™ IC 3, but neither product remained active enough to have a significant impact on the nymphs exposed during the residual trials. It is possible that the greater tick-killing action of the IC 2 formulation could be attributed more to the emulsifier than to the botanical oils (Schroer et al. 2001, Mullin et al. 2015). In 2014, I received a sample of nootkatone crystals from the Centers for Disease Control and made a 2% solution by dissolving them in d-Limonene (a solvent extracted from orange peels) before diluting it in water containing EZ-Mulse (a proprietary blend of nonionic surfactants used to emulsify citrus extracts and natural oils) (Jordan et al. 2011, Bharadwaj et al. 2012). As had been seen in previous studies (Dolan et al. 2009, Jordan et al. 2011, Bharadwaj et al. 2012), this experimental nootkatone formulation exhibited a significant immediate knockdown effect (83.2%) on the host-seeking nymphs, and although its tick-killing efficacy may have degraded slightly, it remained active for the two-week residual study, killing 35% of nymphs released into the arenas two weeks after product application. Essentria™ IC 3 was tested for a third time, but using a less concentrated solution as per label rates; while this treatment had a significant knockdown effect (30.6%), it had no residual effect.
In 2015, two privately labelled products, Tick Guard and Progaea, based on the original formulation of EcoEXEMPT® IC 2, and EcoSMART® Organic® Insecticide granules had no significant knockdown or residual effect on host-seeking blacklegged tick nymphs. In fact, more nymphs were recovered from the Tick Guard and EcoSMART® Organic® Insecticide plots than from the water-only control plots.
In total, five formulations of rosemary and peppermint oil were tested, and only two of them exhibited a significant knockdown effect. The observed batch-to-batch variability in efficacy raises concerns regarding the formulation of botanical oil products, and this study provides evidence of the need for better quality control.
Minimal risk natural product active ingredients include various botanical oils (such as rosemary, peppermint and cedar oils), biopesticides, and abrasives, or a combination of these. Most or all products in this class are exempted from Environmental Protection Agency registration and regulation under section 25(b) of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) (EPA 2016). While individual States may impose a greater degree of oversight and regulation, the 25(b) exemption generally means that this class of products does not undergo the same rigorous testing and analysis that most pesticides do prior to production and distribution. Because of this, there may not be sufficient evidence that they truly work against the list of pests claimed on their labels, and their potential environmental side effects, while presumably minimal, remain unknown. Furthermore, while a few of the MRNP materials tested here showed a statistically significant effect when compared to the water-only control treatments, many would not be recommended for use in controlling blacklegged tick populations, as their observed efficacy still left >40-50% of the original tick population alive following a single knockdown treatment. It should be noted that in September 2012, the Federal Trade Commission (FTC) filed deceptive advertising charges and began litigation against multiple companies, including CedarCide Industries, Inc. (makers of CedarCide PCO Choice), challenging their strategies for bed bug and lice treatments (Lordan 2012). The complaint alleged unsubstantiated and false claims about (1) the efficacy of their products, (2) scientific studies that had been conducted, and (3) the claim that their product was invented for the U.S. Army at the request of the U.S. Department of Agriculture.
In some cases, my study likely provides some of the first or only efficacy data for these products in controlling blacklegged ticks, and consumers may want to consider this before relying on an MRNP for residential tick control.
Finally, I included bifenthrin, currently considered the industry standard in broadcast tick control treatments, as the positive control in this study. It was highly effective as both a knockdown and a residual treatment in all four years, including as a residual treatment 4 weeks post-application. In the final year of testing, I decreased the bifenthrin concentration by a third of its labelled rate and still recovered <2% of ticks from both the knockdown and residual plots. With such a high rate of efficacy at a low rate of application, combined with its typical use as a perimeter treatment in residential landscapes, it would seem difficult at this time to dismiss bifenthrin as an effective tool in tick control and tick-borne disease prevention.
|
v3-fos-license
|
2023-03-28T13:06:10.373Z
|
2023-03-01T00:00:00.000
|
257767976
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://sciendo.com/pdf/10.2478/eurodl-2023-0003",
"pdf_hash": "7f884d58b9557a13f69bf5ff96a43af04a157198",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2926",
"s2fieldsofstudy": [
"Education"
],
"sha1": "8aaf1fe7ff3143e33489792fc1023a495afff3d4",
"year": 2023
}
|
pes2o/s2orc
|
Effectiveness of the project-based 6E learning model
This research aims to determine the effectiveness of the Project-based 6E (engagement, exploration, explanation, elaboration, extension and evaluation) Learning Model. The researchers created the model by combining the basic features of project-based learning with an understanding of the learning cycle. The pre-test/post-test semi-experimental design, in which no control group is included, was used in the study. The study group in the research consisted of junior students studying at the Turkish Teaching and English Language Teaching departments of the Faculty of Education, Düzce University, in the fall semester of the academic year 2020–2021. The experimental process was structured according to the Project-based 6E Learning Model. Two different types of data, namely, qualitative and quantitative, were collected to determine the effectiveness of the experimental process. It was concluded that the students found the model efficient, effective and attractive. The model positively changed students' perceptions of values and values education. It was also seen that the model enables students to realise their values, review and question the values, reinforce them, complete their deficiencies and gain new values. Due to the effect of the learning model, students positively experienced significant changes in their teaching perception.
Introduction
Today, education aims to raise individuals who can solve problems, adapt their knowledge to real life and engage in lifelong learning (Hains & Smith, 2012).Students should be able to transform their knowledge into practice to solve real-life problems.They should have high-level competencies such as communication, collaboration, creativity, critical thinking and problem-solving, known as 21st-century skills (Tan et al., 2015).This situation enables adopting a student-centred education approach that focuses on students' individual needs and preferences and ensures students' participation in decision-making processes (Benson, 2012).The use of models develops the talents, curiosity and productivity of individuals; encourages diversity; and broadens the minds of all students, providing them the opportunity to go beyond what they have been taught (Bass, 1997).
For this purpose, many learning models have been designed.One of these learning models is projectbased learning.Project-based learning is a studentcentred teaching approach that organises learning around projects and encourages students' high-level thinking and active collaboration (Wahyuni, 2014;Aksela & Haatainen, 2018).In project-based learning, students need to find answers to real-life problems by performing problem determination, research and investigation; developing various strategies for solving the problem; producing products based on cooperation; and exhibiting the products (Thomas, 2000).Students need to work collaboratively to identify problems, formulate them, solve them and develop concrete actions (Abishova et al., 2020).This model provides an in-depth understanding of topics and concepts; improves the permanence of knowledge and skills; and promotes the ability to use this knowledge and skills in new situations (Solomon, 2003).Active participation of students in the project process enables them to shape their ideas and reveal their perspectives (Zoller, 1991).
However, when studies using project-based learning in Turkey were examined (Sokur, 2018;Taşkın, 2018;Çalış, 2019;Demircioğlu, 2019;Sezer, 2019;Türkmen, 2019;Adalı Bakıoğlu, 2020;Ulukaya Öteleş & Ezer, 2020), in general the following problems were encountered.While the curriculum was applied in the control group, it was not applied in the experimental group and projects independent from the curriculum were used.While the courses were taught in accordance with the outcomes specified in the curriculum, the projects were carried out outside the course independent of the outcomes.The subjects were shared among groups instead of all students working according to all outcomes.During the implementation period (4-5 weeks), the students worked on the subject of the project for a single or few acquisitions.The projects did not meet the acquisitions that should have been given during that period.Additionally, generally only one unit was covered, the whole semester/year was not planned and students made a single presentation at the end of the term.Moreover, the activities during the course were perceived as a project.All of these show that the model cannot be integrated into the course; it is considered an application outside the course; and project-based learning is understood as a 'project assignment' given to the students.Sumarni (2013) found that it is not easy to guide students' studies when project-based learning is applied in crowded classrooms.Tonbuloğlu and Aslan (2013) identified one of this model's limitations as the difficulty faced in monitoring students' work.Öztuna Kaplan and Diker Coşkun (2012) stated in their study that teachers have problems with time and space in project-based learning.All of these show some difficulties in integrating project-based learning into the courses.These problems have become more apparent in the distance education process, and the issue of guiding students has emerged as a more critical problem.The study by Şahin et al. (2020) determined that students had problems in terms of 'communication' during the distance project-based values education course.In the distance education process, it has been observed that students need the guidance of the teachers while carrying out their projects, and they want to be in 'communication' with them to a greater degree.
Effective learning designs have gained even more importance in the critical distance education process faced by the world and our country.According to the meta-analysis by Russel (1999), when distance education is structured correctly, face-to-face education will have no significant difference.Considering that the learning and teaching processes of distance education require students and teachers to work without being in the same place and without the supervision of the teacher, it is crucial to use effective models in the planning of the lesson in order to achieve the following: maximise the learning potential of each student; create positive effects on student participation and student attitudes; and ensure effective learning.
We decided to remodel project-based learning as a 'learning cycle' and created the 'Project-based 6E Learning Model' (consisting of engagement, exploration, explanation, elaboration, extension and evaluation steps) to find solutions to the problems mentioned above, integrating project-based learning into the courses, ensuring integrity by structuring the entire course according to the same understanding, guiding students more effectively by determining what the students and teachers will do at each step, increasing student-student and student-teacher interaction and adapting project-based learning to distance education when necessary.
Project-Based 6E Learning Model
We created a model by combining the basic features of project-based learning with the understanding of the learning cycle.The learning cycle is a research-based teaching strategy based on students' prior knowledge.It shifts the emphasis from the teacher to the student and the active role played by the student in the learning process.It promotes learning through research and application-based activities (Sam et al., 2018).In learning cycles, students actively participate in each learning stage organised in a planned and sequential manner to structure their learning and gain learning competencies (Sharma & Sankhian, 2018).It improves the attitudes towards the course and improves students' reasoning skills and critical thinking (Sam et al., 2018;Racheal, 2019).By combining the powerful features of the learning cycle with project-based learning, each step of the students' activities, such as problem determination, research analysis, data collection, data analysis, development of various strategies for solving the problem and product creation are effectively structured with the individual, group and classroom activities of the course.Thus, we aimed for students to gain high-level thinking skills.
This model was created to be used in different disciplines.It is thought that students can gain skills such as problem determination, research analysis, data collection, analysing data and developing various strategies for solving the problems, as well as knowledge, skills and values related to the subject area effectively.The model can be easily adapted to both face-to-face and distance modes of education.The model is shown in Figure 1.
The Project-based 6E Learning Model is designed as a 'learning cycle' completed in two stages.At each stage, 'individual work,' 'group work' and 'class work' follow each other in respective order.At each stage, the students carry out individual studies before group work.Then, students do group work at the time they prefer before the lesson/classwork.They can choose face-toface or synchronous (online) communication.Classwork also can be arranged according to face-to-face or synchronous distance (online) education methods according to the situation while designing the class.
• Individual: In democratic learning environments, students need to work individually to gain personal experience and bring their experiences to a lesson (Gollob et al., 2010).Since successful cooperation starts with individual abilities, individual responsibility and motivation, individual learning is both a prerequisite and a complement to cooperation.In addition, individual learning is a requirement for individual responsibility (Yadin & Or-Bach, 2010).
• Group: In democratic educational environments, group work is as important as individual work.
Interactions with peer groups in the learning process positively affect students, being open to diversity, critical thinking and active learning.In such environments, students also gain values such as cooperation, trust, responsibility and democratic behaviour (Nagda et al., 2003).• Class: In the classroom, students not only share their own knowledge with the class but also learn new information in addition to their individual and group learning.Interactions and experiences in the classroom environment strengthen the social development of students.It provides opportunities for students to communicate with people different from themselves and to learn about different cultures, lifestyles and people (Ahmad et al., 2015;Jacobs & Power, 2016).
Since we wanted the model cycle to be completed in two phases/weeks and to include individual, group and class work in each phase, we decided that the model should consist of six steps.The steps were organised based on the steps of the 5E and 7E models.
The 5E model has five steps, which are as follows: 1) engagement, 2) exploration, 3) explanation, 4) elaboration and 5) evaluation. In the 7E cycle, the 'engagement' step of the 5E learning cycle is expanded into 'elicitation' and 'engagement.' Thus, more emphasis is put on activating previous knowledge for learning. The 'evaluation' step is changed to 'evaluation' and 'extension,' and thus, emphasis is placed on the transfer of knowledge. The 7E cycle therefore consists of seven steps, which are as follows: 1) elicitation, 2) engagement, 3) exploration, 4) explanation, 5) elaboration, 6) evaluation and 7) extension (Eisenkraft, 2003).
In the Project-based 6E Learning Model, unlike the 5E Model, there is an 'Extension' step.Unlike the 7E Model, there is no 'elicitation' step.Moreover, the 'extension' step comes before the 'evaluation' step.The steps of the 6E Model are 1) engagement, 2) exploration, 3) explanation, 4) elaboration, 5) extension and 6) evaluation.The engagement, exploration and explanation steps in the first stage are followed in the second stage by elaboration, extension and evaluation.
Project-based learning is a process that progresses through the following steps: 1) determining problems; 2) collecting data; 3) analysing data; 4) solving problems; and 5) concluding the project with a presentation (Blumenfeld et al., 1991;Wolk, 2001;Westwood, 2006).These steps of project-based learning are embedded in the model.
The Project-based 6E Learning Model's steps are explained in the following sections.
• Explanation: The explanation step is carried out as 'classwork.' In this step, students need to explain their experiences in the solution process, reveal their solutions and strategies regarding the problem, and explain their results and solution proposals. Students must reveal their knowledge and perceptions about the concepts, explain and define them and create new definitions.
• Evaluation: The evaluation step is carried out as 'classwork.' In groups, students present their experiences in the process, solutions to the problem and strategies. Students explain their results and solution suggestions for this stage/week and the common results and solution suggestions for both stages/weeks. The instructor ensures that the explanations are enriched and misconceptions are eliminated through the evaluations made at the end of the course. In this step, students individually evaluate the presentations of other groups. At the end of both stages, the groups submit their reports.
Purpose of the Research
An essential feature of values is that they do not come from birth but are acquired in life.Values education begins with the individual being sent to the realm of existence.The individual gains the values that will determine his/her personality, point of view and the direction of his/her behaviour in the future while living.The education given in the family by paying attention to the harmony of mind, heart and body constitutes the basis of the values of education.Although the impacts of the family, social environment and the school in value formation are intertwined, schools play the most influential role in the period when the characters of the individuals are shaped.Thus, schools are environments where the values gained in the family are internalised, applied and reinforced, and even new values are acquired (Kılıç et al., 2016;Şahin et al., 2016).
Values education was implemented in the curriculums at the beginning of the 20th century, and different approaches were implemented for this purpose (Kirschenbaum, 1995).Teaching methods that focus on memorising knowledge to acquire values will not work.It is essential to ensure that students use their reasoning skills through discussion and collaboration in democratic classroom environments (Lickona, 1991;Şahin et al., 2020).Providing students with research and analysis opportunities, creating a space where they can express their discourse by comparing various views and perspectives and providing an environment where they can interact encourages them to question and think critically (Simó et al., 2016).For this reason, as in all educational activities, in the practical values education method, process-based, student-centred and inquiryoriented models that improve students' critical thinking and problem-solving skills and encourage group learning, collaboration and student interaction should take precedence (Schwartz et al., 2013).Interactions in the learning process allow students to establish relationships with people different from themselves and learn about different cultures, lifestyles and people.Thus, it may be possible for students to recognise, appreciate and embrace values (Subba, 2014;Ahmad et al., 2015;Jacobs & Power, 2016).
In 2004, with the changes made in the curriculum in Turkey, values education began to be directly included in courses such as Science and Technology, Life Sciences, Religion Culture and Moral Knowledge, and especially the Social Studies course. The goal is to teach students some human and moral values within the framework of the particular purposes of these courses. However, everyone knows that this does not go beyond a theoretical wish because, until now, within the field of education, there has not been an understanding and consideration regarding enabling students to gain these values in schools. It is thought that this gap can be filled with the 'Project-based 6E Learning Model.' The model can be successfully applied in values education and can serve as an example. Through the model, students are expected to use their individual and collaborative reasoning, critical thinking and problem-solving skills, compare various views and perspectives by conducting research and analysis, and realise and adopt values. In this context, this research aims to determine the effectiveness of the 'Project-based 6E Learning Model' on students' character and on the Values Education course.
Research model
The experimental method was used in the research.The pre-test/post-test design, in which no control group was included, was adopted, and analysis was conducted using a semi-experimental design.The symbolic appearance of the pattern is given in Table 1.
Research process
The learning process was conducted according to the 'Project-based 6E Learning Model.'The research/ course process was carried out by distance education.
In the first 2 weeks, an introduction to the course was given, and pre-tests were applied. The model was put into practice as of the 3rd week. The students were placed in groups of 4-6 people according to their preferences. There were six groups in the English section and 10 groups in the Turkish section. In order to conduct and organise group work, each group was asked to elect a group president. Each president served for 2 weeks, and the role rotated so that every group member served as president.
Individual studies were carried out before group work.Materials related to the subject were sent to the students every Friday for individual studies.The group work was carried out in a face-to-face or synchronous/ online manner, and the students determined the time before the classwork.The classwork was done synchronously/online for the English group from 13.00 to 14.00 hours and for the Turkish group from 14.30 to 15.30 hours every Thursday on the university's distance education platform.
The learning steps of every 2 weeks continued within the limits of the unit topics.The unit topics are listed in Table 2.
At the end of each cycle, or every 2 weeks, each group prepared a single report and uploaded it to Google Classroom as a single Word file by 24:00 on Friday. The head of the group was responsible for preparing and sending the reports. The reports were evaluated by the researchers, the results were communicated to the students, and the reports were stored in electronic student portfolios. The students filled in the self-assessment and peer-assessment forms prepared by the researchers at the end of the semester. The post-test was applied the week after the end of the semester.
Data collection
Research data were collected using the Student opinion form, the Project-based Virtual Learning Competencies Scale and the Human Values Scale.Information about each tool is given below.
Student opinion form
A student opinion form was used to determine students' attitudes towards the course. The form included the following three questions: 1. What are your positive or negative thoughts and opinions about the work carried out within the Character and Values Education course? 2. How did the Character and Values Education course affect your 'teaching perception'? 3. How did the Character and Values Education course affect your values and perception of value?
Project-based Virtual Learning Competencies Scale
The Project-based Virtual Learning Competencies Scale developed by Tuncer and Yılmaz (2013) was used to measure the model's effectiveness on students' scientific research skills.The scale's Cronbach's alpha reliability coefficient was found to be 0.864.The scale dimensions and items are listed in Table 3.
Human Values Scale
The Human Values Scale, developed by Dilmaç (2007) and consisting of 42 items, was used to measure the model's effectiveness on students' values. This scale has six dimensions, namely, responsibility (seven items), friendship (seven items), peacefulness (seven items), respect (seven items), tolerance (seven items) and honesty (seven items). The scale's Cronbach's alpha reliability coefficient was found to be 0.92. The items were translated into English by Guzel and Oral (2018).
Data analysis
Quantitative data were analysed using the Statistical Package for the Social Sciences (SPSS) programme.
In order to examine the normality of the data, the Kolmogorov-Smirnov test was performed, and since it was seen that the data were normally distributed, the t-test, one of the parametric tests, was used in the analyses.
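As a rough illustration of the quantitative workflow described above, the sketch below reproduces the same two steps, a normality check followed by a paired-samples t-test, in Python with SciPy rather than SPSS; the score arrays are hypothetical placeholders, not the study data.

import numpy as np
from scipy import stats

# Hypothetical pre-test and post-test scores for the same respondents
pre = np.array([3.2, 3.5, 2.9, 3.8, 3.1, 3.6, 3.0, 3.4])
post = np.array([3.9, 3.7, 3.4, 4.1, 3.6, 3.9, 3.3, 3.8])

# Normality check on the difference scores (Kolmogorov-Smirnov against a fitted normal)
diff = post - pre
ks_stat, ks_p = stats.kstest(diff, "norm", args=(diff.mean(), diff.std(ddof=1)))

# Paired-samples t-test, the parametric test used once normality is supported
t_stat, t_p = stats.ttest_rel(pre, post)

print(f"KS p = {ks_p:.3f}; paired t = {t_stat:.3f}, p = {t_p:.3f}")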
The qualitative data analysis process was carried out in three stages (Kılıç et al., 2019): organising data, summarising data and associating/interpreting.First of all, the data were organised.The forms of the Turkish Language Teaching students were coded as TLS1, TLS2, …, TLS53, and the forms of the English Language Teaching students were coded as ELT1, ELT2, …, ELT29.Content analysis was used in summarising the data.The content analysis enables researchers to understand social reality in a subjective but scientific way (Zhang & Wildemuth, 2009) by expressing a data reduction process aimed at determining fundamental consistencies and meanings and by handling voluminous qualitative material (Patton, 2002).In the content analysis process, first, the data were coded by the researcher; then, categories were created by examination of the codes.In the associating/ interpretation phase, the data were interpreted by establishing relationships between the categories.
Validity and reliability
Cronbach's alpha coefficients of the scales were calculated for the reliability of the quantitative data.The Cronbach's alpha coefficient of the Project-based Virtual Learning Competencies Scale was found to be 0.940 for the pre-test stage and 0.980 for the post-test stage.The Cronbach's alpha coefficient of the Human Values Scale was found to be 0.884 for the pre-test stage and 0.890 for the post-test stage.Values between 0.80 and 0.95 indicate a high level of reliability (Kline, 1986;Coaley, 2010).In this case, it can be said that the scale is highly reliable.
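For reference, the reliability coefficients reported here can be computed from a respondents-by-items score matrix as in the short Python sketch below; the demonstration matrix is invented, and only the standard Cronbach's alpha formula is assumed.

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, rows = respondents, columns = scale items
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Invented 5-respondent x 4-item matrix, for illustration only
demo = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(demo), 3))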
Prolonged involvement is a recommended strategy to establish a relationship of trust between the researcher and the participants.This includes spending enough time observing various aspects of an environment, developing relationships and engaging with members of the target audience (Pandey & Patnaik, 2014).In this study, the researchers carried out both the application and the research part, and long-term interaction was ensured with the participants.A detailed description includes providing sufficient information to the reader about all research processes from the data collection stage to the production of the final report (Pandey & Patnaik, 2014).This study illustrates the research process in detail, and the data collection and analysis process is explained in detail and transparently.To get a constant observation, the researcher repeatedly reads and analyses the data and creates and reconstructs categories.The researcher looks at the collected data with a critical eye, questions the adequacy of these data in answering the research questions and examines the data until the data gain depth (Korstjens & Moser, 2018).In this study, the researchers read and coded the data repeatedly, and combining the codes in the categories was carried out meticulously.The relationships between the categories were constantly examined throughout the analysis process, and the results were meticulously organised and interpreted.The data were interpreted objectively, and the researcher's biases were not reflected in the research.Examining the research with various dimensions by an expert is one of the measures that can be taken in terms of credibility (Anney, 2014).In this study, expert opinion was obtained by associating the research data with the results.Objectivity confirmability in qualitative research refers to the degree of impartiality in research results.It is about clearly determining that the results and interpretations are not the product of the researcher's imagination but are obtained from the data (Korstjens & Moser, 2018).In this study, the researchers tried to reflect the research environment of the study with direct quotations from the data collected from the forms.The raw data of the study were stored for review when necessary.
Results and Discussion
The effect of the model on students' learning competencies
The results from the t-test analysis of the students' pre-test and post-test scores on the Project-based Virtual Learning Competencies Scale are given in Table 4. Table 4 shows that there are significant differences in the students' post-test scores for the total score (t = -3.602, p = 0.000) and in the following sub-dimensions: introduction (t = -2.840, p = 0.005), execution (t = -3.101, p = 0.002), conclusion (t = -4.479, p = 0.000), working with a group (t = -3.282, p = 0.001) and self-control (t = -2.038, p = 0.043).
Research skills such as identifying problems, collecting data, and analysing and reporting data too require high cognitive skills.These skills that make an individual a good researcher are also the skills required in student-centred learning.Teaching that enables students to develop these skills also forms the basis of lifelong learning (Yeoman & Zamorski, 2007).Regardless of their undergraduate major, students should experience research-based approaches to develop information literacy and research skills.Although project-based learning can successfully improve student engagement, not structuring the process can increase students' cognitive load and cause problems in learning.It is essential to structure the process well to reduce the cognitive load and thus improve research skills (Loveys et al., 2014).The fact that the students gained these scientific research skills as a result of the tool used in this study shows that the Project-based 6E Learning Model is successful.
The positive effect of the model on students' selfcontrol skills is also a significant result.Self-regulated learners are active participants in the learning process and take appropriate responsibility and control for their learning.They can observe and control their behaviours (Pilling-Cormick & Garrison, 2007).Self-control is critical in deciding what is worth learning and how to approach the learning task (Garrison, 1997).Especially in e-learning environments, students must exercise control over their learning.A successful pedagogical design should effectively decide the appropriate learning activities and help students choose the most suitable one (Lee & Dron, 2008).In this context, the fact that students have acquired this skill is quite positive regarding the model's effectiveness.
Students' views on the model
Students' views on the model are presented in Table 5.
Table 5 shows that the students' opinions are grouped under six categories: course design, individual study, group work, problem-solving, presentation and reporting.Students think that there are positive and negative sub-categories under each category.
Regarding the design of the lesson, the students found the course efficient, effective, engaging and enjoyable.They got permanent learning by experiencing, were satisfied with being in the centre and active during the course and thought that the course became easier over time.Negatively, they thought that the process was intense, complex and tiring.The students were not satisfied since the teacher was not lecturing, they complained about the lack of interaction and some students found the process tedious and thought that they could not get efficiency.
We think that the process is intense and challenging for the students because they are not used to studentcentred models that require them to learn from their own experiences.Taking responsibility for their learning is not easy.In educational environments, while some students are eager to take responsibility for their learning, some students may develop resistance to change.They may not be willing to make the necessary effort and involvement in managing their learning.Even students who want to take an active role in their learning need help to develop the necessary skills (Carpenter & Pease, 2013).As in our country (Turkey), it is quite natural and expected that this situation is even more challenging for students who are used to receiving teacher-centred education.Supporting this situation, Üstünoğlu (2009) concluded in his study conducted with 960 university students in Turkey that students do not perceive themselves as autonomous enough, are unwilling to take responsibility and continue to see the teacher as a dominant decision-making figure in the classroom.Ayish and Deveci (2019) conducted a study at a university in the United Arab Emirates.They also concluded that most students are aware of taking responsibility for their learning.However, a significant majority do not act on this awareness.We can say that some students' complaints about the lack of interaction are that the teacher is not in the centre and does not interfere with the process much.This happens due to not being used to student-centred understanding.In addition, we can say that distance education also affected this situation.
In general, the students were satisfied with the tasks that they performed in the individual step of the model.They thought that they learned new information, improved research and analysis, and enjoyed the process.On the other hand, some students found this job difficult and tiring.
Individual studies are essential for students to gain personal experience and come to the course with their individual experiences.There are also many studies showing that homework before the course increases students' active participation and success in the course (Yamane, 2006;Frydenberg, 2012;Herold et al., 2012;Talbert, 2012;Ronkainen, 2015;Garcia, 2018).
The group work category shows that the students were generally satisfied with this step.They thought that it provided the opportunity to interact and socialise with friends, they gained a different perspective and their workload lightened.They found that group work is effective in gaining skills and values.However, some students complained about others not fulfilling their responsibilities in group work, having difficulty coming together and having too many people in the group.
These results support the quantitative results that students gain the ability to work in groups.The study of Şahin et al. ( 2020) concluded that students found group works very positive in interacting with their friends, exchanging ideas and producing common and successful products.Hartman et al. (2013) state that high team cohesion supports higher levels of problembased coping, emotion-based coping and team goal achievement.Carpenter and Pease (2013) state that students in practical collaboration activities with their peers achieve better academic results, establish more robust relationships and are psychologically healthier.However, in line with the results of this study, Gül and Konu (2008) state that although some students see group work as beneficial, others have negative opinions about it.This situation shows that the students have acquired collaborative working skills.Supporting this result, Carpenter and Pease (2013) state that cooperation is not easy.Students cannot always be ready for cooperation.
We found that the students improved themselves in the data collection, data analysis, and production of results and solutions steps of the study, gained different perspectives and multidimensional thinking skills, and enjoyed the problem-solving process. However, some students stated that they had difficulties identifying problems and collecting data.
These results are in agreement with the quantitative results of the study.Both quantitative and qualitative results show that students have improved their scientific research skills.The results also show that the students enjoyed this process.İlhan et al. (2016) found in their study conducted with associate and undergraduate students that students' attitudes towards research are at a low level.The study of Wishkoskia et al. (2022) replicated previous findings indicating anxiety and negative attitudes surrounding research method-related courses.The model is successful in this context because undergraduate students generally do not have positive attitudes towards scientific research processes.
We found that the presentations made by the students provide them with teaching experience, presentation skills, and self-confidence, and they think it is helpful to listen to other groups' presentations.
However, we also found that making presentations every week caused stress for some students; not giving feedback to each group after their presentations negatively affected them, and some students could not focus on others' presentations.
The students were not willing to prepare a presentation about their research results.We think they preferred the information to be presented to them ready-made and they did not want to take responsibility for their learning.However, Sugeng and Suryani (2018) state that presentation-based learning enables students to engage actively in their learning process.It also provides a chance to students to exercise their self-regulated learning towards being a more independent learner.It increases their confidence to speak and participate in the class forum.An essential function of formative assessment is to provide continuous feedback to students within the scope of teaching (Bennett, 2011).In this context, we gave students collective/general feedback after each presentation.By giving collective feedback, it was aimed not to disrupt the integrity of the course and to prevent the personalisation of the feedback given.We think students requested direct feedback because they did not want to synthesise the relevant part of the explanation and wanted to obtain the information without putting effort into it.In other words, they wanted to avoid structuring evaluations.
We found that the students thought they had learned how to write reports and articles and were satisfied with producing products.However, they found it tiring to prepare a weekly report and criticized the lack of examples.Guo and Yang (2012) state that the emergence of a final product in the form of a design, model, device, application/activity and written/oral report used to present the product and result produced are very important for the effectiveness of project-based learning.
Some direct quotes from students' opinions under these categories are as follows: TÖ6: 'I think the course is one that every teacher candidate and even others from different professions should take.I think it is a beneficial and needed lesson.However, I can say that presentations in the course every week broke my will for the course a bit.' İÖ8: 'My personal views within the scope of this course were negative at the very beginning because the teaching of the course was different, and I had a prejudice.However, as the course progressed and I started to adapt to the activities we did, I realized that our teachers were trying to give good information to us with a different method.'TÖ11: 'I think doing group homework in the course is also a part of this awareness because we may encounter various problems while doing group homework.Additionally, I think we learned many values without being aware of doing so.' İÖ29: 'Although at the beginning, the course was quite complex and difficult, I understood the purpose of the course better and started to enjoy it.Working individually and in a group, collaboration was very convenient for the character and values education course.'TÖ32: 'Obtaining the information by researching ourselves became more permanent in the memory.However, sometimes it was tiring because of the systematic intensity of the courses and the preparation of reports every week.'
The effect of the course on students' values
The results from the t-test analysis of the students' 'Human Values Scale' pre-test and post-test scores are given in Table 6.
While there are some insignificant increases in the scores in the dimensions other than tolerance, there is a decrease in the tolerance scores.We think that the decrease in the tolerance scores of the students can be a result of their reaction to group work.This may be because development of students' 'working together' and 'adaptation' skills was insufficient.
The students' views on the effects of the model on their values and value perceptions are presented in Table 7.
Table 7 shows that the opinions of the students are grouped under five categories, namely, values information, values overview, values education at a glance, own values and teaching perception.
We found that students' knowledge about values within the scope of the course has increased.Students gained awareness of values, understood the importance and function of them, and realized value problems and solutions.The students' perspectives on values education have also changed.They understood that values are teachable.They realized the importance and necessity of values education and the importance of school in gaining values.They realized the importance of interaction with students, family, school, and environment in values education.
Results show that the Project-based 6E Learning Model positively affects students' value perceptions.These changes in students' perceptions are significant because perception involves the way one sees the world (Mcdonald, 2012).That is why we think that this change in students' perceptions will have an impact on the values they have.
We found that the students noticed their values, reviewed and questioned, reinforced, realized their missing values, completed their deficiencies, and gained new values.
The results show that this inquiry-based model affects students' values since it enables students to interact with each other and their environment and express their opinions by comparing various views and perspectives.The model also lets them gain the skills of reasoning, critical thinking, and problem-solving through discussion, collaborative environments, and so on to recognise, appreciate and embrace values.Some studies show the effectiveness of different models with such features on values.At the end of the values education programme, Erikli (2016) found that students' values improved positively.As a result of the moral development programme, Krop (2006) observed a significant effect on the responsibility values of the participants.Izgar (2013) noticed that the democratic attitudes and behaviours of students were positively affected after applying his values education programme to the students.The values education programme implemented by Perry and Wilkenfeld (2006) was also effective in value acquisition by students.We think that the lack of a significant difference in student values in quantitative results is because of the fact that rather than gaining new values that they never had, students recognise, reinforce and complete the deficiencies of their values.
We found that there had been a significant change in students' perception of teaching.Students' experiences have changed in subjects such as understanding the meaning of teaching, gaining a positive attitude towards the teaching profession, gaining experience in the teaching profession, realizing the responsibility of being a teacher, understanding the importance of the teacher in value acquisition, realizing the values that the teacher has and should have, and gaining motivation to add value to the students.
We think that the changes in students' perceptions were significant. Because the educational beliefs and practices of teachers are fundamentally interrelated (Guerra & Wubbena, 2017), pre-service teachers' perceptions of values, values education and teaching will affect how they carry out their profession in the future and thus how they raise students; this, in turn, will have a significant impact on the education system. Some direct quotes from students' opinions under these categories are as follows. İÖ10: 'As I said in the previous question, we have fallen into an ocean of values, and in this ocean of values, we have somehow found ourselves on another level. I can say that we added new things to ourselves.' TÖ16: 'It made me realize what values I have and what values I should have.' İÖ23: 'I used to want to be a good English teacher, but I realized that teaching English before being a good teacher is useless. People can learn English in some way in their lives, there are many ways for this, but I think it is a teacher's job to teach values.' TÖ33: 'As a teacher candidate, I think I should have all these values. Because I will establish society by educating students, in this respect, for me, the Character and Value[s] Education Course has become a subject and course of great importance. I have gained many positive things professionally for myself.' TÖ53: 'I realized that being a teacher is not just teaching something, but it is a way of life, and an active role model in the character and value development of the student. I learned how important teaching is in the formation and spread of values.'
Conclusion
We found that the Project-based 6E Learning Model improved students' research skills.The students improved themselves in collecting data, analysing them and producing results and solutions.The students thought that they had learned how to write reports and articles and were satisfied with making products.The students found the model efficient, effective, exciting and enjoyable.They got permanent learning by experiencing.The model had a positive effect on students' self-control skills, multidimensional thinking skills and problem-solving skills.
The students were satisfied with the tasks that they performed in the individual step of the model, and they achieved efficiency.The model gave students the ability to work in groups, a positive attitude towards group work, and various skills and values.In class works, presentations provided the students with teaching experience, presentation skills and self-confidence.
The Project-based 6E Learning Model provided students with various types of information about values.The students gained awareness of values, understood their importance and function and realised value problems and solutions.The students' perspectives on values education have also changed.They understood that values are teachable.The model enabled students to recognise values' importance, review, question, strengthen, complete their deficiencies and gain new ones.Finally, students experienced significant changes in their perception of teaching.They realised the meaning of teaching and the responsibility of being a teacher.They gained a positive attitude towards the teaching profession and developed motivation to add value to the students.
On the other hand, some students were not satisfied because the teacher was not lecturing and felt that they did not benefit as much from the course. Some students thought that the process was intense, complex and tiring. Some students stated that they had difficulties in identifying problems and collecting data. We believe the process is intense and challenging for these students because they are not used to student-centred models and to taking responsibility for their learning. They do not want to make the necessary effort and involvement in managing their learning.
In group work, some students complained about friends not taking care of their responsibilities, having difficulty coming together and having too many people in the group.Students may not find interacting with their peers from different backgrounds easy.It requires cooperation skills.Therefore, educators must design experiences for students that support the development of this skill.
Some students found it tiring to prepare a report and make presentations every week. However, in project-based learning, a final product is essential for the model's effectiveness. Some students complained about not receiving specific feedback for each group after their presentations. We thought that collective feedback would not only reduce the workload of the lecturer but also improve the students' ability to analyse and synthesise their share of the evaluation and prevent the personalisation of the assessments. However, in future uses of the model, giving specific feedback to each group can be considered.
We recommend applying the model to different branches, evaluating its effectiveness and generalising it.
• Elaboration: The elaboration step is carried out as 'individual work.' This stage aims for students to develop a more in-depth understanding of the problem. Students individually develop suggestions to identify new problem situations.
• Extension: The extension step is carried out as 'group work.' In this step, students identify a new problem related to the first problem, transfer concepts to new situations, collect/analyse new data if necessary, reach new results and generate ideas for solutions as a group. Students are expected to reach common conclusions and recommendations for the problems of both stages/weeks.
Table 1 .
Pre-test/post-test experimental design without control group . What are your positive or negative thoughts and opinions about the work carried out within the Character and Values Education course? 2. How did the Character and Values Education course affect your 'teaching perception'?3. How did the Character and Values Education course affect your values and perception of value?
Table 2. Chapter topics
Table 3. Project-based Virtual Learning Competencies Scale: dimensions and items (sample item: 'I can evaluate the work of my project group and other groups.')
Table 4. Project-based Virtual Learning Competencies Scale: pre-test and post-test results
Table 5. Students' views on the model
|
v3-fos-license
|
2019-05-20T13:05:49.497Z
|
2018-01-01T00:00:00.000
|
158132572
|
{
"extfieldsofstudy": [
"History"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.org.za/pdf/yt/n19/05.pdf",
"pdf_hash": "9fdd2a71e1ff53f8eff067b804553661d39e697c",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2927",
"s2fieldsofstudy": [
"History",
"Political Science"
],
"sha1": "9fdd2a71e1ff53f8eff067b804553661d39e697c",
"year": 2018
}
|
pes2o/s2orc
|
A comparative investigation into the representation of Russia in apartheid and post-apartheid era South African History textbooks
In this comparative study we employed a qualitative approach, underpinned by the interpretivist paradigm, to analyse the content on Russia found in Apartheid and post-Apartheid era History textbooks. This was done by means of qualitative content analysis. The focus of the analysis was exclusively on the historical content, or substantive knowledge, as it related to Russia. What emerged was that the political eras under which Russia was studied remained remarkably similar across the Apartheid and post-Apartheid eras. However, clear similarities and differences were otherwise discernible. While big men dominated the content of both eras, the approach adopted by the post-Apartheid era History textbooks towards them was generally more critical. While a fear of Communism was embedded in the Apartheid era History textbooks, the opposite can be said of the post-Apartheid era textbooks. What this points to is that during both political eras the content on Russia was adapted to suit the prevailing identity politics, national narratives and ideology of the time – closed and insular under Apartheid and open and critical in the post-Apartheid era.
Introduction
The traditional beginning of Russian History is regarded as being 862, with the formation of a loose federation amongst East Slavic tribes. This federation spawned an Orthodox Slavic culture which in time embraced Tsardom. Expansion under the Tsars created a vast Russian Empire stretching from Eastern Europe to the Pacific Ocean. Tsardom and its feudal nature were eventually overthrown during the Russian Revolution of 1917. The Russian Revolution ushered in an era of Communist rule under the Union of Soviet Socialist Republics (USSR). Communist rule in Russia ended in 1992, when the USSR was replaced by the Russian Federation, which embraced Capitalism and aspects of Western democracy.
1 Throughout the article the term Russia will be used to refer to Tsarist, Communist and post-Communist Russia.
South Africa has a long relationship with Russia which has been shaped by three significant ideological eras: Colonialism, Apartheid and post-Apartheid. Especially during the latter two eras significant connections developed between the two countries. Soon after the rise of Communism in Russia, the South African Communist Party (SACP) was established in 1921. This served to strengthen ties between Russia and South Africa. The relationship was deepened when the African National Congress (ANC) formed an alliance with the SACP to oppose the racist Apartheid policies of the ruling National Party (NP). Despite Communism being banned by the NP in 1950, Russia continued to aid the SACP. This was done as part of a broader ideological struggle tied to the Cold War, which on the one side had the Western-aligned NP, overseeing white minority rule, portraying Russia and Communism, in their opposition to Apartheid and Capitalism, as the Rooi Gevaar (Red Peril), and on the other the liberation movements experiencing Russia and Communism as the antidote to racial oppression. In the end the demise of Communist Russia in 1992 also ushered in the fall of the NP and Apartheid. This paved the way for the 1994 democratic elections in South Africa which brought the ANC and its ally the SACP to power. In time the new South Africa and Russia, both products of significant political shifts, formed, with China, India and Brazil, an economic alliance called BRICS. This served to entrench relations between South Africa and Russia in a new way.
With reference to History Education, Russia has been studied in the senior grades of schooling during both the Apartheid and post-Apartheid eras. This is significant in that the challenge posed to the NP's reign by what Russia stood for could have affected the way Russia was represented in the historical content of the Apartheid era History textbooks. Similarly, the support offered by Russia to the liberation movements opposing Apartheid, and the contemporary relationship within BRICS, could again have affected the representation of Russia in the historical content of the post-Apartheid era History textbooks.
Using the aforementioned background and context as a framework, this article investigated the representation of Russia in South African History textbooks across, ideologically speaking, two very different political eras. More specifically, the focus fell on the historical content, or substantive knowledge, as contained in the History textbooks (the programmatic curriculum) of the two eras. Differently put, the focus of the analysis fell on the written text. Visual images, paratext (textboxes), activities and exercises, and historical sources were not engaged with. Although this meant a narrow focus, we deemed it a researchable undertaking for a comparative investigation. Outside of historical content it would have been extremely challenging to do a comparative study of how Russia was represented in the Apartheid and post-Apartheid era History textbooks. This is because the nature of History at school has changed radically from the Apartheid to the post-Apartheid era. During the former, History at school level was mostly a memory discipline underpinned by passive forms of rote learning and an Afrikaner Nationalist historiography that foregrounded white achievements. In line with the changes experienced by post-Apartheid society, History at school has also transformed fundamentally. It is now a disciplinary discipline with active forms of learning that speak to the constitutional democracy, based on human rights, that it serves (Wassermann, 2018). These changes are clearly visible when simply paging through the History textbooks of the two eras. However, such paging reveals very little about historical content. Therefore, room for this study exists, since little to no research has been conducted on how the intricate relationship between South Africa and Russia over the past hundred years affected the representation of the latter within the content of Apartheid and post-Apartheid era History textbooks.
Literature review
It has been argued that the nature of History Education is to be found in the selection of national narratives. Many a time specific national narratives, based on a selective memory, are promoted (Foster, 2011; Zajda & Zajda, 2003). Selected narratives are used and manipulated in order to link a desired ideology to a society so as to instil a common set of values and identities (Engelbrecht, 2006; Foster, 2011). Pratte (1977) and Zajda and Zajda (2003) further detail that selected historical narratives are part of a power play to assert ideological dominance within History, with the aim of establishing a tangible identity for the dominant group.
A national narrative can be equated to a preferred national identity. The envisaged national identity is created by manipulating historical knowledge according to presentist cultural or societal needs and norms (Foster & Crawford, 2006; Porat, 2004). Foster and Crawford (2006) suggest that nations select a desired past and a desired national identity that best represents the nation. The chosen national identity often manifests itself in a refashioned historical memory gleaned from desired historical conceptions (Porat, 2004; Stojanovic, 2001; Zajda, 2007). This thinking is furthered by Porat (2004), Stojanovic (2001) and Zajda (2007), who argue that the appropriation of desired historical concepts can redefine a national identity. However, this can often offer a tainted view of History, as it supports and promotes a dominant culture which suppresses views not regarded as favourable (Apple, 2004).
The process of selecting an agreed-upon national narrative can be built upon the use of selected ideas to promote a distinct national consciousness (Zajda, 2007). According to Foster (2011), the purpose of selecting favoured ideas is to promote a collective memory which reinforces an entrenched national consciousness. The purpose of this would be to solidify political and societal norms so as to create a particular interpretation of History that is more desirable for the nation (Pingel, 2010). History textbooks are ideally suited to act as constructors and disseminators of national narratives and identities (Apple & Christian-Smith, 1991; Paxton, 1999; Wang, 2009).
Research on South African History textbooks is a thriving field (Bertram & Wassermann, 2015). This research covers a plethora of topics, including how Apartheid era textbooks were used to promote Afrikaner master symbols and racism (du Preez, 1983; da Cruz, 2005), and the ideological differences between textbooks of the Apartheid and post-Apartheid eras (Engelbrecht, 2006). Amongst this research is also to be found a study comparing specialised and everyday subject knowledge between Apartheid and post-Apartheid era History textbooks (Bertram & Bharath, 2011). In their study Bertram and Bharath found that contemporary Grade 6 History textbooks contained reduced substantive history knowledge when compared to the equivalent Standard 4 books of the 1980s dealing with the same topics. Additionally, the contemporary textbooks contained a more inclusive history, everyday knowledge and generic skills. In terms of History textbooks, comparative studies with a South African focus are few and far between (Nishino, 2006; Chisholm, 2015). These comparative studies, furthermore, did not engage with the representation of the History of a specific country.
Internationally, the available literature pertaining to the representation of Russia in History textbooks, other than in Russian History textbooks themselves, is also limited and contains no comparative studies. The literature that does exist is generally dated and promotes a common description of Russia as backward and ruled by tyrannical rulers (Anderson, 1954; Burkhardt, 1947-1948; Paddock, 1998).
The literature reviewed, as outlined above, clearly leaves room for an article such as ours which focusses on the comparative representation of Russia in South African History textbooks of the Apartheid and post-Apartheid eras. Additionally, the importance of a comparative study such as ours is argued for by Cavalli (2016), who states that it enables researchers to deduce the similarities or the differences found in textbooks so as to come to a greater understanding of the History that is being taught.
Research design and methodology
Research design and methodology are interlinked, with the former providing the planned structure and the latter the means of data collection and analysis. For this article we employed an interpretivist paradigm. Interpretivists seek to understand truths constructed within a version of social reality. In this article this relates to the contrasting social realities of the ideological eras of Apartheid and post-Apartheid. The stark contrast between the two eras would therefore impact on the interpretation of Russia within these eras (Zhao, 2001). Since we employed an interpretivist paradigm, we chose, to underscore our paradigmatic position, to use a qualitative approach. The rationale for using a qualitative approach is aligned to our research focus on the social construction of Russia during two very different ideological eras. In the light of this, the need for our research approach to be qualitative in nature is imperative, as it focusses on reality being a social construction (Bauer, Gaskell, & Allum, 2000; Denzin & Lincoln, 2008; Kahlke, 2014).
In order to gain a rounded understanding of the representation of Russia in Apartheid and post-Apartheid era textbooks, a large enough sample was needed. For our research we chose to incorporate both convenience and purposive sampling. Cohen and Manion (1994) and Teddlie and Yu (2007) stipulate that convenience sampling allows the researcher to select the most readily available, and therefore the most convenient, textbooks. From the outset we ensured that the textbooks selected were appropriate to the History curricula they were based on. For the Apartheid era textbooks we chose to employ mostly convenience sampling, as it proved a challenge to find textbooks from this era. This was because many schools no longer keep them, as they are outdated and belong to a different curriculum and political dispensation. Consequently, we were left with only a few textbooks from which to choose. We did, however, incorporate purposive sampling into the small selection of Apartheid era textbooks that we did acquire. This was done because we only needed three textbooks. As a result we specifically chose textbooks that, for the most part, shared a common publisher with the sampled post-Apartheid era textbooks.
We chose to label the three selected books, for analysis and ethical reasons, as Book A1, A2 and A3:
• Book A1: South Africa in the modern world (1910-1970), 1974, publisher
In terms of the selected Apartheid-era textbooks it is necessary to understand them contextually. The three chosen textbooks were published during three different phases of Apartheid: Book A1 was published in 1974, during the height of Apartheid. Book A2 was published in 1987, and although petty Apartheid was pretty much over, South Africa was still, despite global and internal resistance, an Apartheid state. Finally, Book A3 was published in 1989, when Apartheid, the Cold War and Soviet Russia were winding down.
For the post-Apartheid textbooks we only used purposive sampling, as Cohen and Manion (1994) and Teddlie and Yu (2007) suggest that this type of sampling allows the cherry-picking of textbooks for a specific purpose. Since we chose to use current, in-circulation History textbooks compliant with the Curriculum and Assessment Policy Statement (CAPS), there were numerous textbooks available to us. However, based on the need for as much continuity as possible, we selected post-Apartheid era textbooks that had, for the most part, the same publishers as the Apartheid era textbooks.
We have chosen to label the three selected textbooks, for analysis and ethical reasons, as Book B1, B2 and B3: Communism in Russia, 1900-1940 (pp. 3-56).
As part of our research methodology we chose to use qualitative content analysis, a subsection of content analysis. This allowed for the use of an open-coding system of analysis. Qualitative content analysis allows for the identification of patterns by means of codes, thus allowing for an interpretation based on the patterns that emerge (Elo & Kyngäs, 2008; Hsieh & Shannon, 2005). It must, however, be borne in mind that qualitative content analysis permits a subjective interpretation of the patterns based on the contextual nature of the text. This is in contrast to content analysis, which tends to allow for a more objective interpretation (Denzin & Lincoln, 2008). Following the structure of qualitative content analysis, our data analysis process was broken down into three distinct phases: preparation, organising and reporting.
In the preparation phase we divided the historical content on Russian History into the three main historical eras as found in the selected History textbooks. The three eras were: Tsarist Russia, Bolshevik Russia and Communist Russia. During the organising phase that followed, categories were created through the use of open-coding. Finally, during the reporting phase, our created categories, and the patterns that emerged, were interpreted.
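To make the coding and tallying workflow of these three phases concrete, the minimal sketch below shows how coded passages could be counted per category and era; the passages, codes and labels in it are invented purely for illustration and do not come from the analysed textbooks.

```python
from collections import Counter, defaultdict

# Minimal sketch of tallying open codes from qualitative content analysis.
# The passages, codes and era labels below are invented for illustration only.

# (era, textbook, passage) triples captured during the preparation phase
passages = [
    ("apartheid", "A1", "Stalin's rule came at a great cost of human life"),
    ("apartheid", "A2", "industrialisation modernised a backward Russia"),
    ("post-apartheid", "B2", "women played a leading role in the revolution"),
]

# Codes assigned manually during the organising phase (open-coding is done by the researcher)
coding = {
    "Stalin's rule came at a great cost of human life": "big men / Stalin",
    "industrialisation modernised a backward Russia": "modernisation",
    "women played a leading role in the revolution": "role of women",
}

# Reporting phase: count how often each category occurs per era
tally = defaultdict(Counter)
for era, book, text in passages:
    tally[era][coding[text]] += 1

for era, counts in tally.items():
    print(era, dict(counts))
```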
Using a comparative methodological approach, we analysed the content related to the representation of Russia in the six textbooks from two different ideological eras. This was done by using the interpretivist paradigm and a qualitative approach to underpin the qualitative content analysis employed. The findings that emerged from the analysis process as outlined will be discussed next.
Analysis -Findings on the representation of Russia in the content of Apartheid era History textbooks
We have chosen to organise our analysis broadly along the historical eras as found in the sampled History textbooks: Russia under the Tsarist Regime, Russia during the Bolshevik Revolution, Russia under Lenin and Russia under Stalin.
The main overall finding was that Books A1 (1974) and A2 (1987) offered, throughout, a strong anti-Tsarist, anti-Communist and pro-West discourse. In contrast, Book A3 (1989) offered a less critical approach to Communism, albeit still presenting an anti-Tsarist and pro-West positioning. The suggested reason for the change in approach by Book A3 (1989) is that South Africa was nearing the end of Apartheid. At the same time the Cold War and Communist Russia, and by extension the "Red Scare", were winding down. Consequently there was, ideologically speaking, no longer any real need to maintain the same ideological position as in the past. In contrast, Books A1 (1974) and A2 (1987) both needed to adopt a critical approach towards Communism in keeping with the political sentiments espoused by the National Party at the time.
Another finding that permeated all the historical eras in all three Apartheid era textbooks was the focus on big men. While there was some difference in the amount of space allocated to each, Tsar Nicholas II, Vladimir Lenin, Leon Trotsky and Joseph Stalin dominated the historical content. According to the three Apartheid-era textbooks these four men were instrumental in shaping Russian History. Consequently, other historical characters were under-represented or simply omitted.
Across the analysed Apartheid-era History textbooks, Lenin and Stalin were represented as the masterminds behind Communist Russia. All three textbooks tended to perpetuate a similar image of Lenin: a good leader who could be admired. In contrast, Books A1 (1974) and A2 (1987) strongly associated Stalin with violence, brutality and absolute oppression. The representation of Stalin softened in Book A3 (1989), perhaps in keeping with the changing circumstances in Soviet Russia. Where the three textbooks differ relates to the human cost of Stalin's rule. Book A1 (1974) condemned Stalin's cruelty and the great cost in human life during the purges but skimmed over the loss of life during industrialisation. Books A2 (1987) and A3 (1989), in contrast, saw the modernisation of Russia as necessary and therefore the human cost as necessary collateral damage. As a result both textbooks focussed more on the economic and industrial successes of Russia under Stalin than on the cost to human life. In comparison to Lenin and Stalin, Trotsky was backgrounded. However, his consistent presence in all three Apartheid era History textbooks signifies his importance in the establishment of Communist Russia.
As mentioned above, a discernible anti-Tsar sentiment runs through all three Apartheid era History textbooks, which served a Republican-orientated NP regime. Tsar Nicholas II is represented in all three textbooks as oppressive and indifferent to the needs of the Russian people. At the same time he is accused of contributing to Russia remaining a backward country. Additionally, he is considered weak and ill-equipped to deal with Russia's problems and is blamed not only for the fall of Tsarist Russia but also for the rise of Communist Russia.
Book A3 (1989) was especially vitriolic in its representation of the Tsar and presented Communism as a respite from his oppressive rule. In so doing, Book A3 (1989) adopted a less critical approach to Communism when compared to Books A1 (1974) and A2 (1987). South Africa had outlawed Communism in 1950, and the ideology of the ruling NP was diametrically opposed to it. Consequently, both Books A1 (1974) and A2 (1987) highlighted the dangers of a revolutionary party by adopting an anti-revolutionary stance. In so doing the books offered a veiled warning against extra-parliamentary groups. This was done by highlighting how a revolutionary party, such as the Bolsheviks, can ruin a country. In contrast, Book A3 (1989), by dint of the fact that it was published when radical political changes were facing South Africa and Russia, adopted a less critical approach to Communism. However, what all three textbooks did was to rail against Communism while, since it was illegal to do so, failing to provide a description of what Communism actually entailed. As a result neither History teachers nor learners were offered a balanced understanding of what Communism entailed.
In conclusion, the Apartheid era History textbooks outlined the same turn of events that created a Communist Russia and foregrounded the same male historical characters. Apart from minor differences, the representation of Russian History within all three Apartheid textbooks remained largely similar.
Analysis -Findings on the representation of Russia in the content of post-Apartheid era History textbooks
The sampled post-Apartheid History textbooks were written for the CAPS History curriculum currently in use. Contextually, the post-Apartheid textbooks were published in a South Africa where Communism was no longer banned but closely intertwined with the ruling ANC government. The historical periods selected, Russia under the Tsarist Regime, Russia during the Bolshevik Revolution, Russia under Lenin and Russia under Stalin, mirrored those of the Apartheid era. This in itself demonstrates continuity in terms of historical content.
The three post-Apartheid era History textbooks, B1 (2012), B2 (2012) and B3 (2012), displayed a clear anti-Tsarist sentiment. As with the Apartheid era textbooks, Tsar Nicholas II was represented as weak and backward, perpetuating an oppressive and unsuccessful reign.
In the post-Apartheid era History textbooks the big male characters, Tsar Nicholas II, Vladimir Lenin, Leon Trotsky and Joseph Stalin, dominated the historical content, similar to the Apartheid era textbooks. A significant change, however, is that Trotsky was portrayed in much more detail than under Apartheid. The main focus nevertheless remained on Lenin and Stalin. In textbooks B1 (2012) and B2 (2012) Lenin emerges as a hero and an admirable leader, although Book B2 (2012) also offered a balanced portrayal that called his actions into question and debated his hero legacy. In contrast, Stalin and his legacy were debated and questioned by all three post-Apartheid era textbooks, in the process presenting him as either a hero or a tyrant. For example, Book B1 (2012) denied Stalin hero status because the human cost of his rule was deemed unjustifiable. In contrast, Book B3 (2012) perceived the human cost under Stalin to be worthwhile since it resulted in a modern Russia. However, the most radical departure from the past by the three post-Apartheid era textbooks is the sections on the role of women during the Russian Revolution. This constituted a mighty blow to the male hegemony the Apartheid era textbooks tended to attribute to the Russian Revolution.
In line with the changes that took place in post-Apartheid South Africa, all three textbooks analysed offered a basic description of the origins of Communism and what the ideology entailed. This is a radical departure from the Apartheid era and is indicative of how much South Africa, and History at school, have changed since 1994. At the same time, none of the post-Apartheid era History textbooks displayed an anti-Communist outlook. Instead, Books B1 (2012) and B2 (2012) offered a somewhat positive representation of the Bolsheviks as a revolutionary party. This did not prevent Book B2 (2012) from also offering a critical perspective on Bolshevism and totalitarian rule.
Despite the afore-mentioned, all three textbooks (B1, B2, B3) still adopted a pro-Western lens through which to understand Russia. For example, although Russia was represented as having modernised, this was linked to aid offered by the West as well as to the need for forms of Capitalism within Russia's economy.
Although the three post-Apartheid History textbooks offered content similar to that found in the Apartheid era textbooks, in other ways the content had changed radically. This includes the adoption of a more critical approach, the appearance of women as leading historical characters and the normalisation of Communism as an ideology. As such, the representation of Russia in post-Apartheid History textbooks exhibits both sweeping changes and moments of continuity.
Discussion
To society at large, and many a time to teachers and learners as well, historical content or substantive knowledge is what constitutes History at school. Although this is a dated idea, it powerfully maintains currency. Looking at the historical content used to represent Russia in South African History textbooks across the Apartheid and post-Apartheid eras was, therefore, an exercise in trying to understand what has changed and what has remained the same. This was done against a backdrop of radical political changes in South Africa which also ushered in fundamental changes to the teaching and learning of History at school level.
What then has changed and what has continued in terms of the representation of Russia in South African History textbooks across the Apartheid and post-Apartheid eras? First and foremost, the historical periods studied, the Tsarist Regime, Russia during the Bolshevik Revolution, Russia under Lenin and Russia under Stalin, remained remarkably similar across the two political eras. Consequently, the general historical events and characters also maintained a certain similarity. The rationale for this is relatively simple: the eras as outlined and the historical characters as identified are the benchmarked historical content on modern Russia. The reason is that these were the events and characters that birthed an alternative ideology and economic and political order, the like of which the world had never experienced before. How it unfolded and why it happened in Russia is therefore standard historical fare. The result was a certain transferability of historical content between curricula and educational material of the two very different political eras. Such a transfer is not necessarily a strange practice, especially since textbook producers are generally under pressure to present a commercial product to a government textbook vetting committee and the market. Additionally, most textbook authors are not experts on Russian History and rely on academic History publications which are recontextualised as school History.
Many a time the available historical publications will also have a certain pro-Western bias. Inevitably, considering the aforementioned, a certain similarity in terms of textbook content is to be found between the Apartheid and post-Apartheid publications.
However, it would be fair to say that Russia was not portrayed uniformly across the Apartheid and post-Apartheid era History textbooks. Internationally, the limited literature we came across about Russia in History textbooks presented the country uniformly as backward and ruled by tyrannical rulers (Anderson, 1954; Burkhardt, 1947-1948; Paddock, 1998). However, apart from Russia under Tsar Nicholas II, both the Apartheid and post-Apartheid era textbooks did not necessarily view Russia as backward. Especially the large-scale modernisation and industrialisation processes that took place under Communism were presented in a relatively positive light. And although Tsar Nicholas II and Stalin were seen as tyrants, Lenin was presented in the textbooks of both the Apartheid and post-Apartheid eras as anything but that. A telling departure from the international literature and the Apartheid era textbooks is the critical engagement with both Lenin and Stalin by the post-Apartheid era books (B1, B2 and B3). At the same time women as historical characters who played an important role in the Russian Revolution were foregrounded. This constituted a radical departure from the male-hegemonic Russian History found in the Apartheid era textbooks and the international literature.
The most telling change in the representation of Russia between the Apartheid and post-Apartheid era History textbooks is the engagement with Communism as an ideology. In the Apartheid era textbooks, in line with the political ideology of the NP and within the context of the Cold War, Russia and Communism were the "Red Peril". Communism was foregrounded as a threat especially to Capitalism and the white way of life. Such was the fear of this peril that it was illegal to explain the nature of Communism or to say anything positive about it. This ideological position permeated the representation of Russia in the Apartheid era textbooks. In contrast, post-1994, Communism and Russia have both been liberated and normalised. Not only is the SACP part of the ruling government, but Communism, and what it entails as an ideology, is unpacked, be it at a rudimentary level, in the History textbooks. At the same time Russia is no longer a place to be feared because of an ideology it once embraced, but rather a place inhabited by people who faced their own challenges in the past.
The above-mentioned is the most deep-seated change in the content on Russia between the Apartheid and post-Apartheid era History textbooks. Since History textbooks are both educational and political constructions, they carry selected narratives linked to a desired ideology with the aim of instilling a common set of values and identities (Engelbrecht, 2006; Foster, 2011). With reference to Russia, this was done under Apartheid and is also happening in the post-Apartheid context. Consequently Russia was, and still is, a refashioned historical memory gleaned from desired historical conceptions (Porat, 2004; Stojanovic, 2001; Zajda, 2007), as History at school level moulded itself to the political order of the day (Apple, 2004; Apple & Christian-Smith, 1991). In all of this, History textbooks were employed, with reference to Russia, during both political eras, as constructors and disseminators of national narratives and identities.
With reference to the above, school History under Apartheid was a memory discipline underpinned by a white-supremacist ideology and based on a master narrative that was not to be contested. As an ideology it was forcefully dispensed and resulted in a very specific identity and consciousness, both in general terms and in how it related to Russia. In contrast, in the post-Apartheid context, school History is seen as a disciplinary discipline that welcomes critical engagement and different perspectives. This is part of what it means to be living in a constitutional democracy and means that Russia, as represented in History textbooks, as well as the ideology and identity politics at play, can be challenged and contested. This is singularly the most important difference between engaging with the content on Russia in History textbooks in the Apartheid and post-Apartheid eras.
Conclusion
This article, comparative by nature, allowed for a reflection on change and continuity and on similarities and differences (Cavalli, 2016; Nishino, 2006) as they related to the representation of Russia in History textbooks of the Apartheid and post-Apartheid eras. What emerged is that, while the overarching structure of the representation of Russian History generally remained uniform across the two eras, certain discernible differences did exist in how Russia was viewed. In the context of the Apartheid era History textbooks Russia was, for the most part, a country equated with Communism and all the negativity the Apartheid state associated with this ideology. It was also a place where tough men generally ruled in either an incompetent or a tyrannical manner. However, between 1974 and 1989, the timeframe the analysed Apartheid era textbooks span, a more empathetic tone towards Russia started to emerge. This in itself was a reflection of the political changes that were taking place in South Africa and which started to pave the way for a different narrative on national identity.
The post-Apartheid era History textbooks, while covering the same historical eras and similar content to their Apartheid era counterparts, were clearly a product of the new South Africa and all it entailed. Russia and Communism were no longer the "Red Scare" but a country whose History was engaged with critically. In the process, different perspectives were expressed about the Russian leaders studied, and women appeared as historical characters in their own right. Overall, school History as it related to Russia was starting to reflect the social realities of the post-Apartheid political and educational order, with new national narratives and identities being promoted by the History textbooks studied.
Chapter 1: The rise of Soviet Russia, which falls under the heading: The rise of the super-powers, 1917 to 1939 (pp. 1-24).
• Book A3: History for today, 1989, publisher: Juta. Standard 10 [Grade 12]: Part One: General History is the section under study.
• Book B1: Top class, History, 2012, publisher: Shuter & Shooter, Grade 11 [Standard 9]: The textbook is divided into terms and the term being analysed was Term 1: Communism in Russia 1900-1940, with the sub-heading: How was communism applied in Russia under Lenin and Stalin? (pp. 1-51).
|
v3-fos-license
|
2019-04-16T13:28:11.578Z
|
2018-04-20T00:00:00.000
|
116412035
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1996-1073/11/4/1012/pdf",
"pdf_hash": "c0c4d689821cc5ce27ba0b581595876dd18bd56f",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2929",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "c5da2b4a0297ded256b33bc5e64c9b1ced1b818f",
"year": 2018
}
|
pes2o/s2orc
|
Simulation and Experimental Studies of a Multi-Tubular Floating Sea Wave Damper
This article explores the issue of shore protection from sea waves and presents the main methods used for coastal protection. It discusses the construction and operation of the multi-tubular floating sea wave damper developed at the Maritime University of Szczecin by Professor Bolesław Kuźniewski. The paper presents the results of the research project aimed at creating and examining a prototype of the device. The research aimed to confirm two hypotheses: "the largest damping force should occur when the damping units are placed at an optimal distance equal to half the length of the wave to be damped" and "a compensation of the horizontal forces caused by the rippling of water should occur in the damping device". Simulation studies of the behaviour of the device's buoyancy elements when floating on waves were performed using the ANSYS AQWAWB and AQWA software. The buoyancy components were modelled as TUBE elements with a diameter of 0.11 m and a length of 1.5 m and as triangular and square surface elements. The results of the experimental research and of the computer simulation of the operation of the prototype device are also presented. The external conditions adopted corresponded to a wave frequency of 0.807 Hz and a wave height in front of the device equal to 0.1 m. Experimental studies were conducted in an auxiliary model basin with dimensions of 60 m × 7 m × 3.12 m at the Ship Hydromechanics Division, Ship Design and Research Centre (CTO S. A.) in Gdansk (Poland). The study recorded the distribution of the vertical and horizontal forces acting on the prototype device as well as the wave height both in front of and behind the device. Both hypotheses were proven. The simulation and experimental studies are summarised and a proposal for future work is also presented.
Introduction
Effective protection of the sea shore remains a valid but still unsolved social issue that forms part of the responsibilities of any public authority. The coastal area is a space of human expansion which includes the majority of hydrotechnical facilities. Furthermore, it is estimated that more than 60% of the world's population inhabit coastal areas [1]. These areas are directly related to many fields of human activity, such as industry, trade, agriculture, fishing and tourism. As a rule, hydrotechnical facilities are structures that are subjected to complex mechanical loadings [2-8], among which of primary importance are the hydrotechnical forces associated with water movement [9,10]. The most important factors that cause water movement in coastal zones are surface waves and wave-driven currents [11-13]. Dynamic phenomena occurring in coastal zones [14], including wind-driven sea waves, which are a major cause, contribute to coastal erosion and abrasion [15]. Figure 1 shows an overview map of the impact of erosion processes on European sea shores. Particularly great coastal damage can be observed in Belgium, Denmark, Estonia, France, Spain and Italy. Interesting case studies regarding Mediterranean islands can be found in references [11,12,16].
Figure 1. Exposure to coastal erosion in Europe [17].
For example, the Polish sea annually claims about 50 hectares of land which have a total value of 500 million zlotys. The worst situation is on the western coast, on the Hel Peninsula and on the Vistula Spit [1]. There are areas where coastal regression exceeds one meter per year. Approximately 60% to 70% of the Polish coastline is exposed to erosion caused by wave movement. Every year, the country's territory is reduced by about 340,000 m² [18]. In recent decades, more than 70% of the Polish coast has been affected by erosive processes. The average annual rate of coastline recession in the years from 1975 to 1983 was 0.9 m/year. Furthermore, it is important to realise that a coastline recession at the rate of only 0.10 m per year means a loss of land area equal to about 37,000 m² per year. There are a number of methods of coastal protection which vary in efficiency, implementation and operation costs and level of environmental interference [19-22]. A brief overview of the basic methods is presented in Figure 2.
Figure 2. Technical methods of shore protection [22].
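As a quick arithmetic check of the figures above, the sketch below reproduces the quoted annual land loss from the recession rate; the roughly 370 km of eroding open coast is inferred from those two numbers and is an assumption, not a value given in the text.

```python
# Back-of-the-envelope check of the quoted erosion figures.
recession_rate = 0.10        # m of coastline recession per year (quoted above)
coast_length = 370_000.0     # m of eroding open coast; assumed, inferred from 37,000 m2 / 0.10 m

annual_land_loss = recession_rate * coast_length   # m2 lost per year
print(f"annual land loss ~ {annual_land_loss:,.0f} m2")   # ~37,000 m2, as quoted above
```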
The following methods are among the most popular:
Breakwater-this consists of concrete, stone or prefabricated structures placed parallel or at an angle to the shoreline. Figure 3 shows example breakwaters made of patented prefabricates of various types. One disadvantage to this solution is the deepening of the water basin directly in front of the structure from the side of the open sea, huge implementation cost and limits to water exchange between the sea and the area being protected.
Components installed on the sea bed, below the water surface-barrages, artificial reefs. The drawback to these kinds of structures is the deepening of the water basin directly in front of the structure and the need for maintenance and systematic monitoring of its technical condition.
Structures placed in the beach area-plants, fences, brushwood, geotextiles and geosynthetic materials. The disadvantage of this type of solution is the extent of intervention in the natural ecosystem of the dunes.
Gabion-concrete, soil and stone or prefabricated structure located on the slopes of dunes or cliffs. The disadvantage of this is sublittoral deepening and the slow erosion of its outskirts.
Groyne-wooden posts laid perpendicularly to the shoreline reaching 100-200 m into the sea. A disadvantage of this solution is low efficiency in sediment-poor areas, deepening of the sea bottom in front of the structure, formation of erosional bays and the need for periodic maintenance.
Refulation-a method which consists of collecting sandy material from the sea bottom by dredging and placing it on the shoreline by means of pipelines. Its main drawback is the need for periodic repetition of the process and the extent of environmental intervention at the place of collection of material.
Storm surge barriers-the purpose these structures serve is the protection of areas located close to sea level. Their disadvantages are high maintenance costs and the need to provide large amounts of construction material.
The presented overview of the technical solutions has demonstrated that their huge disadvantage lies in the fact that, in order to protect the coastal area, they have to be permanent engineering structures. This fact is the cause of their high expense, adverse impact on the natural appearance of the sea shore, disruption of the balance in adjacent ecosystems, and reduction of the recreational value of coastal areas. Furthermore, another drawback of such solutions is that the sea waves often direct their destructive energy towards locations close to the protected area. A possible solution to these drawbacks is the use of multi-tubular floating dampers presented in [23] (pp. 14-15).
Results-Kuźniewski's Damper
This invention was developed by Professor Bolesław Kuźniewski in 2007 at the Maritime University in Szczecin. The solution was granted the patent PL 210,447 [24] by the Polish Patent Office. The object of the invention was a method of shore protection against sea waves during storms and a sea wave damping assembly. Elements of the prototype device are shown in Figure 4.
In accordance with the patent claim, the invention is a method of coastal protection consisting of placing in the way of sea waves beneath the rippling water at least one unit of the sea wave damper, composed of vertical tubular floating elements, and locating it behind the wave transformation line (breaking line) and at an economically optimal distance from the shoreline. The upper edges of the damping unit are underneath the surface of the water, while its lower edges are located at a distance from the bottom of the sea. The unit is attached to anchoring elements resting on the seabed by means of at least two tie rods. One of the benefits of the invention is that the unit's weight is less than its buoyancy.
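To illustrate the "weight less than buoyancy" condition stated above, the sketch below estimates the buoyancy of a single fully submerged tubular float using the tube dimensions quoted in the abstract (0.11 m diameter, 1.5 m length); the tube mass is an assumed figure chosen only for illustration, not a value from the prototype.

```python
import math

# Rough buoyancy check for one tubular float of the damping unit.
# Tube dimensions are taken from the abstract; the mass is an assumption for illustration.
rho_water = 1000.0   # kg/m^3, fresh water as in the model basin
g = 9.81             # m/s^2

diameter = 0.11      # m, tube diameter (from the abstract)
length = 1.5         # m, tube length (from the abstract)
tube_mass = 5.0      # kg, assumed mass of one closed tube with its clamps

volume = math.pi * (diameter / 2.0) ** 2 * length   # displaced volume when fully submerged, m^3
buoyancy = rho_water * g * volume                   # buoyancy force, N
weight = tube_mass * g                              # weight, N

print(f"buoyancy ~ {buoyancy:.0f} N, weight ~ {weight:.0f} N, net upward force ~ {buoyancy - weight:.0f} N")
```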
The invention is visualised in Figure 5. The sea wave damping unit (1) contains thirty-four vertical elements (8) arranged in five rows. The vertical element (8) is a tube closed at both ends which constitutes a natural float with a specific buoyancy force. On each vertical tubular element (8), in its top and bottom part, there is a clamp (7), which is a three-sectional bracket connected by means of bolts. Each of the three elements of the bracket is equipped with a clamp to which the cut-off top of the rigid spacer (9) is attached using two bolts. The spacer is a flat bar in the form of a triangular truss with cut-off tops and a triangle inscribed within it. The spacers (9) combine the vertical elements (8) in their upper and lower parts, forming two rigid trusses in two planes. In one row of the vertical tubular elements (8), three top ends of the rods (6) are attached to the bottom of the clamps (7), and each of the lower ends of the rods (6) is fastened to the anchoring element (5) located on the seabed (4). Figure 5B presents a top view of the device. The width 'S' of the damping unit (1) is 4 m, and its length is 8 m. The height of the vertical elements (8) is 4 m. The device may also contain thirty vertical elements arranged in five rows of six elements [24].

Each damping unit (1) is installed within the water column in such a way that the upper edge (2) is, according to general recommendations, placed 0.5 m from the mean water level (10) and the lower edge (3) is located 1 m above the seabed. The sea wave damping units are located along the shoreline in at least one row.

Figure 6. The coastal protection system consisting of three damping units in a row [24] (description in the text).

The method for sea wave damping featured by the device allows for the dissipation of the energy of sea waves during a storm and thus protects the shoreline. The height of the vertical element ranges from 0.2 to 1.0 of the dominant wave length. The damping units are to be used for the protection of coastlines against storm wave damage by damping the energy of the sea wave while it is still in the water and at a certain distance from the shoreline.

Installing the damping units at a distance equal to half of the wave length means that the horizontal forces caused by the movement of the rippling water at the first unit are directed contrary to the horizontal forces at the second unit, and thus the forces mutually compensate [1]. A diagram of the damping device consisting of two identical units is presented in Figure 7, where size A is the width of the damping device, size B is the distance between the units, and size C is the width of the units, which may not exceed half of the wave length.

With a constant unit width, the greatest damping effect occurs when the spacing between the units is equal to half the wave length. An increase in the unit's width enhances damping efficiency and simultaneously increases the width of the device. It follows that the greatest attenuation effect can be achieved when the width of the damping device is equal to the wave length, which has been specified in the patent application.

Kuźniewski's damper was subjected to experimental tests conducted in a 60 m × 7 m × 3.12 m model basin (auxiliary pool) at the Ship Hydromechanics Division, Ship Design and Research Centre (CTO S. A.) in Gdańsk (Poland). Figure 8 shows the already immersed device in the course of the tests.
Kuźniewski's damper was subjected to experimental tests conducted in a 60 m × 7 m × 3.12 m model basin (auxiliary pool) at the Ship Hydromechanics Division, Ship Design and Research Centre (CTO S. A.) in Gdańsk (Poland). Figure 8 showns the already immersed device in the course of the tests. A simulation and experimental plan was developed for the damping device prototype to determine its performance and parameters and to confirm the following hypotheses: The largest damping force should occur when the damping units are placed at an optimal distance equal to half the length of the wave to be damped. compensation of the horizontal forces caused by the rippling of water should occur in the damping device.
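To make the compensation argument concrete, the short Python sketch below (illustrative only; the wave parameters are assumed, not taken from the patent) evaluates the horizontal water-particle velocity of a linear deep-water wave at two stations separated by half a wavelength and confirms that the resulting drag-type horizontal forces are in antiphase.

```python
import numpy as np

# Assumed linear (Airy) deep-water wave: eta = a*cos(k*x - omega*t)
g = 9.806          # gravitational acceleration, m/s^2
wavelength = 2.4   # m, as used in the model-basin tests
a = 0.05           # wave amplitude, m
k = 2 * np.pi / wavelength      # wave number, rad/m
omega = np.sqrt(g * k)          # deep-water dispersion relation, rad/s

t = np.linspace(0, 10, 2000)    # time vector, s
x1, x2 = 0.0, wavelength / 2    # two damping units half a wavelength apart

# Horizontal particle velocity at the surface (z = 0) for linear waves
u1 = a * omega * np.cos(k * x1 - omega * t)
u2 = a * omega * np.cos(k * x2 - omega * t)

# The velocities (and hence the horizontal forces they induce) are in antiphase,
# so their sum stays close to zero at every instant.
print("max |u1 + u2| =", np.max(np.abs(u1 + u2)))   # ~0 (numerical precision)
print("max |u1|      =", np.max(np.abs(u1)))
```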
The examination was carried out to measure both the vertical and horizontal forces caused by the rippling water and, subsequently, to analyse the results of the measurements of the forces in order to perform: a harmonic analysis of the horizontal forces acting on the damping units; a harmonic analysis of the waves that trigger these forces; an analysis of the compensation of the horizontal forces.
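As an illustration of the harmonic analysis mentioned above, the following Python sketch extracts the amplitude of the dominant harmonic of a sampled horizontal-force signal with the FFT. The signal here is synthetic and the sampling rate and wave frequency are assumed values, since the measured records are not reproduced in the text.

```python
import numpy as np

fs = 50.0                       # assumed sampling frequency, Hz
t = np.arange(0, 60, 1 / fs)    # 60 s record
f_wave = 0.81                   # assumed dominant wave frequency, Hz

# Synthetic horizontal-force record: fundamental + small higher harmonic + noise
force = 12.0 * np.sin(2 * np.pi * f_wave * t) \
      + 1.5 * np.sin(2 * np.pi * 2 * f_wave * t) \
      + np.random.normal(0, 0.3, t.size)

spectrum = np.fft.rfft(force)
freqs = np.fft.rfftfreq(force.size, d=1 / fs)
amplitudes = 2 * np.abs(spectrum) / force.size   # single-sided amplitude spectrum

dominant = np.argmax(amplitudes[1:]) + 1          # skip the DC bin
print(f"dominant harmonic: {freqs[dominant]:.2f} Hz, "
      f"amplitude about {amplitudes[dominant]:.1f} N")
```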
Simulation Studies
The tested wave damping device was designed in such a way that the distance between the two damping segments was 1.2 m, which corresponds to half the wave length. The length of the simulated wave therefore has to be equal to 2.4 m, with an amplitude of 0.05 m. Such waves were obtained at the auxiliary model basin at the Ship Design and Research Centre in Gdańsk. The speed c of the wave propagation depends not only on the length λ of the waves, but also on the depth h of the water [1]. In the general case, if λ/h < 2, then the wave speed c is equal to:

c = sqrt((g/κ) · tanh(κh)),  (1)

where κ = 4π²/(T²g) is the wave number, g = 9.806 m/s² is the acceleration due to gravity, T is the wave period and tanh is the hyperbolic tangent.
The dependence of the main parameters of the wave on its period for the deep water area is given in Table 1. Due to the small amplitude of the examined waves (0.05 m) relative to the depth h (3 m) of the water, the speed c and the wave length λ can also be expressed using the period T:

c = gT/(2π),  λ = gT²/(2π).  (2)

Based on Table 1 and the values obtained from Equation (2), it can be seen that the calculations of c, λ and f for the deep water waves gave the same values for the small amplitude waves.

Computer calculations of the loads were performed using the ANSYS AQWA OFFSHORE v 12.1 software (ANSYS Inc., Canonsburg, PA, USA). A view of the element modelled is shown in Figure 10. Both wave amplitudes of 0.05 m and 0.10 m confirmed that the wave amplitude decreases exponentially with depth. At a depth equal to half the wave length, the wave motion was damped by as much as 23 times [15].
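A small Python helper, sketched below under the same linear-wave assumptions, reproduces this kind of calculation: it returns the deep-water speed and length for a given period and, for comparison, the speed from the finite-depth dispersion relation quoted above. The period used in the example is an illustrative value chosen to give roughly a 2.4 m wave.

```python
import numpy as np

G = 9.806  # m/s^2

def deep_water_parameters(T):
    """Deep-water wave speed and length from the period T (Equation (2))."""
    c = G * T / (2 * np.pi)
    lam = G * T ** 2 / (2 * np.pi)
    return c, lam

def finite_depth_speed(T, h):
    """Wave speed using the tanh correction with kappa = 4*pi^2/(T^2*g)."""
    kappa = 4 * np.pi ** 2 / (T ** 2 * G)
    return np.sqrt(G / kappa * np.tanh(kappa * h))

T = 1.24   # s, illustrative period giving roughly a 2.4 m deep-water wave
h = 3.0    # m, basin depth used in the tests

c_deep, lam = deep_water_parameters(T)
print(f"deep water: c = {c_deep:.2f} m/s, lambda = {lam:.2f} m")
print(f"finite depth (h = {h} m): c = {finite_depth_speed(T, h):.2f} m/s")
```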
Load Modelling of Two Elements

The diagram in Figure 14 shows the distribution of two buoyancy elements attached using vertical lines to the basin bottom and connected to each other at a distance equal to half the length of the wave. A view of the modelled elements is shown in Figure 15. These are two identical elements as shown in Figure 9, attached to the bottom of the basin with identical lines and at the same depth. The fundamental difference here lay in the use of rigid connectors between the two tubes instead of horizontal lines. The connectors are at a depth equal to the depth of the horizontal lines in Figure 9. Figures 16 and 17 present the results of the computer simulations. The joints between the connectors and the tubes are articulated. The advantage of rigid connectors is that they can transfer both compression and extension forces.
Experimental Studies

The test site is presented in Figure 18. The first prototypes of the damping device were constructed from polyethylene tubes with a diameter of 0.11 m and a length of 1.5 m, installed vertically at the vertices of equilateral triangles with sides equal to 0.505 m. The studies undertaken demonstrated the correct functioning of the first prototype of the damping device, but the damping results were considered insufficient. The results obtained clearly indicated that the main issue standing in the way of creating the second prototype was the need to increase the damping force of the first prototype. To improve the damping properties of the device, a single row of rigid elements, made of water-resistant plywood with a thickness of 0.02 m, a width of 0.1 m and a length of 1.5 m, was attached vertically to each of the unit's trusses. Twelve additional elements were added to the first damping unit from the side of the wave direction, while the second unit, which acts on the already partially damped waves, was reinforced with eighteen additional items.

Figure 18. Diagram of the testing stand for damping device prototypes: (A) frame unit; (B) damping units; (C) rail system; (D) force measurement system; (E) immersion adjustment system; (F) immersion measurement system [25].
The metrological parameters of the generator allowed the production of a wave with the assumed length of 2.4 m with an accuracy of 0.005 m. To measure the profile of the wave, resistance probes were used with an uncertainty range of less than 0.0015 m at a 95% confidence level. Since the generated sinusoidal wave can be seen as an ergodic phenomenon, the Tps2* section for the probe S2 could be used for the analysis and development of the measurement data, which was recognized to be undistorted. In the case of the probe S1, a section was used that was equal to the first one in terms of parameter values, but shifted back in time by 22 s. This is shown schematically in Figure 19.

Figure 21 shows the measurement results of the horizontal force sensors for the same sample: 1.1; 2.1; this sample was tagged as pom36 in the Maritime University report. Figure 22 presents graphs displaying the wave height in front of and behind the damping device with no additional dissipative elements. However, these damping values were considered unsatisfactory. The conclusion from this series of tests was that further investigation was needed in order to create another prototype with much better damping characteristics through the application of additional dissipative elements that would also limit the circular motion of the rippling water molecules in the vertical direction. It was assumed that this was to be achieved with a limited number of elements that would not violate the above condition of stability. This device structure was positively experimentally tested.
The additional dissipative elements were cords made of round plastic (polyethylene, PE) elements that are used in swimming pools to distinguish swimming lanes. Table 2 provides an overview of the results of the experimental study of wave damping. Measurements marked with an asterisk (*) are for devices with additional dissipative elements, negative draught means surfacing, and Hs1 and Hs2 mark the wave height in front of and behind the damping device, respectively. The highest damping efficiency was obtained for measurement number 44, which exceeded 70%. The results presented show that the application of Kuźniewski's damper can attenuate waves over a wide range, from tens of percent up to values greater than 70%. They also demonstrated the high level of possible compensation of the horizontal forces with small amplitudes of the vertical forces. These results have proven the validity of the assumptions for the tested device.
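The damping efficiency summarised in Table 2 can be expressed as the relative reduction of the wave height across the device. The sketch below shows the calculation for a few illustrative Hs1/Hs2 pairs; the values are placeholders, not the measured data from the report.

```python
def damping_efficiency(hs1, hs2):
    """Percentage wave-height reduction between the front (Hs1) and rear (Hs2) probes."""
    return (1.0 - hs2 / hs1) * 100.0

# Placeholder wave heights in metres (not the actual Table 2 measurements)
for hs1, hs2 in [(0.10, 0.070), (0.10, 0.045), (0.10, 0.028)]:
    eff = damping_efficiency(hs1, hs2)
    print(f"Hs1 = {hs1:.3f} m, Hs2 = {hs2:.3f} m -> efficiency = {eff:.1f}%")
```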
Conclusions

The experimental tests, performed at the model basin of the Ship Hydromechanics Division, Ship Design and Research Centre (CTO S.A.) in Gdańsk (Poland), have confirmed that the damping device functioned in accordance with the design assumptions. The results demonstrated damping of waves by more than 70% and a high level of compensation of the horizontal forces. The results of the experimental studies were consistent with those obtained by means of computer simulations.

The physical basis for the presented device concept is the assumption, described in the scientific literature, that the motion of molecules of rippling water occurs in a layer whose thickness is close to half of the wave length and follows, in vertical planes, the lines of partly closed circles.
The simulation calculations showed that the horizontal forces acting on a single tube immersed vertically in rippling water were much stronger near the upper end of the tube than at its bottom end. In addition, the simulation calculations for two tubes immersed in rippling water, placed at a distance equal to half the wave length and tied to each other with a rigid connector, demonstrated the maintenance of a stable position, which implies a compensation of the horizontal forces.
The modular design of the damping device allows for any configuration and size of the structure. The design work was supported by cutting-edge computer modelling software. The design features of the device ensure its durability, stability in rippling water and the possibility to adjust the damping force by attaching the required number of highly dissipative elements to both units.
The experimental research undertaken provides confirmation of the two hypotheses: "the largest damping force should occur when the damping units are placed at an optimal distance equal to half the length of the wave to be damped" and "a compensation of the horizontal forces caused by the rippling of water should occur in the damping device".
The positive results of the experimental studies, after their verification and further development in more complex sea conditions, including those associated with the use of new structural materials [26,27], create the possibility for practical application of the new method of coastal protection against storm waves.
Iridium Oxidation States in Catalytic Hydrogenation Intermediates
The known iridium(III) hydrides [IrH2(NCMe)3(PiPr3)]BF4, [IrH(h-C3H5)(NCMe)2(PiPr3)]BF4, [IrH(E-CH=CHPh)(NCMe)3(PiPr3)]BF4 and [IrH{C(Ph)=CH2}(NCMe)3(PiPr3)]BF4, derived from the Crabtree-type catalyst precursor [Ir(cod)(NCMe)(PiPr3)]BF4, were investigated in reactions with substrates typically involved in homogeneous hydrogenation catalysis. New complexes were characterized, such as the tris-ethylene iridium(I) species [Ir(NCMe)(h-C2H4)3(PiPr3)]BF4, the diphenylacetylene insertion products [IrH{Z-C(Ph)=CHPh}(NCMe)3(PiPr3)]BF4 and [Ir(h-C3H5){Z-C(Ph)=CHPh}(NCMe)2(PiPr3)]BF4, and the derivatives [Ir(kO-acac)(h-C3H5){Z-C(Ph)=CHPh}(PiPr3)] and [Ir{kC-C6H4-2-E-(CH=CPh)}(NCMe)3(PiPr3)]BF4. The set of experimental observations suggests that iridium(I) species, although accessible, are unlikely hydrogenation intermediates. Based on deuteration experiments, a new hydride-alkenyl to carbene tautomerization was proposed.
Introduction
Iridium homogeneous catalysts have proved suitable for large scale enantioselective hydrogenations of C=N bonds 1 and could soon become industrial also for unfunctionalized olefin substrates. 2 To this end, cationic Crabtree-type catalysts, in particular, are currently under intense investigation to unravel mechanistic details useful for the optimization of ligands and catalysts. 3Such an investigation is mainly based on theoretical calculations, 4 with just a few experimental contributions. 5So far, the studies left no doubt that the olefin hydrogenation mechanism involves iridium(III) dihydrides, which are usually observable and often isolable, 5,6 but are less conclusive about whether the catalytic cycle closes via iridium(I) or iridium(V) intermediates.
Fifteen years ago, we reported the cationic iridium(III) dihydride complex [IrH 2 (NCMe) 3 (PiPr 3 )]BF 4 (1), which was prepared from the Crabtree-type catalyst precursor [Ir(cod)(NCMe)(PiPr 3 )]BF 4 under conditions typical of homogeneous hydrogenation. 7 This compound turned out to be an adequate precursor for the nuclear magnetic resonance (NMR) observation of organometallic species potentially involved as intermediates in olefin hydrogenation cycles. In fact, we presented possible catalytic cycles for ethylene and propylene hydrogenation totally based on observed intermediates and reaction steps. Since then, our frequent use of this compound as starting material in synthetic work has provided new observations that may contribute to this renewed mechanistic discussion. These observations, presented and discussed in the following pages, suggest that Ir(I) species, though easily accessible, are unlikely hydrogenation intermediates.
This type of insertion products retain from precursor 1 three easily replaceable acetonitrile ligands, which guarantee further reactivity, and in some cases are stable towards reductive elimination.This is not the case for the labile hydride-alkyl complexes derived from ethylene or propylene insertions, which were reported to readily evolve alkene or alkane via facile H β-eliminations or C-H reductive eliminations, respectively. 7The hydridealkenyl complex 2 proved to be more stable than the alkyl derivatives, 9 although it was observed to slowly decompose at room temperature forming Z-stilbene, even in the solid state.The reported phenylacetylene analogues of 2, isomers [IrH(E-CH=CHPh)(NCMe) 3 (PiPr 3 )]BF 4 (3-trans) and [IrH{C(Ph)=CH 2 }(NCMe) 3 (PiPr 3 )]BF 4 (3-gem) 8 (Scheme 2), were found to be rather robust and their study constitutes the first part of this work.In addition, the hydride-h 3 -allyl derivative [IrH(h 3 -C 3 H 5 )(NCMe) 2 (PiPr 3 )] BF 4 (4), 7 closely related to the hydride-alkyls but less labile due to the additional coordination of the olefin moiety, was also interrogated in the search for possible hydrogenation pathways other than reductive elimination.
As shown in Scheme 2, the treatment of a 1:1 mixture of the hydride-alkenyl isomers 3 8 with hydrogen (ca. 1 bar) at room temperature readily produced styrene and complex 1. At low temperature, however, the NMR monitoring of the reaction between these isomers and D 2 revealed deuterium incorporation into the hydride ligand prior to styrene formation. This could be conveniently followed by 31 P{ 1 H} NMR, given that hydride ligand deuteration produced isotopic shifts of about +0.072 ppm in the phosphorus signal of each isomer 3. This shift is just slightly smaller than those observed for similar isotopic substitutions in the precursor complex 1 (0.12 ppm). 7 Although such a process scrambles the deuterium label and therefore impedes further conclusions about the C-H bond forming step leading to styrene, it clearly indicates that complexes 3 do not need to undergo reductive elimination to cleave the H 2 molecule, in line with the mechanistic alternatives that postulate Ir(V) hydrogenation intermediates. 3 Interestingly, whereas the scrambling of deuterium between D 2 and 3-trans exclusively involved the hydride ligand, the isomer 3-gem was observed to further incorporate deuterium into the two geminal positions of the alkenyl ligand. The 1 H and 31 P{ 1 H} NMR signals of Figure 2, which correspond to a mixture of isotopomers 3-gem after a few minutes of exposure to D 2 at 253 K, show that the extent of deuterium incorporation into each of the three positions is similar. This could indicate that isomer 3-gem is in equilibrium with a putative carbene intermediate (Scheme 3), in which methyl rotation statistically distributes the deuterium label into the three positions where it is eventually observed. Given that the process only affects the gem isomer of 3, a bulky substituent at the alkenyl α carbon might be important to achieve this rare hydride-alkenyl to carbene tautomerization. In fact, whereas the protonation of alkenyl complexes is a known synthetic route to carbene derivatives, 10 to the best of our knowledge, an intramolecular version of such a reaction has never been reported. Other more conventional explanations for the observed deuteration pattern, in particular those involving styrene formation, can be ruled out because styrene does not react at all with 1 under the conditions of this experiment.
In view of the behavior of well-characterized analogues, 11 the proposed intermediate of Scheme 3 should be considered an Ir(I) carbene complex even though it contains a typical Schrock-type alkylidene ligand. Unconventional species aside, the experiment of Scheme 2 suggests that other Ir(I) intermediates resulting from reductive eliminations of the hydrogenation products are unlikely in the presence of hydrogen. This conclusion is also consistent with the reported behavior of the hydride-allyl complex 4, whose reaction with D 2 at room temperature produced propene and the hydride-deuteride isotopomer of 1. 7 Complex 4 was also tested in reactions with common hydrogen acceptors such as ethylene (Scheme 4).
The quick exposition to ethylene of a concentrated solution of 4 in chlorinated solvents, followed by a rapid precipitation in diethyl ether, led to the complex [IrH(h 3 -C 3 H 5 )(NCMe)(h 2 -C 2 H 4 )(PiPr 3 )]BF 4 (5, Scheme 4), in which ethylene was incorporated as a ligand instead of one acetonitrile.Note that unlike the reactions with alkynes (see later) and despite the presence of free acetonitrile, the favored product is the hydride-alkene complex instead of an alkyl derivative.Most signals of the NMR spectra of 5 are similar to those of its precursor 4, 7 including a doublet at d −29.25 (J HP = 15.0Hz) in the 1 H spectrum corresponding to a hydride ligand cis to phosphorous (and trans to acetonitrile) and a doublet at d 56.61 (J CP = 16.2Hz) in the 13 C{ 1 H} spectrum, consistent with an allyllic carbon trans to phosphorous.The new ethylene ligand displays a singlet at d 44.08 in the 13 C{ 1 H} NMR spectrum and an AA'BB' spin system at d 3.23 (J AA' = J BB' = 9.6 Hz, J AB = J A'B' = 8.7 Hz) in the 1 H NMR spectrum, in agreement with the fast rotation of the coordinated ethylene and the lack of symmetry elements in the complex.
More prolonged reactions with ethylene led to solutions whose color changed depending on the amount of dissolved ethylene; being red under vacuum and colorless under ethylene excess.The NMR spectra of these solutions were rather meaningless at any temperature because of the presence of broad signals.Nevertheless, we were able to obtain crystals of what we believe is the major species at low temperature under ethylene excess: the Ir(I) trisethylene complex [Ir(NCMe)(h 2 -C 2 H 4 ) 3 (PiPr 3 )]BF 4 (6).The X-ray structure of this highly symmetric compound is shown in Figure 3, relevant distances and angles are listed in Table 1.The compound displays a regular trigonal bipyramidal structure in which the metal atom is surrounded by six carbons in a planar arrangement, with Ir-C distances in the range 2.20 to 2.24 Å.
The formation of 6 indicates that the excess ethylene is indeed capable of stabilizing the oxidation state Ir(I).The mild reaction conditions leading to this compound also suggest that the replacement of the hard acetonitrile ligands of the Ir(III) precursor by ethylene may facilitate the reductive elimination of propene required to access the Ir(I) intermediates.Yet, the versatility of this type of complexes may enable pathways for propene elimination not necessarily involving Ir(I) intermediates, as suggested by the sequence of reactions in Scheme 5.
Similarly to precursor 1, complex 4 can undergo insertion of diphenylacetylene into the Ir-H bond to form an alkenyl derivative, [Ir(h 3 -C 3 H 5 ){Z-C(Ph)=CHPh}(NCMe) 2 (PiPr 3 )]BF 4 (7). Although we were not capable of obtaining crystals suitable for a diffraction experiment, the likely structural features of 7 can be inferred from the X-ray structure of its neutral derivative [Ir(k 2 O-acac)(h 3 -C 3 H 5 ){Z-C(Ph)=CHPh}(PiPr 3 )] (8) (Figure 4 left and Table 1), which was obtained after the replacement of the two labile acetonitrile ligands of 7 by acetylacetonate. The NMR spectra of 8 are consistent with the solid state structure and resemble those of 7, thus supporting the structural proposal shown in Scheme 5. Both alkenyl-allyl derivatives display characteristic doublets in the 13 C{ 1 H} NMR spectrum corresponding to alkenyl α carbons cis to phosphorus, at d 124.10 (J CP = 7.2 Hz) for 7 and at d 129.85 (J CP = 7.5 Hz) for 8. In addition, as mentioned for 5, doublets at d 64.47 (J CP = 20.5 Hz) for 7 and d 59.22 (J CP = 31.8 Hz) for 8 are diagnostic of the relative trans disposition of one of the allylic carbons and the phosphine. While derivative 8 is highly stable, its precursor 7 was observed to readily transform at room temperature into the complex [Ir{k 2 C-C 6 H 4 -2-E-(CH=CPh)}(NCMe) 3 (PiPr 3 )]BF 4 (9), with simultaneous evolution of propene. In view of the structure found for 9 (Figure 4 right and Table 1), the mildness of this transformation is surprising, as it should involve various C-H bond cleavages and formations, not only to enable propene elimination but also to isomerize the former Z alkenyl ligand into the E form. Such alkenyl ligand isomerizations have been previously observed 12 and seem to play a role in certain catalytic transformations. 13 Typically, they have been attributed to the formation of carbene-like intermediates (zwitterionic or h 2 -vinyl), 14 although recent work has disclosed new mechanistic alternatives in iridium complexes. 15 In any case, regardless of the specific mechanism, the transformation of 7 into 9 evidences that these complexes can carry out elaborate transformations of organic molecules without resorting to the oxidation state Ir(I).
Conclusions
The set of reactions and new compounds described in this study confirms previous observations indicating that Ir(III) dihydride complexes derived from Crabtreetype catalyst precursors can readily react with hydrogen acceptors (alkenes and alkynes) according to coordination and insertion sequences, although they are less likely to undergo reductive eliminations of the hydrogenated products to form Ir(I) intermediates.As an alternative, the Ir(III) insertion products have proved their ability to cleave H-H and C-H bonds, thus leading to the hydrogenation products without resorting to the oxidation state Ir(I).Nevertheless, the experiments have also allowed observing and proposing Ir(I) intermediates potentially accessible under hydrogenation conditions.One of them, a carbene complex, might result from the rare tautomerization of an Ir(III) hydride-alkenyl derivative.
General
All manipulations were carried out with exclusion of air by using standard Schlenk techniques or in an argon-filled drybox (MBraun).Solvents were obtained from a solvent purification system (MBraun).Deuterated solvents were dried with appropriate drying agents and degassed with argon prior to use.C, H and N analyses were carried out in a Perkin-Elmer 2400 CHNS/O analyzer.Mass spectrometry (MS) data were recorded on a VG Autospec double-focusing mass spectrometer operating in the positive mode; ions were produced with the Cs + gun at ca. 30 kV, and 3-nitrobenzyl alcohol (NBA) was used as the matrix.Infrared spectra were recorded as Nujol mulls on polyethylene sheets or in KBr using the spectrometers Bruker Equinox 55 or Perkin-Elmer Spectrum One.Conductivities were measured in ca. 3 × 10 −4 mol L −1 solutions using a Philips PW 9501/01 conductometer.NMR spectra were recorded on Bruker Avance 300 MHz spectrometer. 1 H (300.13 MHz) and 13 C (75.5 MHz) NMR chemical shifts were measured relative to partially deuterated solvent peaks but are reported in ppm relative to tetramethylsilane (TMS). 31P (121.5 MHz) chemical shifts were measured relative to H 3 PO 4 (85%).Coupling constants, J, are given in Hertz.In general, NMR spectral assignments were achieved through 1 H COSY, 1 H NOESY, 1 H{ 31 P}, 13 C APT, and 1 H/ 13 C HSQC experiments.Unless otherwise indicated, the NMR data are given at room temperature.
Synthesis and characterization of the complexes
The complexes 1 and 4, 7 and the 1:1 mixture of isomers 3-trans and 3-gem 8 were prepared following published procedures.All other reagents were commercial and were used as received.The new complexes described below are air-sensitive in solution and solid state.
Preparation of [Ir
Preparation of [Ir(k 2 O-acac)(h 3 -C 3 H 5 ){Z-C(Ph)=CHPh}(PiPr 3 )] (8): A solution of Na(acac) (34 mg, 0.28 mmol) in MeOH (1 mL) was added at 273 K to a freshly prepared solution of 7 (100 mg, 0.13 mmol) in MeOH (2 mL). The resulting solution was allowed to reach room temperature and stirred for 30 min. The microcrystalline yellow solid obtained was separated by decantation, washed with MeOH and dried in vacuo. Yield 79 mg (90%); IR ν/cm −1 1570, 1590 (C=O).

X-ray data were collected at 100.0(2) K on a Bruker SMART APEX area detector diffractometer equipped with a normal focus, 2.4 kW, sealed tube source (molybdenum radiation, λ 0.71073 Å) operating at 50 kV and 30 mA. In all cases, single crystals were mounted on a fiber and covered with protective perfluoropolyether. Each frame covered 0.3° in ω. Data were corrected for absorption by using a multi-scan method applied with the SADABS program. 16 The structures were solved by the Patterson method and refined by full-matrix least squares on F 2 using the Bruker SHELXTL program package, 17 including isotropic and subsequently anisotropic displacement parameters for all non-hydrogen non-disordered atoms. Weighted R factors (R w ) and goodness of fit (S) are based on F 2 , and conventional R factors are based on F. Hydrogen atoms were included in calculated positions and refined riding on the corresponding carbon atoms, or in observed positions and refined freely. For most of the structures, the highest electronic residuals were observed in the proximity of the Ir center and make no chemical sense.
Figure 1. Crystal structure of the cation of complex 2 at the 50% probability level. Hydrogen atoms, except the hydride ligand, are omitted for clarity.
Figure 2. Selected NMR signals of a mixture of isotopomers of 3-gem in CDCl 3 at 253 K: 1 H signals corresponding to the two geminal alkenyl protons (left) and 31 P{ 1 H} NMR signals (right).
Figure 3. Crystal structure of the cation of complex 6 at the 50% probability level. Hydrogen atoms are omitted for clarity.
Figure 4. Crystal structures of complex 8 (left) and the cation of complex 9 (right) at the 50% probability level. Hydrogen atoms are omitted for clarity.
Prescribed Performance Control-Based Semi-Active Vibration Controller for Seat Suspension Equipped with an Electromagnetic Damper
Seat suspension plays a vital role in improving riding comfort and protecting drivers' health. This paper develops a semi-active seat suspension equipped with a controllable electromagnetic damper (EMD) and proposes a prescribed performance control-based semi-active vibration controller with experimental validation. The semi-active EMD mainly consists of a permanent magnet synchronous motor, a ball screw, a three-phase rectifier, and an external resistance unit that provides the controllable damping.
Introduction
Vibration influences the physical health of drivers and passengers. In particular, long-term exposure to whole-body vibration (WBV) may lead to fatigue [1] and low back pain [2]. More specifically, vibration at low excitation frequencies (0.5-5 Hz) is the main risk factor for lumbago or backache, which seriously affects mental and physical health and reduces the work efficiency of drivers and passengers [3][4][5][6][7][8]. Multiple methods have been proposed to improve riding comfort and safety by controlling vibration. Generally, passive methods have been applied to eliminate vibrations in the vehicle seat. Carrella et al. proposed a useful vibration isolator consisting of a negative and positive stiffness structure in parallel [9][10][11][12]. Le and Ahn [13,14] designed and fabricated a negative stiffness structure to improve vibration isolation effectiveness under low excitation frequencies.
Yan et al. [15] designed novel nonlinear seat suspension for off-road vehicles, which could achieve quasi-zero-stiffness. All of the seat suspensions mentioned above are passive and uncontrollable. Large disturbances may lead to a jump-down frequency phenomenon [16], and high external frequency would generate more scattered chaotic behavior [17]. These phenomena or behaviors would harm the vibration isolator's performance. Therefore, a controllable device can be introduced to deal with different vibration conditions and achieve better vibration isolation performance.
• A new semi-active seat suspension is developed by taking advantage of the controllable EMD;
• A semi-active strategy based on PPC is designed for the semi-active seat suspension to guarantee the desired vibration isolation performance under different road conditions;
• By combining model simulation and practical experiments, the effectiveness of the seat suspension and the controller in vibration control is validated.
The details in the remaining chapters are arranged as follows: the structure and the model of the controllable seat suspension are shown in Section 2; Section 3 presents the controller design; the results of the simulation and the experiments are shown in Section 4; the end of the article gives the conclusion.
Semi-Active Seat Suspension with the EMD
Seat suspension has been widely researched and can be divided into passive, semi-active, and active. The passive type requires multiple elements to achieve excellent performance, which may significantly increase the system's complexity. On the other hand, active seat suspension needs high energy consumption. However, semi-active seat suspension can improve the system's performance while consuming little energy. This section will study a semi-active seat suspension equipped with an EMD, which can transform the vibration energy into electrical energy via the motor. This paper mainly focuses on its vibration control performance, and the electrical load consumes the energy generated from vibration. The prototype has been manufactured and the model has been built accordingly.
The Variable Damping Seat Suspension
The model and prototype of the variable damping seat suspension are shown in Figure 1; they consist of a passive spring and a controllable EMD connected in parallel. The three-phase permanent magnet synchronous motor (PMSM) is connected to the ball screw through a coupling. The ball screw, whose lead is 0.016 m, transforms the torque of the EMD into a vertical force, and the rotary rate of the EMD, ω, into a vertical velocity, v. In addition, an encoder is applied to record the rotation angle of the motor. In this way, the relative displacement of the seat suspension can be obtained. Since only the controller and the sensor consume a small amount of electrical energy and no other energy input is required, this variable damping seat suspension is a semi-active vibration absorber.
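The conversion from encoder rotation angle to relative suspension displacement follows directly from the ball screw lead. The sketch below illustrates it; the encoder resolution is an assumed value, not a specification from the paper.

```python
LEAD = 0.016           # ball screw lead, m of travel per motor revolution (Section 2.1)
COUNTS_PER_REV = 4000  # assumed encoder resolution, counts per revolution

def suspension_displacement(encoder_counts):
    """Relative displacement of the suspension ends, inferred from motor rotation."""
    revolutions = encoder_counts / COUNTS_PER_REV
    return revolutions * LEAD          # metres

print(suspension_displacement(1000))   # a quarter turn corresponds to 0.004 m of travel
```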
The Damping Controllable System
The EMD consists of a three-phase servo motor PMSM, a rectifier, and an external resistance unit that can control the device damping [30][31][32]. The PMSM (Panasonic motor, MSMD022G1S) can be modelled as a voltage source, an internal resistor and an internal inductor. The internal inductance can be ignored to simplify the model because the seat vibration is at low frequencies, generally. In addition, a rectifier is applied to convert the alternating current generated by the PMSM to direct current [38].
From [30], the damping of an EMD increases with the decrease of the external loads. In other words, the system has the biggest damping when its external resistance is 0 Ω, and the system has the smallest damping when its external resistance is the biggest. In this article, a 500 Ω resistor is chosen to determine the smallest damping of the system.
The torque of the motor in the EMD is controllable, and the ball screw can transform the torque into a force output. Figure 2 shows the simplified model of the EMD circuit. Multiple resistors are connected in parallel, and a myRIO controls their connection to the circuit. By adjusting the resistors' switches, the circuit's equivalent external resistance changes accordingly.

The equivalent damping force F_C and the torque T_EMD of the EMD are related as follows:

F_C = r_ball-screw · T_EMD,

where r_ball-screw = 2π/lead is the ball screw transmission ratio. In the circuit of the EMD, the torque is related to the current i:

T_EMD = k_i · i,

where k_i is the torque constant of the motor. According to Ohm's law,

i = E / (R_i + R_e),

where E is the generated voltage, and R_i and R_e are the internal and external resistance, respectively. The rotor coil in the EMD cuts the magnetic inductance line to generate an electromotive force, and all the circuit elements are connected as a loop. Therefore,

E = k_e · ω,

where k_e is the voltage constant and ω is the rotary rate of the motor. Ignoring the efficiency loss of the coupling during transmission, the relationship between the relative speed of the two ends of the electromagnetic shock absorber, v, and the motor rotor speed ω is

ω = r_ball-screw · v.

Additionally, in this motor, k_i = k_e. Thus, the damping C = F_C/v is:

C = k_i · k_e · r_ball-screw² / (R_i + R_e).  (5)

The damping of the seat suspension is related to the ball screw, the torque or voltage constant of the PMSM, and the internal and external resistance. Table 1 shows the parameters of the EMD. The change in resistance in the circuit causes a change in device damping. Thus, we can obtain semi-active control of the seat suspension by changing the resistance of the electric circuit.
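As a numerical illustration of the damping expression above, the Python sketch below evaluates C for several external resistances. The ball-screw lead (0.016 m) is taken from Section 2.1; the motor constant and internal resistance are placeholder values back-calculated only so that the damping spans roughly the 50-760 N·s/m range quoted later, and they are not the actual Table 1 parameters.

```python
import numpy as np

LEAD = 0.016                     # ball screw lead, m (Section 2.1)
R_BS = 2 * np.pi / LEAD          # transmission ratio r_ball-screw, rad/m (about 392.7)

# Placeholder motor constants (assumed, not the Table 1 values)
K_T = 0.417                      # torque/voltage constant k_i = k_e, N*m/A
R_INT = 35.1                     # equivalent internal resistance R_i, Ohm

def emd_damping(r_ext):
    """Equivalent damping C = k_i * k_e * r_bs^2 / (R_i + R_e), in N*s/m."""
    return K_T * K_T * R_BS ** 2 / (R_INT + r_ext)

for r_ext in [0.1, 50.0, 100.0, 200.0, 500.0]:
    print(f"R_e = {r_ext:6.1f} Ohm -> C = {emd_damping(r_ext):6.1f} N*s/m")
```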
Semi-Active Controller with Prescribed Performance Control
In this section, a semi-active controller with PPC is proposed for the variable damping system. The prescribed performance control method can not only ensure the steady performance of the system, but also meet the requirements of the dynamic performance design by limiting the tracking error of the system to the region constrained by the selected boundary function so that a reasonable performance constraint function boundary can be designed to limit the overshoot, adjustment time and steady-state error of the dynamic response of the tracking error. In this way, the desired control force requirements can be achieved. This seat suspension system also designs a force-tracking strategy in control, as shown in Figure 3. The EMD in seat suspension is used to provide controllable damping force according to the desired control force. The system states change when the vibration or excitation is transmitted to the seat through the suspension. Then, the controller will calculate an ideal force. According to Section 2, a change in the resistance results in a change in damping. In this way, this system can isolate the vibration. The change in resistance in the circuit can cause a change in device damping. Thus, we can obtain semi-active control of the seat suspension by changing the resistance of the electric circuit.
Problem Description
The whole seat suspension model is shown in Figure 4: Figure 4a is the simplified model of the seat suspension and Figure 4b is the free body diagram, where M is the mass, K is the spring stiffness, C_eq is the inherent damping (mainly from friction), and u is the force generated by the semi-active damper. In addition, x is the seat response and z is the excitation. The dynamic model is given by Equation (11), where C is the controllable damping of the EMD, F_c is the damping force, F_K is the stiffness force, f_r represents the system friction, f is the friction coefficient, and α is the saturation boundary.
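For readers who want to experiment with the model, the sketch below simulates a simplified version of the seat dynamics: a single-degree-of-freedom mass-spring-damper under base excitation, with the friction term omitted for clarity. The equation form and all parameter values are assumptions for illustration, not the identified values of Table 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not the identified Table 2 values)
M, K, C_EQ = 60.0, 4000.0, 50.0        # kg, N/m, N*s/m (passive part)
C_EMD = 400.0                           # N*s/m, damping contributed by the EMD

def base_excitation(t):
    """Sine floor excitation: 14 mm amplitude at 1.5 Hz (as in Section 4.1)."""
    w = 2 * np.pi * 1.5
    return 0.014 * np.sin(w * t), 0.014 * w * np.cos(w * t)

def seat_dynamics(t, state):
    x, x_dot = state
    z, z_dot = base_excitation(t)
    # Assumed model: M*x_ddot + (C_eq + C_emd)*(x_dot - z_dot) + K*(x - z) = 0
    x_ddot = -((C_EQ + C_EMD) * (x_dot - z_dot) + K * (x - z)) / M
    return [x_dot, x_ddot]

sol = solve_ivp(seat_dynamics, (0, 10), [0.0, 0.0], max_step=1e-3)
acc = np.gradient(sol.y[1], sol.t)                       # seat acceleration
print("RMS seat acceleration:",
      np.sqrt(np.mean(acc[sol.t > 2] ** 2)), "m/s^2")    # steady-state portion
```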
The objective of control can be summarized as follows: 1. The output of the system can track the desired trajectory while maintaining the boundedness of all signals in the closed loop. 2. The tracking error meets the specified transient and steady-state performance limits.
The Design of the PPC
In vibration control, the controlled seat suspension is required to follow the ideal position trajectory. Thus, we set the position tracking error e(t) as

e(t) = x(t) − z_d,

where z_d represents the desired displacement, which is set as zero in this paper. The desired control force will be calculated by using the PPC. The prescribed performance control in this paper is derived from reference [33]. By setting a boundary on the position tracking error, the preset steady-state and transient performance requirements are achieved, so e(t) should satisfy

−δ·ρ_s(t) < e(t) < ρ_s(t),

where ρ_s(t) is a performance function, which is strictly monotone and always greater than 0, and δ is a design constant with 0 ≤ δ ≤ 1. The constant ρ_s(∞) represents the maximum allowable size of the tracking error in the steady state. At the same time, the preset decline rate of the performance function represents the minimum allowed speed of convergence of the tracking error. The choice of an appropriate performance function imposes boundaries on the system output.
Next, we use the error transformation function to transform the constrained tracking error into an equivalent unconstrained behavior: where ζ(t) represents the normalized position error. Based on [39], the first transformed error is designed as: Thus, the intermediate controller u_1 is designed as: where k_1 is a positive constant. It is assumed that the velocity error is e_v(t); likewise, the performance function corresponding to the velocity error is ρ_v(t): where ρ_v(∞) and β are positive constants and, as above, the transient and steady-state responses of the velocity need to satisfy analogous bounds. Similarly: where τ is the second transformed error. Then, we can obtain the desired PPC force: where k_2 is a positive constant and u is F_desired. The proposed control strategy can guarantee the prescribed transient and steady-state performance; the proof has been developed in [33,36,40].
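The display equations for the transformation are not reproduced in this text. The sketch below illustrates one common PPC construction, an exponential performance function together with a logarithmic error transformation. It is an assumption-based illustration of the general approach in [33,39], not a reproduction of the authors' exact equations.

```python
import numpy as np

def performance_function(t, rho0, rho_inf, decay):
    """rho(t) = (rho0 - rho_inf)*exp(-decay*t) + rho_inf: strictly positive, decreasing."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, rho, delta=1.0):
    """Map the constrained error (-delta*rho < e < rho) to an unconstrained variable."""
    zeta = e / rho                                    # normalized error
    zeta = np.clip(zeta, -delta + 1e-6, 1.0 - 1e-6)   # stay strictly inside the bounds
    return 0.5 * np.log((delta + zeta) / (1.0 - zeta))

# Example: a 0.03 m position error at t = 0.5 s with the rho_s used in Section 4.1
t = 0.5
rho_s = performance_function(t, 0.1, 0.05, 3.0)
print("rho_s(t) =", rho_s, " transformed error =", transformed_error(0.03, rho_s))
```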
Force Tracking Strategy
A desired control force can be calculated in the last section. Then, a force tracking strategy is required to control the EMD. This process can be divided into two steps. In the first step, according to the ideal or desired force and the system's conditions, the desired damping can be calculated based on Figure 5. According to Equation (5) and Figure 5, the damping C is varied with the external resistance R e . The control of R e has been proposed in [38]. In this paper, the parallel resistors are selected as 0.1 Ohm, 50 Ohm, 100 Ohm, 150 Ohm, 200 Ohm, 300 Ohm, and 500 Ohm, respectively. Different resistor combinations can generate different external resistance. Hence, the largest damping of the EMD is 760 N·s/m, when the resistance is lowest. When the resistance is the largest, the value of damping is 50 N·s/m. When the controller calculates the desired damping C and the corresponding resistance, a controller will control the electric circuit to generate the resistance that is closest to the target resistance, by controlling the circuit of the EMD.
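A brute-force way to implement the second step, choosing the switch pattern whose parallel combination is closest to the target external resistance, is sketched below with the resistor set listed above; the control-hardware details are simplified.

```python
from itertools import combinations

RESISTORS = [0.1, 50, 100, 150, 200, 300, 500]   # Ohm, parallel bank from Section 3.3

def parallel(values):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in values)

def closest_combination(target):
    """Return the subset of resistors whose parallel value is closest to the target."""
    best, best_err = None, float("inf")
    for n in range(1, len(RESISTORS) + 1):
        for combo in combinations(RESISTORS, n):
            err = abs(parallel(combo) - target)
            if err < best_err:
                best, best_err = combo, err
    return best, parallel(best)

combo, value = closest_combination(120.0)        # e.g. target R_e = 120 Ohm
print("switch on:", combo, "-> equivalent", round(value, 1), "Ohm")
```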
Performance Evaluation
This section provides simulations and experiments to validate the seat suspension's performance. The seat suspension was tested by simulation at first. Then, a designed platform was utilized to test the semi-active seat suspension prototype. These two methods were used to evaluate the performance of the seat suspension. The parameters of the system are shown in Table 2.
Numerical Analysis
This section will use different excitations to evaluate the controller performance with numerical simulation. A traditional passive suspension is used for comparison, in which a constant damping replaces the EMD. The controller's performance functions are set as ρ_s(t) = (0.1 − 0.05)e^(−3t) + 0.05 and ρ_v(t) = (1 − 0.5)e^(−1.5t) + 0.5. The other preset parameters are k_1 = 0.01, k_2 = 500 and δ = 1. Figure 6 shows the comparison of the acceleration responses to the sine wave excitation in the time domain. The systems are a traditional passive seat with a spring and a damper and the semi-active EMD system. The sine wave amplitudes are 14 mm and 4 mm, and the frequencies are 1.5 Hz and 3 Hz, respectively. The semi-active system has a smaller acceleration, and the root mean square (RMS) was reduced by 42.8% and 27.9% compared to the passive system. Figure 7 shows the force tracking performance of the controller; the semi-active system can successfully follow the desired force in most regions. Figure 8 shows the response to bump excitation in the simulation. The peak-to-peak value in Figure 8 is 3.35 m/s² for the passive system and 2.14 m/s² for the semi-active system in Figure 8a. Additionally, 5.07 m/s² for the passive system and 3.23 m/s² for the semi-active system are shown in Figure 8b. The reductions are 36.1% and 36.3%, respectively. Figure 9 shows the force tracking performance; the figures show that the semi-active system can follow the desired force for most situations.
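The RMS reductions quoted above can be reproduced from time-domain acceleration records with a few lines of Python; the signals below are synthetic placeholders, not the simulation outputs.

```python
import numpy as np

def rms(signal):
    """Root mean square of an acceleration time series (m/s^2)."""
    signal = np.asarray(signal, dtype=float)
    return float(np.sqrt(np.mean(signal ** 2)))

def percent_reduction(passive, semi_active):
    """Reduction of the semi-active RMS relative to the passive RMS, in percent."""
    return 100.0 * (rms(passive) - rms(semi_active)) / rms(passive)

# Example with synthetic 1.5 Hz responses sampled at 1 kHz (illustrative amplitudes only).
t = np.arange(0.0, 10.0, 1e-3)
a_passive = 1.6 * np.sin(2 * np.pi * 1.5 * t)
a_semi = 0.9 * np.sin(2 * np.pi * 1.5 * t)
print(round(percent_reduction(a_passive, a_semi), 1))   # ~43.8 for these made-up signals
```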
Random Vibration Simulation
A random signal is also considered and compared against the traditional system. The results of the simulation are shown in Figure 10. The RMS accelerations are 3.52 m/s² for the passive system and 2.17 m/s² for the semi-active system, so the semi-active system achieves a 38.4% improvement over the passive one. Finally, Figure 11 shows the position error, which always remains within the predefined boundaries.
Experimental Setup
The test system consists of a frame made of aluminum alloy bars and an electric cylinder controlled by an NI myRIO, as shown in Figure 12. Two accelerometers, a DONGHUA 1C101 and an ADXL 103, are used to measure the acceleration of the seat suspension. A rotary encoder, ORMOU E6H-CWZ, measures the rotation of the EMD for calculating the suspension relative displacement. Additionally, the seat suspension displacement is measured by a KathMatic laser displacement sensor whose measuring range is from 90 mm to 190 mm. The experimental system can be divided into three parts: vibration platform, test system and seat suspension. The single-degree-of-freedom vibration platform, which is under the control of an NI myRIO, consists of a PC, a motor driver and an electric cylinder. Based on the PC's commands, the platform can generate different excitations. The test system consists of an acceleration sensor, a displacement sensor, an NI myRIO and a DC power supply, which powers the sensors. Sine wave, bump, and random excitations are used to evaluate the proposed seat suspension performance; the random vibration signal is generated from a typical road condition through a quarter-car model. In these tests, the seat suspension is set to three different conditions, with the external resistor at 0 Ohm, 500 Ohm and 88 Ohm. The value of 0 Ohm represents the maximum damping, 500 Ohm represents the minimum damping, and 88 Ohm is applied to emulate a passive suspension. The international standard ISO 2631-1 [41] is applied to evaluate the vibration isolation performance of the seat suspension. Three parameters are used to evaluate the performance: the root mean square (RMS), the frequency-weighted RMS (FW-RMS) acceleration and the fourth power vibration dose value (VDV). Following the standard, the FW-RMS and the VDV are calculated as FW-RMS = [(1/T) ∫_0^T a_w²(t) dt]^(1/2) and VDV = [∫_0^T a_w⁴(t) dt]^(1/4), where a_w(t) is the frequency-weighted acceleration and T is the measurement duration. Figure 13 shows the seat acceleration comparison with different external resistors under the sine wave excitation. In Figure 13a, the excitation amplitude is 14 mm and the frequency is 1.5 Hz, which is the resonance frequency; the seat suspension shows resonance when the damping is small. The RMS value in Figure 13a is 1.58 m/s² for the passive system and 1.20 m/s² for the semi-active system, a 24.1% reduction. This illustrates that the semi-active system generates large damping to suppress the resonance vibration. For Figure 13b, the semi-active system performs close to the passive one because the equivalent damping of the EMD seat caused by friction at 3 Hz is close to the damping of the passive seat.
In addition, the semi-active and passive suspensions perform better than the EMD suspension with the 0 Ohm external resistance.
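For reference, the ISO 2631-1 metrics defined above can be computed from a sampled, frequency-weighted acceleration record as in the Python sketch below. The Wk weighting filter itself is not reproduced; a_w is assumed to be the already-weighted signal, and the example record is synthetic.

```python
import numpy as np

def frequency_weighted_rms(a_w, dt):
    """FW-RMS = sqrt( (1/T) * integral of a_w(t)^2 dt ), with a_w already frequency weighted."""
    a_w = np.asarray(a_w, dtype=float)
    T = dt * (len(a_w) - 1)
    return float(np.sqrt(np.trapz(a_w ** 2, dx=dt) / T))

def vibration_dose_value(a_w, dt):
    """VDV = ( integral of a_w(t)^4 dt )^(1/4), per ISO 2631-1."""
    a_w = np.asarray(a_w, dtype=float)
    return float(np.trapz(a_w ** 4, dx=dt) ** 0.25)

# Example on a synthetic weighted acceleration record sampled at 1 kHz for 20 s.
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
a_w = 0.8 * np.sin(2 * np.pi * 2.0 * t)
print(frequency_weighted_rms(a_w, dt), vibration_dose_value(a_w, dt))
```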
Bump Vibration Test
The acceleration comparison for bump signals of different frequencies is shown in Figure 14. The amplitude of the bump is 20 mm, while the frequencies are 2 Hz and 3 Hz, respectively. Figure 15 shows the acceleration comparison where the amplitude is 30 mm and the frequencies are 1.5 Hz and 2 Hz. In these charts, the semi-active system has a lower peak value. The results demonstrate that the proposed seat suspension performs better at the higher frequency, where the EMD has low damping.
Random Vibration Test
Finally, the random excitation is transmitted to the seat suspension in Figure 16. The passive seat has a high peak acceleration, and the semi-active seat suspension can effectively suppress the resonance vibration. The change of damping under control is shown in Figure 17. Figure 18 shows that the position errors remain within the prescribed performance bounds at all times. Figure 19 then shows the acceleration evaluation parameters according to ISO 2631-1. Compared to the passive system, the semi-active system reduces the RMS by 17.5%, the FW-RMS by 39.9% and the VDV by 25.4%.
Conclusions
In this paper, an EMD seat suspension with a PPC-based semi-active controller was developed and tested. First, an EMD prototype was built with a PMSM, a ball screw, a three-phase rectifier, and a controllable external resistor. By changing the resistance of the external resistor in real-time, the damping of the EMD can vary from 90 to 800 N·s/m. Hence, it can generate large damping when it needs to suppress the high-magnitude resonance vibration and provide small damping to guarantee ride comfort. Then, a PPC-based semi-active controller was designed to control the EMD for vibration control, where the PPC can guarantee the desired prescribed performance bound. Furthermore, a semi-active strategy has been proposed to control the external resistor of the EMD for generating a semi-active control force. Finally, a test platform has been designed and manufactured to validate the performance of the semi-active seat suspension and the proposed controller. When the seat suspension is under control, the RMS, FW-RMS and VDV of the seat acceleration are reduced by 17.5%, 39.9% and 25.4%, respectively, compared with a passive suspension. The simulation and the experiment prove that this approach is useful and promising. In the future, we will further explore the characteristics of the EMD system and develop controllers for high-performance vibration control.
|
v3-fos-license
|
2024-01-24T16:53:51.680Z
|
2024-01-19T00:00:00.000
|
267124979
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1304283/pdf?isPublishedV2=False",
"pdf_hash": "ccb8d639848d7e0267d8ab94d79dd44bd0e86a56",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2933",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Medicine"
],
"sha1": "5931c81839bf2913735445930380de99cefc951a",
"year": 2024
}
|
pes2o/s2orc
|
Genomic analysis of Salmonella enterica from Metropolitan Manila abattoirs and markets reveals insights into circulating virulence and antimicrobial resistance genotypes
The integration of next-generation sequencing into the identification and characterization of resistant and virulent strains as well as the routine surveillance of foodborne pathogens such as Salmonella enterica have not yet been accomplished in the Philippines. This study investigated the antimicrobial profiles, virulence, and susceptibility of the 105 S. enterica isolates from swine and chicken samples obtained from slaughterhouses and public wet markets in Metropolitan Manila using whole-genome sequence analysis. Four predominant serovars were identified in genotypic serotyping, namely, Infantis (26.7%), Anatum (19.1%), Rissen (18.1%), and London (13.3%). Phenotypic antimicrobial resistance (AMR) profiling revealed that 65% of the isolates were resistant to at least one antibiotic, 37% were multidrug resistant (MDR), and 57% were extended-spectrum β-lactamase producers. Bioinformatic analysis revealed that isolates had resistance genes and plasmids belonging to the Col and Inc plasmid families that confer resistance against tetracycline (64%), sulfonamide (56%), and streptomycin (56%). Further analyses revealed the presence of 155 virulence genes, 42 of which were serovar-specific. The virulence genes primarily code for host immune system modulators, iron acquisition enzyme complexes, host cell invasion proteins, as well as proteins that allow intracellular and intramacrophage survival. This study showed that virulent MDR S. enterica and several phenotypic and genotypic AMR patterns were present in the food chain. It serves as a foundation to understand the current AMR status in the Philippines food chain and to prompt the creation of preventative measures and efficient treatments against foodborne pathogens.
Introduction
Salmonellosis, caused by Salmonella spp., is among the most frequently reported foodborne diseases worldwide and has a high health and economic burden. Salmonella enterica, with its six distinct subspecies (I, II, IIIa, IIIb, IV, and VI), is a leading cause of global human diarrheal cases and outbreaks annually, including in the Philippines. While there are numerous potential means and sources of transmission, chicken and pig meat, along with other farm animals and products, have been identified as the dominant food vehicles for S. enterica due to their wide distribution and natural, chronic carriage among livestock (World Health Organization, 2018; Ferrari et al., 2019; Edrington and Brown, 2022). The predicted increase in consumption of swine and poultry products in the Philippines in the next ten years is over 3 million metric tons per annum (Organisation for Economic Cooperation and Development/Food and Agriculture Organization, 2023), which increases the potential for exposure to the pathogen. In 2021, the agricultural sector contributed to 9.6% of the Philippines' national gross domestic product (GDP) (Philippine Statistics Authority, 2022). However, due to several outbreak events in the country such as African swine fever (Cooper et al., 2022) and the COVID-19 pandemic (Espino et al., 2021), a significant decline in hog and poultry production output was reported. Approximately 16% (53 K metric tons) increase in imported frozen chicken meat was reported from 2020 to 2021 (Philippine Statistics Authority, 2022). In 2022 alone, pork (52%; 710 M metric tons) and chicken (30%; 411 M metric tons) topped the total meat importation to address the local supply shortage due to production loss and growing meat consumption (Bureau of Animal Industry, 2022).
Studies on raw and processed meats from abattoirs and wet markets in Metro Manila, Philippines, have revealed a high prevalence of S. enterica (>30%) and the additional high frequency of samples positive for the spvC virulence gene, which is strongly associated with strains that cause non-typhoidal bacteremia (Calayag et al., 2017;Santos et al., 2020).Subsequent work in Metro Manila has revealed highly frequent virulence genes (e.g., SPIs 1-5 genes), some co-occurring, and others linked to location and animal source (Pavon et al., 2022).In this setting, S. enterica has been identified primarily from chicken samples and ground pork (both >65%), and with multiple and mixed serogroups E1, C1, C2, B, and D being abundant (all >6%).S. enterica types vary significantly in their host range and their degree of host adaptation.Other studies set in Metro Manila in swine have found similarly high S. enterica bacterium prevalence across accredited and locally registered abattoirs (~50%) but with most bacteria under serogroup O (Ng and Rivera, 2015;Calayag et al., 2017).Collectively, these studies point to S. enterica circulating in the poultry and swine food chains in Metro Manila, with virulence genes, and thereby a likely major and increasing cause of gastroenteritis and enteric fever.
In 2015, a study in slaughtered swine in Metro Manila, Philippines detected five serotypes, namely, S. enterica Typhimurium, Agona, Heidelberg, Choleraesuis, and Weltevreden from tonsils and jejuna of freshly slaughtered swine (Ng and Rivera, 2015).In a separate work, serotypes Anatum, Kentucky, and Saintpaul have been found in bovine, porcine, and poultry meat from wet markets (Santos et al., 2020).The presence of these serotypes in meat samples can be correlated to disease.In a 15-year report of Salmonella serotype distribution in the Philippines by the Antimicrobial Resistance Surveillance Reference Laboratory of the Research Institute for Tropical Medicine (RITM), S. enterica Enteritidis, Typhimurium, Weltevreden, Stanley, and Anatum were found to be the five most prevalent non-typhoidal serotypes in clinical samples.The other serotypes found in meat samples such as Agona, Heidelberg, Choleraesuis, and Kentucky were also detected from clinical samples, although not as prevalent (Sia et al., 2020).This gives us a picture of how parallel the prevalent serotypes in meat samples and clinical samples are, and how tracking the source of these pathogens is paramount to reducing risk, designing mitigation strategies, and predicting future outbreaks.
Treatment options for salmonellosis are decreasing as the underlying bacteria continue to show antimicrobial resistance (AMR).Due to frequent antibiotic exposure, swine and poultry are now recognized as potential risks in disseminating drug-resistant S. enterica, with multidrug resistant (MDR) strains now being present in the Philippines (Calayag et al., 2017).As defined by Magiorakos et al. (2012), MDR organisms have non-susceptibility to at least one antimicrobial agent from three or more antimicrobial classes.Studies of S. enterica from slaughtered swine in Metro Manilla revealed high rates of resistance to ampicillin, trimethoprim/sulfamethoxazole, and MDR (all >67%) (Calayag et al., 2017).These forms of resistance can be detected by polymerase chain reaction (PCR)-based approaches that target known regions (e.g., on plasmids) or, phenotypically, using systems such as the VITEK ® 2 Compact 60 ID/AST System (bioMérieux, 2005) for antimicrobial susceptibility testing.However, in other infection settings, e.g., tuberculosis (Phelan et al., 2019), Klebsiella pneumoniae (Spadar et al., 2023), next-generation sequencing (NGS) has gained traction for fast and affordable AMR profiling (genotyping).Whole-genome sequencing (WGS) analysis can be a rapid and cost-effective approach to define resistance genotypes, predict resistance phenotypes, and identify identical isolate genomes that are part of transmission chains (Sobkowiak et al., 2020;Napier et al., 2022).
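As an illustration of the MDR rule of Magiorakos et al. (2012) cited above, the short Python helper below flags an isolate as multidrug resistant when its resistant drugs span three or more antimicrobial classes; the drug-to-class mapping shown is a generic example rather than the exact panel used in this study.

```python
# Illustrative drug-to-class mapping; not the study's antimicrobial panel.
ANTIBIOTIC_CLASS = {
    "ampicillin": "beta-lactams",
    "ceftriaxone": "cephalosporins",
    "ciprofloxacin": "fluoroquinolones",
    "tetracycline": "tetracyclines",
    "trimethoprim/sulfamethoxazole": "folate-pathway inhibitors",
    "gentamicin": "aminoglycosides",
}

def is_mdr(resistant_drugs):
    """True if the drugs the isolate resists span three or more antimicrobial classes."""
    classes = {ANTIBIOTIC_CLASS[d] for d in resistant_drugs if d in ANTIBIOTIC_CLASS}
    return len(classes) >= 3

print(is_mdr(["ampicillin", "tetracycline", "trimethoprim/sulfamethoxazole"]))  # True
```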
To demonstrate the utility of sequencing, we have performed WGS on 105 S. enterica isolates across live animal and processed meat domains in the poultry and swine food chains in Metro Manila between 2018 and 2022.This study is part of a bigger project that aimed to recover 2,500 S. enterica isolates from abattoirs and wet markets in eight cities in Metro Manila, Philippines.The 105 isolates in this study have been obtained from six out of the eight cities, and were the first batch subjected to whole genome sequencing.The resulting genomic variation is used to understand circulating AMR and virulence gene repertoires.Our work provides a baseline set of genomic data and a snapshot of diversity, which can be used in emerging applications of NGS for the routine monitoring of meat product safety in the Philippines.
Sample collection and processing
A collection of S. enterica strains isolated from swine and poultry meat was established to understand the AMR profiles and determine the virulence genes associated with swine and poultry food chains in Metro Manila, Philippines. The meat samples came from public wet markets and accredited as well as locally registered abattoirs in the four Metro Manila districts, namely, Capital, Eastern, Northern, and Southern. Sample collection from freshly slaughtered swine in slaughterhouses includes the 15-cm segment of the jejunum (Calayag et al., 2017). All sample collection was performed between 2018 and 2022. Different parts of swine and chicken meat were sampled from public wet markets. Raw meat samples include ground or cut-up meat, while processed meat samples include marinated, salted, cured, and pre-cooked products. The samples were transported to the laboratory in sterile plastic bags and kept cold in a cooler. Upon arrival at the laboratory, 25 g of meat sample was weighed and pre-enriched with 225 mL of sterile buffered peptone water (BD Difco, NJ, USA) in a sterile Rollbag® (Interscience, France), homogenized with BagMixer® 400 (Interscience, France) for 1 min, and incubated for 24 h at 37°C. For the single-enrichment broth culture method, 100 μL of the pre-enriched culture was transferred into Rappaport-Vassiliadis (RV) broth (10 mL; Difco, BD, Sparks, MD) and incubated at 42°C for 24 h (Ng and Rivera, 2015). A loopful of incubated RV broth was streak plated onto xylose lysine deoxycholate (XLD) agar (BD Diagnostics System, NJ, USA) plates for isolation and purification, and incubated at 37°C for 18-24 h. Typical Salmonella colonies, i.e., colonies with black centers and clear or transparent halo, were then subcultured on nutrient agar (NA) (BD Diagnostics System, NJ, USA) for further confirmation analysis (Pavon et al., 2022).
DNA extraction, molecular detection, and sequencing
Presumptive colonies of S. enterica were subjected to DNA extraction using a DNA purification kit (Monarch ® , New England BioLabs, MA, USA) and stored in a −20°C freezer (Schnee Irish, Vienna, Austria).PCR-based identification was conducted using the species-specific invA gene to confirm the identity of the isolates (Ng and Rivera, 2015).The amplicons were subjected to agarose gel electrophoresis using 1.5% agarose stained with 10,000 × GelRed ® in water (Biotium, CA, USA).Electrophoresis runs were made to proceed under 280 V for 35 min.The gels were visualized using a UV gel documentation system (Vilber Lourmat, France) (Pavon and Rivera, 2021).Amplicon size was estimated using 100-bp HyperLadder™ (Bioline, Meridian Bioscience, London, UK) as the molecular weight marker.Amplicons with the approximate size of ~244 bp were considered positive for the invA gene, and thus Salmonella-positive.The concentration and purity of the invA-positive DNA extracts were measured using a microplate spectrophotometer (Multiskan SkyHigh, Thermo Scientific™) before submission to the sequencing facilities.From a collection of 2,500 S. enterica isolates, a subset of 105 isolates was randomly selected for Illumina TruSeq library construction and sequencing undertaken at the DNA Sequencing Core Facility of the Philippine Genome Center and through the Applied Genome Centre at London School of Hygiene and Tropical Medicine (LSHTM).
Bioinformatics, sample genotyping, and phylogeny
Kraken2 software (v2.1.2) was used with the standard database to scan for any possible contamination (Wood et al., 2019). Genomes were assembled with Shovill software (v1.1.0) (Seemann, 2019) and were used by the SISTR tool (v1.1.1) (Yoshida et al., 2016) to predict serovars and cgMLST sequence types (STs), and to perform basic quality control checks. QUAST software (v5.2.0) was used to check the quality of the assemblies, with all genomes passing a minimum of 95% of BUSCO genes found (Mikheenko et al., 2018). ParSNP software (v1.7.4) was used to perform a core genome analysis to identify genomic regions present in most strains. These regions were identified and aligned by ParSNP, and this was used as input to build a phylogeny using the RAxML tool (v8.2.12; parameters: -m GTRGAMMA -N 100 -k -f a) (Stamatakis, 2014) for all isolates together, as well as for the four main individual serovar clades separately. The sequence data were scanned for 155 known virulence genes. The phylogenetic trees were used to identify potential transmission chains, through the application of established methods (Sobkowiak et al., 2020; Napier et al., 2022). The isolates have been run through Snippy (v4.6.0) (Seemann, 2015) to find single nucleotide polymorphisms (SNPs) and determine genetic similarities within serovars and across sampling sites (slaughterhouses and markets), cities, and matrices (swine or chicken). Genetically similar isolates (i.e., less than 20 SNPs) have been plotted in single-linkage clusters using a transmission graph viewer.
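The single-linkage clustering step described above can be expressed compactly as a graph problem: isolates are nodes, an edge joins any pair separated by fewer than 20 SNPs, and clusters are the connected components. The Python sketch below uses networkx for the components; isolate names and distances are hypothetical, and in the study the distances come from Snippy SNP calls.

```python
import networkx as nx   # a hand-rolled union-find would also work if networkx is unavailable

SNP_THRESHOLD = 20

def single_linkage_clusters(isolates, pairwise_snps):
    """pairwise_snps maps frozenset({a, b}) -> SNP distance; returns connected components."""
    g = nx.Graph()
    g.add_nodes_from(isolates)
    for pair, dist in pairwise_snps.items():
        if dist < SNP_THRESHOLD:
            g.add_edge(*pair)
    return [sorted(c) for c in nx.connected_components(g)]

# Hypothetical isolates and pairwise SNP distances.
isolates = ["S01", "S02", "S03", "S04"]
dists = {
    frozenset({"S01", "S02"}): 5,
    frozenset({"S02", "S03"}): 12,
    frozenset({"S03", "S04"}): 80,
}
print(single_linkage_clusters(isolates, dists))   # e.g. [['S01', 'S02', 'S03'], ['S04']]
```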
Ethical approval and consent to participate
Ethical review and approval were waived for this study due to informed consent obtained from the National Meat Inspection Service of the Philippine Department of Agriculture. Animal slaughter and evisceration were performed according to Philippine national regulations. Informed consent was also obtained from veterinarians in charge of the abattoirs, and farm owners for sample collection.
Salmonella serovars and AMR
Genomic diversity existed within the isolates, but a clonal group is evident from both the phylogenetic tree and the presence of multiple isolates from the same cgMLST ST, all sourced from the same location and meat type.Resistance was common with isolate genomes containing genes that confer resistance to 9 drug types, including some isolates potentially simultaneously resistant to 8 different drugs.Plasmids were also found in all isolates, with two out of six being present across all (Figure 2A).
The Infantis serovar were all sourced from markets, and almost exclusively found in chicken (26/28).Some isolates sourced from different locations had the same cgMLST ST but significant genomic diversity existed preventing close clusters on the phylogenetic tree.AMR was also common in this serovar with genes conferring resistance to 13 different drug groups being found with some isolates presenting resistance to 11 different drugs.Six different plasmids were found with the IncFIB(K) type being present in all samples except one (Figure 2B).The London serovar was found only in swine and was sourced from both markets and slaughterhouses in four different locations.While some isolates clustered closely on the phylogenetic tree, every isolate had a unique cgMLST ST indicating a significant amount of diversity.Genes conferring resistance to 8 different drugs were found with one clade containing all 8 AMR genes, while the other clade displayed very little resistance with all isolates possessing, at most, one resistance gene.This may be reflective of the different plasmid content between the clades with 8 different plasmids found in total (Figure 2C).
The Anatum serovar was sourced in four different locations and exclusively found in swine from both markets and abattoirs.The phylogenetic tree formed two main clades, and all isolates had different cgMLST STs.AMR loci linked to four different drugs were found, with one isolate presenting all resistance genes.However, one large clade did not present any resistance genes.Six different plasmids were found, including in the clade without resistance genes (Figure 2D).There was evidence of high similarity of samples across different swine markets (Valenzuela, Muntinlupa, Pasay) suggestive of transmission within abattoirs or farms.
Genetic similarities within serovars
For the four most frequently isolated serovars, isolates differing by fewer than 20 SNPs have been plotted in single-linkage clusters reflecting sampling sites, cities, and matrices in Figure 4.
For the Rissen serovar, isolates sourced from Caloocan slaughterhouse congregated in one cluster, while isolates from Valenzuela slaughterhouse and markets form multiple clusters (Figure 4A).For Anatum serovar, clustering can be observed per sampling site and city, apart from market isolates from Pasay and one isolate from Muntinlupa which have clustered together.Valenzuela Anatum isolates, on the other hand, formed multiple clusters (Figure 4B).Isolates under the London serovar from one slaughterhouse and three markets, all from three cities, are found to congregate in one major cluster.Meanwhile, Valenzuela and Quezon isolates form one minor cluster (Figure 4C).Finally, for the Infantis serovar, the isolates which were mostly sourced from chicken market samples and a few swine samples formed one major cluster which branches out into smaller clusters.Several San Juan isolates link to a few Quezon isolates, forming three minor clusters (Figure 4D).
Discussion
Due to its emergence, transmission, and persistence, AMR continues to be a serious global health problem. This issue is greatly exacerbated internationally through the use of antimicrobials in various food chains, notably in the production of livestock and poultry. It adversely affects food security in low- and middle-income countries such as the Philippines, which is a large consumer of hog and poultry products. This study is the first to report on AMR and transmission of S. enterica using WGS of isolates from swine and poultry in the Philippines.
Serovars
The four most dominant serovars found in this study, Infantis, Anatum, Rissen, and London are consistently detected in several institutional and independent studies in different countries and territories such as the European Union [European Food Safety Authority and European Centre for Disease Prevention and Control (EFSA and ECDC), 2022], the United States (Centers for Disease Control and Prevention (CDC), 2013; Bearson, 2021), and China (Liu et al., 2020;Tang et al., 2022Tang et al., , 2023)), to mention a few.In this study, the most dominant is S. Infantis which is one of the increasingly emerging and spreading serovars globally (Alvarez et al., 2023) and is widely linked to human salmonellosis (Montone et al., 2023).In our study, this serovar was associated with raw chicken meat, consistent with epidemiological data from the EU [European Food Safety Authority and European Centre for Disease Prevention and Control (EFSA and ECDC), 2022] and the USA [Centers for Disease Control and Prevention (CDC), 2013].Due to outbreaks in the EU, it has been recommended that this serovar (among others) be included in surveillance schemes (Ferrari et al., 2019).From the four most dominant serovars, the most AMR genes have been detected in S. Infantis.Likewise, S. Infantis harbored the most number of virulence genes.These AMR and virulence determinants would have likely propelled the increased emergence and rapid spread of S. Infantis.In a large-scale review by Alvarez et al. (2023) which included 3,725 independent studies on S. Infantis in different countries, S. Infantis isolates collectively exhibited resistance to all antibiotic classes examined, including aminoglycosides, amphenicols, β-lactams, quinolones, sulfonamides, and tetracyclines, among others.They have also exhibited the presence of resistance genes attributed to specific antimicrobials such as aad, aac, aph, sul, bla, qnr, and tet which are consistent with the results of this study.
The second most dominant serovar was Anatum, which was exclusively isolated from swine samples, consistent with settings such as Latin America (Ferrari et al., 2019).However, previous reviews have revealed that it is associated with beef and seafood, globally, which were not sources considered in our work (Ferrari et al., 2019).The London serovar has been associated with various infections in humans (Yong et al., 2005), animals, and food products (Meunsene et al., 2021).Although it is uncommonly detected and studied among Salmonella serovars, London isolates have exhibited potential zoonotic transmission and increasing resistance to antibiotics (Fang et al., 2022).In this study, London isolates were exclusively found in swine samples and only conferred resistance to limited antimicrobial classes.Lastly, the Rissen serovar is currently one of the emerging serovars in various countries worldwide.In a large-scale global review by Elbediwi et al. (2021), it was revealed that the bulk of Rissen isolates (~2/3) were obtained from human samples, more than half of which were asymptomatic individuals.While most non-clinical Rissen isolates in global studies were associated with poultry and porcine samples, several studies have also found the serovar in seafood (Atwill and Jeamsripong, 2021;Lozano-León et al., 2022), and other sources (Prasertsee et al., 2019;Silveira et al., 2019;Nguyen et al., 2021;Sanguankiat et al., 2023).
In the Philippines, Salmonella detection studies in non-clinical samples would often be limited to presence-absence tests.Only a few studies would have data on Salmonella serovars.One study on retail meats sampled from wet markets in Metro Manila revealed that bovine meat mostly harbored Anatum and Saintpaul, porcine meat mostly harbored Anatum, and poultry meat mostly harbored Kentucky (Santos et al., 2020).However, these identities were only obtained using H typing, and thus are not confirmed.Salmonella studies with confirmed serovar identification are limited to clinical samples from the Philippine Department of Health − Antimicrobial Resistance Surveillance Program (DOH-ARSP) of the RITM.In a surveillance study covering blood and stool samples obtained from 17 sentinel sites all over the country from 2013 to 2014, in silico genotyping using WGS revealed that the most dominant serovars were S. Typhi as well as S. Enteritidis and ST34 (I 4, [5],12: i: -) for the non-typhoidal types (Lagrada et al., 2022).
The genetic similarity of the isolates that clustered together can possibly be traced to having the same slaughterhouse or farm source(s), thus harboring the same strains.This can also indicate transmission, as clones of the same strain can be circulating in one sampling site.It is important to note that local farms supply livestock to different slaughterhouses, and these slaughterhouses do not necessarily supply markets within the same city.This means that a farm can send livestock for slaughter to one slaughterhouse, and the slaughtered meat can be supplied to different markets in different cities.Genetically distinct isolates, or those that form multiple clusters, could likely be traced from various slaughterhouses and/or farm sources.Cross-contamination, as apparent in Infantis isolates, could likely be traced to large-scale commercial and backyard farms that rear mixed livestock animals and/or market stalls that sell both pig and chicken meat.
Correlation between AMR and antimicrobial use
Salmonella has been found to have intrinsic resistance to β-lactams, macrolides (except azithromycin), lincosamides, glycopeptides, and fusidane (Stock and Wiedemann, 2000). The genetic determinants of these resistances are usually chromosome-encoded or result from functional and/or structural characteristics, as opposed to plasmid-borne resistances that can be acquired via horizontal gene transfer. For example, macrolides, a family of antimicrobials that have a characteristic macrocyclic lactone ring structure, have difficulty traversing the Gram-negative cell wall, specifically the polar and negatively charged outer membrane, together with their affinity for efflux pumps that actively transport them out of the cell (McDermott et al., 2003; Myers and Clark, 2021). Another example would be the intrinsic resistance to β-lactam antibiotics, such as penicillin, which is the most frequently conferred phenotypic resistance (60/105 isolates) in this study. β-lactam antibiotics, as aptly named, have a characteristic β-lactam ring which, similar to macrolides, has restricted access through the outer membrane. In addition, S. enterica may also carry a chromosomally encoded or plasmid-borne ampC gene that codes for β-lactamases, enzymes that can hydrolyze β-lactams (Narendrakumar et al., 2023).
The intrinsic resistance of Salmonella complicates treatment regimens and raises the need for other treatment options.As this is the case, other drugs of choice are usually prescribed for both typhoidal and non-typhoidal infections such as fluoroquinolones or cephalosporins (Shane et al., 2017;Tack et al., 2020).While these drugs still exhibit efficacy against Salmonella, there is a marked increase in the occurrence of intermediate and resistant phenotypes, as exhibited in this study.Alarmingly, there have also been many reports of emerging fluoroquinolone and cephalosporin resistance from various continents.In Russia, fluoroquinolone-and cephalosporin-resistant Salmonella strains have been isolated from raw poultry products as well as ready-to-eat chicken products (Egorova et al., 2021).A study in Ghana by Dekker et al. (2018) reported 63% fluoroquinolone resistance in S. enterica isolated from local and imported meat.In addition, this resistance is mostly conferred by qnrB2 resistance plasmids, which means that these resistance determinants can be transferable.In this study, a significant number of isolates had transferable resistance plasmids such as the Col plasmid family that facilitate resistance to glycopeptides (vancomycin), polymyxin (colistin), β-lactam antibiotics (e.g., penicillin and ampicillin), as well as quinolones (ciprofloxacin) (McMillan et al., 2020).The isolates also harbor Inc plasmids that confer resistance against aminoglycosides, β-lactams, and fluoroquinolones (Hiley et al., 2021).The modes of action of these resistance genes are primarily related to the production of efflux pumps that actively expel antimicrobial agents from the cell.In addition, several genes code for enzymes that promote the inactivation of antimicrobial agents, either by hydrolysis or through the production of inactivated forms of the drug (Helinski, 2022).
The increasing trend of resistance to fluoroquinolones can be traced to the growth of drug sales, especially for use in livestock settings.A study by Yin et al. (2022) was able to establish a link between a 41.67% increase in fluoroquinolone sales and a 5% increase in the prevalence of quinolone-resistant Salmonella in retail meat from 2013 to 2018.In 2020, the estimated use of antimicrobials for the rearing of cattle, sheep, poultry, and swine worldwide was 99,502 tons, with the highest usage recorded in Asia at 67% (66,666 tons).This is particularly important since the volume of imported meat and meat products in the Philippines has ballooned to 16% in 2022, with imports coming in from Brazil, Australia, Germany, India, Italy, and the USA, among others (Bureau of Animal Industry, 2022).Incidentally, these countries are considered antimicrobial use hotspots in 2020 and are projected to remain so come 2050 (Mulchandani et al., 2023).The high consumption of antimicrobials in these hotspot countries could be likely linked to their intensive farming practices to meet local and export demands.
The same literature estimated the global use of tetracycline to be 33,305 tons, making it the most consumed antimicrobial in the world, with the second highest antimicrobial being penicillin, at around 15,000 tons-only half as much as tetracycline.This trend could be correlated with the detection of tetracycline resistance genes in 64% of the isolates.Sulfonamides and aminoglycosides have ranked fourth and sixth, respectively, in global consumption.Resistance genes against these antimicrobials were also detected in 56% of isolates.
Virulence determinants
A total of 155 virulence genes were detected in 105 isolates, 42 of which are serovar-specific.The virulence-associated determinants are grouped into six categories: fimbriae adherence determinants, element-uptake determinants, secretion system, protein synthesis, colonization, and survival against the host immune system.Fimbriae function as an adhesion necessary for bacterial colonization and infection.The prevalence of csg and fim genes are conserved adhesion and infection factors in Enterobacteriaceae.Two csg operons, csgBAC, and csgDEFG, encode for curli fimbriae and mediate binding to tissue matrices, while fim genes that encode hairlike appendages called type 1 fimbriae (Römling et al., 1998;Zeiner et al., 2012).These two gene clusters were present in all 105 isolates in this study as they facilitate adhesion and binding to eukaryotic blood and tissue matrices.While lpfACE and faeDE were serovar-specific genes determined in the sequence analysis, both were present in Haifa, Kentucky, Saintpaul, and Infantis serovars.These genes are unique for long polar fimbriae and fimbrial adhesin production, respectively.The homologous interchange of horizontally transmitted segments may have allowed the deletion of genes to spread to other Salmonella serovars.In particular, the lpf operon was initially present in a lineage ancestor of Salmonella and has been found to be deleted in several lineages of the genus, contributing to its diverse phylogenetic distribution (Bäumler et al., 1997).Interestingly, most jejunum samples in this study acquired these lpf genes since lpf-mediated adhesion targeted the alimentary tract such as animal ileum (Bäumler and Heffron, 1995), induced biofilm formation (Ledeboer et al., 2006) and promoted long-term intestinal persistence (Weening et al., 2005), indicating a function-mediated gene deletion and acquisition.
Genes associated with type III secretion system (T3SS) encoded by Salmonella pathogenicity islands (SPI) are also predominantly present among all the isolates.Present genes associated with SPI-1 include the inv/spa and prg genes that aid in the invasion and infection of Salmonella.The genes encoding for SP 1-2 regulation, and production of its chaperone proteins, effector proteins, and T3SS2 apparatus were also detected.However, the T3SS esp genes involved in the regulation and transport of proteins in the host cells during the invasion were absent in all isolates.The lack of genes and operons for the synthesis and regulation of extracellular proteins was also evident.
Other functional sets of genes that are present were responsible for the uptake and transport of organic compounds such as magnesium and iron.All 105 isolates have genes for the uptake of magnesium from the environment.Interestingly, only two serovars possess genes for iron transport and synthesis and encode for a different type of siderophore.The single Kentucky serovar isolate from a chicken sample had genes that encode for aerophore, while most Infantis isolates (27/28) possess genes associated with the uptake, transport, and biosynthesis of yersiniabactin.Iron uptake is important for host-pathogen interactions that are often overlooked in non-typhoidal Salmonella.The presence of Yersinia high pathogenicity island (HPI) in the Infantis isolates may also promote infection (Fetherston et al., 1996;Oelschlaeger et al., 2003), suppress host immune response (Autenrieth et al., 1991;Gehring et al., 1998) and increase the fitness and persistence in the environment (Oelschlaeger et al., 2003).
The expression of genes associated with pathogenicity islands 1 and 2 as well as fimbriae production among all isolates suggest the potential harm and disease development in humans.The genome sequences in this study served as a basis for traceback investigations to animal and food products as the possible cause of human salmonellosis.A comparative analysis of the SPIs, adhesin molecules, secretion systems, virulence plasmid, and epidemiological characteristics could elucidate the role of food-borne infections in humans.
Conclusion
Through the application of next-generation sequencing, we found resistance and virulence determinants that contribute to the persistence of S. enterica in the poultry and swine food chains in the Philippines.Our study of 105 S. enterica isolates provides proof of principle that WGS approaches can decipher the complex AMR and virulence patterns and shows that sequencing should be implemented by meat inspection networks to augment the existing presence−absence detection tests as acceptability, safety, and quality criteria.In addition, WGS gave insights into strain clustering and evidence of infection cross-contamination.Indeed, there are large-scale follow-up studies to assess AMR and virulence diversity and capture transmission, and the current work provides a baseline set of data.Ultimately, the genetic insights from this study and similar works, especially with additional number of samples and isolates could lead to enhanced diagnostics for disease management and control across the entire food chain, including farm, abattoir, and market settings.New genetic markers revealed should be integrated into current testing routines and may replace those currently used to give a more accurate picture of the strains and their AMR profiles.More isolates must be obtained in order to more clearly and accurately visualize the current and possibly predict future trends relating to S. enterica transmission, AMR, and virulence.The identification of AMR and transmission chains from farm to fork will assist surveillance and clinical decision-making, thereby improving the food security and health of both humans and animals.
FIGURE 1
FIGURE 2
Phylogenetic trees built on the core genome of isolates across different serovars. Annotated are the sequence IDs, cgMLST ST, source location, type, animal and meat cut. Presence (black square) or absence (white square) of genes conferring resistance to drugs and presence of plasmids are also indicated. (A) Rissen serovar (N = 19). (B) Infantis serovar (N = 28). (C) London serovar (N = 14). (D) Anatum serovar (N = 20).
TABLE 1
Sample characteristics.
TABLE 3
AMR and linked genes and plasmids.
|
v3-fos-license
|
2019-03-17T13:11:50.528Z
|
2018-02-07T00:00:00.000
|
54518289
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.4172/2161-0673.1000200",
"pdf_hash": "78c3a1961bbdd22a1c35f50d80a083155691c9eb",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2936",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "39aee0960a571ec42af22165e4f96bd6cb9708f6",
"year": 2018
}
|
pes2o/s2orc
|
Exercise-Induced Bronchospasm in Soccer Athletes Living in a Tropical Humid Region
Background: Exercise-induced bronchospasm (EIB) is frequent in asthmatic athletes and, although less prevalent, also occurs in non-asthmatic ones. Breathing warm-humid air reduces this phenomenon. Objective: To evaluate EIB prevalence in semi-professional soccer athletes from a tropical humid region. Methods: We included athletes from a soccer team from the city of Recife-Brazil. A history of asthma or respiratory symptoms after exercise, demographic data and baseline FEV1 measurements were obtained. Thereafter, each athlete performed a standardized free field running to achieve 85% to 95% of the maximum calculated heart rate for the last 6 minutes of a total running time of 8 to 10 minutes. FEV1 was measured again at 5, 10, 15 and 30 minutes after exercise and EIB was defined as a reduction ≥ 10% from basal values in two consecutive time points. Results: Fifty-four male athletes aged between 13 and 21 years were evaluated. All denied exercise associated respiratory symptoms. Mean temperature and air humidity were 30.2°C ± 2.7°C and 82.1% ± 2.9%, respectively. EIB was found in two (7%) athletes (with a fall in FEV1 from baseline of 23.3% and 22.6%), none with history of asthma symptoms. Two other athletes had a history of asthma symptoms in childhood but no decrease in FEV1. Conclusion: The studied population of semi-professional soccer athletes with no exercise associated respiratory complaints showed a low BIE prevalence (7%). It may be necessary to perform the challenge with the athlete breathing dry air in those that also play in more dry and cold climates.
Introduction
Exercise-induced bronchospasm (EIB) is the transitory narrowing of lower respiratory airways which occurs during, or more often, after vigorous physical activity [1,2]. It is found in high proportion of asthmatic children and adolescents and among asthmatic athletes EIB is diagnosed in up to 90% of them, varying with the adopted diagnostic protocol, exercise intensity and breathing air conditions [3][4][5]. Depending on sport modality and the diagnostic protocol adopted, the prevalence of EIB in non-asthmatic elite athletes varies from 8% to 50% [6][7][8].
Self-reported respiratory symptoms after strenuous exercise is a common complaint in athletes but are deemed as non-reliable for EIB diagnosis [9,10]. In elite athletes the lack of adequate EIB recognition and prevention could impair maximal physical performance [11] and it is recommended that those who report respiratory symptoms should perform objective diagnostic tests to guide a correct diagnosis, a more appropriate training schedule and the pharmacological prevention [12].
Inherent personal factors such as atopy, allergic rhinitis and asthma, as well as ambient air conditions such as low temperature and relative humidity, can directly influence EIB triggering [13]. In asthmatic adolescents it has been shown that EIB prevalence drops by half after free running tests carried out in the summer when compared to winter in countries with well-defined climatic seasons [14].
Soccer is extensively played all over the world by amateur, semi-professional and professional athletes. Despite the fact that soccer is an Olympic sport and its World Cup has the world's largest sports audience, limited information is found in the literature regarding the prevalence of EIB in professional or semi-professional soccer players [3,12,15-17]. The objective of this research was to prospectively screen a sample of semi-professional athletes of a soccer team from Recife - a Brazilian city characterized by a tropical humid climate - for the prevalence of EIB using a field running protocol.
Materials and Methods
Participants were semi-professional athletes from the lower division of a professional Brazilian soccer team - Clube Nautico Capibaribe - located in the city of Recife, an estuarine city on the northeast Brazilian coast (-08°04'03"S, 34°55'00"W) that has temperatures ranging between 23°C and 32°C and a relative air humidity oscillating between 70% and 85%, with small changes along the year. The athletes
had physical and technical training sessions from Monday through Friday, lasting at least three hours each day. The exclusion criteria were a report of recent respiratory infection symptoms (within the previous three months), forced expiratory volume in one second (FEV1) below 80% of predicted, and inability to perform an acceptable forced expiratory maneuver. The study was approved by the institutional ethics committee on human research, and all participants signed an informed consent form. The diagnosis of asthma-compatible symptoms was made using the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire translated and validated for Brazil [18], which was administered to each athlete separately. The medical records of each participant were also reviewed.
Temperature and relative air humidity were recorded before exercise (TEMP Int/Ext, Cotronic Technology Ltd, Hong Kong, China), and anthropometric measurements were taken using a calibrated scale (model 111 class III, Balmak, São Paulo, Brazil) and a portable stadiometer (American Medical do Brasil, São Paulo). Spirometric parameters were measured according to the international criteria of standardization and acceptability of tests [19], employing a daily calibrated spirometer (WinDxTM Revelation, Creative BioMedics -San Clemente, CA, USA).
The exercise test was performed as free field running around a 10 × 20 meter rectangle previously marked with cones on the soccer field. Two co-workers were positioned at diagonally opposite corners to encourage the athletes to maintain or adjust their pace and keep the target heart rate. The test lasted 8 to 10 minutes, with a progressive speed increase for warm-up during the first two minutes and a pace sufficient to keep the heart rate between 85% and 95% of the calculated maximum during the last six minutes [20].
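The paper cites its reference [20] for the calculated maximum heart rate without stating the formula; the sketch below is only a minimal illustration of the 85%-95% target range, assuming the common age-predicted estimate of 220 minus age, which may differ from the formula actually used in the study.

```python
def target_heart_rate_range(age_years, low_frac=0.85, high_frac=0.95):
    """Return the (lower, upper) target heart rate in beats/min.

    Assumes the common age-predicted maximum of 220 - age; the study
    cites its own reference for this calculation, which may differ.
    """
    hr_max = 220 - age_years
    return low_frac * hr_max, high_frac * hr_max

# Example: a 17-year-old athlete -> roughly 173-193 beats/min
print(target_heart_rate_range(17))
```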
Spirometry was conducted in duplicate before the run and at 5, 10, 15 and 30 minutes after it, and the best FEV1 was recorded at each time point.
The test was considered positive if a fall in FEV1 of ≥ 10% from baseline was observed at two consecutive time points, and the largest fall was chosen for analysis [20]. The predicted values were those for Brazilians [21].
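For clarity, the positivity rule as described (a fall of at least 10% from baseline at two consecutive post-exercise time points, reporting the largest fall) can be expressed as a short calculation; the function names and the example values below are illustrative only, not data from the study.

```python
def percent_fall(baseline_fev1, post_fev1):
    """Percent fall in FEV1 relative to the pre-exercise baseline."""
    return 100.0 * (baseline_fev1 - post_fev1) / baseline_fev1

def classify_eib(baseline_fev1, post_fev1_series, threshold=10.0):
    """Apply the study's rule: positive if the fall is >= threshold (%)
    at two consecutive post-exercise time points; report the largest fall.

    post_fev1_series: FEV1 values at 5, 10, 15 and 30 min, in order.
    """
    falls = [percent_fall(baseline_fev1, v) for v in post_fev1_series]
    positive = any(falls[i] >= threshold and falls[i + 1] >= threshold
                   for i in range(len(falls) - 1))
    return positive, max(falls)

# Example with made-up values (liters): a clearly positive case
print(classify_eib(4.0, [3.5, 3.0, 3.2, 3.8]))  # (True, 25.0)
```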
Results
Fifty-four male athletes were evaluated between April and June, the most humid months of the year. Subjects' characteristics such as age, anthropometric measures, and mean baseline predicted FEV1 are presented in Table 1. Only two athletes reported having had asthma in childhood, with no current respiratory symptoms, while the others denied any respiratory symptoms or asthma diagnosis. No athlete was excluded on the basis of the exclusion criteria. Tests were performed between 2 and 4 pm under a mean temperature of 30.2°C ± 2.7°C and a relative air humidity of 82.1% ± 2.9% (roughly 24 g of water per cubic meter of air). The mean achieved heart rate was 89.5 ± 4.7% of the calculated maximum.
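As a rough plausibility check on the reported water content of the air (the per-liter unit in the source text appears to be a slip for per cubic meter), the sketch below estimates absolute humidity from the mean temperature and relative humidity using the Magnus approximation for saturation vapor pressure; the choice of approximation is ours, not the authors'.

```python
import math

def absolute_humidity_g_per_m3(temp_c, rel_humidity_pct):
    """Approximate water vapor content of air in g/m^3.

    Uses the Magnus formula for saturation vapor pressure (hPa) and the
    ideal gas law; a rough check, not the authors' own calculation.
    """
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa
    e = e_sat * rel_humidity_pct / 100.0                          # hPa
    # e (hPa) -> Pa, then mass density = e * M_water / (R * T)
    return (e * 100.0) * 18.015 / (8.314 * (temp_c + 273.15))

# Conditions reported in the study give ~25 g/m^3, consistent with "~24 g"
print(round(absolute_humidity_g_per_m3(30.2, 82.1), 1))
```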
After running, two athletes (aged 15 and 17 years) showed a fall in FEV1 at two consecutive time points compared with baseline values, corresponding to 7% of the sample; the maximum falls were 23.3% and 22.6%, respectively. Neither complained of exercise-related respiratory symptoms or had a diagnostic history of asthma.
Discussion
Our results showed a low prevalence of EIB (7%) among 54 soccer players with no exercise-related respiratory complaints, living in a city with a tropical, humid climate, who underwent a standardized field-based running challenge.
Guidelines recommend that exercise intensity reach and maintain a heart rate between 85% and 90% of the calculated maximum [20]. All the studied athletes reached at least 85% of the calculated maximum heart rate during the last six minutes of running. The adopted spirometry acceptability criteria [19] and the stricter diagnostic criterion of a fall ≥ 10% at two consecutive time points [20] minimized bias as a possible cause of the observed results. Furthermore, challenge stimuli for EIB diagnosis that use exercise in the field, similar to the athlete's own sport activity, are considered valid and even more efficient than those performed in the laboratory with controlled temperature and humidity [8]. It is possible that if the exercise challenges had been performed with the athletes breathing dry air this prevalence would have been higher.
The prevalence of respiratory symptoms among elite athletes is high [22], and exercises of long duration with high loads, requiring high ventilation levels, are those that most often cause EIB [23][24][25]. The mechanism believed to be responsible for EIB is dehydration of the airway periciliary fluid due to hyperventilation, which leads to the release of mediators by airway epithelial and mast cells [26]. Environmental conditions such as low temperature and low relative humidity, together with increased breathing rates and exposure to irritants, aeroallergens and suspended air pollutants, strongly contribute to EIB triggering in athletes [27]. It is also relevant that, even in non-asthmatic athletes, the aggression caused by these repeated and prolonged stimuli, especially chlorine in swimming pools, results in an increase in the number of inflammatory cells in the bronchial mucosa (especially neutrophils and/or eosinophils) and in bronchial hyperresponsiveness [28]. Discontinuation of physical activity leads to the reduction or gradual disappearance of these phenomena [29]. The presence of asthma or atopy potentiates all of these factors [8].
Studies have investigated the prevalence of EIB in athletes without a known history of asthma. In a sample of 212 athletes of different sports in a training facility who had no prior asthma diagnosis, 20 (9%) showed a >10% fall in FEV1 from baseline values after eucapnic voluntary hyperventilation (EVH) [30]. Among 198 British Olympic athletes submitted to the EVH test who did not have a previous diagnosis of asthma or bronchial hyperresponsiveness, Dickinson and colleagues [31] demonstrated an FEV1 reduction >10% from baseline in 29%. The lower EIB prevalence observed in our sample (7%) may be related to the temperature and humidity conditions of the air breathed in Recife, Brazil. It is possible that, if exposed to drier and/or colder air, a greater proportion of these athletes would receive an EIB diagnosis.
Soccer imposes a high aerobic load and demands that, particularly close to competitions, athletes reach their best physical conditioning, which requires an increase in the extent and intensity of training. During a professional soccer game some athletes run as much as 10 to 15 km. Few studies were found in the literature that have evaluated the presence of EIB in participants of this sport, despite the difficulties with the terms soccer and football to designate the sport as understood by FIFA (Fédération Internationale de Football Association). Sødal [32] found a prevalence of 35% for bronchial hyperresponsiveness in players of a national female soccer team in Norway. In Iranian children who played soccer, Ziaee et al. [17] noted an EIB prevalence of 6%, but the challenge was a soccer game lasting 15 minutes, which may have been insufficient to reach the exercise load considered adequate [20]. Other investigators found an EIB prevalence of 11% among soccer-playing children aged 8 to 13 years without a previous asthma diagnosis [33,34].
In Tehran, Iran, a prevalence of 6% was found among 100 non-asthmatic adult players from a soccer team [16]. The temperature conditions were similar to ours, but the relative humidity was lower, between 18% and 51%. The diagnostic criterion for EIB was a 15% reduction in FEV1, which could also have underestimated the prevalence. The type of physical activity used for provocation was not specified in that study.
Until 2010, regulatory anti-doping agencies required that asthma or EIB in athletes be proven by objective methods before authorization for the use of preventive medication could be requested. The current WADA (World Anti-Doping Agency) list of permitted medications has allowed the use of albuterol and salmeterol since 2010, and formoterol since 2013, without clinical confirmation, requiring only notification. Inhaled corticosteroids are also allowed [35].
It is necessary to emphasize that the diagnosis of EIB in athletes with exercise-associated respiratory symptoms or a diagnosis of asthma should be made by an objective test. Ansley et al. [12] showed that, of the 65 English professional soccer players diagnosed as asthmatic whom they evaluated, 57 made regular use of medication to prevent EIB; despite reporting respiratory symptoms during exercise, only 31 had a positive response after bronchoprovocation by EVH and/or inhalation of mannitol, and the symptoms were not predictors of responsiveness to the bronchial provocation tests [12].
Although it may seem intuitive that EIB could impair athletic performance, especially in elite athletes, this is by no means a settled question [11]. The two athletes who showed a fall in FEV1 of more than 20% from baseline had no respiratory complaints, nor did they report any difficulty in carrying out their training activities. Some sports organizations have established EIB screening programs for all of their internationally competitive athletes, but this is far from being a consensus [35]. On the other hand, it is clear that individuals with exercise-associated respiratory complaints and/or a clinical diagnosis of asthma who are engaged in competitive sports should be evaluated for EIB [3,8,9,12,15,20,24,35].
Conclusion
It is striking that so few studies on the diagnosis and prevention of EIB among soccer athletes are reported in the literature. As soccer is the world's most widespread and popular sport and is played outdoors, more studies are needed to evaluate EIB prevalence among soccer players, its association with respiratory complaints, asthma, and allergic and non-allergic rhinitis, the influence of different climate conditions, and prevention.
|
v3-fos-license
|
2019-03-31T13:32:37.721Z
|
2019-03-17T00:00:00.000
|
87830396
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1155/2019/9610629",
"pdf_hash": "f21502b4edf5be6f0aa056477d1c2de7d182314f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2937",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "f21502b4edf5be6f0aa056477d1c2de7d182314f",
"year": 2019
}
|
pes2o/s2orc
|
Formononetin Enhances the Tumoricidal Effect of Everolimus in Breast Cancer MDA-MB-468 Cells by Suppressing the mTOR Pathway
Background Formononetin, an active ingredient isolated from the traditional Chinese medicinal herb Astragalus membranaceus, has anticancer and chemoresistance-reducing biological activities. We evaluated the efficacy of formononetin in improving the tumoricidal effect of everolimus by suppressing the mTOR pathway in breast cancer cells. Methods Cell survival was assessed using an MTT assay. Apoptosis was detected using flow cytometry. Proteins related to the mTOR pathway were detected and assessed using real-time PCR and Western blot analysis. Results The results showed that formononetin enhances the efficacy of everolimus in suppressing breast cancer cell growth both in vitro and in vivo. The combination of formononetin and everolimus resulted in a 2-fold decrease in tumor volume and a 21.6% decrease in cell survival. The apoptosis ratio in cells treated with formononetin and everolimus increased by 27.9%. Formononetin and everolimus also inhibited the expression of p-mTOR and p-P70S6K and increased the expression of PTEN and p-4EBP-1. Notably, formononetin alone, but not everolimus, inhibited p-Akt expression. Conclusions Formononetin enhances the tumoricidal effect of everolimus by inhibiting the activity of Akt.
Introduction
Breast cancer is the most common malignant tumor in women [1]. The incidence of breast cancer among women in China is rising [2,3]. Triple-negative breast cancer (TNBC) is a special type of breast cancer in which expression of the estrogen receptor, progesterone receptor, and human epidermal growth factor receptor-2 (Her-2) is negative. It accounts for approximately 10%-20% of all breast cancers and has the worst prognosis among all breast cancer types, with characteristics such as rapid metastasis, drug resistance, and high mortality [4].
mTOR is a serine/threonine protein kinase belonging to the PI3K-related kinase family. It plays an important role in protein synthesis and autophagy, and abnormal expression of mTOR leads to conditions such as diabetes and tumor development [5]. mTOR forms two key complexes, mTORC1 and mTORC2. mTORC1 promotes protein synthesis by phosphorylating two key effectors, p70S6 kinase 1 (S6K1) and eIF4E binding protein (4EBP) [6]. mTORC1 regulates cell growth and metabolism, whereas mTORC2 regulates cell proliferation and survival by phosphorylating the AGC family of kinases (PKA/PKG/PKC). mTORC2 is critical for the phosphorylation and activation of Akt, and activated Akt promotes cell survival, proliferation, and growth [7,8].
Everolimus is an inhibitor of the serine-threonine kinase mammalian target of rapamycin (mTOR) [9]. Everolimus has been reported to have broad antitumor activity in preclinical models and has been used in combination with trastuzumab in several clinical trials [10]. However, the efficacy of everolimus against TNBC is not satisfactory for clinical use.
Formononetin is an active ingredient isolated from the traditional Chinese medicinal herb Astragalus membranaceus and has various pharmacologic effects, such as tumor growth inhibition, wound healing, estrogen-like effects, antioxidant activity, and anti-inflammatory effects [11,12]. Formononetin can exert antitumor effects by inducing cell apoptosis, arresting the cell cycle, inhibiting angiogenesis, and reversing multidrug resistance [13,14].
Recent studies have shown that formononetin can inhibit tumor growth and induce apoptosis by regulating the PI3k pathway [15]. However, whether the combination of formononetin and everolimus can synergistically provoke cancer cell death remains unclear. In this study, we showed that formononetin significantly enhances the tumoricidal effect of everolimus both in vitro and in vivo. Most importantly, we determined the underlying mechanisms for this effect in MDA-MB-468 cells.
Materials and Methods
Reagents and Cell Culture. Everolimus was purchased from Sigma-Aldrich (MO, USA). Formononetin was obtained from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). Annexin V-FITC and propidium iodide (PI) were obtained from Sigma-Aldrich (MO, USA). The antibodies against mTOR, p-mTOR, Akt, p-Akt, PTEN, p-4EBP-1, and p-p70S6K were obtained from Cell Signaling Technology (MA, USA). Human breast cancer MDA-MB-468 cells were cultured in RPMI 1640 medium supplemented with 10% fetal calf serum and 0.01 mg/mL insulin at 37°C with 5% CO₂ in a humidified atmosphere.
Tumor Xenograft and Treatment. Seven-week-old female nu/nu athymic mice weighing 18-20 g were obtained from Academia Sinica (Shanghai, China). All procedures conformed to animal welfare considerations and were approved by the Ethical Committee of Shanghai Traditional Chinese Medicine (09001, March 5, 2014). MDA-MB-468 cells (1 × 10⁷/mL) were injected into the mammary fat pad (m.f.p.) of the mice [16]. When tumors developed (after approximately 10 days), the mice (n = 10) were treated with formononetin 50 mg/kg [17], everolimus 2 mg/kg (the animal dosage of everolimus was converted from the clinical dosage), or the combination of formononetin (50 mg/kg) and everolimus (2 mg/kg). Untreated animals given physiological saline served as the control. All mice were treated once a day via gavage for 4 weeks. After 4 weeks of treatment, blood was collected from the eyes and the animals were sacrificed by cervical dislocation. The tumors were immediately removed, freed from connective and adipose tissue, and weighed.
Cell Growth Inhibition Test. Cell survival was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay [18]. Various concentrations of formononetin, with or without everolimus, were added to MDA-MB-468 cells for varying lengths of time, followed by the addition of MTT for another 4 h. Cell survival was expressed as the percentage of surviving cells relative to the total number of untreated cells.
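The paper reports only the resulting survival percentages; as an illustration of how percent survival is typically derived from MTT absorbance readings, a minimal sketch follows, with variable names and the background-subtraction step assumed rather than taken from the study.

```python
def percent_survival(od_treated, od_untreated, od_blank=0.0):
    """Percent of surviving cells estimated from MTT optical density (OD).

    Assumes survival is proportional to background-corrected absorbance,
    as is typical for MTT assays; the paper does not give its exact formula.
    """
    return 100.0 * (od_treated - od_blank) / (od_untreated - od_blank)

# Illustrative readings (arbitrary OD units), not data from the study
untreated, combo = 1.20, 0.55
print(round(percent_survival(combo, untreated), 1))  # ~45.8% survival
```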
Western Blot Analysis. When human breast cancer MDA-MB-468 cells in 6-well plates reached 90% confluence, the cells were washed with PBS and total cellular proteins were extracted. Proteins were then subjected to SDS-PAGE and Western blot analysis. Protein expression was detected using primary antibodies (1:1000) and secondary antibodies (1:800) conjugated with horseradish peroxidase, together with ECL reagents (Pharmacia, Buckinghamshire, UK). Quantitative analyses of Western blots were performed using Alpha Ease FC (FluorChem FC2) software. The density ratio of each protein to GAPDH was calculated using the analysis tools.

Statistical Analysis. Statistical differences were identified using 2-tailed Student t tests. Data are presented as mean ± standard deviation. A p value of < 0.05 was considered statistically significant.
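To make the quantification and testing steps concrete, the following sketch normalizes band densities to GAPDH and applies a two-tailed Student t test as described; the numerical values are illustrative and not data from the study.

```python
import numpy as np
from scipy import stats

def normalize_to_gapdh(band_density, gapdh_density):
    """Densitometry readout expressed as a ratio to the GAPDH loading control."""
    return band_density / gapdh_density

# Illustrative band densities from three replicate blots (not study data)
control = np.array([normalize_to_gapdh(b, g) for b, g in [(1.00, 1.02), (0.95, 1.00), (1.05, 0.98)]])
treated = np.array([normalize_to_gapdh(b, g) for b, g in [(0.60, 1.01), (0.55, 0.99), (0.62, 1.03)]])

# Two-tailed Student t test, as described in the statistical analysis
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```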
Results

Combined Formononetin and Everolimus Treatment Suppresses Tumor Masses Significantly.
To determine the effect of formononetin on in vivo tumor outgrowth, we used MDA-MB-468 breast cancer xenografts. Tumor growth was significantly inhibited in the formononetin 50 mg/kg alone group compared with the control group (p < 0.05): tumor volume was reduced from 472.7 to 253.6 mm³ on the 30th day of tumor growth. Moreover, in the presence of formononetin, everolimus resulted in a 2-fold reduction in tumor volume. These results suggest that formononetin can synergistically enhance the tumoricidal effect of everolimus in human breast cancer cells (Figure 1(a)).
Furthermore, formononetin was observed to be safe in the MDA-MB-468 xenograft model. None of the mice died with everolimus alone or with the combined formononetin-everolimus treatment. Mice receiving formononetin and everolimus had no apparent weight loss and had a healthy appetite (Figure 1(b)). These results suggest that formononetin inhibited tumor growth safely.
Formononetin Significantly Enhances the Tumoricidal Effect of Everolimus.
To determine the underlying mechanism, we analyzed the cytotoxicity of formononetin with or without everolimus on MDA-MB-468 cells. Cells were exposed to various concentrations of formononetin and everolimus for 12, 24, and 48 h, and cell viability was determined using an MTT assay. The results showed that formononetin or everolimus alone inhibited cell survival in a dose- and time-dependent manner. The half-maximal inhibitory concentrations of formononetin and everolimus alone at 48 h were 150 μmol/L and 100 nmol/L, respectively (Figures 2(a) and 2(b)). When formononetin was used with everolimus, cell survival decreased by 21.6%. These results suggest that formononetin alone can inhibit breast cancer cell growth and can synergistically enhance the tumoricidal effect of everolimus in MDA-MB-468 breast cancer cells (Figure 2(c)).
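The paper does not state how the half-maximal inhibitory concentrations were derived from the dose-response curves; one simple possibility, shown below with made-up values, is log-linear interpolation between the two doses that bracket 50% survival.

```python
import numpy as np

def interpolate_ic50(concentrations, survival_pct):
    """Estimate the IC50 by log-linear interpolation between the two doses
    bracketing 50% survival.

    A rough estimate; the paper does not state how its IC50 values were derived.
    """
    c = np.asarray(concentrations, dtype=float)
    s = np.asarray(survival_pct, dtype=float)
    for i in range(len(s) - 1):
        if s[i] >= 50.0 >= s[i + 1]:
            frac = (s[i] - 50.0) / (s[i] - s[i + 1])
            log_ic50 = np.log10(c[i]) + frac * (np.log10(c[i + 1]) - np.log10(c[i]))
            return 10 ** log_ic50
    return float("nan")

# Illustrative 48 h dose-response for formononetin (umol/L), not study data
print(round(interpolate_ic50([50, 100, 150, 200], [85, 65, 48, 30]), 1))
```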
Formononetin and Everolimus Synergistically Induce Apoptosis in MDA-MB-468 Cells.
To determine the effect of formononetin and everolimus treatment on cell apoptosis, we performed flow cytometry on MDA-MB-468 cells that had been exposed to either one of the drugs or both for 48 h. The apoptosis rate in cells treated with formononetin, everolimus, and formononetin + everolimus was 21.5%, 25.7%, and 53.8%, respectively (Figure 3(a)). Cell cycle analysis showed that hypodiploid peaks also appeared with the different treatments (Figure 3(b)).
These results indicate that formononetin and everolimus induce apoptosis.
Effect of Formononetin and Everolimus on the mTOR Pathway.
Tumor growth has been shown to be regulated through the mTOR pathway, and everolimus is an inhibitor of mTORC1. In this study, we investigated the mechanisms by which formononetin exerted its effect on tumor growth using qRT-PCR. PTEN mRNA and 4EBP-1 mRNA expression was higher in the formononetin group than in the control group, whereas p70S6K mRNA levels decreased (p < 0.05) (Figure 4(a)).
We further confirmed the efficacy of formononetin through Western blotting. Formononetin and everolimus also inhibited the expression of p-mTOR and p-P70S6K and increased that of PTEN and p-4EBP-1. However, formononetin, but not everolimus, inhibited the level of p-Akt (Figure 4(b)).
To evaluate the effect of formononetin on the Akt pathway, silencing of Akt was validated using Western blotting. We found that the level of mTOR was restored to that in the control group after application of formononetin. However, in the presence of Akt siRNA, everolimus had no significant effect on the expression of mTOR. The expressions of p-4EBP-1 and p-P70S6K were all reversed by formononetin with Akt siRNA (Figure 4(c)). These results demonstrate that the inhibition of the mTOR pathway by formononetin is associated with Akt.
Discussion
mTOR is often considered a downstream effector of numerous mutant oncogene pathways, such as the PI3K/Akt pathway and the Ras/Raf/Mek/Erk (MAPK) pathway, causing overactivation of mTOR and hence cancer [19]. Patients with overexpression of p-mTOR have a worse prognosis in early triple-negative breast cancer [20]. Everolimus has inhibitory activity only on the mTORC1 complex and no apparent effect on the mTORC2 complex [21], which indicates its limitations as an antitumor agent. In this study, we evaluated whether formononetin can inhibit tumor growth by suppressing the mTOR pathway and whether it can enhance the efficacy of everolimus. We found that formononetin alone could inhibit the growth of breast cancer cells; our results are in line with those of a previous study [14]. We further showed that formononetin improved the efficacy of everolimus in suppressing breast cancer cell growth both in vitro and in vivo (Figures 1 and 2). We also showed that a dosage of 2 mg/kg everolimus, converted from the clinical dosage, significantly decreased tumor masses in the MDA-MB-468 xenograft model. Notably, the inclusion of formononetin in everolimus treatment resulted in a 2-fold decrease in tumor volume compared with everolimus alone (Figure 1(a)). Moreover, the combination treatment was safe (Figure 1(b)). Our results strongly indicate that formononetin may be used to enhance the tumoricidal effect of everolimus.
Proliferation and apoptosis of tumor cells are key steps in the onset and development of cancer [22]. We found that formononetin combined with everolimus, as well as everolimus alone, inhibited cell growth in MDA-MB-468 cells (Figure 2). To determine whether the cell death induced by formononetin and everolimus was related to apoptosis, the apoptosis rate and cell cycle were evaluated. The results showed that the combination of formononetin and everolimus results in a 2-fold increase in apoptosis (Figure 3). Our results support the hypothesis that the synergistic tumor-killing effect of the formononetin and everolimus treatment is at least partially attributable to a greater efficacy in inducing apoptosis than with formononetin or everolimus alone.
Moreover, we found that formononetin and everolimus each inhibited the expression of p-mTOR and p-P70S6K and increased that of p-4EBP-1, whereas formononetin alone, but not everolimus, inhibited the level of p-Akt (Figures 4(a) and 4(b)). In the presence of Akt siRNA, the expression of p-4EBP-1 and p-P70S6K was reversed by formononetin. These results suggest that the inhibition of the mTOR pathway by formononetin is associated with Akt. Phosphorylation and activation of Akt are among the most critical roles of mTORC2 [23,24]. Therefore, we conclude that formononetin may enhance the antitumor effect of everolimus by additionally inhibiting mTORC2.
In summary, our study demonstrated that formononetin can improve the tumoricidal effect of everolimus. Formononetin can augment everolimus in inhibiting the mTOR pathway by effectively inhibiting mTORC2. The combination treatment of formononetin and everolimus may be an effective approach for breast cancer chemotherapy.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Disclosure
Qianmei Zhou and Weihong Zhang are co-first authors.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
v3-fos-license
|
2021-10-19T15:20:36.292Z
|
2021-09-24T00:00:00.000
|
239079286
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.preprints.org/manuscript/202109.0437/v1/download",
"pdf_hash": "be82fe2df68ae4e5634ef1f7f948624e3cf12dbf",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2938",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "1044bd200e02f3067b78d2c666a20bcb90cebd79",
"year": 2021
}
|
pes2o/s2orc
|
How Senescent Glia Drive And Underlie Alzheimer’s Disease: A Predictive Model
Alzheimer's disease (AD) predominantly occurs as a late-onset form (LOAD), involving neurodegeneration and cognitive decline with progressive memory loss. Over time, risk factors and aging promote the accumulation of well-known AD hallmarks: oxidative stress, amyloid-beta and tau protein pathology, and inflammation. Homeostatic glial functions regulate and suppress these AD hallmarks; however, other glial states involve increased pro-inflammatory cytokine release and further hallmark accumulation. Different stresses can additionally induce cellular senescence, an irreversible cell-cycle arrest resulting in decreased supportive functions and increased pro-inflammatory cytokine release. While these pathophysiological underpinnings all contribute to LOAD, they require temporal and mechanistic integration. This Perspective proposes that traditional AD hallmarks induce glial senescence in LOAD, where sufficient senescent glia exacerbate ongoing AD pathology and primarily drive LOAD into clinical cognitive decline. We first explore age-related increases in pro-inflammatory glial activity, and then discuss emerging evidence linking oxidative stress, neurons containing tau pathology, and amyloid-beta to microglia, oligodendrocyte progenitor, and astrocyte senescence. Our evidence-based model predicts that senescent astrocytes and oligodendrocyte progenitors together pressure microglia to phagocytose neurons containing tau pathology, where the resulting senescent microglia create neuritic plaques and induce paracrine senescence, driving the transition into, and the progression of, clinical dementia. This predictive model accounts for why medications used to treat LOAD fail, as previous treatments have not reduced the senescent glial burden. It is also coherent with the predominant hypotheses surrounding LOAD involving the amyloid cascade, tau, glia and inflammation, creates testable hypotheses about LOAD, and strengthens the rationale for testing senolytics as targeted treatments for LOAD arrest and reversal.
Introduction
Alzheimer's disease (AD) involves cognitive decline, most prominently manifested as progressive memory loss. Risk factors linked to AD include vascular pathology, declining metabolism, and most dramatically, aging [1][2][3][4] . These risk factors contribute especially towards the highly-prevalent (>90% of AD cases) sporadic or late-onset AD (LOAD), and to a lesser extent to the early-onset or inherited, familial AD (EOAD) 3 . In LOAD, synaptic loss is pathologically correlated with cognitive decline 1,2 ; however, little mechanistic insight is available to drive the development of effective therapeutics. Widely accepted explanations for neuronal loss and tracking LOAD progression have been proposed in the tau hypothesis and amyloid cascade 2 (Box 1).
Although exploring each classical AD paradigm has significantly improved progress towards understanding AD, no clinical trial targeting these pathologies has successfully halted AD 3,4 . A recent shift has been made towards understanding AD by focusing on next-generation sequencing, aging-related pathology, and glia, the non-neuronal brain cells. Particularly, many new genetic risk factors including TREM2, CR1, and APOE have exclusive roles in glia 3 . Some of these genes mediate increased phagocytic function in these glia, whereas other genes allot glia towards adopting pro-inflammatory states leading to chronic damage and cell death. The dominant LOAD theories are summarized in Box 1 below.
Box 1 Dominant LOAD theories
In the tau hypothesis, upstream enzymes phosphorylate multiple residues in tau protein to create hyperphosphorylated tau (hp-tau) 1,3 . Hp-tau starts as monomers, but undergoes oligomerization and forms filaments, neuropil threads, and neurofibrillary tangles (NFTs). Neuritic plaques are compact amyloid plaques that also contain both hp-tau and dystrophic neuronal elements 1,3,5 . Hp-tau aggregation is equivalently classified under "neurofibrillary pathology" or degeneration, where neurofibrillary degeneration has been clinically documented into a set of six Braak stages 5 . In stages I and II, initial neurofibrillary pathology spreads from the transentorhinal region into the entorhinal cortex and hippocampus. Stages III and IV involve increased lesions and pathologies in these areas, and novel spread into the temporal lobe and insular cortex. Stages V and VI affect the remaining superior temporal gyrus and neocortex. Finally, Braak staging of hp-tau pathology has been well shown to correlate with AD progression 1,5,6 .
In the amyloid cascade, accumulation of amyloid-beta (Aβ) protein, especially Aβ oligomers, acts through multiple pathways to cause neurodegeneration and synaptic loss 2,3 . Misfolded Aβ monomers undergo a slow 'seeding', nucleating phase into damaging oligomers in AD, which then further aggregate into insoluble Aβ fibrils and diffuse amyloid plaques without hp-tau aggregates 3 . Misfolded hp-tau and Aβ can also destabilize their respective native conformations and spread equivalently to prions 3 . Both protein pathologies also add to each other (Fig. 1). However, diffuse amyloid plaques and insoluble Aβ fibrils do not correlate well with AD progression 1,5 .
More theories of LOAD also exist, primarily involving impaired metabolism through a combination of increased insulin resistance, hypertension, obesity, and/or insufficient sleep 3,4 . Vascular dementia, consisting of progressive blood supply blockades, cerebral amyloid angiopathy and Aβ deposition into cerebral blood vessels, likely leads to and/or correlates with LOAD incidence 4 . Finally, all these hypotheses involve a neuroimmune component; inflammation caused by pro-inflammatory cytokine secretion increases during LOAD, as a result of accumulating protein aggregation and declining cell metabolism 2-4 .

Furthermore, glia also decrease neuronal support by undergoing senescence 7,8 . Cellular senescence, an irreversible cell arrest caused by environmental stresses such as DNA damage and oxidative stress, enables cells to gain apoptotic resistance. Although not universal, senescent cell markers include increased p16 INK4A , senescence-associated beta-galactosidase (SA-ß-gal), lipofuscin accumulation and transient p21 upregulation 7,8 . Senescent cells also adopt a senescence-associated secretory phenotype (SASP), which generates a pro-inflammatory profile initiated by cGAS-cGAMP-STING pathway signaling and subsequent secretion of cytokines, matrix metalloproteases (MMPs), and other proteins that can vary across different cell types 9,10 . Senescence moreover impairs most cellular functions and roles outside of wound healing, and reliably correlates with aging and accelerated mortality 10,11 .
Altogether, LOAD is posited here as an aging disease of glial senescence. LOAD pathology is first explained through an aging lens, followed by evidence of glial dysfunction and decline in LOAD. Novel mechanisms involving AD neurodegeneration and aged glia are then discussed in an evidence-based framework that integrates Aβ aggregation, hp-tau, and synaptic-loss pathology.
Part I: Late-onset AD depends on aging and glial dysfunction

Aging and oxidative stress are understated risk factors in dominant LOAD paradigms

In all cells, oxygen intake and reactions generate free radicals that accumulate as reactive oxygen and nitrogen species (RONS), such as nitric oxide and superoxide 11 . When RONS accumulation overwhelms antioxidant defenses, RONS irreversibly alter and damage nucleotides, lipids, and proteins in cells. This phenomenon, oxidative stress, progresses with age due to accumulated double-stranded DNA breaks, mitochondrial dysfunction, and declining efficiency of metabolic processes 7,8,10,11 . Oxidative stress propagates pathways that elevate Aβ production, tau hyperphosphorylation, and release of pro-inflammatory cytokines including IL-1β and TNF-α 2,9-13 . Over time, increased pro-inflammatory cytokine production becomes a chronic systemic "inflammation" that increases with aging and in turn exacerbates cell damage and death. Many LOAD risk factors (such as vascular pathology, sleep impairment, and chronic stress) also result in increased RONS levels and chronic, systemic inflammation [2][3][4] .
Oxidative stress damages all cell types, including neurons. Particularly, RONS inhibit phosphatidylserine (PS) lipid translocases, increase intracellular calcium, deplete ATP, and activate phosphatidylserine scramblases; this "flips" PS from the inner to the outer cell membrane, where outer PS acts as an apoptotic or "eat-me" signal [14][15][16][17] . RONS also create "oxysterols" by oxidizing membrane cholesterol. Oxysterols have been hypothesized to drive LOAD [18][19][20][21] , and one such mechanism is elaborated below in Part II. Overall, these pathological factors or hallmarks engage each other in a shared environment, encourage further release and accumulation of each hallmark, and exacerbate cellular damage in positive feedback loops 2,3,7-9,11-14,19-21 (Fig. 1). Finally, patients must be at least 65 years old to be diagnosed with LOAD 1,3 . This represents significant time for accruing aging and exposure to oxidative stress, reflected in increased systemic inflammation, oxysterols, and protein aggregation resulting from RONS interactions 3,18 . What, then, allows for healthy cognitive aging despite elevation of the traditional AD hallmarks of Aβ aggregation, neurofibrillary tau pathology, chronic systemic inflammation, and oxidative stress?
CNS glia lose homeostatic capacity in aging
In the healthy CNS, multiple types of glia exist in multiple states and serve a dizzying array of functions, a subset of which can be labeled as "homeostatic". Homeostatic glial activities sequester and break down cellular products to preserve optimal nervous system function 22 ; this includes products comprising the traditional AD hallmarks (Fig. 1). This parenchymal maintenance is achieved by paracrine signaling and endocytosis. Homeostatic glial functions involve trophic factors and secretion of anti-inflammatory cytokines including IL-10 and TGF-ß, which minimize cell stress and pro-inflammatory glial reactions, at least acutely [23][24][25][26][27] . These molecules mediate communication in bi-directional and overlapping loops of glia-neuron and glia-glia interactions, through which glia supporting one another facilitate neuronal activity and plasticity. Microglia, particularly, require oligodendrocyte progenitor cell (OPC)-mediated secretions via the TGF-β2-TGFBR2-CX3CR1 signaling axis to sustain homeostatic functions 24,25 . Astrocytes also release trophic factors and cytokines including IL-3 to reduce pro-inflammatory microglial states 28 , and both glia endocytose neuronal, synaptic, and extracellular elements 14,20,[29][30][31][32][33][34][35] . Some of these elements include the traditional AD hallmarks in Fig. 1 3,22 ; for example, microglia phagocytose extracellular Aβ and tau through receptors such as TREM2 or IGF1R 3 .
However, expression of receptors that facilitate clearing of Aβ and tau can also push homeostatic glia into "pro-inflammatory" states. Examples include the RAGE and LRP1 receptors, both of which activate the NLRP3 inflammasome 3,36-38 . While pro-inflammatory glia provide effective protection in acute illness and infection, their continuous and exaggerated reactions during aging contribute to cellular stress, death, and (indirectly) accumulation of traditional AD hallmarks 2,3,7 . The resulting environment gradually lowers homeostatic support for glia and may partially explain a "priming" effect, wherein aged glia expand and exaggerate pro-inflammatory responses towards smaller amounts of pro-inflammatory cytokines or Aβ 9,39,40 . In parallel, aging and exposure to RONS reduce homeostatic signaling, impairing the TGF-βII-Smad pathway in aged microglia 33,34 . Aged glia additionally have increasingly impaired metabolism and endolysosomal systems 8,10,39,40 , causing a decline in phagocytic capacity and in clearance of Aβ as well as accumulating hp-tau. Thus, while homeostatic glial functions protect the brain for most of adult life, their ability to combat both increasing pro-inflammatory glial responses and traditional AD hallmarks wanes over aging and environmental risk factor exposure (Fig. 2).
Glial senescence is associated with LOAD progression

Multiple lines of evidence indicate that APP overexpression and subsequent Aβ hyperaccumulation are associated with exacerbated OPC senescence (Fig. 3). In APP/PS1 mouse models representing EOAD, OPCs surrounding amyloid plaques display senescence per dystrophic shrinkage in cell volume and immunopositive p21, p16 INK4A and SA-ß-gal staining 41,42 . OPC numbers also decrease in APP/PS1 mice and correlate with decreased myelin levels, likely secondary to reduced oligodendrocyte differentiation and myelin generation 42,43 . In LOAD patients, global Aβ accumulation and increased numbers of p21+, senescent OPCs scale proportionally with progressive Braak staging 41 . This suggests that OPC senescence begins in early Braak stages and precedes clinical LOAD presentation, then continues to progress in parallel with LOAD (Fig. 2).
Astrocytes also undergo increased senescence in LOAD. Particularly, oxidative stress, Aβ, and tau have all been shown to induce human astrocyte senescence; senescent astrocytes exhibit a SASP profile, releasing pro-inflammatory cytokines and MMPs [44][45][46] . Senescent astrocytes also down-regulate the potassium and glutamate transporters Kir4.1, EAAT1, and EAAT2. This results in increased extracellular glutamate, contributing to neuronal excitotoxicity and death 47 , alongside NMDA-receptor signaling and pro-inflammatory behaviour in microglia 48 . Senescent astrocytes positive for p16 INK4A were additionally found to increase in human brains over the course of aging and to be over-represented in frontal cortices from LOAD patients versus age-matched controls 45 .
Increased OPC and astrocyte senescence likely exacerbates traditional AD hallmark burden (Fig. 3) via multiple effects: these cells can directly promote accumulation of all LOAD hallmarks (Fig. 1), weaken neuronal support, and increase pro-inflammatory states in other glia 7,8 . Astrocyte and OPC senescence also induce paracrine glial senescence, at least indirectly 7,49 . However, Aβ and/or oxidative stress can trigger OPC and astrocyte senescence without tau 41,42,44 . Thus, the senescence of these glia alone does not correspond well with neurofibrillary degeneration in Braak staging, and inadequately explains the preclinical-to-clinical LOAD transition. Another crucial role of OPC and astrocyte senescence during preclinical LOAD may be their impact on microglia: when senescent OPCs and astrocytes withdraw homeostatic support and exacerbate AD hallmark accumulation, they overwhelmingly exhaust and "prime" homeostatic microglia for increased pro-inflammatory reactions and phagocytosis of neurons 7,8,[23][24][25]28,39 . Moreover, the preclinical-to-clinical LOAD transition putatively corresponds to increased microglial senescence and spread of neurofibrillary pathology, particularly in NFT and neuritic plaque formation (Fig. 2, Fig. 3).
Part II: Accumulating microglial senescence drives clinical progression of LOAD

Microglial senescence and priming in LOAD is evident in single cell transcriptomics

Transcriptomic studies of microglia in LOAD patients provide additional evidence for APOE priming and senescence. The largest LOAD microglial sample size for single cell RNA sequencing was examined by Olah et al., where microglial clusters 7, 8, and 4 particularly upregulated APOE 50 . Clusters 4 and 8 also displayed increased ferritin in FTH1 and FTL, and cluster 4 uniquely upregulated multiple IRF transcription factors. This indicates that microglial cluster 4 possesses an upregulated type I interferon (IFN) response to environmental pathology. Type I IFN signaling has been implicated in pro-inflammatory microglial responses to Aβ pathology across multiple mouse models [51][52][53] , and is associated with plaque-associated microglia in LOAD patient samples 52 . In the absence of an acute viral infection, it is notable that the cGAS-STING pathway in senescence also upregulates type I IFN responses 10,54 . Increased type I IFN responses, induced by senescence, may thus further exacerbate pro-inflammatory microglial reactions to Aβ pathology.
Further complementing these results, a microglial population observed in LOAD patients was separately validated both transcriptionally and by immunostaining by Nguyen et al. 55 . This population was found to increase both FTH and FTL expression, to associate with neurofibrillary pathology and particularly neuritic plaques, and to display signs of dystrophy indicating senescence. Although dystrophy is explained later, when discussing senescent microglia in LOAD patients, this "dystrophic" microglial population was speculated to be the end result of microglial subsets responding to Aβ accumulation. These Aβ-responding microglia were also found to upregulate APOE 55 . As described later, this APOE upregulation likely confers a "primed" state before microglia become dystrophic (Fig. 3).
Microglial senescence is driven by phagoptosis of hp-tau-containing neurons

When microglia are primed to phagocytose neurons in LOAD, they likely enter a vicious pathological cycle: multiple lines of evidence suggest that phagocytosis of hp-tau renders microglia senescent. In a P301S tauopathy mouse model, which approximately models AD pathology through human tau overexpression and subsequent tau hyperphosphorylation, senescent microglia were identified by positive p16 INK4A and SA-ß-gal staining 56 . Both extracellular and intracellular hp-tau stress human neurons in vitro, induce outer PS exposure, and initiate premature neuron phagocytosis or "phagoptosis" by human microglia 57,58 (Box 2). When co-cultured with P301S neurons growing intracellular hp-tau filaments, human microglia phagoptosed the viable neurons and became senescent 70 . These senescent microglia exhibited increased NF-kß activation, MMP-3 release, and positive SA-ß-gal staining; additionally, they displayed hypo-phagocytic capacity, or poor phagoptosis of P301S neurons containing hp-tau aggregates, and aberrant release of insoluble hp-tau aggregates into the local environment 70 . In both humans and mice, plaque-associated microglia release soluble tau that can seed insoluble hp-tau aggregates [71][72][73] . Thus, tau seeding and release by senescent microglia can spread in vivo to induce a pathological cycle of neuronal phagoptosis and microglial senescence.
Box 2: Microglial priming, ramified morphology, phagoptosis, and neurodegenerative synaptic loss in LOAD

While microglia in homeostatic states often comprise a ramified morphology, human ramified microglia do not always perform homeostatic functions. Contrarily, pro-inflammatory lipopolysaccharide has been shown to induce a complex, ramified morphology in human microglia 59 . Pro-inflammatory microglial states thus are not always equivalent to "activated" or hypertrophic, amoeboid-shaped microglia that cluster around neuritic plaques in LOAD patients [60][61][62] ; more accurately, amoeboid or hypertrophic morphologies can indicate that human microglia have phagocytosed extracellular materials 61 . This hypertrophic appearance happens regardless of the microglia's inflammatory role, as discussed later.
Contrarily, in homeostatic function, microglia are shown to mainly utilize TREM2 in phagocytosing synapses and whole neurons for CNS development, plasticity, and maintenance 17,29 . While homeostatic microglia can phagocytose dead neurons to optimize the surrounding environment, they can also phagocytose neurons that are stressed but not yet apoptotic. This process is "phagoptosis"; a premature, non-apoptotic death through microglial phagocytosis 15,16 . Mechanistically, oxidative stress exposes PS at the neuronal outer cell membrane to act as a ligand. Simultaneously, microglia reacting to pro-inflammatory stimuli produce sialidase that removes or "desialylates" neuronal sialic acids [14][15][16] . As microglia enter pro-inflammatory states more frequently during aging 39 , and sialic acids protect neurons from being phagocytosed [14][15][16] , aging renders neurons more susceptible to phagoptosis. Thus, although microglia in pro-inflammatory states may not directly partake in phagoptosis while secreting pro-inflammatory cytokines, their actions "prime" microglia in homeostatic states to phagoptose neurons.
Once neuronal PS is sufficiently exposed, extracellular opsonin proteins coat outer PS and induce phagoptotic death by microglia. This process occurs via multiple ligand-receptor pairs that converge on the DAP12/TYROBP/KARAP pathway, stimulating downstream signaling and subsequent phagocytosis through the TAM receptor tyrosine kinases Mer and Axl [15][16] . Specific pathways leading into phagoptosis and DAP12 signaling include TREM2 binding directly either to PS or to extracellular APOE bound to PS 16,[30][31][32]63 . Complement C1q is another opsonin that coats PS, allowing for subsequent C3b and iC3b complement binding and phagoptosis through CR1/CR3/CR4 complement receptor activation 64 . These complement receptors also conduct downstream DAP12 signaling in microglia 15,16 . A separate pathway alternatively exists in which the opsonin calreticulin binds to C1q, resulting in neurons being phagoptosed through microglial LRP1 and the induction of further microglial pro-inflammatory responses 3,16 .
Of relevance, some homeostatic microglial states in Aβ-overexpressing mice phagoptose neurons and correspondingly exhibit a neurodegeneration/disease-associated microglial phenotype (DAM or MGnD) [65][66][67] . The transcriptome includes an initial DAM1 stage, involving APOE upregulation independent of phagoptosis activation, and a subsequent DAM2 stage, which necessitates TREM2-dependent activation and further upregulates APOE and ITGAX, encoding the CR4 subunit CD11c 66 . Further DAM2 subsets have also been found, with a pro-inflammatory subset up-regulating the Kv1.3 channel protein that mediates increased RONS and pro-inflammatory cytokine release 67 . Dark microglia represent another relevant state, which has been observed in mouse models of chronic stress, aging and Aβ pathology, as well as in patients diagnosed with schizophrenia 68,69 . Dark microglia display markers of oxidative stress and lipofuscin accumulation related to senescence 2,10 . They likely participate in phagoptosis; dark microglia have increased phagocytic inclusions and commonly enwrap their processes around shrinking but viable neuronal elements 68 .
Particularly, microglial states expressing TREM2, performing homeostatic functions, and upregulating APOE likely bind PS-exposed neurons with hp-tau aggregates. This triggers neuronal and hp-tau phagoptosis through a combination of TREM2, complement receptors, and opsonins 16 . This TREM2-involved phagoptosis likely induces simultaneous senescence and a DAM2(-like) microglial transcriptome [65][66][67] , which, when correlated to human microglial data, may represent the transcriptomic cluster 8 identified by Olah et al. 50 . However, it is unknown whether the TREM2-dependent DAM2 program itself initiates senescence, or whether phagoptosing neurons with hp-tau simultaneously initiates separate DAM2 and senescence programs. Evidence regarding tau inducing senescence has been presented 70 ; in contrast, phagoptosis of stressed neurons without hp-tau aggregates has not yet been shown to induce microglial senescence. Therefore, it can only be currently assumed that a DAM2(-like) microglial state becomes senescent. Finally, extracellular Aβ aggregates and tau can both induce phagoptosis, although significantly increased PS exposure requires much more time and occurs relatively late in APP/PS1 mice, at 9.5 months of age 81 . Thus, while Aβ aggregation can also lead to phagoptosis and potentially senescent microglia, microglial phagoptosis, DAM2 induction, and senescence caused by hp-tau uptake are much more specific to neurofibrillary pathology and LOAD progression 82 .
The process of hp-tau-induced senescence is still unclear, but hp-tau may inhibit apoptosis acutely as a trade-off in initiating senescence 83 . NFT-containing neurons in both P301L mice and frontotemporal dementia patients exhibited increased senescence transcriptome scores, revealed by up-regulated senescence markers such as CDKN2A, TNF, and IL-1 7,84 . This supports the idea that stressed neurons may turn senescent and expose PS prior to rendering microglia senescent. It also suggests a universal mechanism by which senescence can be triggered in neurons, astrocytes, and microglia by neurofibrillary hp-tau pathology (Fig. 3).
Clarifying microglial senescence and dystrophy in LOAD patients
Although microglial senescence remains to be rigorously confirmed with multiple markers in clinical LOAD, patients were found to present dystrophic microglia indicating senescence. These dystrophic microglia display cytoplasmic fragmentation, swollen bead process extensions, and associate with and precede NFT pathology following Braak staging 6,62,85,86 . Dystrophic microglia increase TREM2, APOE, and ferritin expression, resulting in increased iron intake [86][87][88] ; senescent cells selectively uptake and accumulate iron 89,90 . These microglia not only became more abundant with aging, but also significantly increased in number among human hippocampi from differing types of dementias versus age-matched controls 91 .
Dystrophic microglia were found to associate with neuritic plaques and neurofibrillary tangles in LOAD patients 55 , with morphological differences depending on their local environment. Dystrophic microglia displaying hypertrophic somas were observed away from neuritic plaques, whereas microglia showing dystrophy and an extremely hypertrophic, "amoeboid" appearance were found to associate with neuritic plaques 62 . This neuritic plaque association was particularly observed in the prefrontal cortex, leading to the proposal that hypertrophic and dystrophic microglia form around the preclinical-to-clinical LOAD transition 62 . This complements a previous study by Sheng, Mrak, and Griffin, showing that microglia in LOAD patients initially transform from a "primed" or ramified appearance to a hypertrophic or amoeboid, phagocytic morphology near neuritic plaques 61 .
Finally, neuritic plaque formation and accumulation correlate well with Braak staging and overall clinical LOAD progression 1,60,62 . Experimentally, ablating microglia early and minimizing downstream microglial senescence also reduced neuritic plaque deposition in Aβ-overexpressing 5xFAD mice 92 . Altogether, this indicates that dystrophic microglia are not solely due to aging, but are specifically involved in LOAD progression. Notably, dystrophic, hypertrophic microglia that associate with neuritic plaques can be named plaque-associated microglia. However, this term of "plaque-associated" microglia requires more precise definition: in LOAD patients, "plaque-associated" microglia can only be classified in association with neuritic plaques, as human microglia in LOAD patients do not significantly associate with diffuse plaques 60 .
Senescent microglia putatively transform morphology and drive neuritic plaque formation in LOAD

From here, we propose that primed microglia in LOAD patients initially comprise a ramified, or hyper-ramified, morphology 59 . After phagoptosing neurons with hp-tau and becoming newly senescent, microglia may adopt a ramified or slightly hypertrophic morphology. Recalling that senescent microglia display hypophagocytic capacity for hp-tau neurons and have compromised endolysosomal systems 70 , their endocytosed aggregates are likely not fully digested. The impaired capacity to further phagocytose hp-tau aggregates putatively leads to further NFT creation, as both microglial degeneration in humans and microglial senescence in mice precede NFT formation 6 . Here, senescent microglia are accordingly proposed to actively phagocytose, yet fail to degrade, further Aβ and hp-tau aggregates 10,31,70 .
We then predict that senescent microglia having performed phagoptosis transform into a dystrophic morphology, as both ramified and hypertrophic somas were found in dystrophic microglia with beaded processes 6,55 . If not too exhausted, senescent microglia likely attempt to digest Aβ and potential hp-tau aggregates. In terms of preference, Aβ oligomers have been shown to bind with high affinity to TREM2 93 , indicating that the TREM2-Aβ interaction may become preferential in senescent microglia after phagoptosis. APOE also binds to Aβ oligomers 74,80 , presenting an alternative TREM2-APOE-Aβ endocytic pathway for senescent microglia. Overall, this would cause senescent microglia to become bloated with these aggregates, displaying an amoeboid or extremely hypertrophic morphology corresponding to the microglial "activation" seen in LOAD patients 55,[60][61][62] (Fig. 3).
While non-fibrillar Aβ secretion may be useful in forming protective diffuse plaques, senescent microglia are predicted to fail in degrading these endocytosed products; instead, they putatively secrete Aβ-hp-tau aggregates with dystrophic neurites from phagoptosed neurons 1,36,[71][72][73] . This is specific to senescent microglia and critically accelerates the buildup and spread of both Aβ and hp-tau 36,72 ; these Aβ-hp-tau aggregates with dystrophic neurites are then predicted to aggregate into neuritic plaques 60,61 . Particularly, these secreted hp-tau and Aβ seeded aggregates are likely taken up by nearby microglia in an attempt to limit damage to a localized region. However, this would render nearby microglia senescent too, creating neurofibrillary pathology that further causes a mobile, transcriptomic response in nearby microglia 55 . Hp-tau spread, paracrine senescence, and further aggregate seeding through these attracted microglia are thus posited to create the "plaque-associated" microglial clustering seen in LOAD patients. Furthermore, this overwhelming concentration of Aβ and hp-tau pathology in neuritic plaques also draws in, creates, and causes nearby "clustering" of more senescent astrocytes and OPCs 41,44 (Fig. 2).
Senescent microglia are thus putatively accountable for neuritic plaque formation and for critical Aβ and hp-tau pathology beyond threshold tolerance in localized brain regions corresponding to Braak staging and LOAD progression 1,5,60,61 . Continued microglial exhaustion and attempted aggregate digestion may then result in the dystrophic morphologies associated with LOAD, with subsequent negligence to protect neurons leading to further NFT creation 5,6,55,85 . Furthermore, dystrophic and potentially hypertrophic senescent microglia further respond to overwhelming Aβ aggregation with an increased type I IFN response [51][52][53][54] , leading to pro-inflammatory cytokine secretion and Kv1.3 expression exacerbating local inflammation [94][95][96] . These exhausted, senescent microglia may also correspond to the transcriptomic cluster 4 per Olah et al. 50 , featuring increased FTH1 and FTL expression implicating iron accumulation and senescence 55,88-91 , APOE upregulation, and IRF transcription factor expression corresponding to an increased type I IFN response [51][52][53][54] . In mouse models with Aβ pathology, this microglial stage may correspond to a pro-inflammatory, TREM2-dependent DAM2 state in response to Aβ pathology 67,97 . Notably, TREM2 in hypertrophic and/or dystrophic states may also be downregulated due to pro-inflammatory exposure 98 .
In LOAD, microglial clustering around neuritic plaques and symptom presentation emerge together around Braak stages III-IV 1,5 . Thus, it is predicted that senescent microglia actively accelerate hp-tau aggregation and form the neuritic plaques that tip preclinical LOAD into clinical progression. Furthermore, the exacerbated, localized concentration and accumulation of Aβ and neurofibrillary pathology determines clinical LOAD progression, gradually accumulating more synaptic loss, pro-inflammatory glial responses, and overall glial and paracrine senescence (Fig. 2).
Discussion
While the framework has not yet been empirically tested, it yields further understanding regarding LOAD and proposes many testable components 99 . This framework is compatible with current field paradigms regarding the amyloid cascade, tau, and synaptic loss hypotheses, and likely numerous other theories that have helped provide better knowledge of LOAD. It can also be expanded upon, as this perspective has not covered endothelial cell senescence and blood-brain barrier leakage contributions 100 , other LOAD risk factors, declining metabolism in aged glia, and much more.
As an additional clarification, microglia also proliferate in response to accumulated Aβ pathology 35 . Over the course of aging, this further accelerates homeostatic microglial transition towards senescence through telomere shortening 7,8,101 . This may independently create senescent microglia that exacerbate Aβ pathology and chronic local inflammation, indirectly accelerating the conversion of senescent microglia through phagoptosing Aβ-stressed neurons containing hp-tau; alternatively, these senescent microglia may also accelerate tau seeding and spreading. This framework may further apply to EOAD, with one main exception: if glial and particularly microglial senescence are responsible for clinical AD progression, mutations responsible for EOAD may involve quicker microglial senescence through exhaustion. Particularly, with the TREM2 R47H mutation, microglia may have relatively decreased metabolic capacity to clear out quicker Aβ and consequent hp-tau buildup in EOAD 102,103 . Conjecturally, mutations could make it harder for other immune cells to clear out senescent microglial accumulation over aging 7 .
Dark microglia may also be found to be senescent, provided that dark microglia likely phagoptose neurons 68 . Additionally, senescent microglia were reported absent from the inferior parietal cortex of LOAD patients in Braak stages I-VI 41 . Particularly, plaque-associated microglia were immunonegative for p21, and were thus concluded to be non-senescent. As this contradicts the proposed framework, an alternative explanation is given. Foremost, although p21 has been used as a senescence marker, p21 was shown to be transiently expressed after senescence induction 104 ; furthermore, senescent human microglia have not yet been characterized for their long-term expression of p21. It is thus recommended to confirm or reject the current framework by assessing microglial senescence in LOAD patients using multiple methods, such as lipofuscin accumulation, iron and/or SA-β-gal staining, and the more constitutively upregulated p16 7,10,101,104 . Moreover, the particular morphological and molecular signatures of senescent glia in LOAD should be further confirmed.
More broadly, tauopathies including chronic traumatic encephalopathy may represent a favourable target regarding treating senescent microglia and administering therapeutics. LOAD pathology is also especially co-morbid with other dementias such as vascular dementia and Parkinson's disease with dementia 1 , so solutions that target senescent glia in LOAD will likely help in managing or treating symptoms in these other diseases. If the current framework is correct, increasing glial senescence explains why previous therapies have not worked; these medicines have not targeted senescent glia, and thus have not slowed consequent AD hallmark accumulation. A specific solution towards treating LOAD is encouraged here through senolytics, or drugs that selectively kill senescent cells (Fig. 4). As detailed, successful senolytic administration will likely treat the accelerated, traditional AD hallmark burden caused by senescent glia; this should allow renewal and recovery of the remaining brain parenchyma, and allow subsequent plasticity to extend or sustain an improved quality of life. While senolytics can or should be paired with other AD treatments for clinical trials, several senolytic experiments have demonstrated usefulness in preventing cognitive impairment in tauopathy and APP/PS1 mouse models 41,56,84 . The senolytics dasatinib and quercetin have also successfully eliminated senescent cells in a clinical trial for patients with diabetic kidney disease, offering hope for treating LOAD as an aging disease of glial senescence.

Figure caption (LOAD pathology framework): Risk factors for late-onset Alzheimer's disease (LOAD) end up adding to at least one of six main pathologies indicated in each corner, where each main pathology eventually adds to and increases the burden of other main LOAD pathology. These pathologies converge to ultimately drive improper neuronal support and death, resulting in synaptic loss clinically corresponding to cognitive decline. Centrally, overwhelming levels of these six LOAD pathologies also lead to threshold glial senescence. Glial senescence likely further enhances all six pathologies, promoting a disease environment favouring and irreversibly inducing synaptic loss and neuronal death. A LOAD framework is proposed from healthy aging to late-stage AD, presenting the testable claim that glial senescence accumulation directly corresponds to clinical LOAD progression. (1) Oxidative stress, amyloid-beta (Aβ), neurofibrillary pathology in hyper-phosphorylated tau (hp-tau), and chronic inflammation constantly accrue throughout healthy aging as by-products of central nervous system function and metabolism. (2) Main LOAD pathology levels are enhanced by inflammatory glial states, and are sufficiently cleared by glia performing homeostatic roles. However, homeostatic glial functions decline throughout aging and decrease proficiency in containing LOAD pathology. (3) In preclinical AD, increasing proportions of aged glia interact with LOAD pathology to become senescent. This includes oligodendrocyte progenitor cells (OPCs) and astrocytes. Microglia performing homeostatic roles become senescent after engulfing neurons containing hp-tau, and then become incompetent in breaking down further phagocytosed hp-tau and Aβ aggregates. (4) Senescent microglia instead secrete these failed, phagocytosed products, seeding further aggregates and creating neuritic amyloid plaques corresponding to Braak staging and clinical LOAD progression. Finally, this combination of senescent glia, reduced homeostatic glia support, and non-senescent, inflammatory glial states drives synaptic loss and cognitive decline in LOAD. Colour shading denotes healthy cognition (blue) transitioning into more severe degrees of LOAD progression (purple).

Figure caption (glial senescence mechanisms): (Left) OPCs and astrocytes both react with amyloid-beta (Aβ) to become senescent; oxidative stress at least indirectly accelerates this senescence induction. Senescent OPCs and astrocytes enhance increasing levels of main LOAD pathologies, ongoing pro-inflammatory microglial states, oxidative stress, and Aβ accumulation that eventually favours conditions that induce overwhelming microglial senescence. (Right) In LOAD, Aβ pathology contributes to oxidative stress that oxidizes cholesterol to form oxysterols. Homeostatic microglia likely utilize liver-X-receptor (LXR) to bind with oxysterols, resulting in upregulated apolipoprotein E (APOE) expression. This leads to both increased Aβ phagocytosis and degradation. Simultaneously, oxidative stress and tau aggregation induce neurons to translocate and expose outer phosphatidylserine (PS). APOE-upregulated microglia in homeostatic states bind PS using various receptors, and perform "phagoptosis" to phagocytose and prematurely kill PS-exposed neurons. Engulfing hyperphosphorylated tau turns microglia senescent and hypofunctional, rendering them unable to properly phagocytose LOAD pathology. Senescent microglia initially secrete failed, phagocytosed products that induce paracrine microglial senescence and form neuritic plaques; as fewer homeostatic microglia are available to phagocytose Aβ, Aβ aggregation continuously accumulates and likely initiates a type I interferon response in senescent microglia. This Aβ-induced response causes APOE and KCNA3 upregulation in senescent microglia, where the resulting Kv1.3 protein causes increased pro-inflammatory cytokine release. Senescent microglia then undergo dystrophy, after adopting a pre-senescent (hyper)-ramified morphology. When engulfing and attempting to contain Aβ and hp-tau aggregates, senescent, dystrophic microglia likely take on a hypertrophic appearance. These "amoeboid" appearing microglia secrete failed aggregates to form neuritic plaques, and negligently permit further neurofibrillary tangle formation in nearby neurons.

Fig. 4: Killing Senescent Glia To Treat Alzheimer's. Senolytics are drugs that selectively kill senescent cells, and have been shown to reduce cognitive decline in disease mouse models. If senescent glia cause irreversible synaptic loss and accumulation of main LOAD pathologies, senolytics should likely treat AD. Remaining glial populations can repopulate and renew homeostatic function to effectively clear out and reduce main LOAD pathologies, without senescent glia detrimentally exerting a gradient of LOAD pathology accumulation. Resulting plasticity should hopefully return the aging brain back to healthy cognition or a preclinical LOAD state, or at least halt further LOAD progression. Future treatments can also be utilized in combination with senolytics. This would allow better restoring of glial homeostatic functions, and minimize damage accrued by pro-inflammatory glial states and other LOAD pathologies. It is unknown if neurodegeneration or disease-associated microglial phenotypes are targeted by senolytics.
|
v3-fos-license
|
2023-05-10T15:06:26.034Z
|
2023-05-01T00:00:00.000
|
258574818
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "289aba03a0886a8ed26b3ab7476268fccfbed280",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2939",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"sha1": "8f38e6233cc21e857581ef940733b4e2c397279c",
"year": 2023
}
|
pes2o/s2orc
|
Experimental Investigation on the Influence of Crack Width of Asphalt Concrete on the Repair Effect of Microbially Induced Calcite Precipitation
Cracking is one of the factors that degrade the performance of asphalt pavement, and traditional repair methods can have adverse effects on the environment. In this paper, an environmentally friendly method for asphalt concrete crack repair using microbially induced calcite precipitation (MICP) was investigated for asphalt concrete cracks of different widths (0.5 mm, 1.0 mm, 1.5 mm, and 3 mm), and the effectiveness of repair was evaluated using nondestructive and destructive experiments. The change in ultrasonic pulse velocity was used to evaluate the healing process, and it was found that the samples with an initial crack width of 0.5 mm showed the most significant increase in wave velocity, 18.06%, after repair. The results also showed that the uniaxial compressive strength and indirect tensile strength of the MICP-repaired samples recovered by up to 47.02% and 34.68%, respectively. Static creep test results showed that MICP-repaired samples with smaller width cracks had greater resistance to permanent deformation. The results of uniaxial compressive strength tests on larger width (3 mm) cracks repaired by MICP combined with fibers showed that the strength of the samples was significantly increased by the addition of fibers. In addition, the SEM/EDS results showed that the MICP products were spherical calcite particles with a particle size distribution from 0 to 10 μm. This study shows that MICP has some potential for repairing cracks in asphalt concrete of different widths within the range investigated.
Introduction
Asphalt pavements are widely used in traffic engineering all over the world because of their advantages of a short construction period, a smooth surface, and comfortable driving [1]. There are two main types of asphalt pavement cracks: load and non-load cracks. Cracks are almost inevitable in asphalt pavement structures, and long-term traffic loading is an important cause of cracks [2]. Studies have shown that pavement deterioration and cracking can reduce the engineering performance of asphalt pavements, for example by lowering indirect tensile strength [3] and increasing permanent deformation [4].
There are generally two types of solutions for this pathology. One acts before the development of cracks and the other after the appearance of cracks, the former being a proactive way to reduce the possibility of cracks and the latter being a reactive repair after the cracks appear. The first method is to enhance the crack resistance of the asphalt mix, with the effect of preventing the extension of cracks or reducing the likelihood of cracks appearing. This approach includes enhancing the bonding effect of the binder (with the addition of asphalt mastic, rubber, etc. [5][6][7]) and enhancing the properties of the asphalt concrete aggregate (with the addition of polyester, lignin, glass, basalt fibers, etc. [8][9][10][11][12]).
When cracks appear in asphalt concrete, the traditional methods of treating cracks are heat-induced self-healing, crack sealing, or filling with organic asphalt-based materials [13,14]. Crack sealers are hardly cost-effective because of their poor treatment performance, high maintenance costs, and high environmental risks [15,16].
Recently, promising results have been achieved in eco-friendly fracture repair using biocolloid technology. Inorganic materials precipitated by microbially induced calcite precipitation (MICP) for the effective repair of cracks in cement concrete, brick, mortar, and rock have been extensively investigated due to their environmental friendliness and low-carbon nature [17][18][19]. The basic principle is to drop a biological agent into an already existing crack in the substrate material. After the biological agent enters the crack, the bacteria contained in the biological agent solution induce the formation of calcium carbonate precipitation by metabolizing the calcium ions and urea in the surrounding environment (cementing solution) [20]. However, the application of MICP in asphalt concrete crack filling has rarely been reported so far.
It has been shown that pure inorganic materials can be used to rapidly repair asphalt concrete cracks in municipal pavement manhole structures [16]. In addition, there have been attempts to combine MICP and asphalt mixtures. The application of microbial technology in asphalt mixtures was first used for modified cold asphalt emulsion mixes (CAEMs) [21]. By comparing the MICP method for CAEMs through two scenarios, it was found that the MICP method could significantly improve the mechanical properties of CAEMs. Among the studies on the moisture sensitivity of CAEMs, it was also found that the resistance to moisture damage was significantly improved after treatment with the MICP technique [22]. The above studies showed that MICP products have some compatibility with asphalt mixtures. Theoretically, it is possible to repair cracks in asphalt concrete using the MICP method.
In this paper, the effect of MICP on the repair effect of cracked asphalt concrete of different widths was investigated experimentally, and the repair performance of cracked asphalt concrete, including uniaxial compressive strength, indirect tensile strength, and permanent deformation, was evaluated. In addition, the physical morphology and chemical components were analyzed using scanning electron microscopy (SEM) and energy dispersive spectrometer (EDS) analysis.
Sample Preparation
The binder used for the asphalt concrete samples was cationic emulsified asphalt (52 mm needle penetration, 66 cm ductility, and 50.5 °C softening point) produced by Dongguan Dongjiao Asphalt Co. The aggregates for the asphalt concrete samples were crushed gravel, mineral powder, and coarse and medium-fine sand, with 10% asphalt content for each sample. Based on the Talbot classification method [23], the mass ratio of the five sizes of particles $P_i$ (i = 1, ..., 5) is as shown in Equation (1):

$$P_i = \left(\frac{d_i}{d_{\max}}\right)^{n} \times 100\% \tag{1}$$

where $d_i$ is the particle size of fraction i, $d_{\max}$ is the maximum size of the particle, and n is the Talbot index (n < 1). The total mass of each sample was 800 g. The grading of the aggregates at a Talbot index of 0.5 is shown in Table 1. To investigate the effect of pre-crack width on MICP repair, intact samples and samples with crack widths of 0.5 mm, 1.0 mm, and 1.5 mm were designed for direct MICP repair, as well as samples with crack widths of 3 mm for MICP repair with fibers. It has been shown that the maximum crack depth of asphalt concrete in the field is less than 50 mm [24]. In this paper, for small-width (≤1.5 mm) cracks, the crack depth was set to 30 mm. For cracks of large width (3 mm), the crack depth was set to 40 mm. The length of the precast cracks used was fixed at 40 mm.
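As a quick illustration of how a gradation table such as Table 1 follows from Equation (1), the short Python sketch below computes the mass of each size fraction for an 800 g sample at a Talbot index of 0.5. The sieve sizes used here are placeholders, not the actual gradation limits of this study.

```python
# Sketch: aggregate masses per size fraction from the Talbot gradation, Eq. (1).
# Sieve sizes below are illustrative placeholders, not the study's actual limits.

def talbot_passing(d: float, d_max: float, n: float) -> float:
    """Cumulative percentage passing sieve size d (Talbot formula)."""
    return (d / d_max) ** n * 100.0

def fraction_masses(sieves_mm, d_max, n, total_mass_g):
    """Mass retained in each size fraction between consecutive sieves (ascending sizes)."""
    passing = [talbot_passing(d, d_max, n) for d in sieves_mm]
    masses, prev = [], 0.0
    for p in passing:
        masses.append((p - prev) / 100.0 * total_mass_g)
        prev = p
    return masses

if __name__ == "__main__":
    sieves = [1.18, 2.36, 4.75, 9.5, 13.2]   # hypothetical sieve sizes (mm)
    masses = fraction_masses(sieves, d_max=13.2, n=0.5, total_mass_g=800.0)
    for d, m in zip(sieves, masses):
        print(f"fraction up to {d:>5.2f} mm: {m:6.1f} g")
    print(f"total: {sum(masses):.1f} g")  # recovers the 800 g batch mass
```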
NCHRP Report No. 425 [25] was used to prepare the sample. First, the aggregates were weighed with an electronic scale (according to Table 1) and stirred clockwise and counterclockwise for 30 s, and then emulsified asphalt was added to the aggregates and stirred with the same mixing method. The mixture was then poured into a steel mold with 70.7 mm sides and the samples were compacted with a hammer. Next, a PVC hard plastic sheet was embedded in the top center of each sample and the PVC sheet was pulled out with tweezers after 1 h. After 48 h, the samples were de-molded and dried in a programmable temperature and humidity tester (TH-80CH) at 60 °C for 24 h. In this study, 15 samples of each crack width (0.5 mm, 1.0 mm, 1.5 mm, and 3 mm) were prepared, for a total of 60 samples with cracks. In addition, 12 complete samples without cracks were used as control.
Bacterial Culture and Cementing Solution
The bacteria used in this experiment were Bacillus pasteurii because of its significant advantages, such as easy extraction and nonbiological pathogenicity. The composition of the liquid nutrient medium used in the experiment was urea (20 g/L), soy protein (5 g/L), casein (15 g/L), and sodium chloride (5 g/L), and the pH of the medium was adjusted to 7.3 with NaOH. After the nutrient medium was prepared, the bacteria were inoculated into the culture medium at a rate of 1%. Finally, the inoculated bacterial culture was put into a constant temperature shaker and incubated at 30 °C and 130 rpm for 24 h. The OD600 and conductivity were measured every eight hours until the OD600 and conductivity reached about 2 and 20, respectively. An equimolar urea-calcium chloride solution with a concentration of 1 M was used as the MICP cementing solution.
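For reference, the 1 M equimolar urea-calcium chloride cementing solution can be batched from standard molar masses (urea about 60.06 g/mol, anhydrous CaCl2 about 110.98 g/mol); the helper below is a minimal sketch, and the batch volume is an arbitrary example rather than a value from the study.

```python
# Sketch: reagent masses for an equimolar urea-CaCl2 cementing solution.
UREA_G_PER_MOL = 60.06      # CO(NH2)2
CACL2_G_PER_MOL = 110.98    # anhydrous CaCl2

def cementing_solution_recipe(volume_l: float, molarity: float = 1.0) -> dict:
    """Masses (g) of urea and CaCl2 for an equimolar solution of the given molarity."""
    moles = molarity * volume_l
    return {
        "urea_g": round(moles * UREA_G_PER_MOL, 2),
        "cacl2_g": round(moles * CACL2_G_PER_MOL, 2),
    }

print(cementing_solution_recipe(0.5))  # 0.5 L batch -> {'urea_g': 30.03, 'cacl2_g': 55.49}
```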
Crack Repair Methods
The repair method varies depending on the width of the crack. For small-width cracks (≤1.5 mm), precast cracked asphalt concrete samples were repaired by peristaltic pump drip injection. A BT100M peristaltic pump was used to simultaneously pump 10 mL of bacterial solution and cementing solution into the cracks at a rate of 3.0 rpm for eight days. The crack repair is shown schematically in Figure 1.
For samples with large-width cracks (3 mm), the MICP repair process used a peristaltic pump to simultaneously pump 20 mL of bacterial solution and 20 mL of the cementing solution into the cracks at 20.0 rpm, and the fibers (basalt fibers, polyester fibers, glass fibers, and steel fibers), as shown in Figure 2, were added separately to the cracks, along with the slurry solution, while grouting. In order to ensure the fibers were evenly distributed in the cracks, the fibers were slowly moved in place using a fine iron rod. Table 2 shows the experimental procedures of this study.
Compressive Strength
The effect of MICP repair on the strength of small-width asphalt concrete cracked samples was determined using indirect and direct methods. For the indirect method, a conventional nondestructive test method called ultrasonic pulse velocity (UPV) was used to assess the degree of internal damage [26]. Prior to testing, petroleum jelly was applied to two opposing planes of the sample parallel to the fracture. A pair of transmitter and receiver transducers of the rock acoustic parameter tester (HS-YS4A) were placed close to these two planes for transmitting and receiving ultrasonic pulses. The ultrasonic velocity is the length of the path between the two transducers divided by the time for the ultrasonic pulse to pass through that path length. For the direct method, the UCS of the sample was tested with a servo tester (WHY-200/10). It is worth noting that during the test, the samples were loaded with cracks of different widths in different directions, as shown in Figure 3.
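The UPV calculation described above is simply the path length divided by the transit time; the sketch below mirrors that relation. The 70.7 mm path corresponds to the cube side length, while the transit time is a hypothetical value for illustration only.

```python
# Sketch: ultrasonic pulse velocity from path length and transit time.
def ultrasonic_pulse_velocity(path_length_m: float, transit_time_s: float) -> float:
    """UPV (m/s) = distance between transducers / pulse travel time."""
    return path_length_m / transit_time_s

# Illustrative values: 70.7 mm path (cube side), hypothetical 45 microsecond transit time.
v = ultrasonic_pulse_velocity(0.0707, 45e-6)
print(f"UPV = {v:.0f} m/s")   # ~1571 m/s
```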
Indirect Tensile Strength
An indirect tensile strength test was used to characterize the asphalt mixture in the tensile state [21], which can estimate the possibility of low-temperature and fatigue cracking of asphalt concrete material under the action of thermal load and wheel load. Asphalt concrete samples underwent indirect tensile testing according to test method T0716-2011 of the technical specification [27]. A thin iron bar was placed parallel to the crack on the top surface of the sample, and a uniaxial compressive load was applied to the sample until the sample broke (Figure 4c). By applying a vertical load to the sample, this test produces a relatively uniform tensile stress along the plane where the crack is located, and fracture usually occurs in the same plane. The indirect (splitting) tensile strength is calculated as follows:

$$\sigma_t = \frac{2P}{\pi A} \tag{2}$$

where P is the load when the sample is damaged and A is the cross-sectional area of the sample parallel to P.
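A minimal sketch of the indirect tensile strength calculation is given below, assuming the cube splitting relation of Equation (2) with P in newtons and A in square millimetres; the failure load used here is hypothetical.

```python
import math

# Sketch: indirect (splitting) tensile strength per Eq. (2), sigma_t = 2P / (pi * A).
def indirect_tensile_strength(failure_load_n: float, section_area_mm2: float) -> float:
    """Splitting tensile strength in MPa (N/mm^2)."""
    return 2.0 * failure_load_n / (math.pi * section_area_mm2)

# Illustrative values: 70.7 mm cube => splitting plane area of 70.7 * 70.7 mm^2,
# hypothetical failure load of 800 N.
area = 70.7 * 70.7
print(f"ITS = {indirect_tensile_strength(800.0, area):.3f} MPa")  # ~0.102 MPa
```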
Static Creep Test
Creep stiffness modulus is an important parameter for evaluating the deformation performance of asphalt concrete. According to NCHRP Project 9-6(1) [28], the test procedure was carried out using a servo tester (WHY-200/10) under a fixed loading compressive stress (σ_c) of 0.12 MPa (5-25% of the compressive strength without lateral limit), as shown in Figure 4d. The creep stiffness modulus is calculated using Equations (3) and (4):

$$E_{cq}(t) = \frac{\sigma_c}{\varepsilon_c(t)} \tag{3}$$

$$\varepsilon_c(t) = \frac{\Delta\upsilon(t)}{h} \tag{4}$$

where E_{cq}(t) is the creep modulus at time t; σ_c is the compressive stress (Pa) applied to the sample; ε_c(t) is the strain in the sample at time t; Δυ(t) is the uniaxial deformation of the sample at time t; and h is the average height of the sample. The creep stiffness modulus was calculated at 1, 10, 100, 500, 1000, 1500, 2000, 2500, 3000, 3500, and 3600 s.
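A short sketch of Equations (3) and (4): the creep stiffness modulus at each sampling time is the applied stress divided by the strain, with strain obtained from the measured uniaxial deformation and the average sample height. The deformation readings below are hypothetical.

```python
# Sketch: creep stiffness modulus per Eqs. (3) and (4).
SIGMA_C_MPA = 0.12           # applied compressive stress (MPa)
SAMPLE_HEIGHT_MM = 70.7      # average sample height (mm)
TIMES_S = [1, 10, 100, 500, 1000, 1500, 2000, 2500, 3000, 3500, 3600]

def creep_modulus(deformation_mm: float,
                  height_mm: float = SAMPLE_HEIGHT_MM,
                  stress_mpa: float = SIGMA_C_MPA) -> float:
    """E_cq(t) = sigma_c / eps_c(t), where eps_c(t) = delta_v(t) / h."""
    strain = deformation_mm / height_mm        # Eq. (4)
    return stress_mpa / strain                 # Eq. (3), result in MPa

# Hypothetical deformation readings (mm) at the sampling times above.
deformations_mm = [0.05, 0.09, 0.14, 0.20, 0.24, 0.27, 0.29, 0.31, 0.32, 0.33, 0.33]
for t, dv in zip(TIMES_S, deformations_mm):
    print(f"t = {t:>4} s  E_cq = {creep_modulus(dv):6.2f} MPa")
```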
SEM/EDS
The split samples were analyzed using a scanning electron microscope (SEM) and energy dispersive spectroscopy (EDS) to obtain the microscopic morphological characteristics and chemical composition of microbial mineralization products.
The electron microscope and energy dispersive spectrometer used in this experiment were both from Hunan Kewei Testing Technology Co., Ltd., and the parameters were as follows: the model of the electron microscope was JSM-7610FPlus with a magnification of 500~10,000 times; the model of the energy dispersive spectrometer was ULTIM MAX 40, and the detected elements were mainly C, O, and Ca.

Figure 5 shows the top surface (cross section) of the asphalt concrete samples after one day and 5 to 8 days of MICP treatment. It can be observed that in the MICP-treated samples, the cracks gradually healed as the number of days of treatment increased. After five days of repair, the samples with crack widths of 0.5 mm and 1.0 mm (Figure 5a,b) were basically healed, while visible cracks were still evident in the samples with crack widths of 1.5 mm (Figure 5c). After eight days of repair, the surface cracks of the samples were almost completely healed or sealed. In addition, the entire crack section of the sample was covered with precipitated calcium carbonate produced by the MICP process in the vicinity. The reason for the different distribution of white precipitates at the crack openings is that some organic matter produced during bacterial metabolism adheres the mineralization product particles together, thus forming dense, cohesive larger particles.

Figure 6 shows the changing pattern of the surface repair rate and the fitted straight line during the repair process for different-width cracks. From the slope of the fitted straight line, it can be seen that the surface repair rate increases most rapidly for the sample with a crack width of 0.5 mm; the surface repair rate is slowest for the sample with a crack width of 1.5 mm.
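The fitted straight lines in Figure 6 are ordinary least-squares fits of surface repair rate against treatment day; the sketch below shows one way to obtain and compare the slopes. The repair-rate values are made up to stand in for the measured data behind the figure.

```python
import numpy as np

# Sketch: linear fit of surface repair rate (%) vs. treatment day for each crack width.
# The repair-rate values are illustrative, not the measured data behind Figure 6.
days = np.array([1, 2, 3, 4, 5, 6, 7, 8])
repair_rate = {
    "0.5 mm": np.array([20, 35, 50, 65, 78, 88, 95, 99]),
    "1.0 mm": np.array([15, 27, 40, 52, 63, 74, 84, 92]),
    "1.5 mm": np.array([10, 19, 28, 37, 46, 55, 64, 72]),
}

for width, rate in repair_rate.items():
    slope, intercept = np.polyfit(days, rate, deg=1)
    print(f"{width}: slope = {slope:.1f} %/day, intercept = {intercept:.1f} %")
```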
UPV
The greater the ultrasonic velocity, the better the repairing effect of asphalt concrete cracks. Therefore, the ultrasonic detection method can indirectly reflect the healing efficiency of asphalt concrete cracks. The results of the ultrasonic method for asphalt concrete before and after crack repair are shown in Figure 7a. After 8 d of healing, the acoustic velocities of all repaired samples increased, with the 0.5 mm crack-width samples showing the largest increase (18.06%).

It can be seen that the ultrasonic velocity coefficient of variation for the three crack widths does not change much before and after the 8-day healing time (Figure 7b), and the changes are all within the range of 0.1. It is noteworthy that the coefficient of variation shows a decreasing pattern in general. The coefficient of variation for the cracks at a later stage of repair was lower than that for the cracks before repair, which may be due to the fact that the cracks were all better filled at the later stage of repair than at the earlier stage, and thus the measured average wave velocities were more representative.

The asphalt concrete samples consisted of a solid and a gas phase because the asphalt concrete samples were sufficiently dry prior to the ultrasonic pulse velocity test. Since the speed of propagation of sound in solids is greater than that in air, when the cracks are filled, the solid phase of the asphalt concrete increases, the gas phase decreases, and the wave velocities show an increasing trend. The structural integrity of the 0.5 mm crack-width samples is greater than that of the 1.0 mm and 1.5 mm crack-width samples, and it is less difficult to form an effective 'bridging' effect on both sides of the crack than in the 1.0 mm and 1.5 mm crack-width samples, so the increase in wave velocity is most significant. The speed of the ultrasonic pulse increases as the crack width decreases, which is consistent with experimental expectations, and is in line with the results of Jongvivatsakul et al. [29], who showed the same pattern of ultrasonic pulse velocities for the MICP repair of mortar cracks of different widths. The above analysis shows that microorganisms can produce mineralization products for repairing asphalt concrete cracks of different widths, and the repairing effect is better for samples with smaller crack widths than those with larger crack widths.
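The coefficient of variation reported in Figure 7b is simply the standard deviation of the measured wave velocities divided by their mean; a minimal sketch with hypothetical replicate velocity readings is given below.

```python
import statistics

# Sketch: coefficient of variation of ultrasonic velocities across replicate samples.
def coefficient_of_variation(values):
    """CV = sample standard deviation / mean (dimensionless)."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical velocities (m/s) for replicate 0.5 mm crack samples, before and after repair.
before = [1510, 1560, 1545, 1580, 1530]
after = [1810, 1835, 1820, 1850, 1825]
print(f"CV before repair: {coefficient_of_variation(before):.3f}")
print(f"CV after repair:  {coefficient_of_variation(after):.3f}")
```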
UCS
The compressive strength of the samples is shown in Figure 8. The presence of cracks in the samples was the main reason why the compressive strength of cracked asphalt concrete was lower than that of intact samples. The compressive strengths of cracked asphalt concrete samples were 0.80, 0.70, and 0.55 MPa when the crack widths were 0.5, 1.0, and 1.5 mm, respectively, which were 39.84%, 47.37%, and 58.65% lower than those of the control. This is due to the high stress concentration caused by cracks, which resulted in damage at lower stresses compared to the control samples. After treatment with bacterial healing agents, the available bearing area increased with the formation of CaCO3, thus allowing load transfer. The results showed that the stress concentration at the crack tip was reduced. As a result, the compressive strength of the cracked asphalt concrete increased to 1.05, 0.90, and 0.64 MPa. In addition, the compressive strength ratios were up to 78.94%, 67.67%, and 48.12% compared to the control samples. The increase in strength with crack healing is consistent with the increase in pulse velocity. Zheng et al. [30] and Qian et al. [31] confirmed that the higher the velocity of acoustic waves through the sample, the stronger the material.
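The percentages quoted above follow directly from the measured strengths; the sketch below back-computes them, assuming an intact control strength of about 1.33 MPa, which is the value consistent with the reported reduction percentages rather than a number stated in the paper.

```python
# Sketch: strength reduction of cracked samples and strength ratio after MICP repair,
# assuming a control (intact) UCS of ~1.33 MPa inferred from the reported percentages.
CONTROL_UCS_MPA = 1.33

cracked = {"0.5 mm": 0.80, "1.0 mm": 0.70, "1.5 mm": 0.55}   # before repair (MPa)
repaired = {"0.5 mm": 1.05, "1.0 mm": 0.90, "1.5 mm": 0.64}  # after repair (MPa)

for width in cracked:
    reduction = (CONTROL_UCS_MPA - cracked[width]) / CONTROL_UCS_MPA * 100
    ratio = repaired[width] / CONTROL_UCS_MPA * 100
    print(f"{width}: reduction vs. control {reduction:.1f}%, repaired/control {ratio:.1f}%")
```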
The asphalt has a strong bond between the aggregate and the calcium carbonate, which helps to form a bridge between the two sides of the crack [32]. The UCS decreases with increasing crack width, which indicates that strength loss increases with cracking. In addition, smaller crack widths exhibit greater microbial utilization during MICP repair [33], such that samples with a crack width of 0.5 mm show a larger UCS after MICP repair.
ITS
The indirect tensile strength (ITS) of MICP-repaired cracks of different widths is shown in Figure 9. It can be seen that the ITS values of unrepaired samples decreased with the increase in crack width. The ITS values of the samples repaired by MICP increased from 0.091, 0.070, and 0.055 MPa to 0.122, 0.092, and 0.063 MPa, respectively, and the recovery of tensile strength reached about 34.68%, 32.19%, and 13.64%.
It is speculated that the lower value of ITS may be due to insufficient crack healing and the formation of an imperfect bond between cracked samples. The samples with larger crack widths repaired by MICP showed smaller ITS. One reason for this may be the lack of bonding between the cracks and the smaller size and microstructure of the CaCO3 distribution. Researchers found similar effects for potential applications in terms of indirect tensile testing following the bacterial repair of cracks in cement mortars [34]. Therefore, it can be concluded that the repair of asphalt concrete samples by MICP does increase the indirect tensile strength of the samples [35]. The mineralization products act as cementitious fillers in the cracks, and the ITS increase is more pronounced for small crack widths.
Permanent Deformation
The results of the static creep test are shown in Figure 10. It can be seen that the creep stiffness modulus has a large abrupt change within 100 s of loading, and after 500 s, the value of the creep stiffness modulus gradually stabilizes. The creep stiffness modulus decreases as the crack width increases. Notably, the creep stiffness modulus of asphalt concrete samples with crack widths w = 1.0 mm and 1.5 mm is slightly more prominent in the former than in the latter. In general, it is still consistent with the fact that the repaired creep stiffness modulus of the large crack width is smaller than that of the small crack width. This is because samples with large crack widths may have uneven filling as the cracks at the same depth are filled with mineralization products differently during microbial grouting.
The researcher, Manfro [36], found that the addition of calcium carbonate provides a resistance gain at the asphalt binder-aggregate interface, resulting in a modified asphalt mixture with greater resistance to permanent deformation. Samples with small crack widths and small voids between the two sides of the crack make it easier for MICP to achieve a relatively complete cementitious filling in the voids. They are easier to 'bridge', and the CaCO 3 bridge formed is less likely to break in the middle, so it has a greater ability in preventing permanent deformation.
SEM/EDS Analysis
SEM scans (Figure 11) show that the MICP mineralization products of this test have about four morphologies of crystallization. They are triangular conical, parallel hexahedral, prismatic, and spherical. Among them, the most common crystalline morphology is spherical.

The particle size distribution of the particles in Figure 12 was measured using ImageJ2 software, and the selected particles are the particle sizes observable on the image surface, as shown in Figure 12a. Figure 12b shows the distribution range of the particle size, from which it can be seen that the majority of the MICP products have a particle size distribution in the range of 0 to 10 µm, and the mean value of the diameter of the 170 particles selected was 4.54 µm.
Figure 12. MICP product diameter measurement: (a) selected particles; and (b) particle size distribution.
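Given a list of particle diameters exported from ImageJ2, the distribution statistics reported for Figure 12 (fraction below 10 µm and mean diameter) can be reproduced along the lines of the sketch below; the diameters listed here are placeholders, not the 170 measured values.

```python
import statistics

# Sketch: summary statistics for MICP particle diameters measured in ImageJ2.
# The list below is a placeholder, not the 170 diameters measured in this study.
diameters_um = [2.1, 3.4, 4.8, 5.2, 6.7, 3.9, 4.1, 7.5, 2.8, 5.9, 4.4, 3.0, 8.2, 4.9, 5.5]

mean_d = statistics.mean(diameters_um)
below_10 = sum(d < 10.0 for d in diameters_um) / len(diameters_um) * 100
print(f"n = {len(diameters_um)}, mean diameter = {mean_d:.2f} um, {below_10:.0f}% below 10 um")
```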
A total of three EDS scans were performed in this experiment, and the analytical results are shown in Table 3 and Figure 13. The results showed that the precipitates were mainly composed of C, O, and Ca, with an atomic number ratio of approximately 1:3:1 and a mass ratio of approximately 1:3:3 (Figure 13). It can be inferred that the MICP product is calcite.
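As a sanity check on the EDS interpretation, the theoretical atomic and mass ratios of C, O, and Ca in calcite (CaCO3) can be computed from standard atomic masses; the sketch below shows the expected values that the measured ratios are compared against.

```python
# Sketch: theoretical C:O:Ca ratios in calcite (CaCO3) from standard atomic masses.
ATOMIC_MASS = {"C": 12.011, "O": 15.999, "Ca": 40.078}
CALCITE = {"C": 1, "O": 3, "Ca": 1}   # atoms per formula unit

atom_ratio = {el: n / CALCITE["C"] for el, n in CALCITE.items()}
mass = {el: n * ATOMIC_MASS[el] for el, n in CALCITE.items()}
mass_ratio = {el: m / mass["C"] for el, m in mass.items()}

print("atomic ratio C:O:Ca =", ":".join(f"{atom_ratio[e]:.0f}" for e in ("C", "O", "Ca")))
print("mass ratio   C:O:Ca =", ":".join(f"{mass_ratio[e]:.1f}" for e in ("C", "O", "Ca")))
# Expected: atomic 1:3:1 and mass roughly 1:4:3.3, close to the measured ~1:3:3.
```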
Effect of MICP Repair for Different Types of Fibers
Figure 14 shows the stress-strain curves of the samples with different fiber admixtures for repairing large-width asphalt concrete cracks. As a control, the stress-strain curves of the samples with 3 mm cracks without repair and without any fiber admixture are also shown in the figure.

Figure 14a illustrates that when the basalt fiber (BF) doping is increased from 0.5 g to 1.5 g, the strength increases, and the magnitude of the increase is close to about 0.45 MPa. In addition, the strain increases when the peak strength is reached. This indicates that the strength and toughness of the samples increased with the increase in basalt fiber doping during the MICP repair. Figure 14b demonstrates that when the doping of polyester fiber (PF) was increased from 0.5 g to 1.0 g, the strength increased, but not significantly, while when the fiber doping was further increased from 1.0 g to 1.5 g, the strength increased relatively more significantly, by about 0.54 MPa. This indicates that a small amount of basalt fiber (BF) and polyester fiber (PF) doping during MICP remediation is beneficial for bacteria to remain on them, and as a connecting bridge for MICP products, the fibers can form a bridging effect within the cracks for the integrity of the repaired samples. Figure 14c shows that the strength of the glass fiber (GF) samples increased by 0.27 MPa when the doping was increased from 0.5 g to 1.0 g. In contrast to the pattern shown in Figure 14a,b, the strength did not increase but showed a significant decrease when increasing the fiber doping from 1.0 g to 1.5 g. The decreased strength was comparable to that of the MICP repair without any fiber doping. The reason for this phenomenon can be explained by the fact that, although the glass fiber admixture does help to improve the strength of asphalt concrete to some extent, the space inside the cracks of the samples is limited, and it is difficult for the fungus solution and the cementing solution to contact it fully when the fiber admixture is too high, i.e., the MICP mineralization effect is negatively affected. Figure 14d shows that during the MICP repair process, the strength gradually decreased as the steel fiber (SF) admixture increased from 3.0 g to 9.0 g, and the gradient of decrease was similar, about 0.4 MPa. This indicates that when the steel fiber admixture was too large, it squeezed too much space inside the cracks, and the effective bridging effect between the fiber and MICP products could not be formed.

Figure 15 visualizes the compressive strength of each sample with different fiber types and fiber mass for MICP repair. It can be seen that the measured compressive strengths differed from the previous sections when conducting the experimental study of repairing asphalt concrete samples with large-width cracks and were generally greater than those of the previous samples repairing cracks of different depths and small widths, which can be explained by the different placement of the samples when measuring the compressive strengths in this subsection compared to the previous subsections (shown in Figure 3). In addition, it is worth noting that the MICP repair without fiber doping showed an increase in strength compared to the samples with 3 mm cracks, but the increase was not significant, only 11.22%, indicating that when the crack width is greater than 3 mm, it is no longer appropriate to use a direct MICP repair. Scholars [37][38][39][40] have found similar effects when studying the coupling of MICP with each fiber.

The reasons for the above results can be inferred as follows: (1) In the MICP repair process, the different fibers added in the cracks, even though they have the same mass, show different repair effects due to their different softness and volume, resulting in different distribution patterns in the cracks and different interaction relationships with the asphalt binder, aggregates, and MICP products. (2) In the case of basalt fiber, its texture is hard, and it plays the role of strengthening the interconnection between calcium carbonate particles, and also provides a landing point for bacteria; in the case of polyester fiber, its texture is soft and it is more aggregated and distributed under the infiltration of slurry, which plays a local geotextile effect; for glass fiber, its softness and hardness are between basalt fiber and polyester fiber, and due to its longer length, it becomes entangled in the cracks. This winding effect can also strengthen the cementation between calcium carbonate particles, aggregates, and fibers to a certain extent. In the case of steel fibers, they have the hardest texture and the largest mass in the same volume, and the interlocking between fibers, together with sinking through gravity, can lead to local siltation of calcium carbonate when repairing cracks via MICP, resulting in the effect with steel fibers where the MICP repair only occurs within a certain depth range of the cracks.
Investigation of Optimal Fiber Doping
Figure 16 shows the fitted curves of the doping mass of basalt fiber, glass fiber, polyester fiber, and steel fiber in the MICP process versus UCS, from which the fiber doping mass that has the best effect on UCS improvement can be deduced. The amount of addition for each fiber in this paper can be calculated using the following equation:

$$D = \frac{m}{V_c}$$

where D is the fiber dosage, m is the mass of fiber added, and V_c is the crack volume. It can be seen in the figure that the best doping mass of glass fiber is about 0.18 g/cm³, the best doping mass of steel fiber is about 0.83 g/cm³, and the best doping mass of polyester fiber and basalt fiber is more than 0.31 g/cm³. The specific values are subject to further experimental study.
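Using the dosage definition above (mass of fiber per unit crack volume), the fiber mass to add for a given crack geometry follows directly; the sketch below uses the 3 mm wide, 40 mm deep, 40 mm long precast crack from this study and the optimum dosages read from Figure 16.

```python
# Sketch: fiber mass to add for a given crack, from dosage D = m / V_c.
def fiber_mass_g(dosage_g_per_cm3: float, crack_volume_cm3: float) -> float:
    return dosage_g_per_cm3 * crack_volume_cm3

# Precast large crack in this study: 3 mm wide, 40 mm deep, 40 mm long.
crack_volume = 0.3 * 4.0 * 4.0   # cm^3
optimum_dosage = {"glass": 0.18, "steel": 0.83}   # g/cm^3, read from Figure 16
for fiber, dosage in optimum_dosage.items():
    print(f"{fiber} fiber: ~{fiber_mass_g(dosage, crack_volume):.2f} g per crack")
```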
Discussion
The spherical calcium carbonate particles of MICP mineralization products have better fluidity and dispersion [41], which are easily attached near the crack surface and clustered with each other from particle to particle, allowing the cracks to be fully cemented and filled.
Since the propagation speed of acoustic waves in solids is greater than that in air, the denser the mineralization quantities in products, the greater the measured wave speed, and the better the filling effect of cracks [42]. After the cracks are filled by MICP products, the bridging effect is produced on both sides of the cracks [43]. The degree of bridging effect is different due to the different widths of the cracks. The direct manifestation of the bridging effect is the ultrasonic wave velocity, and the ultrasonic wave velocity of the sample with large crack width after MICP repair is smaller than that of the smaller crack It can be seen in the figure that the best doping mass of glass fiber is about 0.18 g/cm 3 , the best doping mass of steel fiber is about 0.83 g/cm 3 , and the best doping mass of polyester fiber and glass fiber is more than 0.31 g/cm 3 . The specific values are subject to further experimental study.
Discussion
The spherical calcium carbonate particles of the MICP mineralization products have good fluidity and dispersion [41]; they attach easily near the crack surface and cluster with each other particle by particle, allowing the cracks to be fully cemented and filled.
Since the propagation speed of acoustic waves in solids is greater than that in air, the denser the mineralization products, the greater the measured wave speed and the better the filling of the cracks [42]. After the cracks are filled by MICP products, a bridging effect is produced across the two sides of the cracks [43]. The degree of the bridging effect differs with crack width. The direct manifestation of the bridging effect is the ultrasonic wave velocity: after MICP repair, the ultrasonic wave velocity of samples with larger crack widths was smaller than that of samples with smaller crack widths, indicating that a significant bridging effect is harder to form across wide cracks. The indirect manifestation of the bridging effect is UCS, and a previous study showed that adding fibers to the cracks could enhance this bridging effect [37]. Consistent with the ultrasonic wave velocity results, the UCS of the MICP-repaired samples decreased as the crack width increased.
The adhesion between the microbial mineralization products and the cracks of the asphalt concrete samples depended upon the compatibility between the roughness of the crack surface and the mineralization products, while the strength of the mineralization products depended on their own properties [44]. Since the crack surface roughness was almost the same for all samples, the tensile strength depended more on the direct adhesion between the mineralization products and the cracks for the samples with small-width cracks. The indirect tensile strength of the MICP-repaired samples increased as the crack width decreased, which can be explained by the stronger adhesion between the MICP mineralized products and the cracks for small-width cracks. In addition, this stronger adhesion also provides greater resistance to permanent deformation in the static creep test.
In order to facilitate quantitative research, all of the repair work in this paper was carried out on regular cracks. However, in practice, regular cracks are almost nonexistent, and, therefore, the effect of irregular cracks on the repair effect needs to be considered. Irregular crack extensions make it difficult to control the percolation path of the biological slurry within the crack and make the repair more difficult and complex. In future research, the percolation channels of MICP slurry under complex fracture extension and the mechanism of MICP product attachment under such complex percolation can be further explored.
Conclusions
In this study, the effectiveness and mechanism of the MICP technique for repairing asphalt concrete with different crack widths were investigated through a series of experiments. Several conclusions were drawn as follows: (1) The wave speed increased from 1545.36. SEM and EDS images showed that the MICP product of this study was CaCO3, which was mainly in the form of spherical calcite; the particle size of CaCO3 was mostly concentrated within 10 µm, with an average value of 4.54 µm. (4) The results of MICP combined with different fibers to repair large-width cracks showed that the addition of fibers contributed to the filling effect of the MICP product in the cracks of the asphalt concrete. The UCS results showed that the optimum dose was about 0.18 g/cm3 for glass fibers, about 0.83 g/cm3 for steel fibers, and over 0.31 g/cm3 for polyester and basalt fibers.
The above findings indicate that the asphalt concrete samples with single cracks repaired with MICP showed some degree of recovery in performance. In general, the repair effect increased significantly as the crack width decreased. When the crack width was more extensive, the combined repair effect of MICP and fiber was better than the direct MICP repair.
Author Contributions: Methodology, S.P. and L.F.; software, J.Z.; validation, J.Z. and S.P.; formal analysis, L.F. and G.C.; data curation, J.Z. and Z.X.; writing-original draft preparation, J.Z.; writing-review and editing, S.P. and L.F.; project administration, S.P. and L.F.; funding acquisition, S.P. and L.F. All authors have read and agreed to the published version of the manuscript.
Funding: This project is financially supported by the National Natural Science Foundation of China (grant # 52174100 and # 51674287), and the National Science Foundation of Hunan Province, China (grant # 2021JJ30834).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data that support the findings of this study are available upon request from the authors.
Conflicts of Interest: The authors declare no conflict of interest.
|
v3-fos-license
|
2018-04-03T00:00:38.996Z
|
2016-09-12T00:00:00.000
|
8063354
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.impactjournals.com/oncoscience/files/papers/1/320/320.pdf",
"pdf_hash": "7cef999bb41d400197eabb879bc297dacb796d5e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2940",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7cef999bb41d400197eabb879bc297dacb796d5e",
"year": 2016
}
|
pes2o/s2orc
|
The importance of tissue confirmation of metastatic disease in patients with breast cancer: lesson from a brain metastasis case
Background The discrepancy of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) statuses in breast cancers has been reported. Available systemic therapy for patients with breast cancer is based on the molecular subtypes as identified by IHC and/or FISH. However, these biomarkers may change throughout tumor progression. Case presentation We report a relatively uncommon case of a 39-year-old Chinese woman with locally advanced breast cancer (LABC) treated with 6 cycles of docetaxel, doxorubicin and cyclophosphamide (TAC) neoadjuvant chemotherapy, followed by mastectomy, intensity-modulated radiation therapy (IMRT) and regular tamoxifen. A brain metastatic event appeared 6 months after mastectomy. Treatment for the brain metastasis was surgical resection followed by whole brain radiotherapy (WBRT), as approved by the multidisciplinary team (MDT). The initial pathological diagnosis was IDC, cT4N1M0, luminal B (ER+ 90%, PR+ 90%, HER2 0, Ki67+ 70%) based on ultrasound-guided core needle biopsy. Surgical pathology revealed IDC, pT2N3M0, luminal B (ER+ 20%, PR+ 20%, HER2 0, Ki67+ 20%). The histological response to neoadjuvant chemotherapy was grade 3 according to the Miller/Payne grading system. Final pathology of the brain metastasis showed a HER2-overexpressing metastatic breast cancer, luminal B (ER+ 70%, PR+ 70%, HER2 2+, Ki67+ 30%), with FISH confirming HER2 overexpression. Weekly paclitaxel plus trastuzumab was given for 12 weeks, then trastuzumab every 3 weeks for a whole year. Patient follow-up is still ongoing, and no new events have appeared yet. Conclusions The determination of hormone receptor and HER2 status should be routinely performed in all involved tissues, if possible, and systemic therapy should be tailored to the latest findings.
INTRODUCTION
Breast cancer is one of the most common malignancies in women, and its incidence has continuously increased in recent years [1]. Locally advanced breast cancer (LABC) accounts for about 15% of newly diagnosed cases in our center, most of whom come from the rural countryside for lack of attention. Neoadjuvant chemotherapy is usually given to these patients in an attempt to downstage the primary tumor and also to reduce or eliminate micrometastatic disease [2,3]. Available systemic therapies for breast cancer patients are based on the estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2) characteristics as identified by IHC and/or FISH in tissue acquired by ultrasound-guided core needle biopsy [4][5][6]. In routine clinical practice, management of patients with metastatic breast cancer is also guided by the biological traits of the primary tumor. However, hormone receptor and HER2 status may change during tumor progression from the primary tumor to the metastatic site. Accumulating studies have indicated that the discrepancy of ER, PR, and HER2 status between primary breast tumors and metastatic disease may be of clinical significance [5,[7][8][9][10][11]. Normally, this phenotypic discordance suggests an even worse prognosis. Consequently, biopsies of metastatic tissue should be considered a routine procedure in daily clinical practice, and confirmation of these biomarkers in recurrent or metastatic carcinomas may potentially yield clinically significant benefits for patient management and survival.
Here, we present a relatively uncommon case in which a HER2-negative breast cancer switched to a HER2-overexpressing breast cancer after a series of systemic therapies.
CASE PRESENTATION
A 39-year-old Chinese woman presented with locally advanced breast cancer (LABC), pathologically confirmed by core needle biopsy at our breast cancer center. Before coming to our clinic, she had been treated with Traditional Chinese Medicine for about one year after being misdiagnosed with breast hyperplasia at a local hospital, with no obvious symptom improvement as she reported. A red nodule appeared in the upper left side of the left breast one month before presentation (Figure 1), which prompted her to come to our breast cancer center. Color Doppler ultrasonography of the left breast demonstrated a hypoechoic mass measuring 3.5 cm, located at the 3 o'clock position adjacent to the nipple-areolar complex, and also revealed suspicious left axillary lymph nodes (Figure 1). Ultrasound-guided biopsy of the breast mass demonstrated an infiltrating ductal carcinoma (IDC) of the left breast with ER+ 90% mild, PR+ 90% mild, HER2 0 and Ki67+ 70% by immunohistochemistry (IHC), consistent with the luminal B subtype (Figure 2). Systemic assessment, including brain, lungs, liver, bone, and uterus and its adnexa, showed no other involved organs. The clinical stage was cT4N1M0 based on the American Joint Committee on Cancer Breast Cancer Staging, 7th edition [12].
The patient received 6 cycles of docetaxel, doxorubicin and cyclophosphamide (TAC) neoadjuvant chemotherapy. The clinical assessment after the 6 cycles was a partial response (PR), and mastectomy was then performed. Surgical pathology revealed IDC, pT2N3M0, luminal B (ER+ 20% weak, PR+ 20% weak, HER2 0, Ki67+ 20%). The histological response to neoadjuvant chemotherapy was grade 3 according to the Miller/Payne grading system [13] (Figure 3). Forward-planned intensity-modulated radiation therapy (IMRT) and then tamoxifen followed. The target volume of radiotherapy included the chest wall and the supraclavicular lymph node region. A brain metastatic event appeared 6 months after mastectomy, first presenting as severe headache and intracranial hypertension, which were confirmed by cranial computerized tomography (Figure 4). Emergency management of the brain metastasis was surgical resection followed by whole brain radiotherapy (WBRT), as approved by the multi-disciplinary team (MDT). Final pathology of the brain metastasis showed a HER2-overexpressing metastatic breast cancer (ER+ 70% mild, PR+ 70% mild, HER2 2+, Ki67+ 30%), and FISH confirmed HER2 overexpression (Figure 5). Weekly paclitaxel plus trastuzumab was given to this patient for 12 weeks, then trastuzumab every 3 weeks for a whole year. Endocrine therapy was switched to ovarian function suppression plus exemestane according to the latest clinical evidence [14]. Patient follow-up is still ongoing; the last follow-up was on February 20th, 2016, and no new events have appeared yet.
DISCUSSION
Breast cancer threatens women's health worldwide [15]. It is the second most common malignancy in China [16,17]. Breast cancer comprises hormone-dependent and hormone-independent tumors based on ER and/or PR expression status, which is an important indicator of efficacy and prognosis in patients receiving endocrine therapies [6]. According to current National Comprehensive Cancer Network (NCCN) guidelines, endocrine treatment is indicated in all patients with a positive hormone receptor (HR) status, which is defined as ER positive and/or PR positive. The human epidermal growth factor receptor 2 (HER2) is amplified in approximately 15-25% of breast cancers. HER2 overexpression in breast cancer has been associated with tumor invasiveness, progressive regional and distant metastases, and poor prognosis [18][19][20][21]. The NCCN guideline recommends molecularly targeted therapy as first-line treatment of patients with HER2-positive metastatic breast cancer. Neoadjuvant chemotherapy is increasingly used for LABC, and there is a trend towards tailored therapies based on the molecular subtypes of breast cancer. The accuracy of core needle biopsy (CNB) for determination of hormone receptor status in breast cancer patients has been extensively studied, and CNB can be used with confidence for ER and HER2 determination. However, the results for PR are more variable and need to be used with caution [22]. Neoadjuvant therapy is mainly based on the immunohistochemical findings of HR and HER2 status on the core needle biopsy. With the growing use of neoadjuvant therapy, it is important to know whether it modulates the biological behaviors of breast cancer cells [3,23,24].
However, we know little about the impact of neoadjuvant chemotherapy drugs on these biomarkers and the possible consequences for subsequent systemic salvage therapy at the present time. Nowadays, the evaluation of ER, PR and HER-2 status is mainly through IHC or FISH. The assessment is generally performed on the primary neoplasm under the assumption that the status remains stable in most cases, as demonstrated in previous reports. Moreover, in certain circumstances, biopsy of the metastatic site is not an easy task, such as for brain metastasis and metastasis in deep organs. HER2 overexpression is recognized to be of strong predictive value for treatment with HER2 inhibitors. It has been reported that patients with HER2-positive breast cancer have better responses and higher pCR rates when trastuzumab is added to neoadjuvant chemotherapy [25][26][27].

Figure 4: CT revealed the brain metastatic lesion and brain midline shift. There was a slightly high-density nodule in the left parietal lobe surrounded by low-density edema before the operation, and a patchy opacity remained there after the operation.
In fact, some reports have suggested that ER, PR, and HER2 status can switch between the primary breast cancer and metastatic sites; therefore, confirmation of the hormone receptor and HER2 status of metastatic sites should be routinely performed and, together with that of the primary tumor, used as evidence for the choice of systemic salvage therapies. Several studies have carried out retrospective analyses comparing the ER, PR, and HER2 status of primary tumors and paired metastases. For example, Fabris et al identified HER-2 status in 119 cases of primary infiltrating breast carcinoma and paired metastases. Therapeutically significant HER-2 status discordance was verified between the primary carcinoma and synchronous lymph node metastases (6.7%), local recurrence (13.3%) and metachronous distant metastases (28.6%). In the comparison, they found both switches from normal HER-2 status in primary tumors to HER-2 amplification in paired metastases and from HER-2 overexpression in primary tumors to normal HER-2 status in metastatic sites. Taken together, 14 out of 65 cases (21.5%) showed a therapeutically significant discordance of HER-2 status between the primary tumor and the paired metachronous recurrence or metastasis, with 15.4% of cases showing normal HER-2 status in the primary tumor and HER-2 overexpression in the metastatic sites [28]. A convincing explanation for this phenomenon is still lacking. Nevertheless, controversial opinions do exist both about the stability of HER-2 status in breast carcinoma throughout the course of the disease and about whether chemotherapy (neoadjuvant or adjuvant) may modify HER2 expression; possible mechanisms include genetic drift or clonal selection for HER-2 during tumor progression, intratumoral heterogeneity of HER-2 status, or selection of a clone with enhanced metastatic potential. The authors demonstrated that reinvestigation of these markers at metastatic sites may potentially improve patient management and survival [29][30][31].
In our daily clinic, discordance of these biomarkers is not uncommon. Nevertheless, little attention has been paid to it. The case presented here sounds the alarm for us and emphasizes the importance of tissue confirmation of metastatic disease in patients with breast cancer.

Figure 5: Surgical pathology of the brain metastatic lesion. Hematoxylin-eosin (HE) staining morphologically identified breast cancer metastasis in the brain, and immunohistochemical reassessment of the metastatic lesion showed ER+ 70% mild, PR+ 70% mild, HER2 2+ and Ki67+ 30%; fluorescence in situ hybridization confirmed HER2 overexpression.

CONCLUSION

A change in hormone receptor or HER2 status would have important therapeutic, prognostic and financial consequences for both patients and health care providers. Data on the influence of neoadjuvant therapy on the expression status of ER, PR and HER2 are few, and some reports show controversial results. We strongly recommend that the ER, PR and HER2 status of recurrent or metastatic lesions be confirmed whenever possible, especially for patients whose clinical manifestations differ from the biomarker characteristics of the primary tumors, taking into account factors such as a short natural history of the disease, the site(s) of recurrence, comorbidities and previous treatments. Subsequent treatment measures should be modulated accordingly. Moreover, this procedure may also be recommended in patients who are metastatic at the time of diagnosis.
CONFLICTS OF INTEREST
The authors declare that they have no conflicts of interest.
STATEMENT OF ETHICS
The patient gave informed consent for her inclusion in this case report. Further, the case report was approved by our institution's ethics committee.
|
v3-fos-license
|
2021-11-12T16:18:30.227Z
|
2021-11-01T00:00:00.000
|
244027314
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6643/13/11/3991/pdf",
"pdf_hash": "452d507956303fb86630a5b8a09fa964853c6c90",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2943",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "040bd472795a212e6c3e3a3c28da1daee62cfaec",
"year": 2021
}
|
pes2o/s2orc
|
Nannochloropsis oceanica as a Microalgal Food Intervention in Diet-Induced Metabolic Syndrome in Rats
The microalgal genus Nannochloropsis has broad applicability to produce biofuels, animal feed supplements and other value-added products including proteins, carotenoids and lipids. This study investigated a potential role of N. oceanica in the reversal of metabolic syndrome. Male Wistar rats (n = 48) were divided into four groups in a 16-week protocol. Two groups were fed either corn starch or high-carbohydrate, high-fat diets (C and H, respectively) for the full 16 weeks. The other two groups received C and H diets for eight weeks and then received 5% freeze-dried N. oceanica in these diets for the final eight weeks (CN and HN, respectively) of the protocol. The H diet was high in fructose and sucrose, together with increased saturated and trans fats. H rats developed obesity, hypertension, dyslipidaemia, fatty liver disease and left ventricular fibrosis. N. oceanica increased lean mass in CN and HN rats, possibly due to the increased protein intake, and decreased fat mass in HN rats. Intervention with N. oceanica did not change cardiovascular, liver and metabolic parameters or gut structure. The relative abundance of Oxyphotobacteria in the gut microbiota was increased. N. oceanica may be an effective functional food against metabolic syndrome as a sustainable protein source.
Introduction
Microalgae are unicellular organisms that, in the presence of sunlight, convert carbon dioxide into biomass [1]. The members of the microalgal genus Nannochloropsis can be defined as biorefineries to produce biofuels, animal feed supplements and pigments [2,3]. Nannochloropsis species have a wide range of applications in biotechnology, using techniques to modify biomass composition [4]. Nannochloropsis species contain 28.7-40.4% carbohydrates, 22.2-37.4% crude protein and 15.1-21.7% total lipids on dry weight basis [5] as well as minerals, vitamins and antioxidants such as carotenoids [5,6]. Nannochloropsis species contain polyunsaturated fatty acids (PUFA), mainly eicosapentaenoic acid (EPA), polyphenols, carotenoids and vitamins with toxicological tests on animals confirming the safety of this microalga for consumption in food [7]. There are six known species of Nannochloropsis, including N. oceanica, N. gaditana and N. oculata. Baseline information on the biology of N. oceanica has been published for use in the risk analysis of genetically modified N. oceanica in Australia [8]. As one example of its potential, N. oceanica strains isolated in Southeast Queensland, Australia, showed improved lipid characteristics [9].
Physiological Variables
As expected, the body weight of H rats was higher than C rats ( Table 2). The body weight of HN rats was not different from H rats, while the CN rats were heavier than C rats. Lean mass was not different between C and H rats. For CN and HN rats, the lean mass was higher than their respective controls. Bone mineral content was higher in H and HN rats compared to C and CN rats. Bone mineral density of H rats was higher than that of C rats. The bone mineral density of CN and HN rats was not different from the respective controls. Food intake was higher in C rats compared to H rats. CN rats had lower food intake than C rats. HN rats had similar food intake to H rats. Water intake was higher in H rats compared to C rats and further increased in CN and HN rats, but this increase with N. oceanica intervention was not associated with an increased energy intake; these changes may be caused by the increased salt or protein intake. Energy intake was highest in H rats compared to C rats. HN rats had similar energy intake as H rats. CN rats had the lowest energy intake (Table 2).
Whole body fat mass by DXA was higher in H rats compared to C rats. CN rats had similar whole-body fat mass as C rats, but HN rats had lower fat mass than H rats. Total abdominal fat was higher in H rats compared to C rats, and HN rats had less abdominal fat than H rats. Epididymal and omental fat pads were not different from their respective controls. Retroperitoneal fat was higher in H rats compared to C rats, while HN rats had less retroperitoneal fat compared to H rats. Values in CN rats were not different from C rats ( Table 2).
Plasma triglyceride concentrations were higher in H rats compared to C rats, whereas HN rats were similar to H rats, while CN rats were higher than C rats. Plasma non-esterified fatty acids were the same for C and H rats, but CN and HN rats were higher than their respective controls. Plasma total cholesterol concentrations were unchanged among all groups (Table 2). H rats had higher 120-min blood glucose concentrations and area under the curve compared to C rats. CN and HN rats were not different from their respective controls. H rats had higher 120-min blood glucose concentrations and area under the curve after insulin administration compared to C rats; CN rats were higher than C rats; and HN rats were higher than H rats (Table 2).
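For context, the area under the curve in such glucose and insulin tolerance tests is conventionally computed from the timed blood readings with the trapezoidal rule; the short sketch below illustrates that calculation with made-up time points and glucose values, not data from this study.

```python
# Trapezoidal area under the curve for a glucose tolerance test.
# Time points (min) and glucose values (mmol/L) below are illustrative only.
def auc_trapezoid(times_min, glucose_mmol_l):
    """Return AUC in mmol/L x min using the trapezoidal rule."""
    auc = 0.0
    for (t0, g0), (t1, g1) in zip(
        zip(times_min, glucose_mmol_l), zip(times_min[1:], glucose_mmol_l[1:])
    ):
        auc += 0.5 * (g0 + g1) * (t1 - t0)
    return auc

times = [0, 30, 60, 90, 120]
glucose = [4.5, 8.2, 7.1, 6.0, 5.3]
print(auc_trapezoid(times, glucose))  # area over the 120-min test
```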
After eight weeks, systolic blood pressures of H and HN rats were higher than those of C and CN rats. Systolic blood pressures in H rats were higher at 16 weeks than in C rats. CN and HN rats were not different from their respective controls. Left ventricular diastolic stiffness was higher in H rats compared to C rats. CN and HN rats were not different from their respective controls. Left ventricular weights with septum and right ventricular wet weights were unchanged in all groups.
Left ventricles from H rats showed increased infiltration of inflammatory cells and collagen deposition, whereas these changes were not seen in left ventricles from C rats. CN and HN rats were not different from their respective controls (Figure 2). Livers from H rats showed increased fat vacuole size and infiltration of inflammatory cells compared to livers from C rats, while HN rats had decreased fat vacuole size and fewer inflammatory cells compared to H rats (Figure 2). Plasma activities of alanine transaminase and aspartate transaminase were not different between all groups (Table 2).

Table 2 footnote: Values are presented as mean ± SEM, n = 10-12. Means in a row with unlike superscripts (a, b or c) differ, p < 0.05. C, rats fed with corn starch diet; CN, rats fed with corn starch diet + Nannochloropsis oceanica; H, rats fed with high-carbohydrate, high-fat diet; HN, rats fed with high-carbohydrate, high-fat diet + Nannochloropsis oceanica.
Gut Structure and Microbiota
Histology of ileum and colon did not show any structural abnormalities in the experimental groups demonstrated by normal crypt depth, villi length, goblet cells and lack of inflammatory cell infiltration ( Figure 2).
For gut microbiota characterisation, a total of 788,078 quality-filtered sequences were clustered into 1282 zOTUs; Good's coverage score of 99.69 ± 0.08% suggested an almost full recovery of bacterial communities. Shannon's diversity and richness indices were unchanged among the groups (Figure 3). Diet and N. oceanica affected the overall bacterial community structure.
The effects of diet and N. oceanica on the ratio of Firmicutes and Bacteroidetes (Supplementary Figure S1) bacterial communities at the family level (Supplementary Figure S2) and bacterial communities at the genus level (Supplementary Figure S3) are provided in the supplementary file. Detailed correlation analysis of gut microbiota with physiological parameters showed relationships between 12 physiological variables and gut microbiota in Supplementary Tables S6 and S7. The physiological variables most often related to changes in the gut microbiota were systolic blood pressure, liver wet weight and abdominal (retroperitoneal, epididymal and omental) fat pads (Table S6).
Discussion
The diet-induced changes in metabolic, cardiovascular and liver parameters in the rat model used in this project mimic the changes in human metabolic syndrome [16]. Interventions with seaweeds have been previously shown to reverse these changes [19][20][21]. This study shows that high-carbohydrate, high-fat diet-fed rats supplemented with the microalgae N. oceanica had higher lean mass and lower abdominal and liver fat than rats fed only the obesogenic diet. Further, the abundance of Oxyphotobacteria in the colon was changed. However, intervention with N. oceanica did not change cardiovascular parameters, lipid profile or glucose responses.
Microalgae are considered part of a healthy diet as they contain fatty acids, proteins, amino acids, pigments, vitamins and minerals [22]. Microalgae are a sustainable source of these compounds because they grow in a wide range of environments such as fresh, brackish and saline waters [23] and they do not compete with arable land or biodiverse landscapes [24]. Microalgal constituents are versatile and have potential applications in energy, pharmaceutical, cosmetics and food industries [25]. Nannochloropsis components such as whole biomass, pigments, long-chain PUFA, triglycerides, alkanes and alkenes have many biotechnological applications including production of biofuels [26], aquaculture, fish food, livestock feeds and wastewater treatment [27]. Because of these applications, Nannochloropsis grown for other uses could be diverted for the development of functional food products at minimal additional cost.
Nannochloropsis is nutritionally safe and can be used as a human health supplement [28]. Microalgae-supplemented food such as bread would address the general deficiency of omega-3 fatty acids and minerals, such as zinc, in the human population [2], although the change in colour may decrease consumer acceptance. Nannochloropsis can be added to food, such as bread [29] and pasta [30], to create highly nutritious functional foods. The addition of N. gaditana to bread changed the colour to green-yellow crust and crumb, suggesting an increased browning. The textural parameters of the bread such as hardness, chewiness and resilience were unchanged [29] whereas the appearance of pasta was minimally impacted with 10% replacement of wheat flour [30].
No studies have reported the effects of N. oceanica on changes in all components of metabolic syndrome using a single model, as in the current study. In rats, streptozotocin was used to produce acute pancreatic β-cell damage and induce hyperglycaemia [14]. Diabetic rats received N. oculata (10 and 20 mg/kg) for three weeks. N. oculata reduced serum concentrations of glucose, cholesterol, triglycerides and LDL and increased the serum concentrations of insulin and HDL-cholesterol. In another streptozotocin study, rats were fed with N. gaditana (10%) for two months [13]. N. gaditana supplementation decreased concentrations of glucose and HbA 1c and improved renal and hepatic functions while attenuating the oxidative stress and inflammation in diabetic rats. The marine-water microalga N. oculata and its extract minimised the pancreatic tissue damage and maintained the integrity of the genomic DNA [31]. N. oculata is a good source of omega-3 fatty acids, specifically EPA. Intervention with N. oculata suspension (10 8 viable cells/animal) for 14 days had no effect on body weight, which is similar to the current study [32]. Using the same model of metabolic syndrome as the current study, ALA, EPA and DHA [15] improved cardiovascular and hepatic parameters. However, the EPA dose in this previous study was 1300 mg/kg/day for 8 weeks, about five times higher than the EPA dose in the current study of around 260 mg/kg/day, also for 8 weeks. Further, the major four xanthophyll carotenoids in N. oceanica were present at 1940 mg/kg of the microalgal biomass which then gives a dose of approximately 4 mg/kg/day when mixed in the food. For comparison, a much higher dose of astaxanthin (200 mg/kg/day) given to Spontaneously Hypertensive Rats for 11 weeks reduced blood pressure [33]. These comparisons suggest that neither EPA nor the xanthophyll carotenoids are the major bioactive components of N. oceanica algal biomass. Further, this study tested N. oceanica without disrupting the cell structure, which is likely to further reduce the bioavailability of these components as ball-mill disruption enabled the protein and fatty acids to become bioavailable to mice [34]. It can be expected that the N. oceanica biomass, when used after processes such as ball milling [34], may improve the bioavailability of carotenoids and omega-3 fatty acids from the cell walls.
The gut microbiota plays an important role in health and disease [35]. Dietary interventions such as macro-and micro-algae can directly interact with the gut microbiota, leading to changes in physiological variables [35]. The search for microbial signatures of disease has led to the use of changes in the Firmicutes/Bacteroidetes ratio as a marker of obesity; however, use of this ratio may not be valid to determine health status because of lifestyle-associated variations in patients from a single population [36]. Our previous studies have shown interaction of polysaccharides from macroalgal interventions with the gut microbiota in improving metabolic and cardiovascular health [19][20][21]. As an example, our study on the macroalgae Caulerpa lentillifera showed correlations between gut microbiota and 15 physiological variables, especially oral glucose tolerance, liver weight and abdominal fat pads [21]. The current study extends this correlation to intervention with microalgae, suggesting that changes in gut microbiota are widely relevant in metabolic syndrome. Further, we have identified changes in Oxyphotobacteria with N. oceanica intervention.
Marine fatty fish such as salmon, mullet and mackerel are the main sources of EPA and DHA for human consumption [37]. However, due to the excessive and sometimes poorly regulated fishing industry, the depletion of worldwide fish stocks is straining the sustainability of production of omega-3 long-chain PUFA [38]. In contrast, microalga can be used for sustainable production of omega-3 PUFA [39] and so can be an important PUFA source for farmed fish [11,12]. Microalgae grow well in South-East Queensland, Australia [23]; hence, this may be a key location to provide good quality microalgae for Australian and international use. The biomass from Nannochloropsis species also contains high-value products such as other fatty acids, sterols and carotenoids with applications in food, cosmetic and pharmaceutical industries [40]. Defatted Nannochloropsis biomass is a good source of protein and carbohydrates which may have health benefits in addition to the increased EPA and carotenoids if the cell wall is broken [8]. Further, the dietary fibre from microalgal biomass could act as prebiotics to alter the gut microbiota leading to health benefits including reduced blood pressure, blood glucose, cholesterol, plasma triglycerides and LDL-cholesterol [41].
An advantage of this study was that the cell wall was not disrupted, and therefore, the effectiveness of microalgal components other than cell wall-bound EPA and carotenoids could be determined. A key feature of microalgae is the rigidity of the cell wall, which can limit the bioavailability of nutrients; hence, other studies have used several cell disruption methods such as mechanical, physical, chemical and enzymatic approaches [42] or solvent extraction [43]. Cell wall thickness in Nannochloropsis species varies from 63 to 119 nm due to the distinct genetic traits in each strain, with N. oceanica having one of the thickest cell walls [44]. Cell membrane disruption of N. oceanica may be necessary for optimal biological activity [45]. Health products including omega-3 fatty acids and vitamin D supplements can be obtained from Nannochloropsis using microwave, super-critical, ultrasound and enzyme-assisted extractions at industrial scales [46]. Our study shows that the biological activity of the biomass does not rely solely on EPA and carotenoids.
There may be a role for microalgal protein in providing a sustainable source of protein to augment diets that maintain weight loss. Typically, people regain weight after weight loss, with only diets with increased protein content having a beneficial effect in maintaining the reduced weight [47]. Animal protein consumption has been linked to abdominal adiposity and was generally detrimental to overall health in an adolescent population [48], whereas plant protein consumption was linked to better health. Therefore, microalgal protein may be a suitable alternative to animal proteins in maintaining a reduced body weight and health. Microalgal interventions may be useful as additives with other functional foods to increase the therapeutic effectiveness in metabolic syndrome.
The dose of 5% of the diet in rats corresponds to an intake of approximately 30 g per day in adult humans [49]. This is a realistic and commercially viable dose in humans. Higher doses would be unrealistic and commercially unviable, apart from decreasing compliance and affordability.
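One common way to arrive at an estimate of this kind is body-surface-area scaling of the dose per kilogram; the rough sketch below illustrates the arithmetic under assumed rat food intake, body weights and conversion factor. All of these numbers are illustrative assumptions rather than values reported in this study or in reference [49].

```python
# Rough human-equivalent-dose sketch using body-surface-area scaling.
# All numbers below are assumptions for illustration, not study data.
rat_food_intake_g_per_day = 25.0   # assumed daily food intake of an adult rat
rat_body_weight_kg = 0.45          # assumed adult Wistar rat body weight
human_body_weight_kg = 70.0        # assumed adult human body weight
bsa_conversion_factor = 6.2        # commonly used rat-to-human scaling factor

algae_fraction_of_diet = 0.05      # 5% of the diet, as in the study
rat_dose_g_per_kg = algae_fraction_of_diet * rat_food_intake_g_per_day / rat_body_weight_kg
human_dose_g_per_kg = rat_dose_g_per_kg / bsa_conversion_factor
human_dose_g_per_day = human_dose_g_per_kg * human_body_weight_kg
print(round(human_dose_g_per_day, 1))  # ~31 g/day with these assumptions
```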
Conclusions
N. oceanica intervention increased lean mass in rats, possibly due to the increased protein intake and decreased fat mass in obese rats, but this intervention did not change cardiovascular, liver and metabolic parameters or gut structure. As N. oceanica biomass can be produced sustainably in large quantities, it could be a source of essential amino acids and prebiotics that may improve health in chronic diseases such as metabolic syndrome. These are additional effects to the production of EPA and carotenoids by Nannochloropsis as a biorefinery. Further, the industrial usefulness of Nannochloropsis biomass for biofuels and animal feed supplements means that production of these amino acids and prebiotics can be undertaken using existing processes.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/nu13113991/s1, Table S1: PERMANOVAs based on Bray−Curtis similarity measure for square-root-transformed abundances of all rat faecal samples; Table S2: PERMANOVAs based on Euclidean distance matrix for physiological data of all rat faecal samples; Table S3: Summary of statistical tests on differential zOTU abundance; Table S4: Relative abundance of zOTUs affected by diet (ANOVA with p adjusted < 0.05) between C, CN, H and HN rats; Table S5: Relative abundance of zOTUs affected by treatment (ANOVA with p adjusted < 0.05) between C, CN, H and HN rats; Table S6: Correlation between bacterial community structure and physiological parameters (p < 0.05); Table S7: Taxonomic assignments of the zOTUs strongly correlated with physiological parameters; Figure S1: Effect of supplementation of diet (C or H) with Nannochloropsis oceanica on the ratio of Firmicutes and Bacteroidetes (F/B) abundances in rat faecal samples; Figure
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
|
v3-fos-license
|
2022-06-24T15:23:10.757Z
|
2022-06-22T00:00:00.000
|
249967394
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/22/13/4693/pdf?version=1655876089",
"pdf_hash": "35a4876446ba22cde88dd71117b0b4b3ad35d607",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2945",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "f2f2cd0a3c0f270cb9fd6ad2ff607434326a4f20",
"year": 2022
}
|
pes2o/s2orc
|
A Comparative Review of Thermocouple and Infrared Radiation Temperature Measurement Methods during the Machining of Metals
During the machining process, substantial thermal loads are generated due to tribological factors and plastic deformation. The increase in temperature during the cutting process can lead to accelerated tool wear, reducing the tool’s lifespan; the degradation of machining accuracy in the form of dimensional inaccuracies; and thermally induced defects affecting the metallurgical properties of the machined component. These effects can lead to a significant increase in operational costs and waste which deviate from the sustainability goals of Industry 4.0. Temperature is an important machining response; however, it is one of the most difficult factors to monitor, especially in high-speed machining applications such as drilling and milling, because of the high rotational speeds of the cutting tool and the aggressive machining environments. In this article, thermocouple and infrared radiation temperature measurement methods used by researchers to monitor temperature during turning, drilling and milling operations are reviewed. The major merits and limitations of each temperature measurement methodology are discussed and evaluated. Thermocouples offer a relatively inexpensive solution; however, they are prone to calibration drifts and their response times are insufficient to capture rapid temperature changes in high-speed operations. Fibre optic infrared thermometers have very fast response times; however, they can be relatively expensive and require a more robust implementation. It was found that no one temperature measurement methodology is ideal for all machining operations. The most suitable temperature measurement method can be selected by individual researchers based upon their experimental requirements using critical criteria, which include the expected temperature range, the sensor sensitivity to noise, responsiveness and cost.
Introduction
Metal machining creates substantial thermal and mechanical loads that can impact the mechanical behaviours of both tools and workpieces [1][2][3][4][5][6][7]. However, the machining industry focuses more on the mechanical loads of the operations. For example, one of the most common mechanical parameters is the cutting force [8,9]. Measuring the cutting force, tool geometry and analysing chip formation allows for the estimation of operating temperature and tool wear, amongst other factors. The accurate and reliable determination of the temperature, and temperature distribution, in the areas around the cutting toolworkpiece interface using cutting forces can be challenging [10].
Astakhov et al. [11] estimate that 15% of the value of all mechanical components manufactured worldwide is derived from machining operations. With the manufacturing industry moving away from the use of lubricants and coolants due to their effects on the environment, reliable temperature measurements during machining operations are becoming increasingly important [12]. In addition, machining performance in terms of cutting forces, surface roughness and tool life, amongst others, is directly linked with temperature [13]. Therefore, in-situ temperature sensing and monitoring aids in realising Industry 4.0 goals.
A review article by Childs et al. [14] reviewed the general temperature measurement methodologies available in terms of their accuracy, thermal disturbance and calibration. The temperature measurement methodologies described are not specific to one application, but some of the methodologies are applicable for machining operations. Their work also provides an insight into the background theory for each methodology. Davies et al. [15] reviewed the methodologies applicable for material removal processes, giving a historical overview of the advancements in temperature measurement for material removal processes. The authors provided a brief overview on the background theory for each methodology and proceeded to discuss major work performed in various material removal processes, highlighting the capabilities and challenges faced in each process. The study by Pimenov et al. [16] reviewed literature for tool-condition monitoring systems, including systems for temperature measurements, used with artificial-intelligence algorithms. The study by Pashnyov et al. [17] reviewed mathematical models for temperature distribution during grinding operations of metal-composite structures followed by analysis of metal composite systems on the basis of these models to identify the influence of lamination on the nature of temperature distribution. The work by Kuntoglu et al. [9] reviewed methodologies for indirect tool-condition monitoring systems for turning operations. A later study by Kuntoglu et al. [18] reviewed sensors and signal monitoring systems in machining processes, which briefly included temperature measurement systems. This is a valuable resource for signal conditioning and processing. Korkmaz et al. [13] reviewed literature on online detection methods and signal processing systems used in machining, including some literature for cutting temperature online monitoring systems. Zhao et al. [19] reviewed the temperature measurement methodologies employed by researchers investigating the effect of tool coatings on cutting temperature. Their work was focused on predictive modeling with coated cutting tools and only discussed temperature measurement methodologies used by researchers when experimentally validating their models. Therefore, the methodologies discussed in their work were limited and not representative of the temperature measurement methodologies found in the literature.
The focus of this article is to review the latest advancements in temperature measurement methodologies for metal turning, drilling and milling operations, with emphasis given to the methodologies' applicability in industrial or research environments. Our research methodology, therefore, focused on the following search terms: "temperature", "measurement", "machining", "turning", "milling", "drilling", "metal cutting", "monitoring", "thermocouple", "pyrometry", "infrared radiation thermometer" and "thermometry". These terms were used to search three online scientific citation indexing services (Web of Science, Scopus, and Google Scholar) in order to obtain the articles that make up this review. These databases were used to elucidate the key researchers that have published in the aforementioned categories of temperature measurements for metal machining. This allowed us to see the current leading edge in the field, by seeing where the research leaders were most active. The second strand to our method was to build an understanding of the influence of temperature on the workpiece and the tool, as well as how heat is generated and partitioned. Heat generation and partitioning enables researchers to understand heat distribution within the material in order to identify areas of interest to measure their temperature. These are described below and we have conjoined our understanding of heat generation and distribution with our review of the state-of-the-art methods. As our focus is on thermocouple-and radiative-temperature measurement methodologies in the turning, drilling and milling of metals, abrasive machining and other temperature measurement methodologies have largely been excluded from this work. Whilst other temperature measurement methodologies, such as thermal paints and PVD coating, have been found in the literature, they do not fit the scope of this article because they are not very accurate and are prone to errors. It is also worthwhile mentioning that the temperature measurement methodologies can also be applied for the machining of non-metallic materials. In this review, we have provided a comprehensive repository for information on the different thermocouple and radiative temperature measurement methodologies applied to metal machining operations; by whom the leading research is being conducted; the nature of the research; and our interpretation of how the research fits within the overall research field.
Temperature during Machining
The importance of the effect of cutting temperature has been recognised since 1907, when F.W. Taylor [20] was the first to demonstrate the relationship between tool life and cutting speed, and thus the cutting temperature. Ever since Taylor's work, there has been a stimulus to identify methods of measuring the cutting temperatures during machining. The manufacturing industry, especially the machining industry, has shown great interest in knowing and understanding how heat is generated and distributed during cutting processes [21,22]. Abukshim et al. [23] stated that the machining of metals is still not completely understood due to the highly non-linear nature of the process and the complex coupling between deformation and temperature fields.
During the machining of metals, there is a significant increase in temperature at the workpiece-tool interface. Depending on the loading parameters and the metal being machined (such as titanium, which has a low thermal conductivity [24,25]), the cutting temperatures at the tool-workpiece interface can reach in excess of 1100 °C [26], with the most common temperatures in general machining in the range of 500 °C to 1000 °C [22,27]. This interfacial temperature increase comes from friction and plastic deformation energies being transformed into heat [28][29][30]. Some of the influences of these elevated temperatures include [31][32][33][34][35][36][37]:
• Acceleration of tool wear and subsequent reduction in lifespan, therefore increasing operational costs.
• Thermal deformation of the workpiece, cutting tool and machine tool, leading to degradation of machining accuracy. This is mainly observed in the form of dimensional inaccuracies due to thermal distortion as well as expansion and contraction of the workpiece during and after machining.
• Destabilisation of the subsurface layers of the workpiece through phase transformation, residual stresses and other thermally induced defects that affect the metallurgical properties of the machined component. This can lead to the introduction of residual tensile stresses and micro cracks at surface and subsurface levels, as well as surface damage via oxidation and corrosion.
With the industry moving towards dry and cryogenic machining to reduce its carbon footprint [38], as well as for environmental reasons linked with the use of lubricant coolants [39,40], temperature monitoring at the workpiece-tool interface is becoming increasingly important [41]. A critical analysis of the ecological aspects of various machining conditions was performed by Krolczyk et al. [42]. Temperature is an important parameter in machining, if not the most important [31], and it has several influences on the machining outcome. The machining operation can be more effectively controlled by obtaining accurate and reliable temperature readings, resulting in a more efficient process. The aforementioned reasons signify the importance of temperature in the outcome of machining processes and indicate the need for further development in the field of temperature measurement to deepen our understanding of metal machining.
Heat Generation during Machining
It is necessary to have an in-depth understanding of the factors contributing to heat generation and distribution during machining operations for metals. This is an essential element in tool development and process optimisation. It is known that, during metal machining, significant quantities of heat are generated due to tribological factors and plastic-deformation energy transformations [28][29][30]. Trent and Wright [10] suggested that 99% of the work performed during machining is converted to heat. Therefore, it is assumed that nearly all of the energy due to tribological factors and plastic deformation, during chip or swarf formation, is converted to heat, which is observed as a temperature rise in the cutting zone [3].
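Taking this near-complete conversion of work into heat at face value, a first-order estimate of the heat generation rate can be formed from the cutting force and cutting speed; the sketch below assumes a 99% work-to-heat fraction and uses illustrative force and speed values rather than measured data.

```python
# First-order estimate of heat generation rate in the cutting zone,
# assuming ~99% of the mechanical work is converted to heat.
# Force and speed values in the example are illustrative only.
def heat_rate_watts(cutting_force_n: float, cutting_speed_m_per_min: float,
                    work_to_heat_fraction: float = 0.99) -> float:
    """Return estimated heat generation rate in watts (J/s)."""
    cutting_speed_m_per_s = cutting_speed_m_per_min / 60.0
    return work_to_heat_fraction * cutting_force_n * cutting_speed_m_per_s

# Example: 800 N cutting force at 120 m/min gives roughly 1.58 kW of heat.
print(heat_rate_watts(800.0, 120.0))
```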
There are three main regions where heat is generated in the cutting zone, as schematically shown in Figure 1.
• Region A is the primary shear, or deformation, zone. In this region, the workpiece material is subject to shearing and plastic deformation to form a chip. The majority of the energy released during plastic deformation is converted into heat. The heat generated in this region is transferred to the chip and the workpiece [10,43].
• Region B is the secondary deformation zone, which lies on the tool-chip interface. Heat is generated due to the deformation of the chip material and tribological factors on the tool rake face as the chip material overcomes both adhesive and sliding friction as it separates from the tool-chip interface [43,44]. This interface is where the maximum heat is encountered [22,45].
• Region C is the tertiary deformation zone, or tool-workpiece interface, where the tool flank moves along the newly formed workpiece surface with heat generation influenced by tribological factors. The geometry of the cutting tool is the main factor affecting how much heat is generated in this region. To reduce friction in this region, the cutting tool provides a clearance angle between the workpiece and the flank surface, typically ranging between 3° and 15° [46][47][48]. As heat generation due to friction in this region depends on the tool geometry, more heat is generated as the tool wears [49], which can significantly impact the surface quality of the workpiece [45].

Heat generation during metal machining depends on a combination of the physical and chemical properties of the workpiece and cutting-tool materials (including the cutting-tool coating), the machining parameters and the cutting-tool geometry. In general, the amount of heat generated, and hence the temperature, in the primary and secondary deformation zones is dependent on the material properties and the machining parameters [10,50]. In contrast, the amount of heat generated in the tertiary deformation zone is mainly dependent on the geometry of the tool flank [43,49].
Temperature Measurement Methods
Many research studies have been conducted to find a reliable way to monitor the cutting temperature during machining; however, none of the research solutions have been implemented by industry. The reason for this is mainly the difficulty of selecting an appropriate sensor for any given machining operation. The main criteria to be considered for a temperature measurement technique are [14,51,52]:
1. The expected temperature range to be monitored;
2. The sensor robustness to withstand the machining environment conditions;
3. The sensor response time to temperature changes;
4. The sensor's sensitivity to electrical noise;
5. Temperature field disturbances caused by the sensor;
6. Cost.
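One way to make such a selection systematic is a simple weighted scoring of candidate sensors against these criteria; the sketch below is purely illustrative — the weights and the 1-5 scores are assumptions for demonstration, not recommendations drawn from the cited sources.

```python
# Illustrative weighted-score comparison of candidate temperature sensors
# against the selection criteria listed above. Weights and 1-5 scores are
# assumptions for demonstration, not values from the literature.
criteria_weights = {
    "temperature_range": 0.25,
    "robustness": 0.20,
    "response_time": 0.20,
    "noise_immunity": 0.15,
    "field_disturbance": 0.10,
    "cost": 0.10,
}

candidates = {
    "embedded_thermocouple": {"temperature_range": 4, "robustness": 4,
                              "response_time": 2, "noise_immunity": 3,
                              "field_disturbance": 3, "cost": 5},
    "fibre_optic_ir": {"temperature_range": 4, "robustness": 3,
                       "response_time": 5, "noise_immunity": 4,
                       "field_disturbance": 5, "cost": 2},
}

def weighted_score(scores):
    """Return the weighted sum of criterion scores for one candidate."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in candidates.items():
    print(name, round(weighted_score(scores), 2))
```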
The temperature monitoring techniques found in the literature fall into two main categories: contact (or conductive) and non-contact (or radiative) techniques, as summarised in Figure 2 [31,53]. Detailed summaries of the background theory of these methods and their uncertainties were provided by Childs [51] and Nicholas and White [54], with Heeley et al. [55] providing an uncertainty analysis for their custom fibre-optic infrared thermometer.
Contact Temperature Measurement Methods
Contact, or conductive, techniques measure the temperature gradient by monitoring the heat transfer between two points that are in direct contact with the material to be measured [56]. The thermal paint approach is one of the simplest and more economic temperature measurement techniques available [4]. It requires the material to be coated with the paint, which changes colour depending on the temperature. PVD (physical vapour deposition) coating techniques help determine the internal temperature of the material by melting once the material temperature exceeds a threshold. Thermochemical powders work in a similar way to PVD coating, relying on visual changes resulting from the melting of the powders. One of the main limitations of the techniques mentioned so far is that they are not suitable for more aggressive machining operations, where cutting coolants can remove the coatings from both tool and workpiece.
Metallographic methods utilise a more theoretical approach that correlates metallurgical changes in the workpiece, or tool, to the temperature to which it was exposed, based on existing reference models of the microstructure-temperature relation [31]. However, this technique is limited to certain materials and is unsuitable for common cutting-tool materials, as well as being predisposed to inaccuracies due to difficulties in acquiring accurate theoretical models. The techniques mentioned so far have several limitations, rendering them unsuitable for temperature monitoring during machining applications.
The use of thermocouple sensors is the most common conductive temperature measurement technique [57,58]. The principal reasons for thermocouple use are that they are durable, can be relatively inexpensive, and are able to operate over a wide range of temperatures [14,15]. There are several ways to implement thermocouples to obtain temperature measurements during machining, as shown in Figure 2. They can be embedded in the workpiece or tool material for single-point measurements, or positioned at various points to obtain a temperature distribution [31]. A limitation common among most thermocouple types is their relatively slow response time, making it difficult to measure rapid changes at high temperatures, especially in high-speed milling applications. Additional limitations are that thermocouples are known to drift over time due to homogeneity changes at the thermocouple junction at elevated temperatures, as well as hysteresis effects under thermal cycling conditions, such as those observed during milling [59,60]. In a study by Pavlasek et al. [59], it was found that thermocouples drifted at a rate of around 0.22 °C per hour. These limitations result in a reduction in accuracy with each measurement.
Tool-Workpiece Thermocouples
The tool-workpiece thermocouple method can be used when the cutting tool and workpiece are both electrically conductive and made from dissimilar materials, turning the system itself into a thermocouple, as shown in Figure 3 [61]. This technique is based on the thermocouple principle, whereby two dissimilar materials give rise to an electromotive force (EMF) due to a temperature difference between hot and cold junctions [3,62]. The EMF can be measured and correlated with the temperature of the tool-chip interface. Kitagawa et al. [63] used the tool-workpiece thermocouple technique to investigate the differences in tool life between dry end milling and turning of a titanium alloy under various cutting conditions using a K10 tool. A greater rate of tool-life improvement with increasing cutting speed was observed in end milling than in turning. An increase in the mean rake temperature with increasing cutting speed was also observed.
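In practice, the measured EMF only becomes a temperature once the tool-workpiece pair has been calibrated against a reference, as in the studies described below. The following minimal sketch illustrates that workflow; the calibration pairs, polynomial order and logged EMF values are purely illustrative assumptions and are not taken from any of the cited studies.

```python
import numpy as np

# Illustrative calibration pairs: EMF (mV) read from the tool-workpiece circuit
# while the hot junction is held at temperatures known from a reference
# thermocouple. These values are invented for demonstration; real pairs come
# from a calibration rig such as a TIG-heated spot or a graphite heating block.
calibration_emf_mV = np.array([0.8, 2.1, 3.6, 5.2, 6.9, 8.7])
calibration_temp_C = np.array([100, 200, 300, 400, 500, 600])

# Fit a low-order polynomial T = f(EMF); tool-workpiece pairs are rarely
# perfectly linear, so a 2nd-order fit is used here.
coeffs = np.polyfit(calibration_emf_mV, calibration_temp_C, deg=2)
emf_to_temp = np.poly1d(coeffs)

# Convert an EMF trace logged during cutting into a temperature trace.
cutting_emf_mV = np.array([4.4, 4.9, 5.3, 5.6])  # example logged signal
cutting_temp_C = emf_to_temp(cutting_emf_mV)
print(np.round(cutting_temp_C, 1))
```

One limitation noted throughout this section follows directly from this sketch: the fitted curve is only valid for the specific tool and workpiece material pair used during calibration.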
Grzesik [64] used this technique to investigate the influence of tool coatings on cutting temperature during the turning of medium-carbon steel and austenitic stainless steel. To obtain reference temperature measurements, a type-K thermocouple was embedded into the workpiece at a distance of approximately 0.1 mm from the workpiece surface. To calibrate the thermocouples, a tungsten inert gas (TIG) welding machine was used to heat the workpiece surface within a small area adjacent to the cutting interface, with the measured EMF related to the known temperature. The heated area served as the hot junction, whereas an ice bath was used as the cold junction. This process was repeated for both workpiece materials with differently coated cutting tools. The main limitation of this approach is that an ice-bath cold junction is not practical in an industrial setting. Furthermore, any change in the experimental configuration requires recalibration.
Recognising this limitation, Abhang and Hameedullah [62] improved on this technique by setting the tool-workpiece interface as the hot junction and using remote sections of the tool and workpiece, kept at a constant reference temperature, as the cold junction. In doing so, a mercury bath was used at the cold junction, which carries associated health risks.
To avoid the use of liquids, and their inherent limitations, as cold junctions, Santos Jr et al. [65] considered seven different junctions between various components of their experimental setup as cold junctions. The cold junctions were assumed to be at room temperature and the hot junction was the interface between a K15 cemented carbide tool and 1350-O and 7075-T6 aluminium alloys as workpiece materials. The calibration process took place in a furnace alongside a calibrated K-type thermocouple as a reference. This experimental setup was successful at monitoring the surface-temperature response of the two aluminium alloys with respect to cutting speed, depth of cut and feed rate.
A simpler approach was utilised by Ghodam [66,67], where the workpiece and tool junctions were insulated and connected directly to a multi-meter to measure the induced EMF, corresponding to the average temperature of the tool tip. The tool-work thermocouple was calibrated using a heating rod and a K-type thermocouple as a reference. This study aimed to evaluate the machining performance of coated and uncoated cutting tools whilst turning an EN8 steel alloy.
Mia and Dhar [68] used a tool-workpiece thermocouple setup to obtain temperature measurements to validate their predictive neural-network model for turning operations of AISI 1060 steel using a tungsten carbide tool. The tool-workpiece thermocouple was calibrated by heating the tool-workpiece junction in a graphite heating block and validating the temperature using another standard thermocouple. The same thermocouple setup was used by Mia et al. [69] to monitor the cutting temperature during the turning of AISI 1060 steel to investigate the sustainability of various cooling and lubricating conditions; the MQL system was found to be the most sustainable of the conditions tested. Mia et al. [70] used the same tool-workpiece approach to measure the cutting temperature during cryogenic-assisted turning of Ti-6Al-4V. This study aimed to assess the role of loading parameters on temperature to allow for parameter optimisation and to assess the life cycle of cryogenic-assisted turning. It was found that the cutting temperature was minimised with higher cutting speeds and lower feed rates, and that cryogenic turning of Ti-6Al-4V was a more sustainable alternative to dry machining. Future work from these researchers aims to perform life-cycle assessments for additional cooling-lubricating conditions and workpiece materials.
Section Findings: The tool-workpiece thermocouple technique is relatively simple to implement in laboratory settings; however, it is not practical for industrial production environments. Early implementations of this technique used hazardous liquids at their cold junctions, though more recent implementations have demonstrated that other areas of the setup can serve as cold junctions. The limitations of the tool-workpiece thermocouple technique, combining those identified in the comprehensive study by Stephenson [61] with those found in the literature, are summarised below:
• Limited to electrically conductive tool and workpiece materials, which must also be electrically isolated from the machine tool to achieve an accurate signal.
• Electrically isolating the tool could make the machine tool dynamically unstable, making chatter generation more probable during heavy cuts at higher speeds.
• Electrically conducting lubricants and cooling fluids cannot be used with this technique.
• Limited to non-indexable tools, as indexable tools could generate secondary EMF signals between the cutting insert and the tool holder, resulting in measurement errors.
• Calibration is required for every combination of tool and workpiece materials, which is inconvenient for practical applications.
• The temperature recorded is an average over the entire contact area.
• The thermocouple circuit is calibrated under static conditions.
• Oxide layers tend to form on carbide tools during machining, which affect the temperature readings as they are not present during calibration.
Embedded Thermocouples
The embedded thermocouple method is perhaps the approach most widely used by researchers for temperature measurement. It is also used to obtain reference values alongside other temperature measurement methods [64,65]. The approach has two variations: in the tool-embedded variation the thermocouple is inserted into a hole drilled in the tool, and in the workpiece-embedded variation into a hole drilled in the workpiece, in each case at a precise distance from the cutting edge, as illustrated in Figure 4. The hole is typically filled with a thermally conductive material, such as ceramic cement, which displaces the air and prevents heat losses inside the hole. Zeilmann and Weingaertner [71] used the workpiece-embedded approach to measure the temperature during minimum quantity lubrication (MQL) drilling of Ti-6Al-4V at cutting speeds between 10 and 40 m/min. K-type thermocouples were embedded into holes in the workpiece at a distance of 0.2 mm from the edge of the hole to be drilled. Maximum temperatures of around 200°C, 280°C and 320°C were reported at cutting speeds of 15, 30 and 40 m/min, respectively. These findings were compared with those of external MQL drilling, and the temperatures obtained with internal lubrication were found to be 50% lower than those obtained with external lubrication.
Le Coz et al. [72] used the tool-embedded thermocouple approach to measure the cutting-tool temperature during internal MQL drilling of Ti-6Al-4V. A K-type thermocouple was embedded inside 20 mm deep holes close to the corner of 10 mm diameter drills with internal lubricant feeds. The thermocouple signals were amplified by a transmitter integrated inside the tool holder, and a radio-frequency antenna positioned close to the tool holder allowed high-speed data acquisition. The results obtained by Le Coz et al. [72], ranging from 590°C to 640°C, were much higher than those obtained by Zeilmann and Weingaertner [71]. The explanation was that the workpiece-embedded approach measured temperatures at a distance of 0.2 mm from the wall of the drilled hole, whereas the tool-embedded approach was able to record the temperature at the tool-workpiece interface.
Bagci and Ozcelik [73,74] employed the tool-embedded thermocouple approach in a novel way by utilising the existing coolant hole of TiAlN-coated carbide drills to measure temperature during the drilling of AISI 1040 steel and Al 7075-T651. A PFA Teflon-coated type-K thermocouple was attached, in an unspecified manner, inside the coolant hole of the drill. The experimental setup kept the drill stationary and rotated the workpiece to simplify data acquisition. The first study investigated the effect of drill depth, spindle speed and feed rate on the drill-bit temperature during step and continuous drilling; the second used the same methodology to validate their finite-element model. Ay and Yang [75] used the tool-embedded variation to analyse the temperature variations of the tool and workpiece during orthogonal turning of various materials. A triangular carbide tool embedded with nine type-K thermocouples, three at each surface of the cutting edge, was used to machine copper, cast iron, aluminium 6061 and AISI 1045 steel. They observed oscillations in temperature near the cutting edge that were more pronounced for ductile materials than for hard-to-machine materials.
Hamzawy et al. [76] used the workpiece-embedded thermocouple approach to monitor the workpiece temperature during the friction drilling of 4 mm thick Al-6082 and Al-7075-T6 sheets under various loading parameters. Four type-K thermocouples were positioned at distances of 10, 12, 20 and 24 mm from the centre of the drilled hole to observe the heat distribution within the workpiece materials. The study found that the temperature increased with increasing rotational speed and tool cone angle, and decreased with increasing feed rate.
The workpiece-embedded thermocouple approach was also employed by Uçak and Çiçek [77] to investigate the effect of cutting conditions on temperature and hole quality during the drilling of Inconel 718 using solid carbide drills. Five thermocouples were positioned 0.1 mm from the wall of the drilled hole, equally spaced at depths of 1.5 mm to 13.5 mm. The experiments were repeated under dry, cryogenic-cooling and wet conditions. Cryogenic drilling had the greatest impact on reducing the temperature; however, it also greatly reduced tool life. Dry drilling led to the highest cutting temperatures as well as the greatest tool-wear rate. Wet conditions resulted in better surface roughness, better tool life and more stable machining compared to both cryogenic and dry conditions.
O'Sullivan and Cotterell [30,34] utilised the workpiece-embedded thermocouple technique by attaching two type-K thermocouples inside an aluminium 6082-T6 tube. The thermocouples were positioned 4 mm away from the new surface created in a continuous turning operation, and were placed 20 mm and 80 mm from the tube end. A mercury rotating slip ring was used to enable the thermocouple connection, as the thermocouples rotated with the workpiece. An increase in tool wear was observed with increasing temperatures at the new surface. This approach is hazardous and unsuitable for industrial applications because of the health risks associated with mercury.
Ren et al. [78] used a slightly different approach, where a thin type-K thermocouple was inserted in a groove between the shim and the tool. Their experiments aimed to evaluate the average cutting temperature developed at the interface between the shim and the PCBN cutting-tool insert during turning operations of titanium alloy and chromium hard facings. The obtained thermocouple data were used to inform a finite-element (FE) model to reverse calculate the temperature at the tool-chip interface.
Il et al. [79] utilised the workpiece-embedded thermocouple approach to measure the subsurface temperature on an Aluminium 2024-T3 alloy during milling operations with a four flute DLC-coated end-mill of 19.05 mm in diameter. The aim of this study was to investigate the influence of feed per tooth, spindle speed, depth of cut and radial width of cut on the subsurface temperature of the workpiece. The type-K thermocouples were placed inside a blind hole which was filled with thermal cement to secure the thermocouple position and to ensure that there was heat transfer from the workpiece to the thermocouple. From this study, it was found that the optimal milling conditions were achieved for a low depth of cut, high cutting speed and a feed per tooth of at least 0.127 mm.
A similar implementation was used by Akhil et al. [4] and Gosai et al. [80]; in both studies, the thermocouple measurements were used for process optimisation. Krishna and Reddy [81] employed the tool-embedded variation to measure the temperature at the tool-chip interface using a K-20 carbide tool whilst performing orthogonal turning of aluminium 6061-based metal-matrix composites. The thermocouple data were correlated with an FE model, with an observed variation of 1.25%, and it was concluded that the maximum temperature occurred at the tool-chip interface. A similar methodology was employed by Kus et al. [22], where tool-embedded thermocouple measurements were used to inform an FE model.
A more advanced implementation of the tool-embedded thermocouple was used by Kryzhanivskyy et al. [82,83]. Their experiments included eight thermocouples embedded at different locations within the cutting tool to inform their FE model with heat flux as a parameter.
Section Findings: The embedded-thermocouple technique is widely used for research in laboratory settings due to its relatively low cost and simple implementation. However, more work on improving its implementation may be required before it is practical for industrial production environments. The workpiece-embedded variation can be used where knowledge of the heat distribution within the workpiece material is required. This requires several precise holes to be drilled into the material to accommodate the thermocouples, which can lead to a significant increase in system costs. The technique is useful for obtaining measurements to inform FE models for improved temperature predictions.
The tool-embedded thermocouple technique only requires one hole to be drilled, into the tool. This is more desirable, as no additional machining of the workpiece is required, and it could therefore be used in industrial environments. Embedded thermocouples provide more accurate measurements than other thermocouple techniques, especially in continuous turning operations, whereas their implementation in milling operations is not as straightforward. The limitations of the embedded-thermocouple technique include the following:
• The drilling of a large number of holes may lead to inaccurate results as a consequence of uneven temperature distribution.
• Surface temperature cannot be measured directly with the embedded-thermocouple technique; it must be extrapolated from the temperature measured by the thermocouple deeper within the metal (a simple extrapolation sketch is given after this list).
• There can be a large increase in cost associated with the difficulty of drilling holes in certain hard-to-machine materials, especially when employing the workpiece-embedded variation.
• The thermocouple response time might not be sufficient to measure sudden temperature changes in high-speed milling operations due to the short contact time of the tool with the workpiece.
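To make the extrapolation point above concrete, the following minimal sketch estimates the surface temperature from an embedded reading, assuming steady, one-dimensional conduction between the cut surface and the thermocouple tip. The depth, heat flux and thermal conductivity values are illustrative assumptions, not values taken from any of the studies cited above.

```python
def extrapolate_surface_temp(t_measured_C, depth_m, heat_flux_W_m2, conductivity_W_mK):
    """Estimate the surface temperature from an embedded-thermocouple reading.

    Assumes steady, one-dimensional conduction between the cut surface and the
    thermocouple tip, so the temperature drop across the depth is q*d/k.
    Transient effects, which dominate in interrupted cutting, are ignored.
    """
    return t_measured_C + heat_flux_W_m2 * depth_m / conductivity_W_mK


# Illustrative numbers: a thermocouple 0.5 mm below the surface of a steel
# workpiece (k ~ 45 W/m.K) with an assumed local heat flux of 2 MW/m^2.
t_surface = extrapolate_surface_temp(
    t_measured_C=380.0, depth_m=0.5e-3, heat_flux_W_m2=2.0e6, conductivity_W_mK=45.0
)
print(f"Estimated surface temperature: {t_surface:.0f} C")  # ~402 C
```

In practice the local heat flux is rarely known directly, which is why researchers typically pair embedded measurements with an FE or inverse heat-conduction model rather than a closed-form correction like this one.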
Single Wire Thermocouples
The single-wire thermocouple technique, also referred to as a semi-artificial thermocouple, is a variation of the workpiece-embedded thermocouple technique. It requires an insulated thermocouple wire, typically a constantan wire, to be fitted inside the workpiece, with the temperature measured as the wire is cut together with the workpiece material, as illustrated in Figure 5. The workpiece must be sliced in two across the line of cutting so that the wire can be fitted between the two halves. This technique was developed by Black et al. [84] to measure temperature distributions during both wet and dry grinding operations, and has since been demonstrated in other machining operations.
This technique was used by Dewes et al. [85] to measure the tool-workpiece temperature during dry milling using a ball-nose tool at angles of 0° and 60° relative to the workpiece; higher temperatures were observed when machining at 60° than at 0°. Sun et al. [86] used the technique to measure the temperature of a titanium alloy (Ti-6Al-4V) as it was machined. It was assumed that the highest temperature recorded corresponded to the point when the cutting tool first made contact with the workpiece; the recorded temperatures could therefore be correlated with specific instants during the milling process to obtain the temperature at the tool-chip interface as well as the workpiece temperature.
In another study, Baohai et al. [87] obtained temperature measurements to verify their temperature FE model during milling operations of Inconel 718. Both the experimentally obtained measurements and the data obtained from the FE model followed the same trends. However, the experimental measurements appeared to be lower than the equivalent data from the FE model because of experimental constraints not accounted for in the FE model.
Section Findings: This technique is suitable for experimental work because it can aid in validating FE models for temperature prediction. However, because the workpiece must be sliced, it is not suitable for industrial applications. The single-wire thermocouple technique is subject to the following limitations:
• The thermocouple wire needs to be replaced after each cutting operation, which can lead to a significant increase in cost.
• There are additional costs in the manufacturing process because the thermocouple system must be recalibrated each time the wire is replaced, as it is not a standard thermocouple.
• Additional machining is required on the workpiece to fit the thermocouple wire, which has a detrimental impact on the structural integrity of the workpiece and greatly increases machining-associated costs.
• The maximum temperature at the tool-chip interface is not always recorded, owing to experimental errors introduced by variations in detecting the tool position along the cutting edge. Overcoming this limitation requires additional replications, which increase the overall cost.
• Not suitable for turning operations as a result of the additional complexity of fitting the sensor electronics to a workpiece in constant rotation.
• Drilling a hole into the cutting tool could affect its structural integrity.
Radiative Temperature Measurement Methods
Non-contact, or radiative, measurement techniques interpret and measure the thermal energy, in the form of infrared (IR) radiation, emitted by an object of interest to determine its thermodynamic temperature based on the wavelength of the emitted radiation [15,88]. This often allows for non-intrusive temperature measurements to be taken from a distance [56]. Thermal, or infrared, cameras are widely used in milling applications [30,34], as they can record the surface temperature gradient over a large area of the workpiece and tool in the form of live thermal images.
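As a rough illustration of how a radiative reading becomes a temperature, the sketch below inverts Planck's law for a grey surface observed at a single wavelength, and then shows how an incorrect emissivity setting biases the result, a limitation discussed for IR cameras later in this section. The wavelength, temperature and emissivity values are illustrative assumptions only.

```python
import numpy as np

C1 = 2 * 6.62607015e-34 * 2.99792458e8 ** 2          # 2*h*c^2 (W m^2 sr^-1)
C2 = 6.62607015e-34 * 2.99792458e8 / 1.380649e-23    # h*c/k (m K)

def planck_radiance(wavelength_m, temp_K):
    """Black-body spectral radiance at a single wavelength (Planck's law)."""
    return C1 / wavelength_m**5 / (np.exp(C2 / (wavelength_m * temp_K)) - 1.0)

def radiance_to_temp(measured_radiance, wavelength_m, emissivity):
    """Invert Planck's law for a grey surface observed at one wavelength."""
    return C2 / (wavelength_m * np.log(1.0 + emissivity * C1 / (wavelength_m**5 * measured_radiance)))

# Example: a surface at 600 C with a true emissivity of 0.60, observed at 4 um.
wavelength, true_temp_K, true_eps = 4e-6, 873.15, 0.60
measured = true_eps * planck_radiance(wavelength, true_temp_K)

# A correct emissivity setting recovers the true temperature...
print(radiance_to_temp(measured, wavelength, 0.60) - 273.15)   # ~600 C

# ...whereas an emissivity setting that is too low (e.g. 0.18) makes the
# instrument report a much higher temperature for the same radiance.
print(radiance_to_temp(measured, wavelength, 0.18) - 273.15)
```

This sensitivity is why the constant-emissivity assumptions noted later in this section can introduce significant errors when the surface oxidises or is wetted by coolant.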
Infrared thermometers, often referred to in the literature as pyrometers, operate on the same principle as thermal cameras but record only a single temperature measurement over their field of view. Infrared thermometers (IRTs) have two basic configurations. The first consists of focusing optics and a detector, with the IRT recording the average temperature of the surface within its field of view; in this configuration, IRTs and thermal cameras require a direct line of sight to the workpiece or tool surface being measured. The second configuration includes an optical fibre to transmit the infrared radiation emitted from a source to the detector. Instruments with this configuration are usually referred to as fibre-optic infrared thermometers (FO IRTs). FO IRTs enable the processing electronics to be positioned away from aggressive environments, such as those encountered during machining. FO IRTs can be embedded, as is the case with tool- and workpiece-embedded thermocouples, or used in a non-intrusive manner, operating in a similar way to an IRT.
Infrared Cameras
The use of infrared cameras for determining the temperature distribution during machining operations was pioneered by Boothroyd [89,90] in 1961, who used a photographic plate requiring an exposure time of 15 s to obtain an image. Thermal-imaging technology has since been transformed by advances in digital camera sensors, and thermal cameras are now capable of providing live thermal images of the process at very high frame rates. For an IR camera to obtain an area measurement, it needs a direct line of sight to the area of interest, as illustrated in Figure 6. Danish et al. [91] used a FLIR T640 thermal camera, with a set emissivity value of 0.18, to investigate the temperature at the machined surface during dry and cryogenic turning of an AZ31 magnesium alloy. During calibration, a sample was heated on a hot plate from 20°C to 200°C and the measurements were validated using a thermocouple attached to the workpiece in the area observed by the camera. It is important to note that, in this study, a single emissivity value was set for both dry and cryogenic conditions, which might not reflect the real emissivity under each condition. The findings were used to validate their finite-element model with a maximum percentage error of 7%.
Satur et al. [92] used a 240 × 180 pixel Testo infrared camera to investigate the effects of MQL and dry environments on cutting temperature, tool wear and power consumption during the end milling of AISI 1040 steel. The thermal camera was positioned approximately 50 cm from the cutting zone and only the peak temperature values, pixel by pixel, from the interface were considered. Cutting temperature, tool wear and power consumption all showed better results under MQL than under dry conditions.
In the study by Gupta et al. [93], a Fluke thermal camera was used to measure the temperature at the primary shear zone with the purpose of experimentally validating their predictive model for the cryogenic-assisted turning of AA2024-T351. The main challenge during the experimental validation was that the camera observed the back of the newly formed chip, whereas the model calculated the temperature at the tool-chip interface. To overcome this, the initial experiments were repeated five times to calibrate the results against the modelling data, enabling the interfacial temperature to be extrapolated from measurements of the back surface of the chip.
Dewes et al. [85] used an IR camera sensitive to the spectral range of 8-12 µm alongside the single-wire thermocouple technique to measure the tool-chip temperature in dry milling. Lower temperatures were recorded with the IR camera than with the single-wire thermocouple, which was explained by variability in emissivity as well as the fact that the camera observed the back side of the formed chip. An IR camera was employed by O'Sullivan et al. [30,34] to determine the temperature of the machined surface as a means of verifying their embedded-thermocouple results. Decreased surface temperature was observed at higher cutting speeds, and increased tool-flank wear led to higher surface temperatures. Young [94] measured the temperature of the back of the chip, as well as the interfacial temperature, to investigate the effect of tool wear on temperature during the orthogonal cutting of AISI 1045 steel. Similar implementations were used by Arrazola et al. [95], Thakare and Nordgren [96], Liu et al. [97] and Masoudi et al. [98] to measure tool temperatures and investigate the effect of process parameters and tool wear on the cutting temperature during orthogonal cutting of various metals. In the latter study, the factors affecting the accuracy of the IR-camera measurements were also evaluated, indicating the need for more precise emissivity calibrations. Kryzhanivskyy et al. [82], Monica et al. [99], Jafarian et al. [100] and Liao et al. [101] used IR-camera measurements to investigate the effect of cutting temperature on the surface microstructure of the workpiece and used their findings to inform their predictive FE models.
Yang et al. [102] investigated the effect of different pure-iron grain sizes on the cutting temperature. Bjerke et al. [103] employed the IR-camera technique to monitor the tool temperature distribution during an investigation into the influence of oxygen on the degradation of tool coatings. Hao et al. [104] investigated the effect of a TiAlN-coated tool on the cutting temperature during the turning of H13 hardened steel. A reduction in cutting temperature was observed when machining with the coated tool compared to that of the uncoated tool. Menon and Madhavan [105] and Heigel et al. [37] used transparent yttrium aluminum garnet (YAG) cutting tools and a mirror alongside an IR camera to monitor the temperature at the tool-chip interface during the orthogonal cutting of a Ti-6Al-4V titanium alloy.
Solter et al. [12] used an IR camera to obtain spatial temperature measurements of the chip, tool, workpiece and their interfaces to inform a predictive heat-partitioning FE model. Saez-de-Buruaga et al. [106] used an IR camera to measure tool-side temperatures in order to calculate the tool-chip interface temperature on the tool. The aim of this study was to determine the effect of the workpiece material, various ferrite-pearlite steels, on the tool-chip temperature, so as to optimise the machining process for each alloy. Saleem et al. [107] used an IR camera to obtain cutting temperatures during face milling of Inconel 625, aiming to evaluate and quantify tool life and workpiece surface integrity under various cutting parameters. A micro-thermal camera was utilised by Armendia et al. [108] to record the temperature distribution during the continuous and interrupted cutting of a Ti-6Al-4V titanium alloy and AISI 4140 steel using coated-carbide milling inserts.
Section Findings: Thermal cameras have evolved to be capable of high-speed thermal imaging, enabling researchers to capture rapidly occurring temperature changes on the surfaces of the tool, workpiece and chips during machining operations. The IR-camera approach provides a non-destructive means of obtaining and analysing the temperature-distribution map of the area of interest, without affecting how heat is distributed within the materials. Furthermore, it can be used repeatedly without creating any mechanical wear.
Overall, IR cameras can be an effective method for temperature measurement in machining operations. However, due to the limitations of this method, careful consideration must be given to increasing the accuracy and validity of the temperature measurements. The limitations of IR cameras include the following:
• High-speed IR cameras are an expensive investment.
• They are limited to surface temperature monitoring.
• Only areas with a direct line of sight can be measured. In milling operations where information about the tool, or tool-chip interface, temperature is required, the camera's line of sight may become obscured by swarf breaking off from the workpiece, impeding measurement accuracy.
• The cameras must be appropriately calibrated using approximate black-body calibration sources and by defining emissivity coefficients for a range of temperatures, as the measurements depend on the emissivity of the material [109,110]. In the literature reviewed, most researchers assumed the material emissivity to be a constant value, which introduced uncertainties and errors into their measurements. In reality, the emissivity of a material can change during machining due to topological changes as well as the formation of oxide layers.
• They are unsuitable for reliable and accurate measurements in machining operations where lubricants or coolants are used, due to the resulting changes in emissivity.
Infrared Thermometers
Infrared thermometers (IRTs) operate on the same principles as IR cameras. The main difference is that IRTs take single-point measurements, averaging the temperature of the observed area, instead of plotting a temperature map. Lenses focus the incoming radiation from within the IRT's field of view onto a detector, where the average temperature of the observed area is measured. IRTs are often fitted with a laser as a guide to locate the area of interest precisely.
Muller-Hummel and Lahres [111] used an IRT to measure the temperature at two different positions of the tool-chip interface, at the tool's flank and at the tool's face, during turning operations. Two different modifications were made to the tool: the first required a straight borehole to be made from the edge of the tool to its side, as illustrated in Figure 7a; the second required a right-angled borehole with mirrors to reflect the emitted radiation to the IRT, as illustrated in Figure 7b. A CVD diamond window was integrated onto the cutting tool to prevent the holes from becoming blocked and to enable measurements at the tool-chip interface. In addition, this technique allows single-point temperature measurements at different depths within the material, provided the right focusing optics are selected for an appropriately sized hole. This approach may add cost to the manufactured parts, as the cutting tool and tool holder require additional machining and treatments to enable tool-chip temperature measurements.
Ng et al. [44] positioned an IRT to measure the surface temperature at the back of the chip during turning operations. Similarly, Kus et al. [22], Motorcu et al. [112], Rezende et al. [113] and Kuntoglu et al. [114] positioned their IRTs to measure the temperature at the tool-chip interface during orthogonal turning operations.
Ming et al. [115] used an IRT to monitor the workpiece surface temperature during high-speed milling. Their IRT measurements were used to inform a mathematical three-dimensional heat-conduction model. Longbottom et al. [116] used a mounting bracket to secure the IRT onto the machine quill. The IRT was pointed to measure the temperature of the new workpiece surface and the measurements were compared to those from a mathematical predictive model. Their approach was used to determine the heat partition ratio during milling operations.
Section Findings: IRTs are capable of accurately measuring temperature with very fast response times without requiring contact. These capabilities make IRTs suitable for high-speed machining applications, especially interrupted machining processes, where the tool temperature can fluctuate at a rate proportional to the cutting speed. IRTs share some of the limitations of IR cameras, as they operate on the same principles. Their limitations include:
• A direct line of sight to the point of interest is required.
• They cannot measure the temperature gradient, as they are only capable of single-point measurements.
• The detector can be sensitive to ambient temperature.
• Infrared thermometers need to be calibrated with an emissivity value to give a temperature measurement. However, the emissivity of the tool or workpiece can vary during the machining process due to the use of coolants and lubricants or the formation of oxide layers with different emissivity values.
Ratio, or two-colour, infrared thermometers can overcome issues with emissivity uncertainty, as well as partial obscuration of the view. This is achieved by measuring the spectral radiance of the target at two distinct wavelengths and determining the target's temperature from the ratio of the two signals [88,117]. Zhao et al. [118] used a two-colour IRT to measure the cutting temperature profile and maximum cutting temperature of Inconel 718 with ceramic tools during turning operations. They reported results comparable with those in the literature and found their proposed methodology to be an appropriate and economical approach for assessing heat generation in turning operations of Inconel 718.
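To make the ratio principle concrete, the sketch below recovers a temperature from the radiance ratio under the Wien approximation, assuming a grey surface whose emissivity is the same at both wavelengths (the assumption whose failure is discussed next). The wavelengths, temperature and emissivity are illustrative values only.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant h*c/k, in m.K

def ratio_temperature(radiance_1, radiance_2, wl_1_m, wl_2_m):
    """Temperature (K) from the ratio of spectral radiances at two wavelengths.

    Uses the Wien approximation and assumes a grey surface, i.e. the same
    emissivity at both wavelengths, so emissivity cancels in the ratio.
    """
    ratio = radiance_1 / radiance_2
    return C2 * (1.0 / wl_2_m - 1.0 / wl_1_m) / (np.log(ratio) - 5.0 * np.log(wl_2_m / wl_1_m))

def wien_radiance(wl_m, temp_K, emissivity):
    """Wien-approximation radiance (arbitrary scale; constant factor cancels in the ratio)."""
    return emissivity * wl_m**-5 * np.exp(-C2 / (wl_m * temp_K))

# Synthesise signals for a 1000 K grey surface observed at 1.6 um and 2.0 um.
wl1, wl2, T = 1.6e-6, 2.0e-6, 1000.0
L1, L2 = wien_radiance(wl1, T, 0.4), wien_radiance(wl2, T, 0.4)
print(ratio_temperature(L1, L2, wl1, wl2))  # ~1000 K, independent of the assumed 0.4 emissivity
```

If the emissivity differs between the two wavelengths, it no longer cancels in the ratio and the recovered temperature is biased, which is the sensitivity described next.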
However, the emissivity of the material may differ at the two wavelengths measured by the detector. As a consequence, two-colour IRTs are highly sensitive to such deviations from grey-body behaviour, which can lead to comparatively large temperature errors.
Fibre-Optic Infrared Thermometers
A FO IRT consists of a fibre that collects and transmits the emitted radiation onto a photosensitive detector. A fibre-optic infrared thermometer can ameliorate problems with line-of-sight disruptions and emissivity uncertainties introduced by reflected radiation by placing the fibre closer to the area of interest. Furthermore, FO IRTs offer better protection of the electronics from the aggressive machining environment, as the electronics can be placed away from the target, with the emitted radiation transmitted through the fibre.
Müller and Runz [119,120] used a quartz fibre and a two-colour IRT to measure the temperature at the tool-chip interface during turning operations. A hole was made through the cutting tool, in a manner similar to that illustrated in Figure 7a, in which the quartz fibre was embedded. This implementation allowed measurements with a high sample rate, a reported response time of 1 ms and high accuracy, enabling the researchers to observe the transient heating-up phase, followed by further heating due to tool wear, until the cutting edge broke after an equivalent cutting time of 2 s. Al Huda et al. [121] utilised a two-colour IRT with an unspecified optical fibre and a translucent alumina cutting tool in an implementation similar to that of Müller and Runz [119,120]. The aim of this study was to measure the temperature distributions at the rake face of the tool under wet and dry turning conditions, validating the experimental results with an FE model. The maximum temperature was found to occur at a distance of 0.6 mm from the cutting edge on the tool-chip interface.
A similar implementation for turning operations was employed by Tapetado et al. [122], where a silica fibre was used with a two-colour IRT. This study evaluated the effect of the distance between the fibre tip and the cutting surface on the spatial resolution, as well as the influence of potential damage to the fibre tip, caused by the machining environment, on the output power measured by the IRT. It was demonstrated that the measured temperature was independent of both the fibre distance and any potential damage to the fibre tip.
Oezkaya et al. [123] embedded two quartz fibres symmetrically into a twist drill tool to measure the cutting temperature whilst drilling Inconel 718 under wet conditions. During the drilling process, the fibre was required to remain in contact with the cutting edge due to the high-pressure cutting-fluid supply used, resulting in the fibre being machined throughout the process. It was found that increasing the cutting force did not result in any significant temperature change; however, increasing the cutting speed from 35 m/min to 45 m/min resulted in a temperature rise of 140°C.
Saelzer et al. [124] and Afrasiabi et al. [125] used a commercially available FO IRT to measure the temperature of the rake face during orthogonal cutting, to investigate the effect of different tool surfaces on the machining temperature. The fibre was positioned perpendicular to the tool's rake face. To enable rake-face temperature measurements, the researchers introduced three slots into the workpiece surface, with the distance between the slots selected so that a stationary temperature was reached before the next slot. This methodology created a locally and temporarily interrupted chip with a continuous chip flow, allowing the rake-face temperature to be measured.
Sato et al. [126,127] and Ueda et al. [58] demonstrated the potential of tool-embedded FO IRTs in milling operations. Their experimental setup consisted of two fluoride glass fibres; the first fibre ran through the inside of the tool holder and was inserted into a blind hole in the tool insert at a known depth away from the tool rake face; the second fibre was stationary, coupled to the first fibre using a non-contact fibre coupler at one end, and attached to a custom two-colour IRT and a calcium-fluoride focusing lens. The FO IRT used for their experiments was reported to have a response time of 1 ms, making it suitable for most high-speed milling operations.
The methodology employed by Sato et al. [126] and Ueda et al. [58] used a non-contact fibre coupler to resolve the problem of routing thermometer wires and fibres within the rotating tool. This study demonstrated one of the best approaches for temperature measurement in milling operations, as it yielded highly accurate results with a fast response time in a robust manner. The main limitations of this approach are the additional complexity and cost of the customised tool spindle needed to accommodate the fibres and the coupler.
A different approach to eliminating issues with unknown or varying emissivity, without using a two-colour IRT, was proposed by Heeley et al. [55]. Their approach involved embedding a sapphire fibre inside a blind hole in the cutting tool and attaching it to a small, custom single-colour IRT positioned on the tool holder. By embedding the fibre inside a blind hole, the researchers created a black-body cavity in which the emissivity was approximately 0.99. Furthermore, this approach can be used in machining operations where lubricants or coolants are necessary. The authors reported good temperature resolution at a measurement frequency of 1 kHz, corresponding to a response time of 1 ms, amongst the fastest temperature response times reported in the literature. The main limitation of this approach is ensuring that the fibre is well protected from the swarf separating from the workpiece at high velocities, as well as ensuring that no additional stresses are applied to the fibre by vibrations. In addition, this tool-embedded approach does not measure the surface temperature; however, as with the embedded-thermocouple technique, the surface temperature can be extrapolated.
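The black-body-cavity effect exploited by Heeley et al. [55] can be illustrated with a simple first-order approximation for a diffuse, isothermal cylindrical blind hole. The hole geometry and assumed surface emissivity below are illustrative only, not the values used in that study, and real cavities call for a more detailed treatment (e.g. Gouffé's method).

```python
import math

def cavity_effective_emissivity(surface_emissivity, hole_diameter_m, hole_depth_m):
    """Rough effective emissivity of a cylindrical blind-hole cavity.

    Uses the simple diffuse, isothermal cavity approximation
    eps_eff = eps / (eps + (1 - eps) * f), where f is the ratio of the
    aperture area to the total internal cavity area. Treat the result as a
    first-order sketch only.
    """
    r = hole_diameter_m / 2.0
    aperture_area = math.pi * r**2
    internal_area = math.pi * r**2 + 2.0 * math.pi * r * hole_depth_m  # bottom + wall
    f = aperture_area / internal_area
    eps = surface_emissivity
    return eps / (eps + (1.0 - eps) * f)

# Illustrative numbers: a 1 mm diameter blind hole in a tool whose surface
# emissivity is assumed to be 0.4. Deeper holes approach black-body behaviour.
for depth_mm in (2.0, 5.0, 10.0):
    eps_eff = cavity_effective_emissivity(0.4, 1.0e-3, depth_mm * 1.0e-3)
    print(f"depth {depth_mm:4.1f} mm -> effective emissivity ~ {eps_eff:.3f}")
```

The trend, rather than the exact numbers, is the point: as the hole becomes deep relative to its diameter, the effective emissivity approaches unity and the reading becomes insensitive to the tool surface condition, which is what removes the emissivity calibration problem.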
Tanaka et al. [128] utilised a workpiece-embedded approach with an unspecified fibre and a two-colour IRT to investigate the influence of cutting fluid on the tool-edge temperature during end-milling of a titanium alloy. The fibre was fitted through holes made in the workpiece, and a pressurised air system was used to prevent foreign matter from clogging the holes and obscuring the view of the fibre. Furthermore, the fibre was moved within the hole during the milling process so that it remained at a constant distance from the cutting edge. Tanaka et al.'s [128] approach was an effective way of measuring the temperature of the cutting tool's edge. Han et al. [129] embedded a multimode fibre, with a pure-silica core of 400 µm in diameter, into a cutting tool at a distance of 0.3 mm from the cutting edge. The fibre was optically coupled to a near-infrared two-colour IRT. This study aimed to obtain real-time measurements of the temperature at the cutting edge during continuous and interrupted turning of AISI 316L stainless steel. Their system recorded a maximum temperature of 679.15°C, achieved a response time of 10 ms, and reported a mean square error of 4.69°C for the repeated cutting-temperature measurements.
Section Findings: Fibre-optic infrared thermometers are capable of achieving accurate measurements with very fast response times, making them the most suitable temperature measurement technique for milling operations. Compared with thermocouple techniques, FO IRTs can have a limited temperature range, determined by the combination of the selected fibre, optics and detector; however, this range is usually sufficient to cover the temperatures of interest during most machining operations. Two-colour FO IRTs can overcome issues with emissivity uncertainty; however, they can be prone to large measurement errors. IRTs with tool-embedded fibres can ameliorate both the unknown-emissivity problem and the large measurement errors by creating a black-body cavity with a constant emissivity. The limitation of this approach is that the surface temperature is not recorded directly; rather, it is the temperature of the material at a known depth from the surface, from which the surface temperature can be extrapolated.
Fibre-optic IRTs can resolve some of the more critical challenges of temperature measurement in machining, including the ability to log measurements accurately with sufficiently fast response times. The fibre-optic approach can, however, be limited by the cost and properties of the selected fibre. The operating temperature of the fibre determines whether it can be embedded, as cutting temperatures can reach 1000°C and cause irreversible damage to the fibre. Furthermore, the rigidity and brittleness of the fibre can affect the robustness of the implementation, as the vibrations created during machining can put the fibre under stress and cause it to break. More robust FO IRT implementations are needed to make this technique suitable for a wider range of machining operations.
Conclusions
Temperature monitoring in machining operations is of paramount importance, as excessive cutting temperatures can have detrimental effects on the tool-wear rate and the metallurgical properties of the workpiece, leading to increased waste and cost. Studies of heat partitioning show that the highest temperature experienced during machining occurs at the tool-chip interface. Although temperature monitoring is vital for machining operations, this review concludes that there is no single ideal temperature measurement technique for all metal machining processes. The selection of the most appropriate approach lies with individual researchers, based on their experimental criteria and the area they are interested in investigating. A comparative summary of the temperature measurement techniques for metal machining is shown in Table 1.
Measuring the temperature using workpiece-embedded methodologies can provide great insight into temperature generation and distribution for FE modelling and research purposes, but is unsuitable for industrial use. Workpiece-embedded methodologies require additional machining to make holes for the sensors, compromising the integrity of the machined component and increasing operational costs.
Temperature measurement is relatively easy to implement in turning operations because the cutting tool, and consequently the tool-chip interface where the highest temperature is expected, remains stationary. Significant complications arise when implementing temperature monitoring in milling operations. The first complication is the discontinuous nature of milling, with alternating heating and cooling phases, coupled with high rotational speeds. High-speed milling applications require a temperature monitoring technique capable of acquiring temperature readings at least every 1 ms, which, in the literature reviewed, was only achieved with radiative temperature measurement techniques. The second complication is the rotation of the cutting tool, which makes the robust implementation of any temperature measurement technique challenging. For industrial applications, tool-embedded approaches, such as tool-embedded thermocouples and tool-embedded FO IRTs, have shown the highest potential. However, more research is required to make these techniques more robust and economical, and therefore sustainable for use in industrial settings. As a technique suitable for all materials and processes has not yet been developed, careful consideration must be given to selecting an appropriate technique for a specific application. This comparative review of thermocouple and radiative methodologies for temperature measurement in turning, drilling and milling processes can help guide current and future researchers in deciding on the most appropriate method for their specific experimental scenarios. Industry 4.0 aims to transition to a more sustainable and resilient industry, which has stimulated the need for more advanced sensor systems capable of collecting and transmitting large amounts of data in real time with the objective of increasing the degree of automation. It is this author's opinion that the incorporation of such sensor systems, in the form of wireless smart tool holders and high-response temperature measurement methodologies such as FO IRTs with data-processing capabilities, has the highest potential for use in the machining industry of the future. Such systems offer very fast response times, providing sufficient measurement rates for high-speed machining applications, of which thermocouple systems are not capable. In contrast to thermocouple systems, IRTs do not drift over time and are not prone to hysteresis under thermal cycling, enabling more reliable and accurate measurements. Future developments in these approaches will see reductions in cost and size, and more robust implementations, enabling better integration into high-speed drilling and milling operations in line with the goals of Industry 4.0.
|
v3-fos-license
|
2017-11-07T00:40:04.409Z
|
2017-01-01T00:00:00.000
|
3774713
|
{
"extfieldsofstudy": [
"Geography",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/aabc/v89n1/0001-3765-aabc-201620160297.pdf",
"pdf_hash": "b7a767f108379ce4c7253a0002f644a9f7833fdb",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2946",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b7a767f108379ce4c7253a0002f644a9f7833fdb",
"year": 2017
}
|
pes2o/s2orc
|
Dental caries at Lapa do Santo, central-eastern Brazil: An Early Holocene archaeological site
The origin and dispersion of the first Americans have been extensively investigated from morphological and genetic perspectives, but few studies have focused on their health and lifestyle. The archaeological site of Lapa do Santo, central-eastern Brazil, has exceptionally preserved Early Holocene human skeletons, providing 19 individuals with 327 permanent and 122 deciduous teeth dated to 9,250 to 7,500 years BP. In this study, we test whether the inhabitants of Lapa do Santo had a high prevalence of dental caries, as previous studies of the Lagoa Santa collection have indicated, using individuals and teeth as units of analysis. The results show a high prevalence of dental caries in the permanent dentition (5.50%, n=327 teeth; 69.23%, n=13 individuals) compared to other samples of hunter-gatherers worldwide. In addition, dental caries in deciduous teeth start occurring as early as 3 to 4 years of age, suggesting an early onset of caries. Compared with other samples from Lagoa Santa, Lapa do Santo shows a statistically similar prevalence of overall caries but a different pattern of caries location. We believe that a subsistence adaptation to a tropical environment rich in sources of carbohydrates, such as fruits, is the best explanation for the overall caries prevalence.
INTRODUCTION
Hundreds of human remains from the Early Holocene have been found in the rock shelters of the Lagoa Santa region since the first excavations of Peter Lund in the 1830s and 1840s (Araujo et al. 2012, Holten and Sterll 2011). Many studies have used the Lagoa Santa assemblage to reconstruct the population history of these first inhabitants of the Western Hemisphere (e.g., Neves and Hubbe 2005). In contrast, few studies have focused on aspects of health and behavior of these inhabitants (e.g., Da-Gloria and Larsen 2014). Unfortunately, the majority of the Lagoa Santa human collection is not associated with an archaeological context, limiting chronological and contextual inferences. In this study, we report dental caries prevalence at a well-excavated and extensively dated archaeological site in the region, named Lapa do Santo (Figure 1). Several reasons qualify this sample as unique in local and continental contexts. Firstly, differing from other sites in the region, the Lapa do Santo rock shelter was excavated using modern archaeological techniques, such as the use of a total station, providing a detailed spatial record of the archaeological remains. Secondly, this site was extensively dated using both indirect (52 dates on charcoal) and direct radiocarbon dates (13 dates on bone and tooth). Thirdly, the careful exhumation and curation of human skeletons at Lapa do Santo resulted in a high number of teeth per mouth (25 teeth per individual compared to just 10 teeth for the other sites of Lagoa Santa), allowing a more complete understanding of the oral health of each individual.
Here, we investigate the occurrence of dental caries at Lapa do Santo. Cavities are the result of demineralization of enamel, dentine, and/or cement caused by organic acids formed during the fermentation of carbohydrates by plaque bacteria. Cavities are the late stage of a process that starts underneath the surface and progressively causes the formation of a visible lesion (Hillson 2001). Sugar in the diet has the strongest association with the caries process (Larsen 2015). Worldwide surveys of the bioarchaeological literature have shown that hunter-gatherers have better oral health than agriculturalists (Cohen and Crane-Kramer 2007, Larsen 2015, Turner 1979), but the foragers of Lagoa Santa seem to be an exception among hunter-gatherers. Da-Gloria and Larsen (2014) showed poor oral health at Lagoa Santa sites compared to other hunter-gatherers from the Americas. Thus, we expect the Lapa do Santo assemblage to show a high prevalence of caries compared to other hunter-gatherer samples.
LAPA DO SANTO SITE
Lapa do Santo site is within the archaeological region of Lagoa Santa, state of Minas Gerais, which is located in eastern-central Brazil, some 400 km from the Atlantic coast (Figure 1). The region is dominated by karst, which includes hundreds of rock shelters in limestone outcrops (Neves and Piló 2008). The current climate of the region is tropical, characterized by high humidity, marked seasonality, an average precipitation of 1,286.5 mm and a mean annual temperature of 21.4 ºC (Patrus 1998). The main biome of the region is the Brazilian savannah (cerrado), dotted with patches of deciduous and semi-deciduous forests (IBGE 1992), which were present throughout the Holocene (G.Q. Freire, unpublished data). A more humid and stable climate was detected in the Early Holocene through pollen (Raczka et al. 2013) and geoarchaeological data (Araujo et al. 2013).
The Lapa do Santo site (UTM 23K 600926 E - 7845965 N; datum SIRGAS 2000) consists of a sheltered area of 1,300 m² with its mouth opening towards the west and a steep slope to the north. From 2002 to 2012, systematic archaeological interventions excavated an area of 50 m². The faunal assemblage contains deer (Mazama sp.), armadillos (e.g., Dasypus novemcinctus), pacas (Cuniculus paca), peccaries (Tayassu sp.), cavies (Cavia sp.), Brazilian rabbits (Silvylagos brasiliensis), and reptiles (e.g., Ameiva ameiva), among other species (C.P. Perez, unpublished data). In terms of botanical remains, the archaeological sites of Lagoa Santa are rich in the fruits araticum (Annona classiflora), pequi (Caryocar brasiliense), jatobá (Hymenaea sp.), and palm nut (Syagrus sp.) (Nakamura et al. 2010). The variability of plants and of medium- to small-sized faunal remains suggests that Early Holocene inhabitants consumed a generalized diet at Lapa do Santo. Concerning chemical analyses, T. Hermenegildo (unpublished data) undertook an isotopic analysis of Early Holocene human remains recovered from Lapa do Santo. His results found relatively low levels of δ15N, suggesting a diet based on plants and primary consumers such as deer and cavies.
The lithic record at Lagoa Santa shows a consistent pattern of quartz crystal flakes throughout the Early Holocene period, with a low frequency of flint (an exotic material) concentrated in the deeper layers. The lithic remains are small, less than 4 cm, with only one or two well-defined small edges (Araujo and Pugliese 2010, Bueno 2012). The disappearance of flint material occurred around the same time that the first burials appear in the rock shelters of the region, circa 9,000 years BP. During the burial period at Lapa do Santo, a diversity of funerary rituals was recorded (Strauss 2016), but there is no clear evidence of chronological distinctions among them. The cranial morphology of the Lapa do Santo skeletons is compatible with the Paleoamerican morphology, which is different from that of the majority of recent Native Americans (Neves et al. 2014).
MATERIALS AND METHODS
The sample employed in this study is composed of 19 individuals with 327 permanent and 122 deciduous teeth (Table I), including all the teeth of the collection available for analysis in 2014. The remains are currently housed in the Laboratório de Estudos Evolutivos e Ecológicos Humanos at the Universidade de São Paulo, São Paulo, Brazil. In the southern part of Lapa do Santo, where the burials were found, the youngest radiocarbon date on charcoal (uncalibrated age) was 7,890 ± 40 years BP (BETA 214142), obtained from a sample close to the site surface. Eight burials (1, 7, 14, 17, 19, 21, 26, and 27) provided dates consistent with the stratigraphic chronology of Lapa do Santo (Araujo et al. 2012, Strauss 2016). Two burials provided dates younger than 7,000 years BP. The first, burial II, provided two dates: 790 ± 40 years BP (BETA 253497) and 8,331 ± 44 (S-EVA 24491). The second date is consistent with the site stratigraphy and was obtained from a rib bone fragment securely associated with the burial (Strauss 2016). The second, burial V, provided a minimum date of 5,990 ± 40 years BP (BETA 215195). This age was obtained from a sample containing a low amount of collagen, which necessitated dispensing with a pre-treatment based on a full series of alkali extractions carried out by Beta Analytic Laboratory (BETA), increasing the possibility of sample contamination. Its stratigraphic position suggests that the actual date of burial V is older than its reported age. In short, the Lapa do Santo assemblage is considered here as belonging to the Early Holocene period, probably between 9,250 and 7,500 years BP. Dating of the Lapa do Santo skeletons does not allow chronological divisions within this interval. Thus, we treat the Lapa do Santo sample as a single sample; in fact, there are no archaeological or biological elements supporting biocultural divisions of this period. The Lapa do Santo sample constitutes the largest Paleoamerican sample for a single site in the Americas with archaeological context and well-preserved human skeletons.
Carious lesions were scored when a cavity was observed through visual inspection, with no optical aids and no use of a probe. We recorded caries position and pulp exposure by caries in order to better characterize the carious lesions. The position of the lesions was assessed using five categories: gross, occlusal, interproximal, root/cement-enamel junction (CEJ), and buccal/labial/lingual. Gross carious lesions are large cavities occurring in more than one position with no identifiable point of initiation (Hillson 2001). In this article, the prevalence of each category of carious lesion position was calculated in relation to the total number of carious lesions. A caries correction factor was calculated using the prevalence in the posterior and anterior dentition in order to correct for differential tooth loss (Erdal and Duyar 1999). A second caries correction factor was calculated using the proportion of teeth with pulp exposure by caries and the number of teeth lost antemortem (AMTL) for each sex (Lukacs 1995). AMTL was scored in cases in which the alveolus was resorbed to the extent that the tooth loss was evident. Sex estimation of adult Lagoa Santa skeletons was based on traits of the cranium and pelvis (Buikstra and Ubelaker 1994). Subadult age estimation relied on dental development (Ubelaker 1989) and epiphyseal closure (Buikstra and Ubelaker 1994), while adult age estimation was based on cranial suture closure (Meindl and Lovejoy 1985), pubic symphysis morphology (Brooks and Suchey 1990), and auricular surface morphology (Buckberry and Chamberlain 2002). The integration of these methods was carried out by one of us (PD) depending on the preservation of each skeleton. The age used for the statistics was the mean age of the estimated interval (see Table I).
Two methods were used in this investigation: tooth/alveolus and individual counts (Lukacs 1989). In the first method, the prevalence of caries was computed by dividing the total number of teeth/alveoli affected by the total number of teeth/alveoli. In the second method, the number of individuals with at least one observable tooth/alveolus affected was divided by the total sample of individuals. The statistical method used to compare caries prevalence was the chi-square test of heterogeneity in 2 x 2 tables, while t-tests were applied to age-at-death comparisons. Levene's test was applied for testing equality of variances in t-tests (McKillup 2005). Alpha was set at 0.05, and the statistical tests were run in SPSS 18.0.
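For readers unfamiliar with the 2 x 2 chi-square test of heterogeneity mentioned above, the sketch below reproduces the comparison in Python; the paper used SPSS, and the counts here are hypothetical placeholders rather than the published data.

```python
# Illustration of the 2 x 2 chi-square test of heterogeneity used to compare
# caries prevalence between two groups (e.g., females vs. males).
# Counts are hypothetical; correction=False gives the uncorrected Pearson chi-square.
from scipy.stats import chi2_contingency

carious_f, sound_f = 7, 70      # hypothetical female teeth: carious / not carious
carious_m, sound_m = 9, 204     # hypothetical male teeth

table = [[carious_f, sound_f],
         [carious_m, sound_m]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f} (alpha = 0.05)")
```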
In order to have a regional view of the caries prevalence at Lagoa Santa, we compared the caries prevalence of Lapa do Santo and an assemblage of 16 other Early Holocene archaeological sites from the same region. The Lagoa Santa sample used in this comparison was analyzed by one of us (PD) in 2009 and 2010 from public institutions in Brazil (see Da-Gloria and Larsen 2014 for details), while the Lapa do Santo sample was analyzed by the same researcher (PD) in 2014. This comparison aimed to create a contrast between the Lapa do Santo sample and a less contextualized Lagoa Santa sample.
RESULTS
The prevalence of dental caries using teeth as the units of analysis is 5.50% (18/327) for permanent teeth (Table II). When individuals are the unit of analysis, Lagoa Santa inhabitants show 69.23% (9/13) of affected individuals with permanent dentition and 33.33% (2/6) of affected individuals with deciduous dentition. The number of permanent carious teeth per individual ranges from 0 to 5, and the relative caries prevalence per individual ranges from 0% to 33.33% (Table I). Females (9.86%, n=77) show a prevalence of caries statistically similar to that of males (4.23%, n=213) when teeth are the units of analysis (χ2=3.179; p=0.07). The lack of significance is also observed when individuals are the unit of analysis. When the prevalence of caries is corrected by the differential preservation of posterior and anterior teeth, the sex difference increases slightly, and when the prevalence of caries is corrected by antemortem tooth loss, the difference between females and males is accentuated even further (Tables III and IV).
Lapa do Santo shows 14.75% (18/122) of deciduous teeth affected by caries (Table VI). These carious lesions, however, are concentrated in one individual with 16 carious lesions. Two individuals affected by dental caries were 4 years old, while one carious lesion was found in a loose tooth of a 3-year-old individual. The individual with 16 caries likely suffered from some congenital disease, which an ongoing study is investigating. Excluding this individual, the caries prevalence in deciduous teeth at Lapa do Santo is 1.92% (2/104).
DISCUSSION
General trends among prehistoric hunter-gatherer skeletal samples are important references for interpreting local populations. The Lapa do Santo dental caries prevalence using teeth (5.50%) as the unit of analysis is higher than the dental caries prevalence in global (1.72%; Turner 1979) and Western Hemisphere (3.58%; Da-Gloria and Larsen 2014) samples of hunter-gatherers, standing closer to mixed economies (4.37%; Turner 1979). Using individuals as units of analysis, the Lapa do Santo prevalence of dental caries (69.23%) is higher than the prevalence in hunter-gatherers from the Western Hemisphere using similar methodology (31.58%; Da-Gloria and Larsen 2014). The reason a hunter-gatherer population shows an unexpectedly high prevalence of caries is a relevant question here. Such exceptions have been reported in other populations worldwide. A high prevalence of caries (14.3% of occlusal caries, n=182 lower molars) was reported at Moita in the Mesolithic period in Portugal (Lubell et al.) (Smith and Fausto 2016). This practice seems to be very old in the tropical regions of South America (Erickson 2008). These fruits are the best candidates to explain the high prevalence of caries at Lapa do Santo. The comparison of dental pathologies at Lapa do Santo by sex shows females with more carious lesions when the prevalence is corrected by antemortem tooth loss (AMTL). In bioarchaeology, poor oral health in females is traditionally attributed to the sexual division of labor, in which females tend to gather plants, prepare food, and consume carbohydrates more often than males (Larsen 2015). Recently, Lukacs (2008, 2011) stressed the importance of biological factors (and their behavioral implications) in female oral health, such as the role of fertility and hormones in caries prevalence. At Lapa do Santo, however, isotopic data suggest an older age at weaning (T. Hermenegildo, unpublished data), implying low fertility for that population. Indeed, we believe that dietary habits related to the high consumption of carbohydrates by females are the best explanation for this difference. Da-Gloria and Larsen (2014), using a larger sample of Lagoa Santa teeth, showed that both tooth wear and caries prevalence increase with age. Therefore, we do not believe that tooth wear significantly affects the observed prevalence of caries.
The regional comparison shows no significant difference between Lapa do Santo and the other Lagoa Santa sites regarding overall caries. On the other hand, there are more occlusal caries than root caries at Lapa do Santo, while the Lagoa Santa sample shows the opposite pattern. Relatively more occlusal caries may be related to low tooth wear or to less exposure of the roots at Lapa do Santo (Hillson 2001). Further investigation of these factors in the Lagoa Santa collection may provide an answer.
Studies of caries prevalence using deciduous teeth are not common in the literature. As one exception, Sciulli (1997) analyzed sites from the Ohio valley dated to the Late Archaic period, reporting prevalences ranging from 0% to 1.65% across 11 archaeological sites, with a total of 0.88% (7/793). This value is statistically similar to the Lapa do Santo prevalence (χ2=1.002; p=0.32), excluding the possibly diseased individual (1.92%, n=104). Although these Late Archaic populations focused their subsistence on wild resources, their diet appears to have had some contribution from domesticated plants, suggesting a mixed economy. That is, the prevalence of caries in the deciduous dentition in the Lapa do Santo sample seems to be similar to that in mixed-economy populations. In addition, the occurrence of carious lesions in individuals 3 to 4 years old at Lapa do Santo indicates a cariogenic diet since their first introduction to solid food. The food eaten by these young individuals probably included the wild fruits, described above, which have a high carbohydrate content.
Ecological factors seem to be related to the availability of carbohydrates for human consumption. Ströhle and Hahn (2011) showed that hunter-gatherer societies living in deserts and tropical grasslands consumed the highest quantity of carbohydrates (ca. 29%-34% of total energy) in a worldwide database of 63 hunter-gatherer societies living in different ecological conditions. As an example, the Hadza hunter-gatherers living in the African savannah eat carbohydrates regularly, such as tubers and fruits (Marlowe 2010). In fact, the results shown here suggest that the diet of the Early Holocene inhabitants of Lagoa Santa was based on foods with relatively high carbohydrate content.
CONCLUSIONS
The dental pathologies found in the teeth of the Lapa do Santo assemblage reveal a population with a relatively high prevalence of dental caries, which started as early as 3 to 4 years of age. These results corroborate our expectation that the Early Holocene inhabitants of Lagoa Santa had unusually poor oral health compared to other hunter-gatherer populations, especially the females. We believe that dietary habits are the main factor explaining the unusual prevalence of dental pathologies at Lapa do Santo. Zooarchaeological and paleobotanical material recovered from Lapa do Santo suggests a diversified subsistence strategy including small- to medium-sized animals and non-domesticated plants, such as fruits. In short, the dental markers and the archaeological context suggest that the Lagoa Santa inhabitants had a subsistence based on a diet with relatively high carbohydrate content, which may characterize a lifestyle adapted to a mosaic of tropical environments.
Figure 1 - Location of Lapa do Santo on the map of Brazil, showing the main rivers and cities of the Lagoa Santa region (Figure credit: Marcelo da Costa Silva) (see the colors in the online version).
Figure 2 - Distribution of the mean age-at-death by sex of skeletons at Lapa do Santo (see the colors in the online version).
TABLE II. Prevalence of dental caries by tooth type using permanent teeth as the unit of analysis.
TABLE IV. Prevalence of caries corrected by tooth loss.
a = teeth affected by caries divided by total teeth. b = percentage of caries in the posterior dentition multiplied by (5/8) + percentage of caries in the anterior dentition multiplied by (3/8); see Erdal and Duyar (1999).
|
v3-fos-license
|
2020-03-12T10:38:27.892Z
|
2020-03-01T00:00:00.000
|
212693265
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6643/12/3/726/pdf",
"pdf_hash": "658adf2d3b997d24df3cd5aceeff77be7b102ff1",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2947",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "6bc54988c46deed443335a748d383abf520ffe92",
"year": 2020
}
|
pes2o/s2orc
|
Antrodia cinnamomea Confers Obesity Resistance and Restores Intestinal Barrier Integrity in Leptin-deficient Obese Mice
Obesity is associated with metabolic disorders. Thus, obesity prevention and treatment are essential for health. Antrodia cinnamomea (AC) is a multifunctional medicinal fungus used for the treatment of various diseases and for preventing diet-induced obesity. Leptin deficiency causes over-eating and spontaneous obesity, and the concomitant metabolic symptoms are more severe than those of diet-induced obesity. Here, we used leptin-deficient (ob/ob) mice as an animal model of over-feeding to study the effect of AC on obesity. We fed C57BL/6 mice (WT, ob+/+) and ob/ob mice with AC for four weeks before performing qRT-PCR and immunoblot analysis to elucidate the mechanisms modulated by AC. Further, we used Caco-2 cells as a human intestinal epithelial barrier model to examine the effect of AC on intestinal permeability. Our results suggested that AC reduces lipid deposits in the liver and epididymal white adipose tissue (EWAT) by promoting lipid metabolism and inhibiting lipogenesis-associated genes and proteins in ob/ob mice. Moreover, AC effectively repaired the intestinal-barrier injury caused by leptin deficiency and enhanced intestinal barrier integrity in Caco-2 cells. Interestingly, AC significantly reduced body weight and EWAT with no compromise on food intake in ob/ob mice. Thus, AC effectively reduced obesity caused by leptin deficiency and can potentially be used as a nutraceutical for treating obesity.
The mice were divided into four groups: ob+/+-Ctrl, ob+/+-AC, ob/ob-Ctrl, and ob/ob-AC. To mimic treatment at an early stage and to exclude the metabolic downregulation that accompanies aging, the experiment was started in 4-5-week-old male mice and the mice were euthanized after a 4-week treatment period; the experiment was performed as previously described [18], with some modifications. Each group comprised 4-5-week-old male mice of similar initial body weight (no blinding). Body weight was monitored every week from the day before the first treatment (T0) to the day after the 4-week treatment period with AC or PBS (T4); daily food and water intake and fecal and urine weight were recorded at T0 and T4 using a metabolic cage. All animal experiments were performed according to the protocols approved by the institutional animal care and use committee of the National Health Research Institutes (Approval No. NHRI-IACUC-107046-A). All experiments were performed following the guidelines.
Sample Collection and Histological Observation
The mice were euthanized using carbon dioxide overdose. The liver, intestine, and epididymal white adipose tissue (EWAT) were harvested, fixed in 4% formaldehyde, paraffin-embedded, sectioned, and stained using hematoxylin and eosin (H&E) or immunohistochemistry (IHC). IHC was performed as previously described [19]. The appropriate volume of primary anti-cluster of differentiation 36 (CD36) antibody (1:100, GTX100642, GeneTex) was added to cover the specimen, and the samples were incubated at 4 °C overnight. Nuclei were stained with hematoxylin. The images were captured using Pannoramic MIDI II (3DHISTECH Ltd., Budapest, Hungary).
RNA Extraction and Real Time RT-PCR
For real time RT-PCR, tissues were collected in RNAlater RNA Stabilization Reagent (QIAGEN, Hilden, Germany), snap-frozen in liquid nitrogen, and stored at −80 °C. Total RNA was isolated from 50-100 mg of homogenized liver, intestine, and EWAT using the TRIzol reagent (ThermoFisher, Waltham, Massachusetts), as described previously [20]. cDNA was synthesized from 2 µg total RNA using M-MLV reverse transcriptase (Promega, Madison, WI, USA). Real-time PCR reactions were performed on a LightCycler 480 System (Roche), and FastStart Universal SYBR Green Master (Rox) (Roche Diagnostics GmbH, Mannheim, Germany) was used for the reactions. Relative quantification was performed using the comparative 2^−ΔΔCT method [11]. The RNA expression profiles from liver, intestine, and EWAT were normalized to 18S ribosomal RNA, TBP1, and HPRT, respectively [15]. For detailed information on the primers used in this study, see Supplementary Table S1.
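The comparative 2^−ΔΔCT quantification cited above reduces to two subtractions and one exponentiation; the sketch below spells that out. The CT values are hypothetical, and the reference gene stands in for 18S rRNA, TBP1, or HPRT depending on the tissue.

```python
# Minimal sketch of the comparative 2^-(delta delta CT) method.
# CT values below are hypothetical placeholders.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    delta_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control   # relative to control group
    return 2 ** (-delta_delta_ct)

# Example: target gene in an AC-treated sample vs. an untreated control.
fold_change = relative_expression(24.1, 15.0, 22.8, 15.1)
print(f"fold change vs. control: {fold_change:.2f}")
```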
Immunoblotting
For immunoblotting, tissues were collected, snap-frozen in liquid nitrogen, and stored at −80 °C. The tissues were homogenized and lysed in 1% Nonidet P40 Substitute lysis solution in the presence of 0.1% protease inhibitor. Immunoblotting was performed as previously described [21]. A total of 40-60 µg of protein per sample was separated on a 12% 1D SDS-PAGE gel and transferred to polyvinylidene difluoride membranes (Pall Corp., Port Washington, NY, USA). The membranes were blocked with 5% (w/v) skim milk or bovine serum albumin in Tris-buffered saline with Tween-20 (TBST; 50 mM Tris, 150 mM NaCl, and 0.1% Tween-20 (v/v); pH 8.0) for 1 h. Thereafter, the membranes were probed with primary antibodies (all from GeneTex), including anti-acetyl-CoA carboxylase (ACC). Subsequently, the membranes were washed in TBST (4 × 10 min) and incubated with horseradish peroxidase-coupled secondary antibodies (Jackson ImmunoResearch Laboratories, Inc., Baltimore, PA, USA) in TBST for 1 h. The membranes were then washed in TBST (6 × 10 min), and the immunoprobed proteins were visualized using the enhanced chemiluminescence method (Visual Protein Biotech Corp., Taiwan). Protein expression was quantified using the ImageQuant TL Software (GE Healthcare Life Sciences, Pittsburgh, PA, USA) and was normalized to that of β-actin, which was used as the internal control.
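The β-actin normalization mentioned at the end of the paragraph above amounts to a lane-wise ratio followed by scaling to the control group; this is a generic illustration with made-up band intensities, not values from the study.

```python
# Sketch of densitometric normalization: each target band intensity is divided
# by the beta-actin intensity from the same lane, then expressed relative to
# the control lane. Intensities are hypothetical arbitrary units.
import numpy as np

target = np.array([1800.0, 1650.0, 900.0, 850.0])    # e.g., ACC band intensities
actin  = np.array([2000.0, 1900.0, 1950.0, 2100.0])  # beta-actin, same lanes

normalized = target / actin
relative_to_control = normalized / normalized[0]      # lane 0 = control
print(relative_to_control.round(2))
```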
Statistical Analysis
All statistical analyses were performed using GraphPad Prism 6.0 software. Comparisons between two groups were done using the unpaired t-test, and comparisons among more than two groups using two-way ANOVA followed by Tukey's multiple comparisons test. Data are presented as mean ± SEM. Statistical significance levels are indicated as * p < 0.05, ** p < 0.01, and *** p < 0.001; non-significant comparisons are marked as ns.
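The same genotype x treatment design can be reproduced outside GraphPad Prism; the sketch below runs a two-way ANOVA with Tukey's post hoc test in statsmodels on synthetic body-weight values, purely as an illustration of the analysis named above.

```python
# Two-way ANOVA (genotype x treatment) with Tukey's multiple comparisons,
# using statsmodels in place of GraphPad Prism. Weights are synthetic.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "genotype":  ["ob++"] * 6 + ["obob"] * 6,
    "treatment": (["Ctrl"] * 3 + ["AC"] * 3) * 2,
    "weight":    [23.5, 23.9, 23.8, 23.1, 23.3, 23.0,
                  39.0, 39.4, 39.1, 35.8, 36.2, 35.9],
})

model = ols("weight ~ C(genotype) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                       # two-way ANOVA table

df["group"] = df["genotype"] + "-" + df["treatment"]
print(pairwise_tukeyhsd(df["weight"], df["group"], alpha=0.05))
```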
AC has an Anti-obesity Effect in ob/ob Mice
To understand the effect of AC on overfeeding-induced obesity, ob+/+ and ob/ob mice were fed with AC thrice a week for four weeks and their body weight was measured every week. The experiment was started in 4-5-week-old mice before spontaneous obesity developed [24]. After four weeks of treatment, the average body weights of ob+/+ mice fed without or with AC were 23.73 g and 23.17 g, respectively, and 39.17 g and 35.95 g in ob/ob mice fed without or with AC, respectively (Figure 1A). AC significantly reduced weight gain in ob/ob mice, by approximately 8%, compared to ob/ob mice fed without AC (Figure 1B). Simultaneously, AC significantly increased daily water intake and urine weight in ob/ob mice; however, food intake and fecal weight remained unchanged in ob+/+ mice (Figure 1C-F). These results suggested that AC inhibits the obesity phenotype of ob/ob mice without any compromise on food consumption and defecation.

Figure 1. (A) Body weights of 4-5-week-old ob+/+ and ob/ob mice fed a diet with or without AC were measured every week for 4 weeks. (B) Body weights of ob+/+ and ob/ob mice fed with or without AC were measured after 4 weeks (T4). (C-F) Daily food or water intake and fecal or urine weights were monitored using a metabolic cage at T0 and T4. All data are expressed as mean ± SEM, * p < 0.05, ** p < 0.01, *** p < 0.001. Non-significant: ns. n = 5-14 mice in each group.
AC Alleviates Hepatic Lipid Accumulation and Lipid Deposition in EWAT in ob+/+ and ob/ob Mice
We examined the effect of AC on the liver and EWAT of ob/ob mice after a four-week stimulation period. Although liver weight remained unaltered, EWAT weight was significantly reduced in ob/ob mice (Figure 2A,B). We also used H&E staining to examine the degree of lipid content in liver cells and EWAT. The number of cells in the liver and EWAT was evaluated using the number of nuclei or cells per field. We observed a change in lipid drop size in the cells. AC significantly reduced hepatic week-old ob+/+ and ob/ob fed a diet with or without AC were measured every week for 4 weeks. (B) Body weights of ob+/+ and ob/ob fed with or without AC were measured after 4 weeks (T4). (C-F) Daily food or water intake and fecal or urine weights were monitored using a metabolic cage at T0 and T4. All data are expressed as mean ± SEM, * p < 0.05, ** p < 0.01, *** p < 0.001. Non-significant; ns. n = 5-14 mice in each group.
We examined the effect of AC on the liver and EWAT of ob/ob mice after a four-week stimulation period. Although liver weight remained unaltered, EWAT weight was significantly reduced in ob/ob mice (Figure 2A,B). We also used H&E staining to examine the degree of lipid content in liver cells and EWAT. The number of cells in the liver and EWAT was evaluated using the number of nuclei or cells per field. We observed a change in lipid drop size in the cells. AC significantly reduced hepatic lipid accumulation and lipid deposition in the EWAT of ob/ob mice.

Figure 2. (A,B) Percentage of liver or epididymal white adipose tissue (EWAT) weight normalized to body weight after ob+/+ and ob/ob mice were fed with or without AC for 4 weeks. (C,E) The liver and EWAT were examined using hematoxylin and eosin staining. (D,F) The number of liver or EWAT cells per field was estimated using the ImageJ software. Magnification, 100×. Scale bars are 20 µm for the liver and 50 µm for EWAT. All data are expressed as mean ± SEM, * p < 0.05, ** p < 0.01, *** p < 0.001. Non-significant: ns. n = 4-14 mice in each group.
AC Downregulates Fatty Acid Uptake and Lipogenesis-associated Genes and Proteins in the Liver of ob+/+ and ob/ob Mice
To investigate how AC improves lipid accumulation in ob/ob mice, we used real time RT-PCR to evaluate regulation of expression of genes involved in lipid catabolism and lipogenesis by AC. We found that AC significantly suppressed the gene expression of peroxisome proliferator-activated receptor gamma (PPARγ), CD36, ME1, and SCD1 in ob+/+ and ob/ob mice ( Figure 3A). In addition, a comparison of ob+/+-Ctrl and ob/ob-AC results showed that AC could restore lipid catabolism and lipogenesis-related gene expression to the normal level in ob/ob mice. Next, we used immunoblotting to further clarify the mechanism. AC significantly decreased expression of fatty acid synthesis-related Percentage of liver or epididymal white adipose tissue (EWAT) weight were normalized to body weight after ob+/+ and ob/ob mice were fed with or without AC for 4 weeks. (C,E) The liver and EWAT were examined using hematoxylin and eosin staining. (D,F) The number of liver or EWAT cells per field was estimated using the ImageJ software. Magnification, 100×. Scale bars are 20 µm for the liver and 50 µm for EWAT. All data are expressed as mean ± SEM, * p < 0.05, ** p < 0.01, *** p < 0.001. Non-significant; ns. n = 4-14 mice in each group.
To investigate how AC improves lipid accumulation in ob/ob mice, we used real time RT-PCR to evaluate the regulation by AC of the expression of genes involved in lipid catabolism and lipogenesis. We found that AC significantly suppressed the gene expression of peroxisome proliferator-activated receptor gamma (PPARγ), CD36, ME1, and SCD1 in ob+/+ and ob/ob mice (Figure 3A). In addition, a comparison of ob+/+-Ctrl and ob/ob-AC results showed that AC could restore lipid catabolism and lipogenesis-related gene expression to the normal level in ob/ob mice. Next, we used immunoblotting to further clarify the mechanism. AC significantly decreased the expression of fatty acid synthesis-related proteins, such as ACC, FAS, and HMGCR, in ob+/+ and ob/ob mice (Figure 3B,C). In addition, AC treatment appeared to increase fatty acid β-oxidation in the mitochondria and peroxisomes, as seen by the up-regulation of CPT1A and FACL4, respectively (Figure 3D,E). Notably, AC promoted gluconeogenesis by increasing FBP1 expression and decreasing lipid accumulation (Figure 3D,E). These results demonstrated that AC could prevent lipid accumulation in the liver of ob/ob mice by reducing the expression, at the mRNA and protein level, of genes involved in lipid uptake and lipogenesis, as well as promoting the expression, at the mRNA and protein level, of genes involved in lipid catabolism.
AC Promotes Lipolysis-associated Protein Expression in EWAT of ob/ob Mice
We used H&E staining to show that lipid depositions were significantly reduced by AC in the EWAT of ob/ob mice. To investigate the effects of AC in the EWAT of ob+/+ and ob/ob mice, we used immunoblotting. The protein expression of ACC was reduced in the EWAT of ob/ob mice fed with AC, suggesting that AC suppressed lipogenesis (Figure 4A,B). Notably, the protein level of ATGL, a lipid droplet degradation (lipolysis) protein, was increased 2.4-fold in the EWAT of ob/ob mice fed with AC after four weeks. ATGL played a key role in the EWAT by decreasing lipid accumulation (Figure 4A,B). These results suggested that AC down-regulated lipogenesis and up-regulated a lipolysis-associated protein to decrease fat deposition in the EWAT of ob/ob mice.
AC may Restore the Intestinal Barrier in ob/ob Mice
The intestine is at the front line of absorbing nutrients and lipids, and intestinal barrier integrity and permeability are thought to be involved in certain chronic inflammatory diseases such as inflammatory bowel disease (IBD), obesity, and other metabolic disorders [25]. Previously, AC was shown to produce an anti-obesity and anti-inflammatory effect by maintaining intestinal integrity in DIO mice; moreover, CD36 deletion in endothelial cells of the small intestine resulted in impaired barrier function of the small intestine in mice [11,26]. We found that CD36 expression was restored in the small intestine of ob/ob mice and unaffected in ob+/+ mice after AC treatment (Figure 5A). Levels of the tight junction proteins zonula occludens-1 (ZO-1) and zonula occludens-2 (ZO-2), which maintain intestinal permeability, were unchanged. However, the level of occludin (Ocln) was slightly increased in ob+/+ or ob/ob mice fed with AC after four weeks (Figure 5A). Moreover, H&E staining showed greater intestinal barrier integrity in ob/ob mice fed with AC than in ob/ob mice not fed AC (Figure 5B). We also examined CD36 and ZO-1 localization in the intestine using IHC. AC restored CD36 expression in endothelial cells (Figure 5B). Moreover, AC promoted ZO-1 localization in intestinal epithelial cells in ob/ob mice compared to ob/ob-Ctrl (Figure 5C). Thus, AC repairs the intestinal barrier by up-regulating CD36 expression, redistributing ZO-1, and reducing intestinal permeability in ob+/+ and ob/ob mice.
Ethanol Extracts of A. Cinnamomea Decrease Intestinal Permeability in Caco-2 Cells
To further examine the effect of AC on the human intestine, we used Caco-2 cells as a human intestinal epithelial cell barrier model and treated them with 500 µL/mL EEAC, a concentration chosen according to the cell viability assay (IC25), to determine whether EEAC affected intestinal permeability (Figure 6A) [23]. Integrity of the Caco-2 membrane was assessed using TEER values after cell seeding for 7, 14, and 21 days. After 21 days, the TEER values of the Caco-2 membrane treated with EEAC were significantly higher than those of the control group (Ctrl) (Figure 6B). EEAC increased PPARγ gene expression and induced the upregulation of tight junction proteins, including ZO-1 and ZO-2, in Caco-2 cells (Figure 6C). Moreover, ZO-1 levels were enhanced in Caco-2 cells treated with EEAC (Figure 6D). These results suggested that AC enhanced intestinal barrier integrity and decreased intestinal permeability.
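TEER readings such as those mentioned above are conventionally reported per unit area: (measured resistance minus blank-filter resistance) multiplied by the membrane area. The sketch below shows that standard calculation only; it is not a protocol taken from the paper, and the resistances and the 0.33 cm² insert area are hypothetical.

```python
# Generic unit-area TEER calculation (ohm * cm^2). Values are hypothetical.

def teer_ohm_cm2(resistance_ohm, blank_ohm, area_cm2=0.33):
    return (resistance_ohm - blank_ohm) * area_cm2

ctrl = teer_ohm_cm2(1250.0, 120.0)    # control monolayer, day 21
eeac = teer_ohm_cm2(1580.0, 120.0)    # EEAC-treated monolayer, day 21
print(f"Ctrl: {ctrl:.0f} ohm*cm2, EEAC: {eeac:.0f} ohm*cm2")
```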
Thus, AC administration inhibited hepatic lipogenesis and lipid uptake, and promoted lipolysis and reduced lipogenesis to prevent fat deposition in the EWAT in ob/ob mice. In addition, AC restored intestinal barrier integrity in ob/ob mice, enhanced intestinal barrier integrity, and decreased intestinal permeability in Caco-2 cells. Our study provides a rationale for the anti-obesity effect and intestinal protection effect of AC in leptin-deficient obese mice.
Discussion
Leptin maintains the physiological balance of energy. It has an impact on metabolism and body weight and plays a key role in promoting body fat degradation [27]. Leptin secretion is regulated by factors such as excess energy stored as fat, overfeeding, glucose and insulin levels, and inflammatory cytokines [28]. Leptin deficiency can cause symptoms including early-onset morbid obesity, hyperphagia, hypogonadotropic hypogonadism, advanced bone age, hyperinsulinemia, and immune dysfunction [28]. Previous studies have used Roux-en-Y gastric bypass surgery, food or calorie restriction, leptin administration, and adipose tissue transplantation to treat leptin-deficient mice [9,29-31]. Currently, metreleptin (Myalept), a recombinant human leptin analog, is used as an injectable to treat complications of leptin deficiency in patients with congenital or acquired generalized lipodystrophy. Although metreleptin was approved by the Food and Drug Administration in 2014, it has common side effects like headache, ovarian cysts, ear infection, high levels of protein in the urine, fever, and leptin resistance [32,33].
The leptin-deficient ob/ob mice and DIO mice exhibit over-feeding and obesity derived from excessive energy uptake, and they are prone to many diseases like nonalcoholic fatty liver disease (NAFLD), hyperphagia, and type II diabetes [34-36]. Interestingly, we found that AC decreased body weight and lipid accumulation in the liver and EWAT, but it did not significantly affect food intake. Although a previous study showed that AC prevents obesity and fatty liver in DIO by regulating AMPK and SREBP signaling, here, the AMPK and SREBP signaling pathways were not affected by AC in ob/ob mice [12]. We further examined fatty acid uptake, lipogenesis, and the lipid catabolism pathway. PPARγ is a lipogenesis-related protein that has been shown to regulate lipid uptake, lipogenesis, and lipid storage. CD36 is an integral membrane protein, also called fatty acid translocase, that is involved in the translocation of long-chain fatty acids [37,38]. Our results suggested that AC suppressed PPARγ and CD36 gene expression and reduced fatty acid transportation. Simultaneously, AC inhibited lipogenesis by decreasing the expression of ME1, which generates NADPH used for lipogenesis in the liver and adipose tissues, and SCD1, which is involved in fatty acid synthesis in the liver [39,40]. Moreover, ACC and FAS have been shown to be involved in fatty acid synthesis, and HMGCR in the cholesterol synthesis pathway [41,42]. Here, our immunoblotting results showed that AC significantly inhibited lipogenesis by decreasing the protein expression of ACC, FAS, and HMGCR in ob+/+ and ob/ob mice. AC also promoted fatty acid β-oxidation in the mitochondria and peroxisomes by increasing the protein expression of CPT1A and FACL4, which are key enzymes that catalyze mitochondrial fatty acid oxidation in the liver [42,43]. Gluconeogenesis is a pathway of glucose metabolism that might assist in keeping 3-carbon substrates out of lipid metabolism, and the synthesized glucose may be transported to other tissues [44]. Our results showed that AC alleviated fatty liver mainly by decreasing fatty acid uptake (CD36) and lipogenesis (PPARγ, SCD1, ACC, FAS, and ME1), and by increasing gluconeogenesis (FBP1) in ob/ob mice. Simultaneously, AC also decreased lipid accumulation in the adipose tissue by decreasing lipogenesis (ACC) and facilitating lipolysis (ATGL) in ob/ob mice.
Intestinal barrier integrity and permeability are thought to contribute to inflammatory bowel disease, obesity, and metabolic disorders [45]. In previous studies, CD36 deletion in endothelial cells of the small intestine impaired the small intestinal barrier [26]. Therefore, CD36 influences lipid utilization, homeostasis, and barrier maintenance in the intestine, especially in intestinal endothelial cells [46]. Also, a previous study showed that AC can regulate gut microbiota and enhance antimicrobial peptide production [11]. Our results showed that AC restored the integrity of the intestinal barrier by increasing CD36 expression in endothelial cells and decreasing intestinal permeability in ob/ob mice. Further, tight junction proteins of the intestine are important in preventing the entry of harmful substances, such as microbial components, into the body [47]. Intestinal permeability is associated with various diseases and is a potential target for disease prevention and therapy [48]. A previous study has shown that AC regulates gut microbiota, prevents DIO, and decreases intestinal inflammation and obesity [11]. In our study, AC slightly increased the expression of the intestinal Ocln gene and redistributed ZO-1 to the membrane in ob/ob mice. These results suggested that AC reduces obesity by regulating intestinal permeability and barrier integrity.
Experiments with Caco-2 human intestinal epithelial cells, used as a gut barrier model and treated with EEAC, also demonstrated that AC could decrease intestinal permeability and reinforce intestinal barrier integrity by regulating ZO-1 expression at the membrane. Taken together, these results suggested that AC alleviates leptin-deficiency-induced obesity and disorders by regulating lipid catabolism and restoring intestinal barrier integrity.
In conclusion, our results indicated that AC supplementation inhibited hepatic lipogenesis and lipid uptake in ob/ob mice. At the same time, AC promoted lipolysis and decreased lipogenesis to prevent fat deposition in the EWAT of ob/ob mice. Furthermore, AC enhanced intestinal barrier integrity as a preventive protection in ob/ob mice. Our work provides evidence that AC supplementation effectively reduced leptin-deficiency-mediated obesity by regulating metabolism in the liver and EWAT and restoring gut barrier integrity without any significant compromise on food intake. The AC extract could potentially be used as a nutraceutical for the treatment of obesity, and AC compounds could be further analyzed as potential targets for drug design.
|
v3-fos-license
|
2018-04-03T02:47:56.839Z
|
2000-12-22T00:00:00.000
|
20497306
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/275/51/40620.full.pdf",
"pdf_hash": "e592c1904174270bd88c04f38eb7e4d8a3038fbd",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2948",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "782e6467cdd2c05e2407679be689f75294579ec4",
"year": 2000
}
|
pes2o/s2orc
|
Activation of atypical protein kinase C zeta by caspase processing and degradation by the ubiquitin-proteasome system.
Atypical protein kinase C zeta (PKCzeta) is known to transduce signals that influence cell proliferation and survival. Here we show that recombinant human caspases can process PKCzeta at three sites in the hinge region between the regulatory and catalytic domains. Caspase-3, -6, -7, and -8 chiefly cleaved human PKCzeta at EETD↓G, and caspase-3 and -7 also cleaved PKCzeta at DGMD↓G and DSED↓L, respectively. Processing of PKCzeta expressed in transfected cells occurred chiefly at EETD↓G and DGMD↓G and produced carboxyl-terminal polypeptides that contained the catalytic domain. Epitope-tagged PKCzeta that lacked the regulatory domain was catalytically active following expression in HeLa cells. Induction of apoptosis in HeLa cells by tumor necrosis factor alpha plus cycloheximide evoked the conversion of full-length epitope-tagged PKCzeta to two catalytic domain polypeptides and increased PKCzeta activity. A caspase inhibitor, zVAD-fmk, prevented epitope-tagged PKCzeta processing and activation following the induction of apoptosis. Induction of apoptosis in rat parotid C5 cells produced catalytic domain polypeptides of endogenous PKCzeta and increased PKCzeta activity. Caspase inhibitors prevented the increase in PKCzeta activity and production of the catalytic domain polypeptides. Treatment with lactacystin, a selective inhibitor of the proteasome, caused polyubiquitin-PKCzeta conjugates to accumulate in cells transfected with the catalytic domain or full-length PKCzeta, or with a PKCzeta mutant that was resistant to caspase processing. We conclude that caspases process PKCzeta to carboxyl-terminal fragments that are catalytically active and that are degraded by the ubiquitin-proteasome pathway.
The protein kinase C (PKC)1 family consists of at least a dozen structurally related phospholipid-dependent serine/threonine protein kinases (1)(2)(3). Two members of the family, zeta (ζ) and iota/lambda (ι/λ), are atypical (aPKCs), because they lack a functional C1 domain and, therefore, are not activated by C1 ligands such as diacylglycerol and phorbol ester tumor promoters (1)(2)(3). aPKCs are widely expressed in mammalian tissues and cell types (2). aPKCs transduce a variety of extracellular stimuli and thereby modulate the proliferation, malignant transformation, and survival of mammalian cells (2). For example, studies with a kinase-dead, dominant-negative PKCζ mutant indicate that PKCζ is required for the mitogenic activation of mouse fibroblasts (4). PKCζ binds and phosphorylates transcription factor Sp1, which controls the expression of a variety of mammalian genes, including vascular permeability factor/vascular endothelial growth factor (5). By turning on another transcription factor, namely nuclear factor κB (NF-κB), aPKCs regulate the expression of genes that mediate inflammatory responses and cell survival (6-10). Activation of NF-κB opposes the induction of apoptosis, which suggests that aPKCs transduce one or more survival signals (8-10). Consistent with this idea, Pongracz et al. (11) found that spontaneously apoptotic U937 cells had about 20% as much PKCζ as did exponentially growing cells. In contrast to the depletion of PKCζ protein, induction of apoptosis in NIH-3T3 cells by UV radiation inhibited the activity of aPKCs without changing the aPKC protein level (12). Murray and Fields (13) have shown that PKCι/λ, but not PKCζ, protects human K562 leukemia cells from apoptosis. Although there is little understanding of how PKCζ influences apoptosis, PKCζ activity apparently promotes cell survival (8-12).
A family of cysteinyl proteases called caspases (>13 genes), which are related to the ced-3 death gene of Caenorhabditis elegans, are the key players in a pervasive pathway of apoptosis in mammalian cells (14,15). Caspases are among the most specific of proteases, with an unusual requirement for cleavage after aspartic acid. Caspases that mediate induction of apoptosis fall into two classes: initiator caspases (e.g. caspase-8, -9), which activate downstream effector caspases (e.g. caspase-3, -6, -7) that process a wide variety of cellular proteins (14,15). Generally, processing of target proteins by caspases either activates proapoptotic functions or turns off survival pathways. Widmann et al. (16) have shown that caspase-dependent cleavage of Raf-1 and Akt-1 inhibited their kinase activity and may explain the inhibition of the extracellular signal-regulated kinase and Akt pathways during the progression of apoptosis. Kufe and co-workers (17)(18)(19) found that ionizing radiation or etoposide, which damages DNA and induces apoptosis in U937 cells, produces CFδ, a 40-kDa carboxyl-terminal fragment of PKCδ that is intrinsically active. Moreover, transfection of HeLa cells with CFδ, but not an inactive CFδ (K378R) mutant, induced nuclear fragmentation and cell death (19). Analogous results were obtained with a carboxyl-terminal fragment of PKCθ (20). PKCθ was processed by caspase-3 (or a closely related caspase) in vitro and in vivo to CFθ, which induced apoptosis following expression in U937 cells (20). Recently Reyland et al. (21) reported evidence that PKCδ activity is essential for etoposide-induced apoptosis in rat parotid C5 cells.
Here we identify three caspase cleavage sites of human PKCζ, which lie in the hinge segment between the regulatory and catalytic domains. Induction of apoptosis in nontransfected parotid C5 cells, or HeLa cells transfected with epitope-tagged PKCζ, produced 40- and 50-kDa carboxyl-terminal fragments of PKCζ and stimulated PKCζ immune complex kinase activity. Our demonstration of PKCζ activation following the induction of apoptosis appears to be unprecedented. Caspase processing appears to be an alternative mechanism to phosphorylation for activating the kinase function of PKCζ. A carboxyl-terminal segment of PKCζ that contained the catalytic domain was active and formed polyubiquitin conjugates following expression in mammalian cells. The conjugates accumulated in cells treated with Lacta, a selective inhibitor of the 26 S proteasome, which governs the abundance of many influential proteins (22,23). Our results indicate that the free catalytic domain of PKCζ that is produced by caspases is subject to degradation by the Ub-proteasome pathway, which is a novel mechanism for down-regulation of the kinase lacking the regulatory domain.
EXPERIMENTAL PROCEDURES
Cells and Transfection-Baby hamster kidney (BHK) and HeLa cells were grown in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum. The rat parotid salivary gland acinar cell line, parotid C5, was grown on Primaria culture dishes (Falcon) in a 1:1 mixture of Dulbecco's modified Eagle's medium:F-12 medium supplemented with 2.5% fetal bovine serum, 5 µg/ml insulin, 5 µg/ml transferrin, 1.1 µM hydrocortisone, and 25 ng/ml epidermal growth factor (24). Transfections of BHK were done with 0.5 µg of pcDNA3 vector containing lacZ as a negative control, or wild-type or mutant PKCζ, using 8 µg of LipofectAMINE (Life Technologies, Inc.). Transfected BHK cells were selected with 0.5 mg/ml active G418. Cultures were maintained at 37°C in a humidified atmosphere containing 5% CO2 and 95% air. HeLa cells were transfected with the pcDNA3.1/GS wild-type and mutant PKCζ constructs using X-tremeGENE Q2 reagent as recommended by the manufacturer (Roche Molecular Biochemicals).
PKCζ cDNA Constructs-Human PKCζ cDNA (ATCC; accession number Z15108), which was blunt-end-subcloned into the EcoRV site of pcDNA3 (Invitrogen), was kindly provided by Dr. G. Yancey Gillespie (University of Alabama at Birmingham). Site-directed mutagenesis was done with the QuikChange kit according to the instructions of the manufacturer (Stratagene). The following PAGE-purified primer pairs (Life Technologies, Inc.) were used (substituted base underlined): D210A, forward, 5′-CT TCC GAG GAG ACA GCT GGA ATT GCT TAC; reverse, 5′-GTA AGC AAT TCC AGC TGT CTC CTC GGA AG; D230A, forward, 5′-GAC GAC TCG GAG GCC CTT AAG CCA GTT ATC G; reverse, 5′-C GAT AAC TGG CTT AAG GGC CTC CGA GTC GTC; D239A, forward, 5′-CA GTT ATC GAT GGG ATG GCT GGA ATC AAA ATC TC; reverse, 5′-GA GAT TTT GAT TCC AGC CAT CCC ATC GAT AAC TG. A catalytic fragment of PKCζ (CATζ) consisting of residues 238-592 and having the D239A mutation was produced by PCR with the following primer pair: forward, 5′-ATG GCT GGA ATC AAA ATC TCT CAG; reverse, 5′-TCA CAC CGA CTC CTC GGT GGA CAG C. The PCR product was TA-cloned into pCR2.1 (Invitrogen), excised with HindIII and SacII, and subcloned into the corresponding sites of pcDNA3.1B+ (Invitrogen). A kinase-deficient K281R mutant of PKCζ and CATζ was produced with the QuikChange kit using the following primers: forward, 5′-C CAA ATT TAC GCC ATG AGA GTG GTG AAG AAA AGA GC; reverse, 5′-GC TCT TTT CTT CAC CAC TCT CAT GGC GTA AAT TTG G.
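QuikChange-style mutagenesis uses fully complementary primer pairs, so each reverse primer listed above should be the reverse complement of its forward partner. The standalone check below, which is not part of the paper's methods, verifies this for the quoted D210A pair.

```python
# Quick check that a mutagenesis primer pair consists of a forward primer and
# its reverse complement, using the D210A pair quoted above as the example.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Strip spaces, complement each base, then reverse the sequence."""
    return seq.replace(" ", "").upper().translate(COMPLEMENT)[::-1]

d210a_forward = "CT TCC GAG GAG ACA GCT GGA ATT GCT TAC"
d210a_reverse = "GTA AGC AAT TCC AGC TGT CTC CTC GGA AG"

assert reverse_complement(d210a_forward) == d210a_reverse.replace(" ", "")
print("D210A reverse primer is the reverse complement of the forward primer")
```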
Epitope-tagged wild-type and mutant PKCζ-V5 constructs were prepared from a pcDNA3.1/GS plasmid containing the 1776-bp human PKCζ open reading frame (H-L14283) with carboxyl-terminal V5 epitope and hexahistidine tags (Invitrogen). A 1.4-kbp NotI/BamHI segment of the open reading frame was excised and replaced with the 1.4-kb fragment of the kinase-deficient or caspase-resistant mutants or wild-type PKCζ constructs described above. A catalytic domain mutant with V5 and hexahistidine tags (CATζ-V5) lacking amino acids 16-239 was produced by PCR from wild-type PKCζ in pcDNA3 with the following primers: 5′-TTT AAA GCG GCC GCG GAA TCA AAA TCT CTC; and 5′-GAA CCG GGG GAT CCG GAT G. The PCR product was agarose gel-purified and TA-cloned with pCR2.1 (Invitrogen). The 0.7-kb NotI/BamHI fragment was excised from pCR2.1 and ligated into the corresponding site of the PKCζ pcDNA3.1/GS plasmid. Similarly, a kinase-deficient K281R CATζ-V5 was prepared using the K281R PKCζ as the PCR template. All mutant PKCζ cDNA clones were subjected to automated DNA sequencing of both strands, which confirmed that the constructs were correct.
Caspase Treatment of [35S]Met-labeled PKCζ and Purified Recombinant PKCζ in Vitro-Purified recombinant human caspase-2, -3, -4, -6, -7, and -8 were prepared as described (25). [35S]Met (10 µCi/20-µl reaction)-labeled PKCζ was produced by in vitro transcription and translation with the T7 Quick TNT kit (Promega). Caspase treatments were for 30 min at 30°C in buffer (CB) containing 0.1 M Hepes adjusted to pH 7.5 with Tris, 5 mM DTT, 0.5 mM EDTA, 10% (v/v) glycerol, and 5 µl of the TNT reaction. The caspase reaction was stopped by addition of SDS sample solution and a 5-min incubation in boiling water.
Caspase Activity-BHK cells were rinsed with ice-cold PBS and disrupted by Dounce homogenization in a buffer containing 20 mM Tris-HCl, pH 8.0, 1 mM NaEGTA, 1 mM DTT, and 10 µg/ml each of leupeptin, aprotinin, and pepstatin. The lysate was clarified by centrifugation for 30 min at 4°C at 16,000 × g. Protein concentration was measured by the Bradford method with γ-globulin as a standard (Bio-Rad). The reaction was started by adding 0.1 mg of lysate to 2 ml of reaction buffer containing 20 mM Tris-HCl, pH 8.0, 2 mM MgCl2, and 50 µM Ac-DEVD-amc. Fluorescence due to the production of free AMC was continuously recorded at 440 nm (380-nm excitation).
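With a continuous recording like the one described above, caspase activity is typically expressed as the initial slope of the AMC fluorescence trace versus time. The sketch below shows that generic calculation with synthetic readings; it is an illustration, not data from the experiment.

```python
# Estimate caspase activity as the slope of AMC fluorescence versus time
# (linear regression over the initial, roughly linear phase). Synthetic data.
import numpy as np
from scipy.stats import linregress

time_min = np.arange(0, 31, 5)                               # 0-30 min readings
fluorescence = np.array([10, 95, 182, 265, 351, 430, 512])   # arbitrary units

fit = linregress(time_min, fluorescence)
print(f"caspase activity ~ {fit.slope:.1f} fluorescence units/min "
      f"(r^2 = {fit.rvalue**2:.3f})")
```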
PKCζ Western Blot and Immunoprecipitation-Transfected cells were rinsed with ice-cold PBS and lysed with a buffer containing (in mM): 10 Tris-HCl, pH 7.4, 1% Triton X-100, 50 NaCl, 50 NaF, 30 sodium pyrophosphate, 5 NaEDTA, 5 NaEGTA, 1 phenylmethylsulfonyl fluoride, 0.1 sodium orthovanadate, and 10 µg/ml each of leupeptin and aprotinin. The lysates were homogenized by passage through a 26-gauge needle and centrifuged for 30 min at 16,000 × g at 4°C. Protein concentration was measured by the BCA method (Pierce Chemical) with bovine serum albumin as a standard.
For the preparation of cytosol and membrane fractions from parotid C5 cells, the cells were rinsed with ice-cold PBS and disrupted by Dounce homogenization with C5 lysis buffer, which contained (in mM): 20 Tris-HCl, pH 7.5, 0.5 EDTA, 0.5 EGTA, and 25 µg/ml each of aprotinin and leupeptin. Unbroken cells were removed by centrifugation for 5 min at 1,000 × g, and the supernatant was centrifuged for 30 min at 100,000 × g at 4°C to obtain cytosol and particulate fractions. Membrane proteins were extracted from the particulate fraction with C5 lysis buffer containing 1% Triton X-100. Proteins (50 µg) were fractionated by SDS-PAGE and subjected to Western blot analysis with the PKCζ antibody as described (21).
For Western blot analysis, a protein sample (30 µg) was mixed with SDS sample solution and fractionated by SDS-PAGE (10% gel). Proteins were electrophoretically transferred to a PVDF membrane (Millipore Corp.). The membrane was blocked for 1 h at room temperature with 5% (w/v) nonfat dry milk in Tris-buffered saline (TBS). TBS contained (per liter) 8 g of NaCl, 0.2 g of KCl, and 3 g of Tris base and was adjusted to pH 7.4 at room temperature; TBS containing Tween 20 also included 0.05% (w/v) Tween 20. Membranes were incubated overnight at 4°C with 1 µg of PKCζ antibody (C-20, Santa Cruz Biotechnologies) in 5 ml of TBS containing 1% nonfat dry milk. Membranes were rinsed and processed with HRP-conjugated goat anti-mouse IgG (Transduction Laboratories) and a chemiluminescent substrate as described (26). An HRP-conjugated anti-V5 antibody (Invitrogen) was used for Western analysis of immunoprecipitates of cells transfected with epitope-tagged PKCζ.
Immunoprecipitations were done with a polyclonal antibody (usually 3 µg) to a carboxyl-terminal peptide (amino acids 576-592) of rat PKCζ (C-20, Santa Cruz Biotechnologies) or with 1 µg of anti-V5 epitope monoclonal antibody (Invitrogen). The antibody was incubated with the lysate for 3 h at 4°C. Protein A-agarose (20 µl of a 50% slurry) was added during the last hour. Immune complexes were washed six times with ice-cold lysis buffer as described (26) and twice with kinase assay buffer as described below.
Immune Complex Kinase Activity-The kinase activity of immunoprecipitated V5-tagged PKCζ from HeLa cells was assayed in 25 µl of buffer, which contained 25 mM HEPES-Tris, pH 7.4, 10 mM MgCl2, 20 mM β-glycerophosphate, 2 mM DTT, 0.1 mM Na3VO4, 5 µg of MBP, 20 µM ATP, and 5 µCi of [γ-32P]ATP. After 30 min at 30°C the reaction was stopped by adding SDS sample solution and boiling for 5 min. Samples (10 µl) were size-fractionated by SDS-PAGE (15% gel), and 32P-labeled MBP was quantified autoradiographically after fixing and drying the gel. To estimate the relative amounts of the V5 epitope-tagged proteins in the immunoprecipitates, 10-µl samples were fractionated by SDS-PAGE (10% gel). Proteins were transferred to a PVDF membrane and immunostained with an HRP-conjugated anti-V5 monoclonal antibody (Invitrogen). Statistical analysis was done by two-tailed Student's t test.
Anti-Ub Western Blot-The Ub monoclonal antibody 6C1 was prepared by immunization of BALB/c mice with bovine erythrocyte Ub, which was coupled to ovalbumin with glutaraldehyde (27). Ascites (10 µl/5 ml of TBS containing 1% nonfat dry milk) containing the Ub antibody (IgG2a class) was used for Western blotting as described previously (27). Pretreatment of the antibody with purified bovine erythrocyte Ub abolished the immunostaining of polyubiquitinylated proteins (27). Lacta was prepared as described previously (28) and used to block protein degradation by the Ub-proteasome pathway (29). The cells were lysed with 95°C SDS lysis buffer, which contained (in mM): 10 Tris-HCl, pH 7.4, 2 EDTA, 2 EGTA, 5 N-ethylmaleimide, 1% (w/v) SDS, and 50 µM ALLN. N-Ethylmaleimide inhibits deubiquitinylating enzymes, and ALLN blocks the proteasome. Protein concentration was measured by the BCA method (Pierce Chemical) with bovine serum albumin as a standard. For immunoprecipitation of PKCζ and Ub-PKCζ conjugates, a sample of the SDS lysate (usually 0.5 mg of protein) was diluted 10-fold with immunoprecipitation buffer (IB), which contained (in mM): 1 EDTA, 1 EGTA, 0.2 sodium orthovanadate, 30 sodium pyrophosphate, 50 NaF, 50 µM ALLN, 1% Triton X-100, 0.5% Nonidet P-40, and 10 Tris-HCl, pH 7.4. Immunoprecipitation was done with 3 µg of the C-20 antibody for 2 h, and 20 µl of a 50% slurry of protein A-agarose was added after the first hour. Immunoprecipitates were rinsed twice with IB, twice with IB containing 0.5 M NaCl, twice with lower-salt IB (no NaF, sodium pyrophosphate, or sodium orthovanadate), and twice with 10 mM Tris-HCl, pH 7.4. Proteins were extracted with SDS sample solution for 5 min in a boiling water bath, fractionated by SDS-PAGE (10% gel), and transferred to a nylon membrane for immunostaining with the Ub antibody, which was done essentially as described for Western blot analysis of PKCζ.
Processing of [35S]Met-labeled PKCζ by Purified Caspases-
Treatment of in vitro translated [35S]Met-labeled PKCζ with purified caspases produced fragments designated F1, F2, and F3, with apparent masses of 50, 40, and 25 kDa, respectively. Caspase-6 and -8 produced F1 and F3 (Fig. 1). To identify the sites of caspase cleavage, candidate Asp residues were mutated to Ala and the sensitivity of wild-type and mutant PKCζ to processing by caspases was determined (Fig. 1). These studies show that caspase-3 processed [35S]Met-labeled PKCζ chiefly at EETD210 and DGMD239, whereas caspase-6 and -8 processed PKCζ chiefly at EETD210 (Fig. 1, B-D). Caspase-7 processed [35S]Met-labeled PKCζ at EETD210 and DSED230, but not at DGMD239 (Fig. 1E). These results show that PKCζ is a substrate of both initiator and effector caspases in vitro and that the processing sites are caspase-specific. Intracellular processing of human PKCζ occurred chiefly at EETD210 and DGMD239, as shown by transfection experiments with wild-type PKCζ and the mutants with alanine substituted for aspartate at the caspase cleavage sites, as discussed below (see Fig. 4).
Caspase Processing and Increased Epitope-tagged PKCζ Activity following the Induction of Apoptosis by TNFα and CHX-To determine the effect of caspase processing on PKCζ kinase function, HeLa cells were transfected with PKCζ-V5, which has a carboxyl-terminal V5 epitope tag, and apoptosis was induced by treatment with TNFα plus CHX. Induction of apoptosis in HeLa and other cell types requires the combination of TNFα plus CHX (30,31). The treatment markedly increased the accumulation of carboxyl-terminal fragments (F1 and F2) of PKCζ-V5 and decreased the steady-state level of full-length PKCζ-V5, as determined by Western analysis of anti-V5 immunoprecipitates with an HRP-conjugated V5 antibody (Fig. 2A). Treatment with the general caspase inhibitor, zVAD-fmk, prevented the depletion of full-length PKCζ-V5 and accumulation of F1 and F2 (Fig. 2A).
Additionally, we assayed the kinase activity of the anti-V5 immune complexes. Treatment with TNFα plus CHX markedly increased immune complex kinase activity (Fig. 2B). Interestingly, treatment with zVAD-fmk, which prevented the conversion of full-length PKCζ-V5 to the C-terminal fragments (F1 and F2), abolished the increase in kinase activity produced by TNFα plus CHX (Fig. 2B). This result suggests that the conversion of full-length PKCζ-V5 to one or both of the carboxyl-terminal fragments was responsible for the increase in kinase activity. Also note that the zVAD-fmk treatment decreased F2 and the kinase activity of the immune complex from the cells that were not treated with TNFα plus CHX (Fig. 2). This result suggests that there was a basal rate of F2 production and disappearance in the untreated cells. Transfection of the cells with a kinase-deficient K281R PKCζ-V5 mutant produced no detectable kinase activity in untreated cells, as expected (Fig. 2B). The kinase-deficient mutant was expressed to a similar extent as wild-type PKCζ-V5, and the TNFα plus CHX treatment depleted full-length K281R PKCζ-V5 similarly to wild-type PKCζ-V5, although there was much less accumulation of the F1 and F2 fragments of the kinase-deficient mutant (Fig. 2A). One possible explanation of this result is that the K281R mutant of F1 and F2 is less stable following the TNFα plus CHX treatment than wild-type F1 and F2. In agreement with this explanation, the TNFα plus CHX treatment decreased K281R CATζ-V5 more than wild-type CATζ (Fig. 2A).
To confirm that the catalytic domain of PKCζ-V5 was active, HeLa cells were transfected with CAT-V5 or K281R CAT-V5, which are mutants that lack the regulatory domain. As expected, CAT-V5 was active following immunoprecipitation from transfected cells, but K281R CAT-V5 had no detectable activity (Fig. 2B). Note that the amounts of the immunoprecipitated CAT-V5, K281R CAT-V5, and full-length PKCζ-V5 polypeptides were similar (Fig. 2A). CAT-V5 had 68 ± 20% (n = 3, p = 0.04) more kinase activity than full-length PKCζ-V5 based on the ratios of the 32P-MBP to the immunostained protein bands. Treatment with TNFα plus CHX decreased the steady-state level of CAT-V5 and its kinase activity (Fig. 2). Although the mechanism by which treatment with TNFα plus CHX induced the disappearance of the catalytic domain of PKCζ is unknown, it may involve the Ub-proteasome, because the results presented below show that the catalytic domain forms polyubiquitin conjugates and accumulates following blockade of the proteasome.
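To make the normalization behind the "68 ± 20% more kinase activity" comparison concrete, here is a minimal sketch of how a ratio of radiolabel signal to immunostained band intensity could be computed and compared between constructs. The densitometry numbers are invented for illustration; they are not the measured values from Fig. 2.

```python
import numpy as np

# Hypothetical densitometry readings (arbitrary units) from three experiments:
# 32P-MBP signal and the corresponding immunostained protein band per construct.
p32_cat   = np.array([1.9, 2.4, 2.1]);  band_cat  = np.array([1.0, 1.1, 0.9])
p32_full  = np.array([1.2, 1.3, 1.3]);  band_full = np.array([1.0, 0.9, 1.1])

# Normalize kinase signal to the amount of immunoprecipitated protein.
ratio_cat  = p32_cat / band_cat
ratio_full = p32_full / band_full

# Percent excess activity of the catalytic-domain construct over full length.
percent_more = 100 * (ratio_cat / ratio_full - 1)
print(f"CAT-V5 exceeds full-length activity by {percent_more.mean():.0f} "
      f"+/- {percent_more.std(ddof=1):.0f}% (n = {len(percent_more)})")
```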
Production of Catalytic Domain Fragments of Endogenous PKCζ and Activation of PKCζ Activity in Parotid C5 Cells following the Induction of Apoptosis-To determine if caspase activation evoked the processing of endogenous PKCζ, rat parotid C5 cells were treated with etoposide for 8-24 h and PKCζ was analyzed by Western blot. Treatment with etoposide increased the levels of 40- and 50-kDa carboxyl-terminal fragments of PKCζ, with the amount of the smaller F2 fragment being greater than F1 (Fig. 3, A and C). The carboxyl-terminal PKCζ fragments were predominantly in the cytosol fraction, which suggests that they are not strongly membrane-associated (Fig. 3A). Although the antibody used for Western blot analysis cross-reacts with the λ/ι isoform of PKC (13), parotid C5 cells lacked detectable PKCλ/ι as determined by Western blot with a PKCλ/ι-specific antibody. 2 The carboxyl-terminal fragments in etoposide-treated parotid C5 cells are probably produced by caspase processing of rat PKCζ at EETD210 and DGVD239, as recently demonstrated in HeLa cells transfected with Myc-tagged PKCζ (32).
Interestingly, treatment of rat parotid cells with etoposide, which induces apoptosis in these cells (21), increased PKCζ immune complex kinase activity (Fig. 3B); etoposide treatment (18 h) increased PKCζ immune complex kinase activity 3.8 ± 1.1-fold (n = 4, p = 0.04). A general caspase inhibitor, zVAD-fmk (14), abolished accumulation of the carboxyl-terminal PKCζ fragments in etoposide-treated parotid C5 cells and prevented the increase in the PKCζ immune complex kinase activity (Fig. 3, B and C). Following treatment with 25 or 50 μM zVAD-fmk and etoposide, PKCζ immune complex kinase activity (relative to untreated cells) was not significantly different from that of untreated cells. At 25 μM concentration, zVAD-fmk had no effect on PKCζ immune complex kinase activity, but decreased etoposide-stimulated kinase activity by 86% (n = 3). A caspase-3 selective inhibitor, zDEVD-fmk, reduced PKCζ immune complex kinase activity of etoposide-treated cells (Fig. 3B). zDEVD-fmk also inhibited the production of the carboxyl-terminal PKCζ fragments in etoposide-treated parotid cells, but zVAD-fmk was more potent than zDEVD-fmk in preventing the increase in PKCζ immune complex kinase activity (Fig. 3B). 2

Accumulation of PolyUb Conjugates of the Catalytic Domain or Full-length PKCζ following Blockade of the Proteasome-Previously we observed the formation of Ub conjugates of PKCα in vitro and intracellularly (26, 33). The δ and ε isoforms of PKC also appear to be degraded by the Ub-proteasome system (26, 34). To determine whether the catalytic domain or full-length PKCζ formed Ub conjugates, BHK cells were transfected with PKCζ, CAT, or with lacZ as a negative control. Some cells were treated for 18 h with Lacta, a specific inhibitor of the 26 S proteasome (29), to cause polyubiquitinylated proteins to accumulate. Proteins were immunoprecipitated with an antibody that recognizes the extreme carboxyl-terminal segment of PKCζ, fractionated by SDS-PAGE, and subjected to Western blot analysis with a monoclonal antibody to Ub (Fig. 4, top left). Lacta treatment caused the accumulation of a smear of proteins (>150 kDa) that were immunoprecipitated by the anti-PKCζ antibody and immunostained by the anti-Ub antibody (Fig. 4, top left). Importantly, the >150-kDa smear was absent from cells transfected with lacZ instead of PKCζ or CAT (Fig. 4). These findings show that transfection of BHK cells with PKCζ or CAT caused Ub conjugates of PKCζ or carboxyl-terminal fragments of PKCζ to accumulate following the treatment of the cells with Lacta. No Ub-PKCζ conjugates were detected in untreated cells (Fig. 4), as expected if the conjugates were rapidly degraded by the proteasome in the absence of Lacta. There was no significant increase in full-length PKCζ in BHK cells transfected with PKCζ compared with those transfected with lacZ (Fig. 4, bottom left, and see Fig. 5C). Lacta treatment caused the accumulation of full-length PKCζ and the F1 and F2 fragments (Fig. 4, bottom left, and Fig. 5C). Additionally, CAT was only detected in Lacta-treated cells (Fig. 4, bottom left). Note that PKCζ comigrated with an endogenous protein band, which is probably the λ/ι isoform of PKC. PKCλ/ι has the same electrophoretic mobility as PKCζ and is recognized by the carboxyl-terminal PKCζ antibody (13). Moreover, Western blot analysis with antibodies specific for either PKCλ/ι or PKCζ (Santa Cruz Biotechnologies) showed that BHK cells had endogenous PKCλ/ι and lacked detectable PKCζ. 3
Lacta treatment increased the steady-state level of ectopic full-length PKCζ and caused F1 and F2 to accumulate, but had no effect on the level of the endogenous PKCλ/ι (Fig. 4, bottom right). Lacta treatment produced no detectable F1 or F2 in the lacZ-transfected BHK cells (Fig. 4, bottom left), which agrees with the recent reports that PKCλ/ι is not processed by caspases in vitro or intracellularly following induction of apoptosis (32, 35).
To determine whether caspase processing was required for the accumulation of Ub conjugates of PKCζ, BHK cells were transfected with PKCζ mutants with Ala substitutions at the caspase-processing sites (Fig. 4, right panels). Following Lacta treatment, Ub-PKCζ conjugates accumulated in cells transfected with wild type PKCζ or the Ala substitution mutants, including the double mutant, which was not cleaved by caspases (Fig. 4, top right). Ectopic expression of the double mutant produced no detectable PKCζ fragments in the Lacta-treated cells (Fig. 4, bottom right). These results indicate that caspase processing was not required for the production of polyUb-PKCζ conjugates. The single cleavage site mutants were processed to F1 and F2 as expected in the Lacta-treated BHK cells (Fig. 4, bottom right). Thus, the D210A and D239A mutations prevented the accumulation of F1 and F2, respectively (Fig. 4, bottom right). F1 and F2 PKCζ fragments were detected in BHK cells transfected with wild type PKCζ (Fig. 4, bottom right, first lane), apparently because BHK cells spontaneously undergo apoptosis shortly after becoming confluent. 3 Expression of the full-length PKCζ and the single and double caspase cleavage site mutants in HEK 293 cells showed that caspase processing occurred chiefly at EETD210 and DGMD239, as in the case of BHK cells. 3

[Displaced fragment of the Fig. 3 legend: cells incubated 18 h with or without 50 μM etoposide and the indicated concentrations of zDEVD-fmk or zVAD-fmk; PKCζ was immunoprecipitated from 0.5 mg of lysate and histone H1 kinase activity assayed (25 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 1 mM DTT, 50 μM ATP, 1 μCi [γ-32P]ATP, 20 μg histone H1, 40 μg/ml phosphatidylserine; 10 min at 30°C) as described previously (21); panel C, cells incubated 18 or 24 h with or without 100 μM etoposide and 100 μM zVAD-fmk and analyzed by Western blot; F1 and F2 mark the 50- and 40-kDa carboxyl-terminal fragments. Two additional experiments gave similar results.]

Treatment of BHK Cells with Lacta or Etoposide Activated Caspase-3-like Activity, but Only Lacta Caused PKCζ Fragments to Accumulate-Lacta treatment produced a severalfold increase in caspase-3-like activity in the BHK cells, which were transfected with lacZ or PKCζ (Fig. 5, A and B). The extent of the activation of caspase-3-like activity by Lacta was similar to that produced by etoposide, a topoisomerase II inhibitor, which has been widely used to induce apoptosis (14). The combination of Lacta with etoposide synergistically activated caspase-3-like activity (Fig. 5, A and B). Lacta and etoposide or the combination of the two activated caspase-3-like activity in nontransfected BHK cells similarly to transfected cells. 3 Lacta probably evoked the accumulation of carboxyl-terminal PKCζ fragments by two mechanisms: 1) inhibition of the proteasome activated caspases (Fig. 5, A and B), as shown previously (36), and 2) inhibition of the proteasome prevented the degradation of polyubiquitinylated proteins (29), including the carboxyl-terminal PKCζ fragments (Fig. 4).
Interestingly, the accumulation of full-length PKCζ and the F1 and F2 fragments required treatment with Lacta (Fig. 5C). Thus, full-length PKCζ and/or the F1 and F2 fragments were probably degraded as rapidly as they were produced in the absence of the proteasome inhibitor. Extraction of BHK cells with Triton X-100 (Fig. 5C) solubilized relatively little of the cross-reacting protein, which had a slightly slower mobility than PKCζ, compared with extraction with SDS (Fig. 4). Treatment of lacZ-transfected cells with etoposide produced a small amount of a polypeptide with a mobility similar to that of F1. The polypeptide is probably not F1, however, because BHK cells lacked detectable PKCζ and because Lacta failed to increase the accumulation of the polypeptide (Fig. 5C). 3
DISCUSSION
Atypical PKCζ, like other PKC isoforms, consists of an amino-terminal regulatory domain (first ~240 amino acids), which is connected to the carboxyl-terminal catalytic domain (amino acids ~240-584) via a short hinge segment (1-3). The results presented here show that both initiator (e.g. caspase-8) and effector (e.g. caspase-3, -6, and -7) caspases efficiently process human PKCζ in vitro at the hinge segment and generate carboxyl-terminal fragments of ~40 and 50 kDa, which contain the catalytic domain. Each of the caspases readily or predominantly cleaved human PKCζ at EETD↓G in vitro, which was not expected based on the known substrate specificities of the caspases. The substrate specificity of caspase-2, -3, and -7 is DEXD and that of caspase-6, -8, and -9 is (I/L/V)EXD (25, 37). Furthermore, studies with peptide substrates have shown that caspase-7 is more efficient than caspase-3 in processing peptides with glutamate at the P4 or P3 positions by six and two times, respectively (25). However, caspase-3 readily processed PKCζ at EETD↓G (Fig. 1). These results support the idea that the structural context of a protein contributes to recognition as a caspase substrate (25). Some of the PKCζ processing data are consistent with in vitro studies of the substrate specificities of recombinant human caspases. For example, caspase-3 is three times more efficient than caspase-7 at cleaving peptides with a P2 methionine instead of valine (25), and caspase-3, but not caspase-7, cleaved PKCζ at DGMD↓G (Fig. 1). A recent report indicated that Myc-tagged rat PKCζ expressed in HeLa cells was processed at EETD↓G and DGVD↓G following the induction of apoptosis by UV radiation (32). The valine for methionine substitution at the caspase site of rat versus human PKCζ may explain the processing of rat PKCζ predominantly at DGVD↓G instead of EETD↓G. 3

Our results appear to be the first demonstration of PKCζ activation following the induction of apoptosis. Moreover, the activation of PKCζ depended on caspase processing to carboxyl-terminal fragments (Figs. 2 and 3). Caspase processing of epitope-tagged or endogenous PKCζ produced catalytic domain fragments (F1 and F2), which would be expected to dissociate from the autoinhibitory amino-terminal regulatory domain (1-3). CAT-V5, which encodes the catalytic domain with a V5 epitope, was active following expression and immunoprecipitation from HeLa cells (Fig. 2). Furthermore, increased PKCζ immune complex kinase activity accompanied caspase processing of epitope-tagged or endogenous PKCζ in HeLa or parotid C5 cells, respectively, following the induction of apoptosis (Figs. 2 and 3). The general caspase inhibitor, zVAD-fmk, which abolished V5-tagged or endogenous PKCζ processing, prevented the increase in immune complex kinase activity (Figs. 2 and 3). These results suggest that caspase processing of latent, inactive PKCζ constitutes a mechanism for activation of the kinase. Additional work is needed to determine whether caspase processing activates PKCζ independently of phosphorylation by an upstream kinase (38, 39), which is plausible because proteolysis between the regulatory and catalytic domains would be expected to dissociate them and thereby relieve the autoinhibition of the kinase function (1-3).
In contrast to the activation of PKCζ following the induction of apoptosis described here, Frutos and coworkers (32) observed that the induction of apoptosis by the exposure of HeLa cells to UV radiation markedly diminished the immune complex kinase activity of Myc-tagged PKCζ. The decrease in immune complex kinase activity was unrelated to caspase processing of PKCζ, because the kinase activity of a caspase-resistant PKCζ mutant was decreased similarly to that of wild type PKCζ (32). Although the mechanism of kinase activation that accompanies caspase processing is most likely the dissociation of the inhibitory amino-terminal regulatory domain, the biochemical basis for the inhibition of the kinase function following UV radiation is unknown (32). More work is needed to determine whether the caspase processing and activation of PKCζ that follows the induction of apoptosis generates execution and/or survival signals.
The present findings appear to be the first to show that an atypical PKC isoform, which lacks a functional diacylglycerol-binding C1 domain, forms Ub conjugates and is degraded by the Ub-proteasome. Polyubiquitinylation reversibly marks proteins for rapid destruction by the 26 S proteasome (22, 23). Polyubiquitinylated-PKCζ conjugates accumulated in BHK cells that were transfected with the catalytic domain or full-length PKCζ or with PKCζ mutants with Ala substitutions for Asp at the caspase cleavage sites (Fig. 4). These results indicate that the carboxyl-terminal catalytic segment of PKCζ (amino acids 240-592) is sufficient for ubiquitinylation in transfected BHK cells. Importantly, accumulation of the polyubiquitinylated-PKCζ conjugates depended on transfection with wild type or a mutant of PKCζ and blockade of the 26 S proteasome with Lacta (Fig. 4). Without Lacta the cells appear to degrade the Ub-PKCζ conjugates essentially as rapidly as they are produced. In fact, it was difficult to detect the carboxyl-terminal PKCζ fragments in the absence of Lacta, even when caspases were activated by treatment with etoposide (Fig. 5C). Although the half-life of PKCζ has not been reported, the present findings suggest that it is short-lived, at least in BHK cells, because so little of the catalytic domain or full-length PKCζ accumulated in the transfected cells unless the proteasome was blocked (Figs. 4 and 5). Polyubiquitinylated-PKCζ conjugates of the caspase-resistant double mutant accumulated provided the proteasome was blocked by Lacta (Fig. 4). Therefore, ubiquitinylation of a full-length PKCζ mutant can occur, although additional studies are needed to exclude the possibility that the substitution of the Asp residues with Ala affected ubiquitinylation.
Discrete structural elements, called destruction signals or degrons, which are recognized by the ubiquitinylation E2·E3 complex, have been identified in a variety of proteins in fungi and mammalian cells (22, 23). Degrons include the cyclin destruction boxes, the δ domain of c-Jun, a destabilizing amino-terminal residue (the N-degron), Deg1 and Deg2 of MATα2, the DSGXXS motif of IκB and β-catenin, the Ub conjugation motif of the growth hormone receptor, and the PEST elements of ornithine decarboxylase and other proteins (22, 23, 40-45). Because transfection with the catalytic domain of PKCζ readily produced Ub conjugates, there appears to be a degron in the catalytic domain. PKCζ has several prominent PEST elements (amino acids 51-73, 86-104, 175-190, 190-211) in the regulatory domain and one in the catalytic domain (amino acids 521-560) (26). Interestingly, the prominent caspase cleavage site of PKCζ (EETD↓G) lies in the middle of the PEST element which has the highest PEST score of 6.4 (26). If this PEST element constitutes a degron, then processing at EETD↓G might alter the stability of the protein. Alternatively, phosphorylation may influence processing of PKCζ, because PEST sites contain serine and/or threonine residues and because phosphorylation is known to regulate the processing of IκB and other proteins by caspases (15, 44, 46, 47). Phosphorylation, in turn, is known to regulate ubiquitinylation. For example, an F box protein recognizes proximal phosphorylated serines of the DSGXXS motif of IκB and β-catenin (41-43), and dephosphorylation preserves c-Fos, c-Jun, and c-Mos from destruction by the Ub-proteasome system (40-43, 48-50). PKCζ provides an attractive model for furthering our understanding of the interrelationships between the three major post-translational modifications of phosphorylation, ubiquitinylation, and caspase processing, which are likely to govern the functional impact of this pivotal kinase on cell proliferation, differentiation, malignant transformation, and apoptosis.
|
v3-fos-license
|
2019-04-02T13:13:34.616Z
|
2018-01-26T00:00:00.000
|
89791577
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://juniperpublishers.com/nfsij/pdf/NFSIJ.MS.ID.555647.pdf",
"pdf_hash": "40d985ec61d6d307f3fa6ce97186ffb65a0997c3",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2949",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "9f123a89cd900e7861a813c102c086b0b78914db",
"year": 2018
}
|
pes2o/s2orc
|
Effects of Replacing Maize with Rhizopus Oryzae Fermented Mangifera Indica Seed Kernels on Broilers Chicken Growth Performance
Introduction
Livestock production plays an important role in the agricultural sector of every nation, particularly in the West African sub-region. Successful livestock industries usually formulate quality feeds from materials that are either edible or inedible by man. These feeds, when ingested by the animals, enhance animal productivity in terms of numbers and nutrients, to meet the immediate nutrient requirements of man [1].
World feed resources are in decline, probably caused by the increase in the human population and in human activities. Hence, it is inevitable that conventional animal feeds will become increasingly more expensive. This has led to a search for new, often unconventional feeds and for effective methods of processing presently inedible roughages into more acceptable and nutritious feed [1].
Agricultural practices in West Africa, and indeed most developing countries, consist of small-scale farming. The farmers in this sub-region have, in general, a low level of agricultural education and at the same time are handicapped by insufficient capital. According to Payne and Wilson [1], the unavailability of capital and the increasing worldwide cost of energy, purchased feeds, equipment and pharmaceuticals may in the long run delay or even halt the complete industrialization and urbanization of poultry production in tropical countries; under such circumstances, subsistence and small-scale production methods with additional improvement may become relatively attractive to this sector of the population [1].
There are two types of broiler rations, namely the broiler starter mash, fed from day one to the fourth or fifth week, and the broiler finisher mash, fed from week 4 or 5 until slaughter. The high cost of feed still remains the greatest constraint to poultry production in Nigeria [2].
Mango (Mangifera indica) is a tree crop well adapted to all ecological zones in Nigeria and the trees are found all over the country. Mango kernel, a byproduct of mango pulp, is reported to be a good source of starch [3]. In India, mango kernel is consumed by human beings in the form of porridge [3,4]. In Nigeria, however, it is regarded as waste, thus contributing to environmental pollution. There are few reports on the use of mango kernel in livestock feeding, and its level of inclusion in poultry diets has been low because of the presence of tannins, which have been reported to reduce chick growth [5,6]. Drying, soaking, leaching and fermentation are simple means of detoxifying these feed sources to reduce the presence of anti-nutritional and toxic compounds [7].
The high level of competition between man and livestock for available feed ingredients has posed a great concern to nutritionists over the years, particularly in developing countries. The fact that feed alone accounts for up to 70-80% of the recurrent production input in intensive monogastric animal production also makes the sourcing of alternative feed ingredients expedient [8,9]. The high cost of conventional ingredients used in feed formulation contributes immensely to the high cost of the finished feed. This is a major obstacle to the expansion of the poultry industry in Nigeria and, by extension, most developing countries in the world [10][11][12][13]. The high cost of maize as an energy source in feed formulation has generated a lot of controversy as to its economic justification [14]. Beer production has been on the increase in Nigeria, with an attendant shift from imported barley to local grains as the major raw material. Maize and sorghum thus became ready substitutes in the beer-making industry. The use of maize and sorghum in beer making further exacerbated the scarcity of these cereal grains and hence their inclusion in feed formulation [14]. It is in light of this increasing cost that this research work was conducted, with the aims of determining the proximate, mineral and antinutritional factor composition of unfermented and Rhizopus oryzae fermented mango seed kernel, and of determining the effect of replacing maize with fermented mango seed kernels on the growth performance of broiler chicken.
Sample collection
Fresh mango (Mangifera indica) seeds were collected from the "Kofar mata" area of Usmanu Danfodiyo University Sokoto into seven clean polythene bags. The kernels were obtained by cutting the seeds open using a knife. The fresh kernels were chopped to reduce the particle size, sun-dried for 72 hours, ground, and used in the formulation of feed.
Two-week and four-day-old broiler chicks (Rhode Island) were purchased from Razaki feeds and chicks, "Kofar Atiku" old market area, Sokoto. Commercial feed, bone meal and vitamin premix were purchased from "Kofar Gabas" Sokoto central market.
Inoculum preparation
Rhizopus oryzae was previously isolated from mango seed kernel and maintained on PDA slants. The spores were subcultured onto molten PDA and incubated at room temperature for 5 days. Inoculum preparation was done as described by Negi and Banerjee [15] and Ibrahim et al. [16]. For inoculum preparation, 45 ml of sterile distilled water was added to the 5-day-old slant growth on PDA and scraped aseptically with an inoculating loop. 45 ml of this suspension, having a spore concentration of approximately 1.17 × 10⁹ cells/ml, was used as inoculum for the fermentation.
Fermentation
This was done as described by Lawal et al. [17] with slight modification. Twelve kilograms (12 kg) of the powdered mango seed kernel was moistened with 800 ml of distilled water, autoclaved at 121 °C for 15 minutes and allowed to cool to room temperature. It was seeded with the Rhizopus oryzae inoculum prepared earlier, mixed thoroughly and allowed to ferment for 10 days, after which it was oven dried at 60 °C for 45 minutes to arrest the fermentation process.
Proximate composition
Samples were analyzed in triplicate for proximate composition in accordance with the Official Methods of the Association of Official Analytical Chemists [18]. Ash was determined by incinerating two grams (2 g) of powdered mango seed kernel at 550 °C in a Lenton furnace (England) overnight. Fibre was determined by drying two grams (2 g) of powdered mango seed kernel overnight at 105 °C in an oven (Gallenkamp Oven BS) and incinerating the residue at 550 °C for 90 minutes in the Lenton furnace (England). Moisture content was determined by drying two grams (2 g) of powdered mango seed kernel overnight at 105 °C in the oven (Gallenkamp Oven BS). Crude lipid was determined by solvent extraction: two grams (2 g) of powdered mango seed kernel were weighed into a 50 ml conical flask, n-hexane was added and the mixture was allowed to stand at room temperature overnight. The extract was drained into an empty flask that had been weighed earlier and designated W1, and the flask was placed in the oven (Gallenkamp Oven BS) to allow the n-hexane to evaporate. Protein (%N × 6.25) was determined by the micro-Kjeldahl method. Soluble carbohydrate was not determined directly but was obtained by difference, as 100 minus the sum of moisture, ash, crude protein, crude lipid and crude fibre.
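To make the "by difference" step explicit, the short sketch below computes soluble carbohydrate from the other proximate fractions and shows how ash percentage follows from the weighed residue. The numbers passed in are illustrative only; the measured values are those reported in Table 1.

```python
def carbohydrate_by_difference(moisture, ash, crude_protein, crude_lipid, crude_fibre):
    """Soluble carbohydrate (%) estimated by difference, as in proximate analysis."""
    return 100.0 - (moisture + ash + crude_protein + crude_lipid + crude_fibre)

def percent_ash(residue_g, sample_g=2.0):
    """Ash (%) from the residue left after incinerating a 2 g sample."""
    return 100.0 * residue_g / sample_g

# Illustrative inputs only (not the Table 1 data).
print(carbohydrate_by_difference(moisture=8.0, ash=3.8, crude_protein=7.1,
                                 crude_lipid=12.2, crude_fibre=1.3))
print(percent_ash(residue_g=0.077))
```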
Mineral Content
Analysis of minerals in the unfermented and fermented samples was done in triplicate according to the methods described by Anhwange et al. [19] and Walinga et al. [20]. The minerals investigated were phosphorus, potassium, sodium, calcium, and magnesium. Phosphorus was determined using a spectrophotometer (JENWAY 6100) at a wavelength of 660 nm, potassium and sodium were determined using a flame photometer (Corning 400, Essex, England), and calcium and magnesium were determined by the ethylenediaminetetraacetic acid (EDTA) titration method.
Antinutritional Factor
Oxalate was determined by the method of Krishna and Ranjhan [21]; phytate and hydrocyanic acid were determined by the AOAC [18] method. Nitrate was determined by the IITA [22] method.
Feed formulation
The feeds were formulated according to the National Research Council specifications as cited by Aduku (1992).
Feeding trial A (fermented mango seed kernel)
Five hundred grams (500 g) of Rhizopus oryzae fermented mango seed kernel was mixed with five hundred grams (500 g) of maize. This was then supplemented with 100 g of groundnut cake powder, 100 g of bone meal, 100 g of vitamin premix and 10 g of salt.
Feeding trial B (Fermented Mango Seed Kernel + Conventional Feed)
Five hundred grams (500 g) of Rhizopus oryzae fermented mango seed kernel was mixed with five hundred grams (500 g) of conventional feed. This was then supplemented with 50 g of groundnut cake powder, 50 g of bone meal, 50 g of vitamin premix and 5 g of salt.
Feeding trial C (Conventional feed only as Control)
Commercially prepared conventional feed (Vital Feed broiler starter) was used in feeding the birds.
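For clarity on how the experimental diets were constituted, the sketch below converts the stated ingredient weights for diets A and B into percentage inclusion levels. It assumes the listed ingredients make up the whole mix; note that the fermented kernel and maize (or conventional feed) were combined 1:1, which is the sense in which maize was replaced at a 50% level.

```python
def diet_composition(ingredients_g):
    """Percentage inclusion of each ingredient in a formulated diet."""
    total = sum(ingredients_g.values())
    return {name: round(100 * grams / total, 1) for name, grams in ingredients_g.items()}

diet_a = {"fermented mango seed kernel": 500, "maize": 500,
          "groundnut cake": 100, "bone meal": 100, "vitamin premix": 100, "salt": 10}
diet_b = {"fermented mango seed kernel": 500, "conventional feed": 500,
          "groundnut cake": 50, "bone meal": 50, "vitamin premix": 50, "salt": 5}

print("Diet A:", diet_composition(diet_a))
print("Diet B:", diet_composition(diet_b))
```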
Weighing birds
The weights of the birds were measured using a weighing balance at weekly intervals for a period of eight weeks for the three different feeding diets used in feeding the birds.
Results
Analysis of the proximate composition of unfermented and fermented mango seed kernel was conducted and the result is presented in Table 1. From the result, the unfermented mango seed kernel had the highest values of carbohydrate (83.50), moisture (43.67), ash (3.83), lipid (12.17) and fibre (1.33), while the fermented mango seed kernel had the higher percentage of crude protein (7.06). Analysis of the mineral composition of fermented and unfermented mango seed kernel was conducted and the result is presented in Table 2. From the result, the unfermented mango seed kernel had the highest values of sodium (84.77 mg/kg) and magnesium (1.05 mg/kg), while the fermented mango seed kernel had the highest values of phosphorus (2.67 mg/kg), potassium (2433 mg/kg) and calcium (0.4 mg/kg). Analysis of the antinutritional composition of unfermented and fermented mango seed kernel was conducted and the result is presented in Table 3. From the result, fermentation caused a decrease in the antinutritional content of the mango seed kernel, from initial values of 0.02 mg/g for oxalate, 15.27 mg/mol for phytate, 0.53 mg/mol for tannin, 2.62 mg/mol for nitrate and 0.80 mg/mol for cyanide to 0.01 mg/g for oxalate, 5.24 mg/mol for phytate, 0.31 mg/mol for tannin, 0.05 mg/mol for nitrate and 0.07 mg/mol for cyanide. Analysis of broiler growth performance at weekly intervals for a period of four weeks was conducted to ascertain the effect of replacing maize with fermented mango kernel at a 50% level of inclusion and the result is presented in Table 4. During the feeding period, birds fed on fermented mango seed kernel plus conventional feed had a lower final body weight (2.2 kg) compared to the chicks on the fermented mango kernel based diet (2.6 kg) and the control (2.9 kg).
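As a small illustration of the final-weight comparison (weekly gains would require the Table 4 data, which are not reproduced in the text), the sketch below expresses each diet's final body weight relative to the control.

```python
final_weight_kg = {
    "control (conventional feed)": 2.9,
    "fermented mango kernel diet": 2.6,
    "fermented kernel + conventional feed (1:1)": 2.2,
}

control = final_weight_kg["control (conventional feed)"]
for diet, weight in final_weight_kg.items():
    print(f"{diet}: {weight} kg ({100 * weight / control:.0f}% of control)")
```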
Discussion
The result of the proximate analysis shows that mango seed kernel has a substantial amount of crude protein. Fermentation increased the bioavailability of crude protein. This increment could be attributed to Rhizopus oryzae metabolism involving polysaccharide hydrolysis and proteogenesis during fermentation [23]. A similar protein increment was also reported during African locust bean (Parkia biglobosa) fermentation [24].
The result of the mineral analysis of unfermented and fermented mango seed kernel shows that fermentation increases the bioavailability of minerals. Fermented mango seed kernels could therefore be exploited as a cheap source of minerals for poultry feed formulations. The variation in mineral content could be related to the type of soil from which the seeds were harvested, in the case of raw seeds, and to the fermentation conditions, in the case of fermented seeds [25,26].
The reduction in oxalate, phytate, cyanide, nitrate and tannins observed after fermentation may have resulted from microbial degradation of these toxic compounds to less toxic ones. It has also been reported that toxic antinutritional factors can be reduced by fermentation [7]. This implies that Rhizopus oryzae could be exploited to decrease the antinutritional factors in mango seed kernel, thereby making it less harmful for broiler consumption.
An increasing weight gain was observed in all the feeding trials, with no mortality recorded during the feeding experiment. However, birds fed Rhizopus oryzae fermented mango seed kernel alone had a higher weight gain compared to those fed Rhizopus oryzae fermented mango seed kernel and conventional feed (1:1), and not much difference was observed between the Rhizopus oryzae fermented feed and the conventional feed in terms of growth performance. This implies that the Rhizopus oryzae fermented feed was able to support the broiler chickens' energy and growth requirements. Diarra and Usman [2] reported that 20% of maize could be replaced with boiled mango kernel meal in the diet of broilers without adverse effects on growth and blood parameters.
Joseph and Abolaji [27] observed no adverse effect on broilers which had been fed 10% raw mango seed kernels, and a twofold improvement at the 10% inclusion level of incorporation into broiler chicken rations [28][29][30][31].
Conclusion
The study shows that Rhizopus oryzae caused an improvement in the protein, mineral and antinutritional values of raw mango seeds. It also shows that replacing maize at a 50% level of inclusion with Rhizopus oryzae fermented mango seed kernel in broiler rations had no adverse effect on the growth performance of broilers, suggesting that it could be used in poultry feed formulation, especially by small-scale farmers. This would go a long way towards cutting the cost of poultry meat and helping to address food safety concerns in Nigeria.
|
v3-fos-license
|
2021-09-01T15:11:34.320Z
|
2021-06-23T00:00:00.000
|
237919628
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.12688/wellcomeopenres.16968.2",
"pdf_hash": "80ac9a6ad6433c6aecac49b24b41c909f79b79f2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2950",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e3b74f4eab6b52e3738c0fb648335b0618ae722b",
"year": 2023
}
|
pes2o/s2orc
|
Pharmacokinetics and pharmacodynamics of azithromycin in severe malaria bacterial co-infection in African children (TABS-PKPD): a protocol for a Phase II randomised controlled trial
Background: African children with severe malaria are susceptible to Gram-negative bacterial co-infection, largely non-typhoidal Salmonellae, leading to substantially higher rates of in-hospital and post-discharge mortality than in those without bacteraemia. Current evidence for treating co-infection is lacking, and there is no consensus on the dosage or length of treatment required. We therefore aimed to establish the appropriate dose of oral dispersible azithromycin as an antimicrobial treatment for children with severe malaria and to investigate whether antibiotics can be targeted to those at greatest risk of bacterial co-infection using clinical criteria alone or in combination with rapid diagnostic biomarker tests. Methods: A Phase I/II open-label trial comparing three doses of azithromycin: 10, 15 and 20 mg/kg, spanning the lowest to highest mg/kg doses previously demonstrated to be equally effective as parenteral treatment for other salmonellae infection. Children with the highest risk of bacterial infection will receive five days of azithromycin and will be followed for 90 days. We will generate relevant pharmacokinetic data by sparse sampling during dosing intervals. We will use population pharmacokinetic modelling to determine the optimal azithromycin dose in severe malaria and investigate the impact of different levels of azithromycin exposure on change in C-reactive protein (a putative marker of sepsis at 72 hours), as well as on microbiological cure (seven-day), alone and as a composite with seven-day survival. We will also evaluate whether a combination of clinical, point-of-care diagnostic tests, and/or biomarkers can accurately identify the sub-group of severe malaria with culture-proven bacteraemia by comparison with a control cohort of children hospitalized with severe malaria at low risk of bacterial co-infection. Discussion: We plan to study azithromycin because of its favourable microbiological spectrum, its inherent antimalarial and immunomodulatory properties, and its dosing and safety profile. This study will generate new data to inform the design and sample size for a definitive Phase III trial evaluation. Registration: ISRCTN49726849 (27th October 2017).
Amendments from Version 1

1) We have made a minor edit to the abstract so that our aims are clearer: '..and investigate the impact of different levels of azithromycin exposure on change in C-reactive protein (a putative marker of sepsis at 72 hours), as well as on microbiological cure alone and as a composite with survival at 7 days'.
2) Additional abbreviations were added since the list was not comprehensive.
3) We have edited a very long sentence in the introduction to improve readability, so that the point we were making is now clearer.
4) Under the section covering other favourable properties of azithromycin, we were asked to justify the statement that azithromycin's half-life is such that it is likely to offer protection from bacteraemia for a month. We agreed that the referenced paper did not make this specific claim clear, so we have amended this.
5) The study figure had a footnote missing for the *. We have now clarified the sampling on Days 1-5 inclusive in the footnote next to the *.
6) The first sentence of the trial monitoring section has been edited to make more sense.
7) Under trial status we added the following: 'The trial completed recruitment on 4th October 2021 and follow up on 4th January 2022. The clinical trial data and PKPD are currently being analysed'.

Any further responses from the reviewers can be found at the end of the article.
Introduction
Plasmodium falciparum malaria remains a common cause of hospital admission in much of sub-Saharan Africa and plays a substantial role in under-five mortality 1 . Strategies to control or prevent malaria (including vaccines) have so far offered limited short-term protection. Over the past decade some African countries have documented either no decline 2 , an increase in hospitalisations with severe P. falciparum malaria 3 , or a resurgence of severe malaria following a period of sustained control. Thus, prevention strategies alone in many parts of Africa will not adequately address the burden that malaria poses on health services, particularly in countries where transmission is high. Even with the best (evidence-based) antimalarial treatment, artesunate, children in the Artesunate versus Quinine in the treatment of severe falciparum malaria in African children (AQUAMAT) trial had an overall mortality of 6-8.5% 4 . The trial population was selected pragmatically (defined by parasite-positivity and the admitting clinician's desire to use parenteral rather than oral antimalarials) for generalisability of the results. However, mortality in children with severe malaria would almost certainly be substantially higher outside a Good Clinical Practice (GCP)-run trial and outside selected centres with a strong track record of research, and with more stringent clinical criteria defining severe malaria 5 . Thus, there is substantial potential for clinical trials addressing the safety and efficacy of adjuvant supportive therapies to not only close existing gaps in the severe malaria treatment algorithm, but also substantially improve outcomes 6 .
Children with severe malaria and bacterial co-infection, largely due to enteric Gram-negative organisms with a predominance of non-typhoidal salmonella (NTS) species, have substantially higher rates of in-hospital and post-discharge mortality 7 . An estimated one third of all severe malaria deaths in African children are attributable to bacterial co-infection 8 . Current guidance and evidence for treating co-infection in children is lacking, and there is no consensus on the dosage or length of treatment required. Identifying which children genuinely have bacterial co-infection is practically impossible. The indiscriminate use of antibiotics is both financially costly and may perpetuate the rise of antibiotic resistance. Establishing which children with malaria are at greatest risk of bacteraemia is critical to pragmatically inform a policy for targeted antibiotic therapy that could substantially reduce malaria-associated mortality while minimising the risks of excess antibiotic prescribing.
Bacterial co-Infection in severe malaria
We conducted a systematic review in 2013 examining studies among children with malaria admitted to hospitals or outpatient clinics in sub-Saharan Africa reporting invasive bacterial infection (IBI) 9 . We identified a total of 25 studies across 11 African countries which fulfilled the inclusion criteria. These comprised 20 cohort analyses, two randomised controlled trials and three prospective epidemiological studies. We initially compared the prevalence of IBI in 20,889 children who were hospitalised with all-severity malaria to 27,641 children hospitalised with a non-malarial febrile illness. The mean prevalence of IBI was 5.58% (95% CI 5.5 to 5.66%) in children with malaria and 7.77% (95% CI 7.72 to 7.83%) in non-malarial illness. In the meta-analysis involving 10 studies (n=7,208) of children with severe malaria, the mean prevalence of IBI was 6.4% (95% confidence interval (CI) 5.81 to 6.98%). Ten of the 25 studies reported mortality stratified by bacterial infection status. Case fatality was higher, at 81 of 336, 24.1% (95% CI 18.9 to 29.4), in children with malaria/IBI co-infection compared to 585 of 5,760, 10.2% (95% CI 9.3 to 10.98), with malaria alone. In seven of 14 hospital studies, NTS was the commonest isolate, and one other study listed enteric Gram-negative organisms as the most common isolates in malaria-infected children 9 . Thus, enteric Gram-negative organisms, and NTS in particular, cause the vast majority of bacteraemia co-infections 9 . Moreover, the poor sensitivity of blood cultures (due to low blood volumes in children and previous antibiotic use) means that the true prevalence of bacteraemia co-infection is likely to be significantly higher, and 'nosocomial bacteraemia' will be acquired in a further 2-3.5% of children receiving a transfusion (commonly prescribed in severe malaria) due to poor quality.
Current management recommendations
Current recommendations indicate that antibiotics should be given to 'all children with suspected severe malaria in areas of moderate and high transmission until a bacterial infection is excluded… and should be based on culture and sensitivity results or, if not available, local antibiotic sensitivity patterns' 10 . However, many hospitals in Africa lack culture facilities or access to such data; thus, best practice is not obvious. In one study almost 50% of bacterial isolates were resistant to the antibiotics most commonly recommended (chloramphenicol and gentamicin for empirical use) 11 . In the specific case of NTS, the efficacy of gentamicin is doubtful and susceptibility testing unreliable due to this infection's intracellular nature. For newer and broad-spectrum antimicrobials, apart from financial considerations, there are concerns that over-use could lead to resistance. A systematic review of studies and trials of supportive therapy in severe malaria was undertaken. Whilst definitive evidence for parenteral antimalarial treatment has now been provided 4,10 , progress on supportive therapeutics has been limited. To date, of the 34 clinical trials involving adjunctive therapies in severe malaria, including 20 (59%) in African children, none have targeted bacterial co-infection 6 . Searching ClinicalTrials.gov and ISRCTN, we found no trials addressing anti-microbial treatment of bacterial co-infection in severe malaria.
Justification for azithromycin for malarial sepsis
There are several reasons why we consider that azithromycin represents an attractive option in such an approach. Given the worldwide threat of antimicrobial resistance, the underlying principle is to test the narrowest-spectrum antibiotic that would be practical, generalisable and plausibly have reasonable efficacy. There are several reasons for selecting azithromycin as the targeted antimicrobial intervention. Primarily, it is active against NTS, the commonest cause of bacterial co-infection in severe malaria 9 , against which currently recommended antimicrobials are ineffective (e.g. chloramphenicol, ampicillin/gentamicin combination) 11 . It is also active against a range of other gram-negative and gram-positive organisms, although it is less active against Escherichia coli and Klebsiella spp. The alternative (effective) antimicrobial therapies we considered, given that they would likely be active against NTS 12 , were, first, a full course (~seven days) of parenteral third generation cephalosporins. This has feasibility concerns as it would prolong hospital stay (median four-five days), and would be difficult (and potentially unethical) to blind. In contrast to macrolides, which are not widely used in other childhood diseases 13 , adding a major new indication for third-generation cephalosporins (severe malaria) also risks widening their use even further, and thus increasing the threat of antimicrobial resistance to a second-line antibiotic recommended for other common childhood diseases. Quinolones (e.g. ciprofloxacin) are another alternative strategy, given their superior antimicrobial activity against Salmonella species 13 , E. coli and Klebsiella spp. However, resistance is generated through single mutations in the housekeeping genes gyrA or gyrB and can be detected shortly after administration in healthy volunteer studies, suggesting resistant variants are commonly present at low levels or easily acquired 14 . A recent study found pathogenic ciprofloxacin-resistant E. coli were highly prevalent in healthy children and their mothers 15 , raising the possibility that widespread use in children with severe malaria without evidence of bacterial infection could have serious unintended consequences. Moreover, in most countries the use of fluoroquinolones in children is relatively contraindicated (due to concerns over musculoskeletal side-effects) except for use in multi-drug resistant infections where there are no suitable alternatives 13 . Finally, although cephalosporin and quinolone resistance rates are currently relatively low in relevant organisms in some African countries 12 , they are much higher in Asia 13 , limiting generalisability of the trial's findings worldwide.
Other favourable properties of azithromycin include: 1. It is licenced for use and has a good safety profile in children.
3. Its long half-life is also likely to protect children at increased risk of NTS bacteraemia following a malaria episode 17 .
4. An accumulating body of evidence indicates some macrolides have beneficial properties independent of their antimicrobial effects 18 . It is thus plausible that azithromycin may also have a beneficial effect on immune activation and gut inflammation.
5. A further consideration is that azithromycin is a weak antimalarial 19 , and has proved effective in treating P. falciparum malaria in early phase trials when used in combination with faster-acting antimalarials.
6. Dispersible azithromycin is currently widely available at an affordable price, costing 0.40 euros per 100mg dose in an oral formulation. i.e. two-six Euros per five-day paediatric treatment course.
Target population
Current recommendations indicate that antibiotics should be given to 'all children with suspected severe malaria in areas of moderate and high transmission until a bacterial infection is excluded' 10 . However, WHO definitions of severe malaria are very broad, incorporating high parasitaemia as a single criterion, and are thus applicable to a large proportion of paediatric admissions in such regions, with a relatively low overall case fatality (1-2%) 20 , who are unlikely to benefit from antimicrobial therapy as a group. For this study (and in a future trial) we aim to target antibiotics to children with the highest risk of bacterial co-infection, meeting 'Teule' criteria 11 : that is, with malaria (positive blood film or Paracheck™ rapid diagnostic test, RDT); temperature > 38°C or < 36°C; and ≥1 of prostration, respiratory distress, haemoglobin (Hb) <5 g/dl or HIV. These criteria identified 85% of bacterial co-infections, with a threefold higher mortality than children admitted with malaria without these criteria.
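Because the Teule criteria are a simple conjunction of objective findings, they can be expressed directly as a screening rule. The sketch below is one possible encoding; the field names are hypothetical placeholders, not the trial's case report form variables.

```python
def meets_teule_criteria(child):
    """Return True if a child meets the 'Teule' criteria described above.

    `child` is a dict with hypothetical keys: malaria_positive (bool), temp_c
    (float), prostration, respiratory_distress, hiv_positive (bool), hb_g_dl (float).
    """
    if not child["malaria_positive"]:          # blood film or RDT positive
        return False
    fever_or_hypothermia = child["temp_c"] > 38.0 or child["temp_c"] < 36.0
    severity_marker = (
        child["prostration"]
        or child["respiratory_distress"]
        or child["hb_g_dl"] < 5.0
        or child["hiv_positive"]
    )
    return fever_or_hypothermia and severity_marker

example = {"malaria_positive": True, "temp_c": 38.9, "prostration": True,
           "respiratory_distress": False, "hb_g_dl": 7.2, "hiv_positive": False}
print(meets_teule_criteria(example))  # True
```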
Credibility: can antibiotics be targeted?
A legitimate concern is whether future management guidelines that try to reduce the overuse of antibiotics can be implemented where microbiology services are poor or non-existent, since an approach that targets only those with confirmed IBI is not feasible in much of Africa. Within the context of this study, and to inform a future trial, we aim to investigate approaches (that may be generalizable in future) to targeting antibiotics. We propose to use (i) simple clinical criteria: proposed by Nadjm et al. 11 , the Teule criteria identify 85% of malaria cases with culture-proven bacteraemia; a pathogen was isolated from 20% of children meeting these criteria. Simple and objective criteria identifying those at risk would be more generalizable outside research centres and will be validated in this study. (ii) Potential biomarkers: even when quality-controlled blood-culture facilities are available, low sensitivity (due to frequent pre-hospital antibiotic therapy, low culture volumes from children and low bacterial density) and long times to culture-positivity (typically two-three days) mean that alternative approaches for identifying concurrent gram-negative bacteraemia would be highly valuable.
Justification for the study
This trial, designed to inform a key research gap, proposes targeted and appropriate antimicrobial treatment of children with severe malaria at greatest risk of bacterial co-infection. Incorporated in the design are experimental data to demonstrate that the mechanisms that azithromycin is targeting are biologically reasonable, that the dosing is providing the right exposure, and that there is a pharmacokinetic-pharmacodynamic (PKPD) link between azithromycin exposure and the potential mechanisms for achieving short-term microbiological cure (seven-day) and reducing longer-term mortality (28-day). Comparison with a control group will also enable us to identify whether antimicrobial treatment can be further targeted.
At present pharmacological data are lacking on the correct dosage of oral azithromycin for treatment of bacterial co-infection in severe malaria in African children. We will therefore conduct a Phase I/II trial comparing three doses of azithromycin: 10, 15 and 20 mg/kg (prescribed for feasibility by weight-bands) spanning the lowest to highest mg/kg doses demonstrated to be equally effective as parenteral treatment for other salmonellae infection 21 .
Our hypotheses
1. Azithromycin given to children once-daily for five days in addition to standard treatment of severe malaria (including anti-malarials) can provide adequate dosing in children admitted to hospital with severe malaria.
2. Children with severe malaria and culture-proven bacteraemia can be accurately identified using clinical criteria alone or in combination with rapid diagnostic biomarker tests, in comparison with a control cohort of children hospitalized with severe malaria but not meeting Teule criteria, at low risk of bacterial co-infection.
General objectives
Our principal objectives are to: (i) establish the appropriate dose of oral dispersible azithromycin as an antimicrobial treatment for children with severe malaria; (ii) investigate whether antibiotics can be targeted to those at greatest risk of bacterial co-infection using clinical criteria alone or in combination with a rapid diagnostic biomarker test.
Specific objectives (i) To conduct a Phase I/II trial comparing three doses of azithromycin: 10, 15 and 20 mg/kg (prescribed for feasibility by weight-bands) spanning the lowest to highest mg/kg doses demonstrated to be equally effective as parenteral treatment for other salmonellae infection.
(ii) To determine, via PKPD modelling, the optimal azithromycin dose in severe malaria, and investigate associations between azithromycin exposure and pathogen susceptibility (minimum inhibitory concentration, MIC) with treatment outcome. Outcome measures will consist of changes in C-reactive protein (CRP), a putative marker of sepsis at 72 hours (continuous) and microbiological cure (seven-day) (binary), alone and as a composite with seven-day survival, while providing preliminary data on longer-term survival (to day 90).
(iii) To evaluate whether a combination of clinical, point-ofcare diagnostic tests, and/or biomarkers can accurately identify the sub-group of severe malaria with cultureproven bacteraemia by comparison with a control cohort of children hospitalized with severe malaria but not meeting Teule criteria, at low risk of bacterial co-infection.
Study site
The trial will be conducted in Mbale Regional Referral Hospital (MRRH), Eastern Uganda, where we already have on-going collaborations 22,23 , in an area with hyperendemic malaria and paediatric admissions of >20,000/year, approximately 50% with malaria parasitaemia. MRRH is now an established research hub of the KEMRI-Oxford-Wellcome Trust Major Overseas Programme, assisted by trial, data and laboratory management from Kilifi, Kenya.
Study design
A Phase I/II open-label trial comparing three doses of azithromycin: 10, 15 and 20 mg/kg (prescribed for feasibility by weight-bands) spanning the lowest to highest mg/kg doses demonstrated to be equally effective as parenteral treatment for other salmonellae infection.
Study populations
Children will be considered eligible for enrolment in this trial if they fulfil all the inclusion criteria and none of the exclusion criteria.
Inclusion criteria
Cases: 105 children aged six months to 12 years at admission to hospital with P. falciparum malaria (on either blood film or Paracheck™ RDT) and all of the following: i) Axillary temperature >38°C or <36°C.
iii) Parents willing/able to provide consent.
Controls
The control cohort of children (n=50) are those hospitalised with severe malaria but not meeting Teule criteria (i.e., low risk of bacterial co-infection) whose parents are willing/able to provide consent.
Exclusion criteria (cases only)
Major contraindications to azithromycin, e.g. strong existing clinical diagnosis of QT-prolongation. Concomitant use of interacting drugs: drugs that may cause QT-prolongation or drugs that may cause a pharmacokinetic interaction with azithromycin, like strong CYP3A/P-GP inducers and concomitantly administered antacids.
Sampling
Sample size determination A formal sample size was not calculated. Phase I/II trials are an important step in the assessment of new or existing interventions providing the first data on the feasibility of using a product in a new application. They assess the effect of products (how well they work), and the side effects they may have. They are not usually designed to compare superiority of different products or interventions.
The aim is to generate pilot efficacy data on the optimal azithromycin dose for children with severe malaria, which will inform the design of a later larger Phase III trial. The numbers required to address the trial objectives are therefore balanced against the exposure of children in these settings to a therapeutic intervention (dose) for which there are limited data to date.
The overall sample size for the trial will be 105 children randomised 1:1:1 to receive 10, 15 or 20mg/kg azithromycin (based on weight-bands). This is sufficient for the PKPD sampling and modelling to determine an optimal dose in children with severe malaria using change in CRP at 72 hours, and microbiological cure (seven-day) alone and with seven-day survival. This is under the assumption that 20% of enrolled children (meeting Teule severity criteria) will have bacteraemia and that 80% of these infections will be caused by NTS or other enteric gram-negative organisms based on previous research.
Based on Monte Carlo simulations, we expect that all dose groups should reach the population reference target (adult) AUC24h of at least 3.4 h*mg/L 24 , although there are no data on absorption in severe malaria, which is characterised by impaired gut function 25 , hence the need for this pilot. Along with the 105 children with Teule criteria we will enrol 50 children meeting other inclusion criteria but without Teule criteria (and without any exclusion criteria) into the control group.
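To illustrate the kind of Monte Carlo exposure simulation referred to above, the sketch below draws one-compartment oral PK parameters for simulated children and computes the fraction whose first-dose AUC0-24 reaches the 3.4 h*mg/L reference target. All parameter values (clearance, volume, absorption rate, variability) are hypothetical placeholders for illustration; they are not the azithromycin population model or the trial's simulation settings, and impaired absorption in severe malaria is not represented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical paediatric parameters with log-normal between-subject variability.
weight = rng.normal(12, 3, n).clip(5, 30)            # kg
cl = 1.0 * weight * np.exp(rng.normal(0, 0.4, n))     # apparent clearance CL/F, L/h
v = 30.0 * weight * np.exp(rng.normal(0, 0.3, n))     # apparent volume V/F, L
ka = 0.5 * np.exp(rng.normal(0, 0.3, n))               # absorption rate, 1/h

def auc_0_24(dose_mg_per_kg, t=24.0):
    """Analytic AUC0-t for a single oral dose, one-compartment, first-order absorption."""
    dose = dose_mg_per_kg * weight                     # mg
    ke = cl / v
    coef = dose * ka / (v * (ka - ke))
    return coef * ((1 - np.exp(-ke * t)) / ke - (1 - np.exp(-ka * t)) / ka)

for d in (10, 15, 20):
    auc = auc_0_24(d)
    print(f"{d} mg/kg: median AUC0-24 = {np.median(auc):.1f} h*mg/L, "
          f"P(AUC >= 3.4) = {(auc >= 3.4).mean():.2f}")
```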
Study methods and procedures
Eligible children will be identified by the nurse and clinician on duty and registered in the eligibility screening log. A member of the trial team will then perform a rapid structured assessment of heart rate, oxygen saturation (pulse oximetry), respiratory rate, axillary temperature, blood pressure, markers of shock (capillary refill time, pulse volume and assessment of lower limb temperature) and severity (conscious level and respiratory distress). Children who are potentially eligible with suspected severe malaria will have a rapid bedside malaria test (detecting P. falciparum histidine-rich protein 2, HRP2) to determine malaria status and eligibility.
Randomisation procedure
Randomisation lists will be generated and kept at the Medical Research Council (MRC) Clinical Trials Unit at University College London, London. The randomisation envelopes will be prepared before the trial, using the lists, at the Clinical Trials Facility, KEMRI Wellcome Trust Research Programme (KWTRP), Kilifi, Kenya. These will be opaque and sealed and will contain a card with the allocation. Children will be randomised (1:1:1) to receive 10, 15 or 20 mg/kg azithromycin (based on weight-bands). The cards will be numbered consecutively and opened in numerical order. Clinicians will be aware of the treatment-group dose assignments, but the laboratory tests are to be performed in a blinded manner.
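The protocol does not state the block structure or software used to generate the randomisation lists. Purely as an illustration of how a 1:1:1 allocation list of this kind can be produced and mapped to consecutively numbered envelopes, a minimal sketch is given below; the permuted-block size and seed are assumptions, not trial specifications.

```python
# Minimal sketch of a 1:1:1 permuted-block randomisation list (illustrative only).
# Block size and seed are invented assumptions; the actual lists are generated
# and held by the MRC Clinical Trials Unit at UCL.
import random

DOSES = ["10 mg/kg", "15 mg/kg", "20 mg/kg"]

def make_allocation_list(n_participants, block_size=6, seed=2017):
    """Return a 1:1:1 permuted-block allocation list."""
    assert block_size % len(DOSES) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        block = DOSES * (block_size // len(DOSES))
        rng.shuffle(block)          # permute the arms within each block
        allocations.extend(block)
    return allocations[:n_participants]

# Each entry corresponds to one consecutively numbered, sealed opaque envelope.
for envelope_no, dose in enumerate(make_allocation_list(105), start=1):
    print(f"Envelope {envelope_no:03d}: {dose}")
```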
Consent process
Once eligibility has been confirmed, authorised trial staff will approach parents/guardians to invite their child to take part in the trial. An information sheet will be provided to the parent/guardian in their usual language containing details of the TABS trial. The sheet will be read aloud to those who are unable to read. The doctor/nurse will check that the information has been fully understood, and parents/guardians will be encouraged to ask questions they may have about their child's participation in the TABS trial. The information sheet and consent include details of the clinical trial, follow-up and additional biological samples taken for the trial, and permissions for sample storage. Where possible, prospective written informed consent will be sought from parents/guardians, who will then be asked to sign the consent form. Consent will include permission for the collection of admission and follow-up blood samples for later aetiological investigations. If parents/guardians are unable to sign, a thumbprint will be taken in lieu of a signature. A copy of the consent form will be given to the parent/guardian, the original placed in the patient's medical notes, and a copy kept in the investigator site file.

A number of children will present as emergencies where delay in study enrolment, and thus treatment, will not be practical or indeed humane. We will use a modified form of deferred consent, used in the FEAST trial22 and for which we have received ethical approval. We propose a 'two-stage' consent process in this circumstance26. Verbal assent will be sought from parents or guardians by the admitting medical team if it is considered that the full consent process would significantly delay treatment allocation and consequently could be detrimental to the child's health. Full consent will be sought once the child's clinical condition has been stabilised. Caregivers will be provided with a brief verbal description of the trial and will be given the opportunity to "opt out". As in the FEAST trial, if, following an assent process, a child died prior to full written consent, full consent would not be sought. This process of emergency consent was approved for FEAST and has been subsequently approved for use in the TRACT transfusion trial in Uganda and Blantyre23.
Treatment allocation
Following consent, eligible children admitted to hospital with severe malaria (see Inclusion criteria above) will be randomised to one of three adjunctive once-daily oral azithromycin doses, given for five days in addition to a three-day course of parenteral artesunate. The comparator cohort (children with severe malaria who do not meet Teule criteria) will receive the usual standard of care (oral artemisinin combination therapy). See Study Flow (Figure 1).

Pragmatic dosing chart
Rather than calculating azithromycin doses based on body weight (which in practice is poorly implemented), the trial will use weight bands (see Table 1), all using 100 mg dispersible tablets, which are more practical and generalizable outside of clinical trials.
This weight-band dosing has been developed based on existing data27 and will be validated by the nested PKPD sub-study (as has been done in several HIV trials). Children generally tolerate a wide range of azithromycin doses (5-20 mg/kg); however, when given in combination with artesunate, doses of 20 mg/kg more commonly resulted in vomiting (versus artemether-lumefantrine)28. As noted above, Monte Carlo simulations suggest that all dose groups should reach the population reference (adult) target AUC24h of at least 3.4 h*mg/L24; however, there are no data on absorption in severe malaria, which is characterised by impaired gut function25, hence the need for this pilot.
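Table 1 (not reproduced here) gives the trial's actual weight-band chart. Purely to illustrate the rounding logic of band-based dosing with 100 mg dispersible tablets, the sketch below uses invented placeholder weight bands; neither the bands nor the helper function comes from the protocol.

```python
# Illustrative weight-band lookup for 100 mg dispersible tablets.
# The weight bands below are HYPOTHETICAL placeholders, not the trial's Table 1;
# they only demonstrate how band-based dosing keeps the dose near the mg/kg target.

def tablets_for(weight_kg, target_mg_per_kg):
    """Round the per-dose tablet count to the nearest whole 100 mg tablet."""
    return max(1, round(weight_kg * target_mg_per_kg / 100))

hypothetical_bands = [(5.0, 7.9), (8.0, 11.9), (12.0, 17.9), (18.0, 24.9), (25.0, 34.9)]

for target in (10, 15, 20):  # mg/kg dose arms
    print(f"\nTarget dose {target} mg/kg:")
    for lo, hi in hypothetical_bands:
        mid = (lo + hi) / 2
        n = tablets_for(mid, target)
        print(f"  {lo:>4.1f}-{hi:4.1f} kg: {n} x 100 mg tablet(s) "
              f"(~{100 * n / mid:.1f} mg/kg at band midpoint)")
```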
Trial assessment schedule
Children will be intensively monitored during admission by the clinical team at least every eight hours in the first 72 hours following recruitment, and daily thereafter. Locator maps and contact numbers will be obtained to facilitate follow-up. All participants will then be seen at seven, 28 and 90 days post-admission at outpatient clinics attached to the centre for evaluation of morbidity and toxicity. Any patient not returning for a study visit will be traced for vital status ascertainment (Table 2).
Blood sampling schedule
Following consent and randomisation, blood samples will be taken for the following investigations: full blood count, urea and electrolytes, lactate, glucose, malaria status (malaria slide), blood culture and other clinically indicated investigations (not required by the study protocol). Urine will be taken as soon as possible after trial enrolment for multi-stick analysis, for assessment of cola- or red-coloured urine, and for additional storage for later analysis of gut barrier function. In accordance with national guidelines, HIV testing will be performed during admission procedures since this is one of the eligibility criteria. Pre- and post-test counselling will be done in accordance with routine practice. At admission, a venous blood sample will be collected into heparin and EDTA tubes and stored for subsequent cytokine and biomarker assays, parasitological (P. falciparum HRP2), genetic studies (including sickle cell and glucose-6-phosphate dehydrogenase (G6PD) deficiency status) and immune activation studies (transcriptome and cellular assays). For the latter, red and white cell pellets will be prepared and stored. Further blood samples will be taken at 72 hours, seven days and 28 days (for biomarker assays and immune activation studies), and at day seven for repeat blood culture (see Table 3). Changes in inflammatory markers (including CRP and procalcitonin, PCT) will be measured retrospectively at 72 hours and day seven, and microbiological cure will be measured at day seven (from blood cultures and using non-culture-based molecular diagnostics on stored blood).
Clinical management and monitoring
All trial patients will receive standard of care including (intravenous or oral) anti-malarial drugs following national guidelines, based on WHO syndromic patient management29. We will collect data on all administered drugs. Antipyretics, anticonvulsants and treatment for hypoglycaemia will be administered according to nationally agreed protocols. If required, maintenance fluids will be run at 3-4 ml/kg per hour irrespective of age until the child can drink and retain oral fluids. As the trial is designed to target the use of 'appropriate' antimicrobial treatments, clinicians will be free to prescribe antibiotics other than cephalosporins, quinolones or macrolides on admission and day two, based on the child's clinical condition and according to country guidelines. On day three (or earlier if results are available) clinicians will be permitted to prescribe broad-spectrum antimicrobials guided by the admission blood culture results (species and/or susceptibility). The Uganda severe malaria guidelines give no specific guidance on the use of antibiotics in severe malaria. Children or their carers will be provided with an appropriate supply of azithromycin if discharged before day five.
The clinical coordinator is responsible for ensuring the discharge check-list is complete and for chasing up inpatient notes at discharge. Any relevant information, especially with regard to date of discharge, serious adverse events (SAEs), treatments, blood transfusions, use of intravenous fluids, oxygen or non-routine treatments or investigations, will be recorded. Children will be assessed on day seven, day 28 and day 90 following recruitment. The parent/guardian will receive a follow-up invitation on a card. A symptom checklist and targeted physical examination will be performed at each clinic visit post-discharge. Medical history since the last visit, including hospital re-admissions, transfusions and grade three or four adverse events related to the antibiotic intervention (with severity and likely relationship), will be documented by a doctor. Any child lost to follow-up before 90 days will be traced for vital status. In order to minimise losses to follow-up, locator data (maps and identifiable landmarks) and mobile phone numbers will be taken on discharge and verified at every review. During the 90-day study period attempts will be made to contact the patient via phone (if available) and to follow up with home visits, if clinic visits are not attended.
In the statistical analysis, a patient will be regarded as 'lost to follow-up' if they were not seen in clinic at the day 90 visit and were not known to have died.
Trial products, storage and accountability
The product to be tested in the trial is azithromycin 100 mg dispersible tablets, which are a commercial product and were obtained for the trial as a donation from Cipla Pharmaceuticals, Mumbai, India. Azithromycin is approved for use in children in Uganda where the trial will take place. Cipla Ltd is fully GMP compliant and has provided full certificates for the manufacture of these products. Azithromycin will be stored at room temperature and not exceeding 30°C (86°F). It will be stored in these conditions for three years as packaged. The trial coordinator at the site will maintain accountability logs for the antibiotic intervention (standard and interventional). These will be kept securely until verified by the external monitor's visit.
Sub-studies
PKPD study
For PK analysis, model-based sampling strategies will enable the description of both intra-and inter-patient variability. A maximum of 3x 0.2ml samples will be taken from each subject on two occasions between day one to five around meaningful time points to describe the PK throughout a dosing interval plus one sample at follow up on day seven to determine the elimination of azithromycin after end of treatment (a total of seven samples per patient). In the first year we will develop highly sensitive bioanalytical assays using ultra performance liquid chromatography coupled with tandem mass spectrometry (UPLC-MS) for determination of azithromycin in stored microvolumes of plasma (analysis performed at Radboud University, Nijmegen).
Initially, a population PK model will be developed for azithromycin, and possible covariates for its PK will be investigated. This will be followed by an investigation of the relationship between plasma PK and PD using the developed PK model, known MIC values (since the area under the curve (AUC):MIC ratio is the pharmacodynamic driver for azithromycin16,30) and the treatment outcomes (change in CRP at day three and microbiological cure and/or mortality at day seven). As noted above, Monte Carlo simulations suggest that all dose groups should reach the population reference (adult) target AUC24h of at least 3.4 h*mg/L24. The different doses in 105 children provide variability in exposure so that we can identify whether all doses are in the flat part of the exposure-response curve, in which case the lowest dose will be chosen. Other secondary endpoints include mortality at 48 hours, length of hospital stay (days), re-hospitalisation and other adverse events. Quality control of stored bacterial isolates and non-culture (molecular) assessment of bacterial infection, including the determination of MICs to inform the PKPD studies, will be batch-processed at KWTRP, Kenya.
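The cited Monte Carlo work is not reproduced here. Purely as a schematic of how probability of target attainment against the adult reference AUC24h of 3.4 h*mg/L can be simulated once a population PK model is available, the sketch below assumes a log-normally distributed apparent clearance; all parameter values (clearance, variability, weight) are invented for illustration and are not the trial's PK model.

```python
# Schematic Monte Carlo probability-of-target-attainment (PTA) calculation.
# AUC24 at steady state = daily dose / (CL/F); the clearance distribution used
# here is an invented illustration, NOT the trial's azithromycin PK model.
import math
import random

TARGET_AUC24 = 3.4        # h*mg/L, adult reference target cited above
CL_F_MEDIAN = 35.0        # L/h, hypothetical apparent oral clearance for a child
CL_F_OMEGA = 0.40         # hypothetical between-subject SD on the log scale
WEIGHT_KG = 15.0          # hypothetical typical child weight
N_SIM = 10_000

rng = random.Random(1)

def pta(dose_mg_per_kg):
    """Fraction of simulated children whose AUC24 meets the reference target."""
    daily_dose_mg = dose_mg_per_kg * WEIGHT_KG
    hits = 0
    for _ in range(N_SIM):
        cl = CL_F_MEDIAN * math.exp(rng.gauss(0.0, CL_F_OMEGA))  # log-normal CL/F
        auc24 = daily_dose_mg / cl                                # mg*h/L
        hits += auc24 >= TARGET_AUC24
    return hits / N_SIM

for dose in (10, 15, 20):
    print(f"{dose} mg/kg: PTA(AUC24 >= {TARGET_AUC24}) = {pta(dose):.2%}")
```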
Biomarker and gut barrier function
In 105 trial participants and 50 control children (severe malaria without Teule criteria) we plan to investigate, from admission samples, the utility of a range of potential biomarkers including CRP, PCT31,32 and endotoxin25, to identify whether any of these markers, alone or in combination, can predict which at-risk children with severe malaria have bacterial co-infections, in order to improve the sensitivity of clinical/laboratory criteria and enable antimicrobial therapy to be further targeted. We will investigate associations between pathological concentrations of these markers and clinical, parasitological and microbiological data in cases (meeting Teule criteria) and controls (not meeting Teule criteria) to test their accuracy for stratification in the main trial. The samples will also allow us to analyse markers of gut barrier dysfunction, a prerequisite for bacteraemia: intestinal fatty acid binding protein (I-FABP) and ileal bile acid binding protein (I-BABP). We will establish whether these can accurately predict children with blood-culture-proven bacteraemia, alone or in combination with clinical criteria/RDTs. In addition, we will analyse the peripheral blood mononuclear cell transcriptome in a subset of children with and without bacterial co-infection, to verify the significant changes in transcriptome we observed in a small pilot study of children with malaria and bacterial co-infection compared to children with malaria alone, and to confirm observed changes at the cellular level in a second set of children33. We will investigate whether changes in gene expression profile can be translated into rapid and affordable biological tests predicting children with bacterial co-infection with high specificity and sensitivity. This later analysis is subject to additional funding.
Trial outcome measures
Primary outcomes
• Change in CRP, a putative marker of sepsis, at 72 hours (continuous), and microbiological cure (seven-day) (binary), alone and as a composite with seven-day survival.
• The changes in CRP between day 0 and day 3 as the main endpoint for the PKPD studies.
Secondary outcomes
The secondary outcome measures are:
• Mortality at 48 hours and longer-term survival (to day 28 and to day 90)
• Length of hospital stay (days)
• Re-admission to hospital by 90 days
• Adverse events
• The population PK of azithromycin and its relationship (combined with pathogen susceptibility) with treatment outcome (PD)
SAEs and interim analyses
SAEs will be reviewed immediately by a designated physician (SAE reviewer) and reported to the appropriate ethics and regulatory committees within one week. The Chief Investigator will inform the Trial Steering Committee (TSC) and Data Monitoring Committee (DMC) for review on a regular basis (as deemed necessary).
DMC
An independent DMC (see composition at the end of the protocol) will review data on enrolment, safety, adherence to the trial protocol and efficacy at regular intervals and in strict confidence. The terms are covered in the DMC charter, signed by the chair and trial statistician. There are no fixed 'stopping rules'. The DMC will receive and review information on the progress and accruing data of the trial and provide advice on the conduct of the trial to the TSC. The DMC would inform the Chair of the TSC if, in their view, the results are likely to convince a broad range of clinicians, including those supporting the trial and the general clinical community, that, on balance, one trial arm is clearly indicated or contraindicated for all participants or a particular category of participants. The DMC comprises a chair and two other independent members. No conflicts of interest were declared.
Trial monitoring
The Mbale site will use a web-enabled trial database and will be responsible for data entry and local trial management. The site will retain the original case report forms (CRFs). Data stored on the database will be checked for missing or unusual values (range checks) and checked for consistency within participants over time. If any such problems are identified, the site will be contacted and asked to verify or correct the entry. Changes will be made on the original CRF and entered into the database at the site. The Kilifi Clinical Trial Facility will also send reminders for any overdue and/or missing data, along with regular reports of inconsistencies and errors.
This trial will be monitored according to a monitoring plan which will set out the frequency of visits, the degree of source document verification against the CRFs and the requirements for triggered on-site monitoring visits. This plan will also detail the procedures for review and sign-off. The monitoring will adhere to the principles of ICH GCP.
The site initiation visits will include training in the trial procedures, as well as practical training in administration of trial interventions, reporting guidelines for adverse events of study interventions as well as other trial procedures. All staff at sites involved in the trial will receive formal training in GCP through a dedicated training programme during site initiation visit and will also be required to complete an on-line course.
The Clinical Trial Facility in Kilifi oversees the standards and quality of all trials conducted through the KWTRP and, through its monitoring systems and standard operating procedures, ensures that all sites can be monitored with equal independence and rigour. All monitors will be appropriately qualified and trained. The monitors will require access to all patient medical records including, but not limited to, laboratory test results and prescriptions. The investigator (or delegated deputy) should work with the monitor to ensure that any problems detected are resolved.
Data management
All clinical and laboratory data will be recorded in the CRF and stored with a unique serial number identifier. Data will be entered (double data entry) onto OpenClinica. All data will be regularly backed up and backup copies stored both on and off site. Paper records will be archived in locked cabinets. These cabinets will have limited access with prior authorisation (by the site principal investigator). All data will be partially anonymised prior to presentation or publication of any results. Study participants will be identified by a unique subject identification number but patient identifiable information will not be recorded on the study database in compliance with GCP requirements. The data will be examined for inconsistencies during the trial by the statistician and fed back to study sites for corrections following GCP procedures.
Confidentiality
All clinical data will be held confidentially, and personal identifiers will be removed before analysis of the data and presentation of the results.
Data sharing
After completion of the study, requests for data access from researchers outside the study team will be considered by the trial management team and clinical trials unit (Data Governance Committee), and where indicated, requestors will be asked to develop scientific protocols for approval of secondary analyses. The potential to share data will be included in the participant Information and Consent Form (see extended data section) 34 .
Statistical analysis
The primary analysis for the trial will describe baseline parameters (stratified by study arm) and compare primary and secondary end points by trial arm. The primary analysis will be by intention to treat and secondary analysis will include those treated per protocol (i.e., receiving five days of azithromycin).
The co-primary outcomes are change in CRP (which will be analysed using normal linear regression adjusting for baseline, using appropriate transformations) and microbiological cure (seven day) alone and as a composite with seven-day survival (which will be analysed using logistic regression and exact tests as a binary outcome). Analysis of adverse events and re-admission will use time-to-event methods through day 90 counting in-hospital death as a competing risk. Changes in CRP and other inflammatory markers will be analysed using normal linear regression (potentially on log-transformed data), using generalised estimating equations to jointly model changes at 72 hours and day-seven. Adverse events will also be summarised by body system.
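As an illustrative sketch only (simulated data; not the trial's statistical analysis plan or dataset), the baseline-adjusted analysis of log-transformed CRP with dose arm as a categorical covariate could take roughly the following form, assuming the pandas/statsmodels stack is available:

```python
# Sketch of a baseline-adjusted analysis of change in CRP on the log scale.
# The data frame below is simulated for illustration; it is not trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_arm = 35
arms = np.repeat(["10mg/kg", "15mg/kg", "20mg/kg"], n_per_arm)

# Simulated baseline and 72-hour CRP (mg/L), roughly log-normal.
crp_0 = rng.lognormal(mean=4.0, sigma=0.6, size=arms.size)
crp_72 = crp_0 * rng.lognormal(mean=-0.5, sigma=0.5, size=arms.size)

df = pd.DataFrame({
    "arm": arms,
    "log_crp_0": np.log(crp_0),
    "log_crp_72": np.log(crp_72),
})

# Normal linear regression of 72-hour log(CRP) adjusting for baseline log(CRP),
# with dose arm as a categorical covariate (10 mg/kg as the reference level).
model = smf.ols("log_crp_72 ~ log_crp_0 + C(arm)", data=df).fit()
print(model.summary())
```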
Ethics statement
Ethical approval has been obtained from the Mbale Regional Hospital Research Ethics Committee (MRRH_REC 095/2017) and from the Imperial College Research Ethics Committee (17IC3965), the sponsor of the study. The trial was registered on ISRCTN (ISRCTN49726849) on 27th October 2017 and updated on 12th August 2020 (detailing the delay in the start of the trial and the new start date).
Safety
The randomised trial will be conducted in children who are most likely to benefit from the treatment. We will minimise the risks of cannula insertion and phlebotomy by pretrial training in phlebotomy technique and regular cannula site inspection which is included in a SOP. We have calculated that no more than 1ml/kg of blood will be drawn for research purposes at any one time.
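For illustration of what the 1 ml/kg per-occasion cap implies for the sparse PK sampling described above (three 0.2 ml samples per occasion), a minimal check over a few hypothetical child weights is sketched below; the weights are illustrative and not taken from the protocol.

```python
# Illustrative check of research blood draws against the 1 ml/kg per-occasion cap.
PK_DRAW_ML = 3 * 0.2   # three 0.2 ml PK samples taken on a single occasion

for weight_kg in (5, 8, 12, 20):   # hypothetical child weights
    cap_ml = 1.0 * weight_kg        # protocol cap: 1 ml/kg at any one time
    status = "OK" if PK_DRAW_ML <= cap_ml else "exceeds cap"
    print(f"{weight_kg:>2} kg child: cap {cap_ml:.1f} ml/occasion, "
          f"PK draw {PK_DRAW_ML:.1f} ml -> {status}")
```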
Benefits
Children with severe malaria often develop complications such as convulsions, severe anaemia and hypoglycaemia which, through close clinical monitoring, will be identified at the earliest opportunity so that appropriate therapy can be initiated. The trial team at Mbale Regional Referral Hospital has considerable experience with managing severe malaria, which will serve to minimise the risks to the patients in the trial. Pretrial training of the dedicated study team will include specific training on general management of severe malaria and its complications. A manual of operations will provide clear management guidelines as well as the details of trial conduct and procedures. Children enrolled in the trial will therefore receive a higher quality of care than those managed routinely.
Plans for dissemination of the study outcomes
Public engagement
Results from this trial will be disseminated locally through community meetings and nationally through meetings with the wider healthcare professional community. These systems were developed for dissemination of the MRC FEAST and TRACT trial results and will be adapted for dissemination of the TABS PKPD trial results.
National and international policymakers
The site principal investigator has discussed the study with Ministry of Health staff. When the results are available, we will provide a summary briefing highlighting the trial results and what the next steps will be. Whilst the current study will go some way towards addressing whether azithromycin could be used as an adjunctive treatment in severe malaria, a future trial should include a pragmatic design to ensure results are applicable to health services in Africa. The approach we propose may also have utility for syndromic management of other conditions, especially those whose clinical features overlap with severe malaria and in areas where microbiological diagnostic facilities are non-existent.
Discussion
Severe malaria remains a common cause of paediatric admission in many countries in Africa. Children who have bacterial co-infection are at high risk of poor outcome. Establishing which children with malaria are at greatest risk of bacteraemia is critical to pragmatically inform a policy for targeted antibiotic therapy that could substantially reduce malaria-associated mortality while minimising the risks of excess antibiotic prescribing. Short-course empiric therapy with early termination guided by culture results (to prevent development of resistance35) is commonly practiced in high-income countries. However, poor microbiology services and overloaded health systems mean that this approach is not feasible in much of Africa, and a trial designed around it would not be generalisable. Thus, in many parts of Africa antibiotic prescribing in children with severe malaria is highly variable across hospitals, with most physicians essentially using antibiotics "blind", and thus unable to follow WHO guidelines recommending stopping broad-spectrum antibiotics when bacterial infection has been ruled out. A more pragmatic approach would be to use a standard short course of a moderately broad-spectrum antibiotic that is not already recommended for empiric use in a large number of other conditions, with the potential for improving both early and post-discharge mortality. Relevant to a future trial and to current guidelines is that many hospitals in Africa lack microbiological culture facilities, meaning best practice is not obvious given the clear threat posed by antimicrobial resistance36,37. To address this within the context of this study and to inform a future trial, we aim to investigate potential approaches (that may be generalizable in future) to targeting antibiotics where microbiological facilities are not present or reliable.
We favoured investigating azithromycin since one study found that almost 50% of bacterial isolates were resistant to the antibiotics most commonly recommended for empirical use11. For NTS specifically, the efficacy of gentamicin is doubtful and susceptibility testing is unreliable due to the organism's intracellular nature17,38. Third generation cephalosporins (e.g. ceftriaxone) are the most widely used antimicrobials, but resistance is becoming widespread. Moreover, few children will receive an 'adequate' therapeutic dose of cephalosporins, since NTS, the commonest cause of bacterial co-infection, requires ~seven days of parenteral therapy, necessitating a prolonged hospital stay (median four to five days) with economic implications. Further, adding a major new indication for third generation cephalosporins (severe malaria) risks substantially expanding their use, with downstream threats of AMR to a second-line antibiotic currently recommended for other common childhood diseases. Newer antimicrobials, apart from financial considerations, raise similar concerns that over-use could lead to resistance.
We therefore plan to study azithromycin because of its favourable microbiological spectrum, its inherent antimalarial and immunomodulatory properties, and its dosing and safety profile. Its long half-life is also likely to protect children at increased risk of NTS bacteraemia during the month following a malaria episode. At present, pharmacological data are lacking on the correct dosage of oral azithromycin for treatment of bacterial co-infection in severe malaria in African children. The TABS trial is therefore a Phase I/II trial comparing three doses of azithromycin, 10, 15 and 20 mg/kg (prescribed for feasibility by weight-bands), spanning the lowest to highest mg/kg doses demonstrated to be as effective as parenteral treatment for other salmonellae infections. Treatment will be given over five days in children with severe malaria at highest risk of bacterial co-infection (meeting Teule criteria) in order to generate relevant PK data by sparse sampling during dosing intervals. The goal is to determine, via population PK modelling, the optimal azithromycin dose in severe malaria and to investigate exposure-response (PKPD) relationships using change in CRP (a putative marker of sepsis) at 72 hours (continuous) and microbiological cure at seven days (binary), alone and as a composite with seven-day survival, while providing preliminary data on longer-term survival (to day 90). We will also evaluate whether a combination of clinical criteria, point-of-care diagnostic tests and/or biomarkers can accurately identify the sub-group of severe malaria with culture-proven bacteraemia, by comparison with a control cohort of children hospitalised with severe malaria but not meeting Teule criteria, at low risk of bacterial co-infection.
A future Phase III trial may therefore consider two strata: first, comparing a pharmacologically informed dosage of oral azithromycin to standard of care (largely third generation cephalosporins) in children at highest risk of bacterial co-infection; and second, comparing standard of care versus no antibiotics in children with severe malaria but minimal risk of bacterial co-infection, to establish whether a policy of targeted antibiotic therapy could substantially reduce malaria-associated mortality while minimising the risks of excess antibiotic prescribing. Risk of co-infection could be determined by Teule criteria11, but would also be informed by this pilot trial. The primary outcome would be mortality at day seven, day 28 or day 90, with the choice of timepoint also informed by this pilot trial (the other timepoints would be secondary outcomes). Other secondary outcomes include mortality at 48 hours, length of hospital stay (days), subsequent hospital readmission, adverse events, faecal changes in antimicrobial resistance34 and cost-effectiveness.
Trial status
Trial enrolment started in January 2021. The delayed start of recruitment was due to the requirement, mandated by the National Drug Authority (NDA), Uganda, for Cipla to donate the trial drug. Although we received support from Cipla in 2019 and the relevant approvals for importing the drug were then secured from the NDA, there were significant delays in importing the dispersible azithromycin and starting the trial due to the COVID-19 pandemic. The trial completed recruitment on 4th October 2021 and follow-up on 4th January 2022. The clinical trial and PKPD data are currently being analysed.
Protocol version changes
Version 1.0 was the original protocol submitted for ethical approval to ICREC. Version 1.1 was given full approval on 11th November 2017, following MRRH approval (27th October 2017) of the revised version 1.0 protocol. Version 1.2 (dated 9th October 2018) was approved by ICREC and the MRRH REC following minor wording changes as a result of regulatory review/approval.
Role of study sponsor and funders
The sponsor and funder played no role in study design and will play no role in data collection, trial management, analysis and interpretation of data, manuscript preparation, or the decision to submit the report for publication.
Is the study design appropriate for the research question? Yes
Are sufficient details of the methods provided to allow replication by others? Partly
Are the datasets clearly presented in a useable and accessible format? Partly

Major comments:
1. The co-primary endpoint of "change in CRP at 72 hours" needs better explanation. Malaria itself elevates CRP, so what is the evidence to support using this as a discriminating factor between malaria and malaria+ bacteraemia? There is no discussion to justify this.
2. Control group: 50 children are to be recruited and not treated with azithromycin. At different points these are described as "those hospitalised with severe malaria but not meeting Teule criteria" (which includes severe anaemia, respiratory distress, and prostration, including cerebral malaria, as well as HIV infection); and elsewhere as "non-severe malaria", Figure 1 and "treatment allocation", Page 7. Given most severe malaria in Uganda is due to severe anaemia, cerebral malaria and/or respiratory distress and few children who do not meet the Teule criteria but do have severe malaria may be expected, please clarify which children will be in the "control" group.
3. On page 4 the authors claim that the azithromycin half-life is such that it is likely to offer protection from bacteraemia for a month. My reading of the literature is that drug concentrations in target tissues are expected to be above MICs for ~10 days (Lode et al., J Antimicrob Chemother 1996)1.
4. Given dosing is weight based, are there upper and lower weight limits as well as age limits? If so please provide these.
5. Treatment: will the children receive artemether-lumefantrine after they complete parenteral artesunate?
6. Number of blood samples for PK studies: to my reading it was not clear whether this is 4 samples per child (Figure 1)? Or "a total of 7 samples per child" (PKPD substudy, page 10)?
7. Under "statistical analysis" there is no discussion of analysis and interpretation of the PK data which to my mind is the main justification for the study.
8. In the discussion, the authors state "Establishing which children with malaria are at greatest risk of bacteraemia is critical to pragmatically inform a policy for targeted antibiotic therapy that could substantially reduce malaria-associated mortality while minimising the risks of excess antibiotic prescribing." Whilst true, the present study does not address this question, rather it gives all children in established high risk groups treatment.
Minor comments:
1. The methods section of the abstract is hard to follow, particularly the sentence which reads "and investigate azithromycin exposure to change in C-reactive protein, a putative marker of sepsis at 72 hours, and microbiological cure (seven-day) alone and as a composite with seven-day survival."
2. The list of abbreviations is quite incomplete.
3. In the Introduction, there is a very long sentence that needs revision, starting with "Even when with the best (evidence-based)…"
4. There is no mention of molecular diagnosis of malaria. Will this be performed?
5. Is the LC/MS set up now in Nijmegen? There are published methods for this, using small volumes of plasma.
6. Under "Trial monitoring" the first sentence is unclear. It reads: "The Mbale site on to a web enabled trial database will be responsible for its own data entry and local trial management"
Given this appears primarily to be a dose-finding study for a proposed definitive study, some of the hypotheses, aims and the primary outcomes do not seem to be particularly aligned with this, focusing instead on biomarker and bacterial sepsis endpoints, which the sample size is unlikely to be powered to address effectively. With 35 children per group, and a possible bacterial sepsis rate of 20% only a handful of children in each group are likely to have sepsis. By contrast, the justification for the pharmacokinetic aspects of the study is strong.
The trial was originally registered in 2017, but logistical delays including COVID-related disruptions led to delayed initiation to January 2021.
The descriptions of the trial procedures (with a couple of exceptions, noted below) are mostly clear, but there are some points for clarification.

Author response
We thank Dr Rogerson for his very careful consideration of our trial protocol. To be clear, the major objectives are to (i) establish the appropriate dose of oral dispersible azithromycin (by PKPD) as an antimicrobial treatment for children with severe malaria and (ii) investigate whether antibiotics can be targeted to those at greatest risk of bacterial co-infection using clinical criteria alone or in combination with rapid diagnostic biomarker tests. This study will provide supporting data to inform a future trial and/or policy for hospitals without reliable microbiological facilities. Here are our specific responses to the comments.
Major comments:
The co-primary endpoint of "change in CRP at 72 hours" needs better explanation. Malaria itself elevates CRP, so what is the evidence to support using this as a discriminating factor between malaria and malaria+ bacteraemia? There is no discussion to justify this.
○
The major aim was to see if there were differences in CRP (a putative marker of bacterial sepsis) between the intervention (azithromycin) and control. We understand that CRP is elevated in severe malaria; however, the data on why CRP is raised are not clear. The most recent review of CRP in malaria found only a handful of studies conducted in Africa, many focused on P. vivax or malaria in pregnancy (DOI: 10.1038/s41598-021-01556-0). Whilst risk of co-infection could be determined by Teule criteria, this pilot trial will also study a number of other potential biomarkers (which could be done as point-of-care tests) which could identify a group with a higher risk of bacterial sepsis. A continuous numeric (delta change in CRP) endpoint rather than a categorical endpoint reduces the overall number needed.

Control group: 50 children are to be recruited and not treated with azithromycin. At different points these are described as "those hospitalised with severe malaria but not meeting Teule criteria" (which includes severe anaemia, respiratory distress, and prostration, including cerebral malaria, as well as HIV infection); and elsewhere as "non-severe malaria", Figure 1 and "treatment allocation", Page 7. Given most severe malaria in Uganda is due to severe anaemia, cerebral malaria and/or respiratory distress and few children who do not meet the Teule criteria but do have severe malaria may be expected, please clarify which children will be in the "control" group.
○
We apologise for the use of the term non-severe malaria; what was implied was cases admitted with severe malaria who do not have Teule criteria. Those children were identified as the group with the highest risk of bacterial infection. With regard to other cases of severe malaria, we include the WHO classification of severe malaria in the PDF file linked here, which considers three groups for admission. Quite a number of other features are not covered by the Teule criteria.
On page 4 the authors claim that the azithromycin half-life is such that it is likely to offer protection from bacteraemia for a month. My reading of the literature is that drug concentrations in target tissues are expected to be above MICs for ~10 days (Lode et al., J Antimicrob Chemother 1996)1.
○ We agree; the given reference indicates 10 days, so we have amended this statement.
Given dosing is weight based, are there upper and lower weight limits as well as age limits? If so please provide these.
○
We have provided these in the 'look up table' with expected upper and lower limits; see Table 1.

Treatment: will the children receive artemether-lumefantrine after they complete parenteral artesunate?
○ Yes, that is correct.
Number of blood samples for PK studies: to my reading it was not clear whether this is 4 samples per child (Figure 1)? Or "a total of 7 samples per child" (PKPD substudy, page 10)?
○ There were a total of 7 samples per child. The Figure has "D0, D1-D5 (2 x 0.2ml), D7", where D1-D5 is inclusive of D2, D3 and D4.
Under "statistical analysis" there is no discussion of analysis and interpretation of the PK data which to my mind is the main justification for the study.
○
The methods used for the PK analysis are described under the PKPD sub-study section of the paper. More details will be given in the trial report, since the model will need to be built on the clinical data.
○
In the discussion, the authors state "Establishing which children with malaria are at greatest risk of bacteraemia is critical to pragmatically inform a policy for targeted antibiotic therapy that could substantially reduce malaria-associated mortality while minimising the risks of excess antibiotic prescribing." Whilst true, the present study does not address this question, rather it gives all children in established high risk groups treatment.
○ We agree; this is a pilot study aimed at informing a future trial with those aims.
Minor comments:
The methods section of the abstract is hard to follow, particularly the sentence which reads "and investigate azithromycin exposure to change in C-reactive protein, a putative marker of sepsis at 72 hours, and microbiological cure (seven-day) alone and as a composite with seven-day survival."
○ This has been revised to: 'and investigate the impact of different levels of azithromycin exposure on change in C-reactive protein (a putative marker of sepsis at 72 hours), as well as on microbiological cure alone and as a composite with survival at 7 days'.
○
The list of abbreviations is quite incomplete.
○
We have updated these.
In the Introduction, there is a very long sentence that needs revision, starting with "Even when with the best (evidence-based)…" ○ This has been revised.
There is no mention of molecular diagnosis of malaria. Will this be performed? ○ There is no need to perform molecular diagnosis of malaria -this is a clinical study where children will be enrolled pragmatically using clinical criteria, point of care malaria diagnostics and a malaria slide. We are also performing PfHRP2 on admission samples to characterise those with true severe malaria from those with incidental parasitaemia.
Is the LC/MS set up now in Nijmegen? There are published methods for this, using small volumes of plasma.
○ This is currently being set up in Nijmegen; the methods will be published separately.

Under "Trial monitoring" the first sentence is unclear. It reads: "The Mbale site on to a web enabled trial database will be responsible for its own data entry and local trial management"
○ We have corrected this sentence to provide more clarity.
Competing Interests:
No competing interests were disclosed.
Facies, magnetic susceptibility and timing of the Late Devonian Frasnian/Famennian boundary interval (Xom Nha Formation, Central Vietnam)
Upper Devonian carbonates deposited through the Frasnian/Famennian (F/F) stage boundary in the Xom Nha Formation, Central Vietnam, were studied. The section is mainly composed of fossiliferous, brecciated and laminated limestone beds, while shale beds occur only in subordinate numbers. The microfacies generally suggests a hemipelagic setting on an outer shelf environment with low sedimentation rates. A sediment accumulation rate of 0.217 cm/kyr is calculated for this section. The Xom Nha section does not exhibit characteristic black shales or black limestones through the F/F boundary, but instead shows a lithology similar to that of other F/F sections in Southeast Asia. The carbonates yielded abundant conodonts, which represent mainly cosmopolitan species, but the section differs from most sections in Europe in showing relatively high numbers of Palmatolepis linguiformis specimens. Deposition through the F/F boundary interval reported here for the Xom Nha section appears to have occurred over an interval of ~1.2 Ma and shows well-defined climate cyclicity.
Introduction
The Late Devonian Frasnian/Famennian (F/F) boundary interval records one of the most important biotic crises, interpreted to be one of the big five mass extinction events in Earth's history (McLaren 1982; Raup and Sepkoski 1982; McGhee 1996; Stanley 2016). Numerous publications have shown that many Late Devonian fossil groups were impacted, including trilobites, brachiopods, ostracods, ammonoids and tentaculitoids, and even specific fossil communities such as coral reef ecosystems (Stearn 1987; Copper 2002; Ma et al. 2002; Olempska 2002; Racki 2005; Bond 2006; Wei et al. 2012; Crônier and François 2014; Ma et al. 2016; Mottequin and Poty 2016). The F/F interval is generally characterised by two dark, organic-rich lithologies that were dependent on the facies setting (Carmichael et al. 2019). These are known as the Lower and Upper Kellwasser events (LKE, UKE), which represent a stepwise ocean anoxia and extinction sequence (Walliser 1996; Gerecke and Schindler 2012). The LKE took place during the Palmatolepis rhenana conodont zone of the late Frasnian, while the UKE is associated with the F/F boundary (Palmatolepis linguiformis conodont zone; Ziegler and Sandberg 1990). The section described herein, however, lacks the diagnostic Kellwasser-type lithologies. Even though numerous studies on more than 150 Kellwasser localities worldwide have been published, the causes and mechanisms of the Late Devonian mass extinctions are still being debated. These events have been linked to an extraterrestrial impact (McGhee 1996; Wang et al. 1996), glaciation (Streel et al. 2000; Song et al. 2017), a cooling event linked with abrupt warming and sea-level rise (Thompson and Newton 1988; Algeo and Scheckler 1998; Riquier et al. 2005, 2006; Algeo and Scheckler 2010; Huang et al. 2018b), multiple arc volcanic events (Racki et al. 2018), and eutrophication and the development of suboxic to anoxic conditions during both events as a result of high biological productivity (Algeo and Scheckler 2010; Whalen 2015; Kido 2014; Carmichael et al. 2016). Possible causes and triggers for anoxia and extinction during the Late Devonian events were recently reviewed by Carmichael et al. (2019). These authors documented that the current knowledge concerning the F/F crisis is limited by a significant sampling bias, due to the fact that most previous studies sampled epicontinental seaways or passive continental shelves, primarily from localities across Europe and North America. Recently, more data on the F/F interval were published from Southeast Asia (e.g. Huang and Gong 2016; Königshof et al. 2017; Huang et al. 2018a, b; Komatsu et al. 2018; Racki et al. 2019). Here, we report new biostratigraphical, sedimentological and magnetic susceptibility records across the F/F boundary interval exposed in the upper part of the Xom Nha Formation, Central Vietnam.
Geological setting
The F/F boundary is known from two sections in Vietnam. One section is exposed in the Si Phai Pass, Dong Van area (Ha Giang Province) in the northeast, close to the Chinese border (Tạ Hoa 2002, 2007; Königshof et al. 2017; Komatsu et al. 2018); the second is the Xom Nha section studied here (Usuki et al. 2009; Trị et al. 2011). The Xom Nha section is exposed in a steep cliff at the entrance of a cave close to the small village of Xom Nha (N 17°48′26.9″, E 105°56′41.1″; Figs. 2 and 3).
The calcareous rocks exposed here belong to the upper part of the Xom Nha Formation, and we have sampled a 2.6 m thick section which covers the F/F stage boundary interval. The Xom Nha section is mainly composed of fossiliferous, brecciated and laminated limestones, while shale beds occur only in subordinate numbers. The lithological characteristics of the F/F interval, such as black limestones and black shales known from many sections worldwide, are lacking as a result of facies, as described from other sections in Asia, such as in Thailand (Königshof et al. 2012; Racki et al. 2019). The overall fauna is rather scarce in many layers except for conodonts, which occur in large numbers, particularly in the Frasnian part of the section. Conodonts from the sampled section range from the Frasnian Palmatolepis hassi Zone (Ziegler and Sandberg 1990; Ji and Ziegler 1993) to the Famennian Palmatolepis crepida Zone (Spalletta et al. 2017; Fig. 4).
Conodont biostratigraphy and microfacies analysis
Forty-six samples for conodont biostratigraphy from the 2.6 m thick section were processed. Conodont samples weighing 1 to 1.5 kg were collected from all beds. Some samples were barren or contained a very limited number of conodonts lacking zonal index taxa, so that a detailed stratigraphy for the middle Famennian was not possible. Limestones and marls were cleaned and crushed to pieces of about 2 cm³ and placed in wide-mesh sieves hung in buckets filled with warm tap water. Formic acid was added until the dilution reached a concentration of about 5% HCOOH. The process was repeated until the samples were dissolved. The fractions (63 μm, 125 μm, 250 μm and 500 μm) were dried in an oven (~40°C) and the most promising conodont-sized fractions were separated using liquid sodium polytungstate. Later, the conodonts were handpicked under a microscope and the most indicative species were photographed using an SEM. As is common practice in high-resolution stratigraphic conodont studies, only Pa elements were identified; this is because many Late Devonian to Mississippian multielement reconstructions are still doubtful and incomplete.
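The working acid concentration of about 5% HCOOH implies a simple dilution calculation; the helper below is an illustration only, and the ~85% stock formic acid concentration is an assumption (the paper does not state the stock strength or bucket volumes used).

```python
# Rough dilution helper for the formic acid digestion step (illustrative only).
# Assumes ~85% stock formic acid, which is NOT stated in the paper; the paper
# only gives a working concentration of about 5% HCOOH.

def acid_volume_needed(water_volume_l, working_pct=5.0, stock_pct=85.0):
    """Litres of stock acid to add to a given volume of water.

    Simple mass balance that accounts for the added acid volume:
    stock_pct * V_acid = working_pct * (V_water + V_acid).
    """
    return water_volume_l * working_pct / (stock_pct - working_pct)

for bucket_l in (5, 10, 15):  # litres of warm tap water in the bucket
    print(f"{bucket_l} L water: add ~{acid_volume_needed(bucket_l):.2f} L of 85% formic acid")
```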
Facies and microfacies types (FTs) were mainly compared on the basis of the classical models used by Wilson (1975) and Flügel (2004). In order to get a thorough overview of the section, we used thin sections that are at least 8 cm × 12 cm in size. These samples were usually collected at 10 cm intervals, but if necessary shorter intervals were used. Thin sections and conodont samples are stored at the Senckenberg Research Institute and Natural History Museum Frankfurt, Germany, under repository numbers SEM 70851 to SEM 70886 and SEM-XN-LD-01 to SEM-XN-LD-74, respectively.
Magnetic susceptibility measurements
From the 2.6 m thick Frasnian/Famennian Xom Nha section, 132 samples for magnetic susceptibility (MS) measurements were collected from 24 layers (~1 sample per 2 cm). The samples were measured in the Geomagnetism Department laboratory of the Institute of Geophysics at the Vietnam Academy of Sciences and Technology, Hanoi. The average distance between samples was 2 cm. All materials are "susceptible" to becoming magnetized in the presence of an external magnetic field. Low-field bulk MS is an indicator of the strength of this transient magnetism (Ellwood and Gose 2006; Ellwood et al. 1995, 1997). MS is very different from remanent magnetism (RM), the intrinsic magnetization that accounts for the magnetostratigraphic polarity variations of materials. MS in stratigraphic sequences is generally considered to be an indicator of detrital iron-containing paramagnetic and ferrimagnetic grains, mainly clay minerals, and can be quickly and easily measured on small friable samples.
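The section does not state how the raw susceptibility readings are normalised; as general background only (a standard convention, not a statement taken from this study), volume susceptibility κ is commonly converted to mass-specific susceptibility χ by dividing by the sample density:

```latex
% General relation between volume and mass-specific magnetic susceptibility
% (standard convention; not taken from this study).
\[
  \chi_{\mathrm{mass}} \;=\; \frac{\kappa}{\rho}
  \;=\; \kappa \,\frac{V_{\mathrm{sample}}}{m_{\mathrm{sample}}}
  \qquad \left[\mathrm{m^{3}\,kg^{-1}}\right]
\]
```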
The overall facies setting points to outer shelf environments below fair-weather wave base. Besides conodonts, distinct layers are fossiliferous and contain tentaculitids, cephalopods, ostracods, trilobites and very rare crinoids and corals. Interestingly, the Xom Nha section does not exhibit the characteristic black shales or black limestones around the F/F boundary (Fig. 4a) known from many places around the world, but instead shows a similar lithology in comparison to other sections in Southeast Asia, particularly in Thailand and Vietnam (Königshof et al. 2012).

MF 1: microbioclastic lime mudstone to wackestone
Microfacies 1 occurs in Frasnian limestones and is dominated by greyish, thin-bedded lime mudstone to wackestone at the base of the section (Palmatolepis hassi Zone) and within the linguiformis Zone. The micritic matrix contains rare fossils, including conodonts, tentaculitids, benthic ostracods and small shell hash. In distinct layers, pelagic entomozoaceans also occur. The size of the bioclasts is small, burial solution occurs and some bioclasts are separated by clay seams. Shells are frequently dissolved and filled with sparry calcite. The fossil content generally varies between 5 and 10%. A mottled/nodular fabric occurs, along with stylolitization, but this is less frequent. Rarely, hardground surfaces occur close to the F/F boundary (Fig. 4b). Similar sedimentological features around the F/F boundary are described from the Mae Sariang section in northwestern Thailand (Königshof et al. 2012).
Interpretation: Hardgrounds are related to non-deposition or very low sedimentation rates. The overall fine-grained sediment and fossil content suggests a hemipelagic facies setting on an outer shelf environment with low sedimentation rates. The occurrence of pelagic entomozoaceans is restricted to distinct layers. The variable fabric, ranging from thin-bedded to nodular/mottled, can be explained as a result of bioturbation in a low-energy environment.
MF 2: fossiliferous wackestone to packstone
This rock type is composed of bioclastic wackestone to packstone which frequently occurs in the Palmatolepis rhenana rhenana and Pa. rhenana nasuta zones. Faunal elements comprise mainly conodonts, benthic ostracods, trilobites, tentaculitids, cephalopods and thin-shelled bivalves, listed in descending order of abundance (Fig. 4c, d). Crinoid ossicles and small corals were observed, but occur with less frequency (Fig. 4e). Benthic ostracods are mainly disarticulated, but complete specimens also occur. Pelagic ostracods are less frequent. The fossil content is higher than observed in MF 1 and can reach up to 30%. Single layers are packstone, and bioturbation occurs in distinct layers. Fe-Mn crusts and microstromatolites around bioclasts and lithoclasts are observed in a few thin sections (Fig. 4f); the matrix is composed of lime mud. As a result of transport, the limestones show bedding and gradation, along with small-sized shell hash of up to 3 mm.
Interpretation: This microfacies suggests a hemipelagic setting on an outer shelf environment with low sedimentation rates. Coated clasts can be interpreted as reworked hardground fragments (Zatoń et al. 2012; Lazar et al. 2013). Shallow-water bioclasts, such as corals, occur in distinct layers but are very scarce. They occur in samples VN-XN-11 and VN-XN-12 (upper Pa. rhenana nasuta Zone/Pa. rhenana rhenana Zone), which appear to be linked with a sea-level fall. The increasing number of Icriodus specimens in sample VN-XN-11 may support this assumption.
MF 3: brecciated grey limestone
This facies type occurs in distinct layers within the Xom Nha section, and fauna is rare or absent. These limestones are characterised as a monomict stylobreccia (Fig. 4g). Another type of carbonate breccia occurs in the linguiformis Zone (sample VN-XN-17a) and exhibits strong fracturing of individual clasts as well as of the whole rock. Microfractures differ in width and are filled with sparry calcite (Fig. 4h).
Interpretation: Whereas the monomict stylobreccia is a result of pressure solution of lime mudstones, the second rock type is interpreted as a tectonic fault breccia.
MF 4: alternating grey limestone with marl layers
This microfacies occurs mainly in the Famennian part of the Xom Nha section and shows an alternation of laminated light-grey limestone and intercalated greyish marls. The marls are less frequent. These limestones and marls yielded rare fauna following the Upper Kellwasser Event. Occasionally, thin fossiliferous layers occur.

Interpretation: The fossiliferous limestones are interpreted to have been deposited, in part, by distal turbidity currents and are typical of sediments found in basinal facies settings, such as lower slope facies sediments (Stow et al. 1996; Flügel 2004; Mulder 2011).
Conodont biostratigraphy and biofacies
Conodont biostratigraphy (Table 1, Figs. 5, 6, 7 and 8)
The results from the conodont stratigraphy in this section indicate that the upper part of the Xom Nha Formation ranges from the Palmatolepis hassi Zone to the Palmatolepis crepida Zone (Table 1). The crepida Zone is not shown in Figure 5; this result is based on the earlier sampling (Tạ Hoa 2002). The zonation scheme follows Ziegler and Sandberg (1990) and Ji and Ziegler (1993) for the Frasnian conodont stratigraphy, and Hartenfels (2011) and Spalletta et al. (2017) for the Famennian conodont stratigraphy. Although the Montagne Noire Zonation (MN; Klapper 1989; Girard et al. 2005) provides a more detailed stratigraphy in comparison to the Frasnian Standard Zonation (Ziegler and Sandberg 1990; Ji and Ziegler 1993), we used the latter for the Frasnian because the conodont collection did not yield all of the MN zone-defining species. Conodont zones were identified by either the First Appearance Datum (FAD) or the presence of distinct taxa. The distribution of important conodont taxa is shown in Table 1, and most zonally indicative species are figured in Figs. 5, 6, 7 and 8. Overall, the conodonts are well preserved. The conodont alteration index (CAI, Epstein et al. 1977) ranges between 4.5 and 5 in the Xom Nha section.
The Late Devonian Frasnian and Famennian conodonts from the Xom Nha section in Central Vietnam contain mainly cosmopolitan species. Most sampled beds yielded conodonts (ranging from <10 conodont elements per sample to >300 elements per sample). The conodont record in the Frasnian is generally very good, whereas the record in the Famennian is scarce; this is not a result of different sample sizes, because conodont samples weighing 1 to 1.5 kg were collected from all beds. The base of the Xom Nha section starts in the Palmatolepis hassi Zone. In sample VN-XN-1, the conodont collection is dominated by Palmatolepis hassi and Pa. punctata. Other important species are Ancyrodella lobata and Ancyrodella buckeyensis, which have their first occurrence in the early part of the Palmatolepis hassi Zone (Ji and Ziegler 1993). Sample VN-XN-3 did not yield many conodonts, but it did include some important species, namely Ancyrognathus triangularis and Ancyrodella nodosa. The first appearance of Ancyrognathus triangularis defines the next younger Ancyrognathus triangularis Zone. The entry of Ancyrodella nodosa was used by Ji and Ziegler (1993) as an important marker; this species has its first appearance close to the beginning of the Late hassi Zone (= Ancyrognathus triangularis Zone). The base of the Ancyrognathus triangularis conodont zone can be correlated with the base of the MN 8 Zone, defined by the FAD of Palmatolepis proversa (Klapper 1989). Palmatolepis proversa was not found in sample VN-XN-3, but did appear later in the section in sample VN-XN-6. Whether the later appearance of this species is linked to sampling bias or whether the range of this species is diachronous with respect to conodont records in other localities remains open. As shown by Dzik et al. (2018), who investigated the F/F boundary in northeastern Vietnam, it seems obvious that even very fast or subtle changes in environmental conditions find expression in the fossil record of species. The overlying conodont samples (VN-XN-5, VN-XN-6) yielded diagnostic species for this zone, including Ancyrognathus triangularis and Palmatolepis proversa. Other important associated conodonts include Palmatolepis hassi, Ancyrognathus primus, Ancyrodella nodosa, Ancyrodella curvata and Ancyrodella lobata (Table 1). The next younger conodont zone in sample VN-XN-7 is defined by the FAD of Palmatolepis jamieae and Ancyrodella ioides. The latter appears at or close to the start of the Palmatolepis jamieae Zone (Ziegler and Sandberg 1990). The overlying sample VN-XN-8 yielded conodonts which correspond to the Palmatolepis rhenana nasuta Zone, defined by the FAD of the name-giving species Palmatolepis rhenana nasuta. This conodont zone ranges from sample VN-XN-8 to VN-XN-11. Important associated conodonts found in these samples are Palmatolepis hassi, Pa. jamieae, Pa. foliacea, Pa. plana, Pa. unicornis, Ancyrodella nodosa, Anc. lobata, Anc. buckeyensis, Ancyrognathus triangularis, Polygnathus decorosus and Po. webbi, among others. Palmatolepis feisti, which defines the MN 11 Zone (Klapper 1989), was not found in the Xom Nha section. Icriodontids are scarce (Table 1).
The next younger Palmatolepis rhenana rhenana Zone ranges from sample VN-XN-12 to VN-XN-16a (Table 1). The base of this interval is well defined by the FAD of Palmatolepis rhenana rhenana. In addition, Palmatolepis subrecta occurs for the first time in that sample. Palmatolepis juntianensis appears slightly later in this zone in the Xom Nha section, in sample VN-XN-13. Palmatolepis cf. bogartensis, which represents the MN 13a Zone (Klapper 1989), appears in sample VN-XN-15b, below the Palmatolepis linguiformis Zone. The next younger Palmatolepis linguiformis Zone is clearly defined by the FAD of Palmatolepis linguiformis in sample VN-XN-16b (Table 1), where a large number of individuals of this species were found. This species appears to be common in some areas of Asia; Savage (2013) also reported a relatively large number of Pa. linguiformis from the Mae Sariang section in northwestern Thailand. The base of this conodont zone (Ziegler and Sandberg 1990; Ji and Ziegler 1993) corresponds to MN Zone 13b of Girard et al. (2005). A further subdivision suggested by Girard et al. (2005) (MN 13c = Ancyrognathus ubiquitus) could not be defined in our section owing to the lack of this marker species.
The Frasnian/Famennian boundary of the Xom Nha section corresponds to the base of sample VN-XN-18b (Table 1) and is based on the FAD of Palmatolepis triangularis (Ziegler and Sandberg 1990). In our section, we could not define the Palmatolepis subperlobata Zone, which corresponds to the lowest part of the former Lower triangularis Zone (Hartenfels 2011; Spalletta et al. 2017) and is defined by the FAD of Palmatolepis subperlobata. This species does not occur in our sample VN-XN-18b, probably as a result of the scarcity of conodonts at that level. The conodont fauna shows low diversity; Icriodus alternatus helmsi and Polygnathus brevilaminus are the conodonts associated with this sample. The triangularis Zone (Hartenfels 2011; Spalletta et al. 2017) ranges from sample VN-XN-18a to sample VN-XN-21a. The next younger interval corresponds to the Middle triangularis Zone of Ziegler and Sandberg (1990) and can be correlated with the Palmatolepis delicatula platys Zone (Spalletta et al. 2017). In the absence of the marker species, the useful conodont species for zonal identification is Ancyrognathus sinelaminus, which was found in sample VN-XN-21c (Figs. 7p and 8e). This species enters at the base of this zone (Spalletta et al. 2017). Palmatolepis delicatula platys was not found, which may support the observation by Schülke (1995) that this species is rather rare outside Europe. Savage (2013) reported a small number of specimens of this species from an F/F boundary section in Thailand, and the species was also described from the Toc Tat Formation in Northern Vietnam by Komatsu et al. (2018). The lack of this species in our section might be a result of facies or insufficient sample size. Other associated conodonts in this sample are Palmatolepis protorhomboidea, Pa. triangularis, Icriodus alternatus helmsi, Icr. alternatus alternatus and Polygnathus webbi. Stratigraphically, the section reaches up into the Palmatolepis crepida Zone, which is not shown herein but was reported by Ta Hoa (2007).
Conodont biofacies
Based on the relatively abundant conodont assemblage of the Xom Nha section, conodont biofacies (e.g. Sandberg 1976; Weddige and Ziegler 1979; Sandberg and Ziegler 1979; Sandberg and Dreesen 1984; Sandberg et al. 1992) have been analysed. The whole section is mainly composed of lime mudstones and wackestones, suggesting an outer-shelf environment below fair-weather wave base. The Xom Nha section is dominated by species of Palmatolepis and Polygnathus, which together account for over 80% of the total P1 elements; of these, Palmatolepis elements outnumber Polygnathus elements. Minor changes occur in some conodont zones (Table 1). The Frasnian part is generally dominated by a Palmatolepid-Polygnathid biofacies. In the lower part of the Xom Nha section, comprising the Palmatolepis hassi and Ancyrognathus triangularis zones, Palmatolepis species are dominant and make up over 90% of the P1 elements. Ancyrodella species occur only in subordinate numbers. Icriodus and Polygnathus specimens are present, although never numerous. In the uppermost part of the Ancyrognathus triangularis Zone, the number of Polygnathus specimens increases and remains elevated through the Palmatolepis rhenana nasuta Zone, accompanied by an increasing number of Ancyrodella specimens. At the base of the Palmatolepis rhenana rhenana Zone, the conodont biofacies is dominated by Palmatolepis and Ancyrodella; Polygnathus species occur but are very scarce. A similar biofacies occurs at the top of this conodont zone (Fig. 9).
Within the Palmatolepis linguiformis Zone, the number of Polygnathus conodonts increases, and Icriodus specimens occur in small numbers below the F/F boundary. Icriodus specimens are present in some layers in the Frasnian, but only in very small numbers. In the Famennian, there is a marked increase in Icriodus and Polygnathus specimens, whereas the number of Palmatolepis specimens decreases. However, it should be noted that conodonts are scarce overall in the Famennian, particularly above sample VN-XN-22b; thus, conodont biofacies analysis above this sample is not possible based on the present conodont record.
Magnetic susceptibility
From the 2.6 m thick Frasnian/Famennian Xom Nha section, 132 samples were collected for MS measurements; the MS data are given in Figure 10. The following interpretation convention for MS measurements is used: if the MS cyclic trend increases or decreases by a factor of two or more, and if the change is represented by two or more data points, then the change is assumed to be significant. This method is best employed when high-resolution data sets (large numbers of closely spaced samples) are being analysed, and it helps resolve variations associated with anomalous samples. Such variations may be due to weathering effects, secondary alteration and metamorphism. Longer-term trends are due to factors such as plate-driven eustasy, as opposed to shorter-term climate cycles or event sequences such as impacts (Ellwood et al. 2004). In addition, variations in detrital input, or a change in detrital sediment source, can often be compared between different localities. Abrupt MS shifts, such as the one at 2.0 m in Figure 10, may be interpreted as representing an unconformity, whereas cyclic trends are often interpreted as climate cycles, with warm intervals represented by higher MS values due to increased erosion and lows generally reflecting cooler times with less rainfall and erosion. The Multi-Taper Method (MTM) was applied to the MS data for time-series analysis (Fig. 11). The approach used here is to (a) collect closely and uniformly spaced samples in the field, (b) report the MTM results after normal detrending, (c) apply confidence limits to the MTM data, (d) place the important Milankovitch climate bands on the MTM diagrams for consistency checks and (e) establish a uniform model that can then be graphically compared with the MS cyclicity presented in Figure 10.
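The factor-of-two convention described above can be expressed as a simple screening rule. The following sketch is only an illustration of that rule; the helper name, the default thresholds and the example MS values are hypothetical, not taken from the study's data set.

```python
# Illustration of the interpretation convention above: a change in the MS trend
# is treated as significant only if it rises or falls by a factor of two or more
# and is supported by at least two consecutive data points.

def significant_ms_shifts(ms_values, factor=2.0, min_points=2):
    """Return indices where MS changes by >= `factor` relative to the preceding
    sample and the change persists for at least `min_points` samples."""
    shifts = []
    for i in range(1, len(ms_values) - min_points + 1):
        ref = ms_values[i - 1]
        if ref <= 0:
            continue  # MS values should be positive; skip bad samples
        window = ms_values[i:i + min_points]
        ratios = [v / ref for v in window]
        if all(r >= factor for r in ratios) or all(r <= 1.0 / factor for r in ratios):
            shifts.append(i)
    return shifts

# Example with made-up MS values:
ms = [1.1, 1.2, 1.0, 2.6, 2.8, 2.7, 1.3, 1.2]
print(significant_ms_shifts(ms))  # -> [3, 6]: an abrupt rise, then an abrupt fall
```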
The MTM results show significant peaks at Milankovitch frequencies. Using the work of Berger et al. (1992), we identified two important climate cycles in the data set reported here: an eccentricity E2 (405 kyr) cycle and an obliquity O2 (42 kyr) cycle. Timing for the Xom Nha section is based on the time-series analysis presented here (Fig. 11). Using the timing given in the International Chronostratigraphic Chart (2018 version), with 372.2 Ma for the F/F boundary (www.stratigraphy.org), we calculated the ages for the main E2 and O2 cycles in this section. From these results, we then calculated the interval of time represented by the MS samples in Figure 10 as 1.215 Ma, giving a sediment accumulation rate (SAR) of 0.217 cm/kyr for the collected section (Fig. 10).
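As a rough cross-check of these figures (a back-of-envelope illustration, not part of the published workflow), the quoted SAR can be used to recover the implied duration of the 2.6 m section and the stratigraphic thickness expected for one E2 or O2 cycle; the small difference from the published 1.215 Ma reflects rounding of the SAR.

```python
# Back-of-envelope check of the timing figures quoted above (values from the text).
SECTION_THICKNESS_CM = 260.0   # 2.6 m measured section
SAR_CM_PER_KYR = 0.217         # sediment accumulation rate from the time-series analysis
E2_KYR, O2_KYR = 405.0, 42.0   # eccentricity E2 and obliquity O2 cycles (Berger et al. 1992)

duration_kyr = SECTION_THICKNESS_CM / SAR_CM_PER_KYR
print(f"implied duration: {duration_kyr:.0f} kyr (~1.2 Ma; 1.215 Ma in the text)")

# Expected stratigraphic thickness of one full climate cycle at this SAR:
print(f"one E2 cycle ~ {E2_KYR * SAR_CM_PER_KYR:.0f} cm of section")
print(f"one O2 cycle ~ {O2_KYR * SAR_CM_PER_KYR:.1f} cm of section")
```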
The spectral power for the MS data sets presented here was obtained using the MTM method after the data were detrended and subjected to a Hanning window to reduce spectral leakage and increase the dynamic range in the data (Jenkins and Watts 1968; Thomson 1982). High-confidence peaks at the 90, 95 and 99% confidence limits in the resulting spectra were determined by MTM analysis (Ghil et al. 2002), as calculated using the SSA-MTM toolkit (Dettinger et al. 1995). A null hypothesis of red noise was assumed (low frequency and high power in the spectrum, sloping towards lower values at high frequencies), using a three-taper model. As this method is prone to producing false positives, the use of statistical significance is limited here to its role in supporting (or not) the positions of multiple Milankovitch bands within the MTM data set. The positions of these bands are mathematically fixed relative to each other, and thus a climate-forcing mechanism is supported by spectral analysis when the Milankovitch frequencies are also frequencies of high spectral power, as is the case here.
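The published spectra were computed with the SSA-MTM toolkit; the sketch below only illustrates the general workflow described above (detrending, tapering with Slepian windows, averaging the tapered spectra, and converting a stratigraphic frequency to a period via the SAR). The sample spacing, the synthetic series and the helper names are assumptions made for the illustration, not the study's data or code.

```python
# Illustrative multitaper-style spectrum of an evenly spaced MS series.
import numpy as np
from scipy.signal import detrend
from scipy.signal.windows import dpss

def mtm_like_spectrum(ms, sample_spacing_cm, n_tapers=3, nw=2.0):
    x = detrend(np.asarray(ms, dtype=float))               # remove linear trend
    tapers = dpss(len(x), nw, Kmax=n_tapers)                # Slepian (DPSS) tapers
    spectra = [np.abs(np.fft.rfft(t * x)) ** 2 for t in tapers]
    power = np.mean(spectra, axis=0)                        # average over the tapers
    freq = np.fft.rfftfreq(len(x), d=sample_spacing_cm)     # cycles per cm
    return freq, power

def period_kyr(freq_cycles_per_cm, sar_cm_per_kyr=0.217):
    """Convert a stratigraphic frequency to a period, assuming the SAR above."""
    return 1.0 / (freq_cycles_per_cm * sar_cm_per_kyr)

# Usage with a synthetic series standing in for the 132 MS samples:
rng = np.random.default_rng(0)
spacing = 2.0                                               # cm between samples (illustrative)
z = np.arange(132) * spacing
ms = 1.0 + 0.2 * np.sin(2 * np.pi * z / 88.0) + 0.05 * rng.standard_normal(z.size)
freq, power = mtm_like_spectrum(ms, spacing)
peak = freq[1:][np.argmax(power[1:])]                       # skip the zero frequency
print(f"dominant cycle ~ {1 / peak:.0f} cm ~ {period_kyr(peak):.0f} kyr (cf. the 405 kyr E2 band)")
```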
Conclusions
The overall microfacies and faunal characteristics shown here point to relatively deep and quiet water in the aphotic zone, below storm wave base or current activity, in a hemipelagic palaeoenvironment. Interestingly, the Xom Nha section does not show the characteristic black shale/limestone lithology around the F/F boundary that is known from other hemipelagic sections around the world (Carmichael 2019, and references therein). The occurrence of discontinuous surfaces is related to a combination of non-deposition and low sedimentation rates. The main sedimentary process is settling from suspended matter. Based on the data presented herein, a sediment accumulation rate of 0.217 cm/kyr was calculated. The conodont assemblage through the F/F boundary consists mainly of cosmopolitan species, similar to many sections elsewhere. However, the section measured here differs from most sections in Europe in that the Xom Nha section in Central Vietnam contains relatively high numbers of the species Palmatolepis linguiformis, which has also been described from sections in Thailand. Deposition through the F/F boundary interval reported here for the Xom Nha section appears to have occurred over a period of ~1.2 Ma and exhibits well-defined climate cyclicity.
Acknowledgements Luu T. P. L. received funding from the VAST (NVCC12.01/21-21). Ellwood received funding from the Robie Clark endowment to LSU. We thank I. Boncheva (Technical University of Sofia, Bulgaria) and an anonymous reviewer for their constructive comments and suggestions. Furthermore, we thank Jana Anger (Senckenberg Research Institute and Natural History Museum) for preparing thin sections and Hanna Cieszynski (University of Cologne, Institute of Geology and Mineralogy) for taking the SEM photographs. This is a contribution to IGCP 652 "Reading geologic time in Palaeozoic sedimentary rocks" and IGCP 700 "Carbonate build-ups in Southeast Asia".
Funding Open Access funding enabled and organized by Projekt DEAL.
Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
v3-fos-license
|
2023-09-30T15:06:48.133Z
|
2023-09-28T00:00:00.000
|
263239774
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2039-4403/13/4/113/pdf?version=1695885488",
"pdf_hash": "6f710f5140d81269f9a756b3d1de6da749f0aa9d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2953",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"sha1": "8b0bcc8c0b740e4fcd3c3179cf60b74cab01cb45",
"year": 2023
}
|
pes2o/s2orc
|
Current Nursing Education Considering Southern Europe’s Reality and Legal Framework: A Two-Phased Research Approach
Nursing education and formation is still heterogenous in spite of The Bologna Declaration in 1999. Apart from the existence of basic curriculum standards, universities have flexibility regarding its design. The objective of this study is to provide an overview of contemporary nursing education and contrast it with the legal frameworks in place in four Southern European countries: Portugal, Spain, Italy, and Greece. A scoping review was conducted in order to revise and obtain an up-to-date review of current nursing education and quality. Then, a content evaluation of the legal public framework was conducted. A total of nine articles were included in the review. Data analysis evoked three main themes from the results: nursing education heterogeneity owing to ECTS increased from 180 to 240 for the diversity of clinical practice hours; the nursing framework lacked a definition; and the evolution of nursing education, alongside quality improvement and more accurate guidelines were required. Regarding their legal framework, the main directives and legislation standards were assessed and compared with the current curriculum. To conclude, nursing heterogeneity education evokes competence inequality among students and future professionals as larger curricular programs (240 ECTS) offer more clinical practice. Nursing education uniformity could enhance international mobility and promote knowledge exchange and nursing competence framework definitions. These are facts that certainly bring nursing empowerment. This study was not registered.
Introduction
In 1999, 29 European countries joined The Bologna Declaration, identified as an agreement meant to enhance cohesion and homogeneity in higher education in Europe.Thus, all members agreed to move toward the introduction of a three-cycle higher education frame, a qualification recognition abroad method and a quality of the assurance system [1,2].Currently, 49 countries and the European Commission constitute The Bologna Follow-Up Group [3].
The main focus of the Bologna Process was the foundation of the European Higher Education Area (EHEA) [2].Furthermore, the constitution of the European Credit Transfer and Accumulation System (ECTS) and the Diploma Supplement were measures adopted within the action plan.Students represent a central role in the new conception of ECTS, measuring learning outcomes and curricular workload via a system of credits.A full-time academic year is represented by 60 ECTS, and one credit corresponds to 25-30 h of work [4].The Diploma Supplement consists of a document attached to the higher education diploma, which highlights curriculum content and learning achievement [5].
Regarding professional qualifications and regulations, the 2013/55/EU directive of the European Parliament [6] established the requirements of the common training framework in order to offer equity in education and define professional competencies. Despite the homogeneity endorsed by the Bologna Process, the literature still reports variety between nursing curriculum programs in Europe due to cultural and knowledge inequalities [6,7]. Moreover, this heterogeneity makes the nursing role difficult to define [8], discouraging international mobility [7].
According to recent studies, new concepts have to be considered to develop a transnational nursing curriculum and to build a more global higher education nursing approach.More creativity, communication, quality monitoring, and consideration of students' experiences are required to ensure successful education and reduce cultural and social diversity [9].Language and communication difficulties, prejudices and stereotypes, and lack of cultural competence training were identified as challenges to nursing in a clinical framework [10].However, communication skills training development in the educational curriculum could prevent such difficulties and increase the quality of health care [11].
Therefore, the present study aims to bring an overview of current nursing education and compare the results with the legal framework involved among four southern European countries: Portugal, Spain, Italy, and Greece.
Materials and Methods
The study methodology followed a two-phase research approach in order to answer our research question: What is the current state of nursing education focusing on curriculum and legal variations among four European countries?First, a scoping review was conducted to revise and obtain an up-to-date assessment of current nursing education and quality.By including studies from other continents, it was expected to confirm heterogeneity in nursing education over the world and to determine other education challenges.Afterward, a content evaluation of the legal public framework written for higher education nursing was realized within the four selected countries.Therefore, it was intended to make a comparison between the scoping review results and the specific legal framework applied to the countries selected.
The selection of these countries was made aiming to evaluate the situation in Southern Europe and focus the analysis and comparison within a specific region.Southern European countries, where the authors of the article work, both have a Latin-speaking origin, are geographically close, and have some cultural similarities, thus creating an opportunity for the mobility of nursing students, boosted by the ERASMUS+ program.
Consequently, addressing the main distinctions between these countries would be advantageous in promoting student mobility, as it could help reduce and resolve many of the difficulties often encountered due to variations in curricula and ECTS validation.
Scoping Review Eligibility Criteria, Information Sources and Search Strategy
The scoping review phase was conducted following the Joanna Briggs Institute methodology for scoping reviews [12] since it is identified as a rigorous and updated methodological procedure to procure high-quality reviews [13].The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist and explanation were used once this methodology improved research quality and reliability [14].This guide has the purpose of promoting more transparent, complete, and accurate reviews of the literature and to ease evidence-based decision making [15].
To generate the research question and inclusion/exclusion criteria, the PCC (Population, Concept, and Context) framework was applied and fulfilled with JBI guidance [16] (Table 1).
The PCC model helped to construct the general research question for this research, as well as the specific eligibility criteria established.The population of interest were
Scoping Review Study Selection Process, Data Collection Process, Extraction and Analysis
The search and data selection were conducted by two independent reviewers (AAF and SA), who selected the scientific databases and assessed the eligibility of studies, starting with the title and abstract. Afterward, the full-text papers were screened by AAF and reviewed by SA to verify whether the studies met the inclusion criteria. If discrepancies occurred between reviewers regarding the eligibility or inclusion of these studies, they were resolved through consultation with three independent reviewers (HP, HF, and CA) to reach agreement. Records were managed via Rayyan, an online tool for systematic reviews, to support study selection [17], and Mendeley, a reference management software. For each study, an electronic table containing the following items was compiled: authorship, year of publication, location, sample size, study design, main goal, outcomes, and key findings (Table 2). Considering the small number of studies, a narrative synthesis was carried out as part of the results of the present review.
Legal Document Evaluation and Framework Evaluation
Legal public documents about European Nursing Education were retrieved following two stages.First, an identification of relevant European Union Websites was conducted to obtain the legal reports.The main web pages consulted were the European Commission [2], the European Higher Education Area [3], and the European Union's official law portal (EUR-Lex) [18].Additionally, a search by country was realized, including Portugal, Spain, Italy, and Greece's legal framework.Their specific area inside the European Union web pages [19-22] was consulted.Afterward, the search was focused on the directives and regulations of each country, which provided specific country nursing education and legal contexts.Specific and relevant keywords such as "nursing degree legislation" and "nursing education regulation" were used to access goal documents.
In relation to the language, papers written in Portuguese, English, or Spanish were assessed.However, legal documents in Greek were not able to be analyzed due to the language barrier.
Scoping Review
The initial search identified 1078 records.After removing the duplicates, 831 articles were screened by title and abstract to assess their relevance according to the proposed inclusion/exclusion criteria.Subsequently, the full text of 13 articles was reviewed to determine if they met the eligibility criteria.The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) diagram outlines the review process and search outcomes [23] (Figure 1).A total of nine studies were finally included in the review (Table 3).
All articles included were scientific studies published in different nursing journals. In relation to the design of the studies selected, quantitative (n = 4), qualitative (n = 2), and mixed-method (n = 3) approaches were identified. Regarding the quantitative studies, two followed a cross-sectional design [24,25] and two were descriptive [26,27]. In relation to the qualitative ones, one was a review of the literature with a focus group constitution [28], and the other underwent quality framework appraisal [29]. Regarding the mixed-method studies incorporated, two were reviews of the literature including empirical research [30,31], and the last one had a mixed qualitative and quantitative approach [32]. Concerning the origin of these studies, almost half of them addressed nursing education in Europe (n = 4) [25,28,30,32]. The rest tackled nursing education in North America (n = 3) [27,29,31], South America [26], and East and Southeast Asia [24]. The majority of these studies reported the current situation of the nursing curricular degree and its evaluation [26-29,31]. Two studies addressed nursing competence areas and nursing clinical frameworks [25,30], and the last two analyzed specialization and doctoral programs [24,32].
One of the included studies aimed to recognize the interfaces of the specialist nurse profession across the EU; it analyzed countries' characteristics, educational level, autonomy and responsibility, suggestions, and qualitative research results, and found that homogeneity is required to define the specialist nurse's role, a fact that promotes advanced nursing practice and mobility around the EU. Jager et al. (2020), in an integrative review and empirical research study from Canada [31], aimed to identify existing curriculum renewal strategies, to use an evidence-informed process, and to discuss the nursing curriculum in the ever-changing healthcare context; the Ottawa model for nursing was identified as a model procedure for nursing curriculum renewal, and this analysis helps define the curriculum renewal procedure.
Legal Framework
Concerning European legislation on higher education, Directive 2013/55/EU [6] was the main public document used to obtain an overview of the current legal bases and standards to be applied. The European Union requires that nursing degrees comprise at least three years of study measured with ECTS, representing no less than 4600 h of theoretical and clinical training. The theoretical component has to account for at least one-third of the workload, and at least one-half of the curriculum ECTS has to be designated for clinical training [6].
Spanish legislation describes a distribution of nursing learning contents in blocks of common core education (60 ECTS), nursing sciences (60 ECTS), supervised internships/thesis (90 ECTS), and elective subjects (30 ECTS) [36]. The recent Law 2/2023 [38] regulates universities' autonomy to adapt and organize the structure and contents of their educational programs. Italy also divides the contents into four blocks: basic subjects, characteristic subjects, relational subjects, and other subjects, which include elective subjects, the final practice exam, and the final degree dissertation. The amount of ECTS corresponding to each block can vary between universities; however, clinical practice must involve a minimum of 60 ECTS [41]. Portuguese legal bases do not define a curricular content distribution but establish a practical component comprising at least half of the course workload [48]. In relation to Greece, a specific legal basis for the nursing curriculum could not be assessed due to the language barrier and the absence of official translations. However, the web page of the "Nursing School Ateith" describes its curricular program. The degree consists of 48 subjects: 39 compulsory, 12 electives, and 3 obligatory free choices. The course structure is divided into 4 groups: basic science division, basic nursing division, nursing specialties division, and humanities and social division [49].
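To make the quantitative constraints concrete, the sketch below checks a hypothetical 240-ECTS programme against the minimums of Directive 2013/55/EU quoted above, using the 25-30 h per ECTS credit conversion cited earlier. The helper function, the choice of 25 h per credit and the way electives and theses are classified are illustrative assumptions, not an official compliance tool.

```python
# Illustrative check of a nursing curriculum against the Directive 2013/55/EU
# minimums quoted above: >= 3 years, >= 4600 h in total, theory >= 1/3 of the
# workload, clinical training >= 1/2 of the curriculum ECTS.

def check_curriculum(total_ects, clinical_ects, theory_ects, years, hours_per_ects=25):
    total_hours = total_ects * hours_per_ects          # 25-30 h per ECTS credit
    return {
        "duration_ok": years >= 3,
        "total_hours_ok": total_hours >= 4600,
        "theory_ok": theory_ects * hours_per_ects >= total_hours / 3,
        "clinical_ok": clinical_ects >= total_ects / 2,
    }

# Example loosely modelled on the Spanish 240-ECTS structure described above:
# 60 common core + 60 nursing sciences counted as theory, 90 supervised
# internships/thesis counted as clinical, 30 electives left unclassified.
print(check_curriculum(total_ects=240, clinical_ects=90, theory_ects=120, years=4))
# Under this strict reading, clinical_ok is False (90 < 120 ECTS), which
# illustrates how the classification of electives and theses affects compliance.
```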
Discussion
The aims of this review were to provide an overview of the present status of nursing education and to conduct a comparative study of the findings with the legal framework involved in four southern European countries: Portugal, Spain, Italy, and Greece.
The results of the scoping review led to three primary themes for assessment, while the legal framework evaluation uncovered the fundamental legal foundations pertaining to the nursing curriculum within selected countries.The mentioned principal topics extracted from the results were:
• Nursing education heterogeneity;
• Nursing clinical framework and competence areas poorly defined;
• Nursing education evolution, quality improvement, guidelines, and standard requirements.
First, the vast majority of the reviewed studies identified heterogeneity in nursing education. Ruiz-Rojo et al. [28], who analyzed the entire nursing degree curriculum in Spain, reported differences between public and private universities in subjects and ECTS distribution. Their study showed variability among nursing curricula within the same country. This demands the establishment of a more restrictive legislative basis for ECTS distribution and the definition of learning blocks. If there is a clear disparity in curriculum content within one country, this heterogeneity can grow exponentially among international programs. Furthermore, in relation to clinical practice, Cipher et al. [28] also described variances, suggesting that results should not be examined considering only the number of hours but also the empirical outcomes. Moreover, nursing education heterogeneity is present not only in the degree but also in specialist [32] and doctoral programs [24]. Regarding specialist nurses, Decock et al. [32] reported a lack of definition of the specialist nurse's role, as well as disparity in the process and training required to become a specialist in European countries. With respect to doctoral programs, Molassiotis et al. [24] called for the establishment of guidelines and quality enhancement in order to guarantee doctoral programs' progress and development.
Furthermore, these results underline the importance of defining the nurse's role, which currently lacks a clear description, including a poorly defined concept of nursing competence areas. Satu et al. [30] suggested that the establishment of common curriculum competence areas in Europe could promote equality in nursing education, as well as assurance of high-quality nursing care. In addition, Kiekkas et al. [25] showed a positive association between nursing students' self-reported competence and perceived degree quality. Increasing clinical hours and promoting critical thinking skills are also proposed to achieve higher competence.
According to the general curriculum's heterogeneity and the nurse's clinical framework, findings from this study are also reflected in the existing literature.Kraaij et al. [8] reported that an international description of the nurse's role is required in order to reduce nursing educational pathways and titles/degree diversity.Moreover, this variety between educational programs could lead to people's application for the same job without the same level of education and not being able to fulfill the same role.Making the nursing curriculum uniform and standardized could promote international mobility between undergraduate and graduate nursing professionals.
Additionally, nurses are expected to respond successfully to every clinical environment and to deal with the needs of a constantly changing society. In order to address people's health needs, nursing education must adapt and evolve as fast as society demands. Jager et al. [31] proposed the Ottawa Model as a suitable option for addressing a curriculum renewal process and reported, as did Meira and Kurcgant [26], that evaluation is the main aspect in managing a renewal procedure. Another consideration in relation to quality improvement is student involvement in the process, which could enhance their professional communication and problem-solving skills [29]. The establishment of more accurate and restrictive guidelines and standards is also identified as a required measure to enhance education quality [24,27,28,30-32].
According to the main European directives, Portugal, Spain, Greece, and Italy comply with the minimum duration in years and the total number of hours established. Currently, the four selected countries include a bachelor's degree in nursing within their higher education programs. However, in relation to the degree's structure, whereas the nursing degree curriculum in Portugal, Spain, and Greece consists of 240 ECTS [36,48,49] distributed over four years of study, the nursing bachelor's degree in Italy comprises only 180 ECTS distributed over three years of study [41]. Consequently, the main difference between the four countries studied is the duration of the degree and the ECTS distribution. The present variety in Europe increases inequality in nursing students' competence acquisition, as the 240-ECTS curricula allow more ECTS to be allocated to clinical practice.
European nursing and higher education legislation seems to be too lax.Considering the outlines of the present results, there is a clear requirement to establish more restrictive nursing uniformity bases [8,28].Efforts affording nursing homogeneity could enhance the international mobility between undergraduates and graduates, as well as scientific knowledge exchange.The homogenization of nursing in healthcare could improve the quality of healthcare and simultaneously guarantee patient security, as they must be the mainstay of all care plans.
Study Implications
The present study provides an overview of the current heterogeneous state of nursing education and the main differences between the curriculum legal bases of four Southern European countries. The findings include data and evidence that support legal concerns or issues that need to be addressed regarding the nursing curriculum and its more accurate regulation.
Study Limitations
The analysis realized in the investigation encountered several limitations, primarily due to the scarcity of studies in this area, especially those employing the same methodological approach.Additionally, despite the legislative efforts to promote standardized education and, consequently, professional practice, we acknowledge that gaps persist in terms of the number of ECTS credits and the duration of academic training required in terms of the number of years.Furthermore, in addition to nursing having a rich historical legacy as a profession, a notable disconnection persists among legislative frameworks, educational approaches, and professional practices.Such a situation can be improved through the coordination of these three essential components, among others.It is important to note that our results only address generic aspects of the educational landscape.Nevertheless, they can serve as a valuable starting point for a more in-depth and, if possible, more comprehensive analysis.
The language barrier was also a challenge to understand and analyze the Greek legislation, reports, and directives.
Future Research Directions
Future research is needed to justify and initiate legal action changes in relation to nursing education for the purpose of ensuring international uniformity.
Conclusions
The evidence presented strongly suggests that heterogeneity in nursing education evokes competence inequality among students and future professionals; for instance, larger curricular programs (240 ECTS) offer more clinical practice. The call for nursing education uniformity becomes evident, as it holds the potential to foster international mobility, promote knowledge exchange, and facilitate the definition of a nursing competence framework. These facts certainly bring nursing empowerment and significant contributions to the nursing field on a global scale.
Two principal strategies are proposed to address the challenges identified in the study's findings:
• The establishment of a comprehensive framework for elementary nursing education with a more precise delineation of nursing competencies;
• The implementation of a standardized global curriculum for nursing degrees and an equitable distribution of ECTS.
These proposals represent an opportunity to enhance and advance nursing skills, encourage specialization, and promote the growth and development of doctoral education in the field.It is crucial to note that this second recommendation is intricately linked to the first.The achievement of the second proposal is contingent upon the successful realization of the first.
A more restrictive legal framework is required in order to achieve nursing competence equality and to resolve the lack of global curriculum uniformity. It should be imperative that nursing professionals, regardless of their geographical location, possess uniform competencies to ensure patients' safety and healthcare rights. This is underscored by the European Parliament in Directive 2011/24/EU, which emphasizes that cross-border healthcare for patients should maintain quality of care. This legislation ensures patient mobility and contributes to social cohesion, social justice, and sustainable development.
Table 2. Items extracted for data analysis.
Table 3. Summary of included articles.
|
v3-fos-license
|
2021-09-02T13:50:32.415Z
|
2021-09-01T00:00:00.000
|
237378958
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-021-02566-2",
"pdf_hash": "a608590ed6caabb8830671ff1dae7dff1401e424",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2954",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"sha1": "f8a6f8f04109d9b58a0213cac1e010267bead1c6",
"year": 2021
}
|
pes2o/s2orc
|
Quality improvement in the undergraduate medical curriculum: the need for clinical exposure
This correspondence article aims to outline the importance of an integrated clinical component within Quality Improvement education, in response to the recently published article by Shah et al. The Quality Improvement and Patient Safety workshops described in that study were compared with the Quality Improvement module experienced by medical students at King's College London. The key difference between the two methods of teaching Quality Improvement was the clinical project undertaken by King's College students, which helped students gain an appreciation of the pitfalls of instigating change in a clinical environment. The authors feel that this arguably more authentic experience could have benefited the students in the study by making them feel better equipped to use the skills learned in the theoretical workshops in their later careers.
Dear Editor,
We read with interest the article by Shah et al. describing a pilot study of quality improvement and patient safety (QI/PS) workshops for students [1]. As final year medical students at King's College London, we have personal experience of the benefits and pitfalls of undertaking a mandatory QI/PS module. We are keen to compare the different ways in which a QI/PS education is delivered by our respective institutions. Understanding and applying principles of QI/PS forms part of the GMC Outcomes for Graduates, a list of required skills for newly qualified doctors in the United Kingdom, yet there is no specific guidance around how best to incorporate this into the undergraduate medical curriculum [2].
The use of simulated workshops as described by Shah et al. would help students to develop a theoretical understanding of QI/PS, however we put it to the authors that actioning these techniques in clinical environments can be far more complex. In contrast to the QI/PS teaching described, our first experience of QI/PS was directly designing and undertaking a QI project in a clinical setting. This experience showed us the realistic barriers of QI/PS; didactic teaching and simulated workshops would not have fully addressed clinical issues such as time management, health professionals' engagement, incentive to change and bureaucracy. We see the additional benefit that experiencing these challenges for oneself would have in preparing students for undertaking such projects in their future careers.
Comparably, our QI module lacked the didactic teaching described by Shah et al. Our appreciation for the theoretical aspect of QI could have been greatly enhanced if our institution had incorporated this style of teaching.
The teaching model adopted by Shah et al. whereby students designed QI/PS workshops and delivered these to their peers could raise questions regarding the quality of the teaching. Our QI project was always under the guidance of a faculty member with a background in QI.
Although we appreciate that a proportion of the workshops in the pilot study had staff facilitators, we believe that all workshops should be supervised by a faculty member with experience in QI, as this would provide the added benefit of discussion of the real-life challenges, as mentioned above. Research suggests that peer-led teaching is effective in engaging students; however, one study found that 85.1% of students benefited from expert teachers summarising the topic at the end of such sessions, implying that expert supervision is important for the success of these student-led workshops [3].
Given that the foundation of medicine should be evidence-based practice, we believe QI/PS teaching should be a mandatory part of medical education, as supported by the GMC guidelines. Implementing clinical QI/PS projects under the supervision of experienced clinicians as part of the undergraduate curriculum should be achievable for medical schools, as practising clinical teachers should have experience of QI/PS as per GMC guidance [2]. Although the participant responses for the workshops devised by Shah et al. were overwhelmingly positive, there was a potential issue with participation bias, as the workshops were attended by students "with a genuine interest". If the programme were rolled out on a mandatory basis, students' desire to engage with the workshops might be more varied, which may be reflected in the students' feedback. Research into medical students' attitudes to QI education supports that clinical integration is highly preferable to didactic teaching alone, and if the pilot study were to be expanded into a mandatory module, adding an authentic clinical component, such as partaking in an audit under supervision, may enhance student satisfaction [4].
Abbreviations QI: Quality Improvement; PS: Patient Safety; GMC: General Medical Council
|
v3-fos-license
|
2015-09-18T23:22:04.000Z
|
2011-03-10T00:00:00.000
|
9246860
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1996-1073/4/3/488/pdf?version=1426592311",
"pdf_hash": "4f12ad6b244e8dc2d2625790293c6f62772c70e4",
"pdf_src": "Crawler",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2955",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"sha1": "4f12ad6b244e8dc2d2625790293c6f62772c70e4",
"year": 2011
}
|
pes2o/s2orc
|
A New Neural Network Approach to Short Term Load Forecasting of Electrical Power Systems
: Short-term load forecast (STLF) is an important operational function in both regulated power systems and deregulated open electricity markets. However, STLF is not easy to handle due to the nonlinear and random-like behaviors of system loads, weather conditions, and social and economic environment variations. Despite the research work performed in the area, more accurate and robust STLF methods are still needed due to the importance and complexity of STLF. In this paper, a new neural network approach for STLF is proposed. The proposed neural network has a novel learning algorithm based on a new modified harmony search technique. This learning algorithm can widely search the solution space in various directions, and it can also avoid the overfitting problem, trapping in local minima and dead bands. Based on this learning algorithm, the suggested neural network can efficiently extract the input/output mapping function of the forecast process leading to high STLF accuracy. The proposed approach is tested on two practical power systems and the results obtained are compared with the results of several other recently published STLF methods. These comparisons confirm the validity of the developed approach.
Introduction
Load forecasting consists of the prediction of the electrical load demand of a power system for future time intervals. For short-term load forecasting (STLF), the forecast step is usually from a fraction of an hour (e.g., half an hour) to an hour. The forecast horizon ranges from one hour to one week ahead, although the most common forecast horizon for STLF is the next day.
STLF is an essential part of an electrical power system's operation and has been in use for many years owing to its importance. It provides input data for many operational functions of power systems such as unit commitment, economic dispatch, optimal power flow and security assessment. A more accurate STLF can lead to more economic operating decisions and enhance the security level of the power system. For instance, several power systems were studied in [1], and it was concluded that for the considered systems a 1% reduction in the mean absolute percentage error (MAPE) of the STLF decreases variable generation costs by approximately 0.1-0.3% when MAPE is in the range of 3-5%. STLF has become even more important with the restructuring of the electric power industry in many countries around the world in recent years. In a restructured power system, generating companies must be able to forecast the system demand and the corresponding price in order to make appropriate market decisions. Moreover, STLF is important for the independent system operator (ISO) to schedule generators, determine reserve levels, predict power system security, provide information to the dispatcher and operate the market [2]. Furthermore, the load forecast is usually a key input for the prediction of electricity prices [3].
However, STLF is not an easy task, as electrical load time series display non-stationary behavior.Moreover, it is a dynamic nonlinear input/output mapping function of many exogenous variables (such as weather conditions), in addition to its historical values [3].In deregulated electricity markets, the interaction between load demand and electricity price signals and changes in the energy-use patterns of customers due to the variability of electricity price can further complicate STLF [4].The importance and complexity of this forecasting process has motivated many research works in the area.Time series models for STLF such as ARMA (Auto-Regressive Moving Average) [5] and modified ARMA [6], nonparametric regression [7], Kalman filter [8], and neural network (NN) [9,10] have been presented in the literature.Recently, some research works have combined different forecast techniques and proposed more efficient hybrid STLF methods.For instance, a combination of fuzzy linear regression and general exponential smoothing [11], a two-stage hybrid network composed of self-organized map (SOM) and support vector machine (SVM) [3], a combination of similar day and NN techniques [12] and hybridization of forecast aided state estimator with NN [13] have all been presented for STLF.Detailed reviews of different STLF methods can be found in [1,14,15].
Despite the research work performed in the area, more accurate and robust STLF methods, that can be easily adapted to different sites, are still in demand.In this paper, a new STLF method which can be considered as a hybridization of a NN with a novel stochastic search technique is presented.The proposed stochastic search technique is a modified harmony search algorithm used for the training of the NN based forecast engine.Despite the classical NN training mechanisms, the modified harmony search algorithm can efficiently search the solution space in various directions thus avoiding being trapped in local minima and dead bands.With the aid of this algorithm, the NN based forecast engine can effectively learn the input/output mapping function of a load time series presenting high STLF accuracy for any power system.
The rest of the paper is organized as follows.In the second Section, the proposed forecast engine and its learning algorithm are presented.Obtained numerical results from extensive testing of the proposed approach on different real world power systems are presented in Section 3 and compared with the results of several other STLF methods.Section 4 concludes the paper.
The Proposed STLF Strategy
The structure of the proposed STLF strategy, including a preprocessor and the suggested hybrid forecast engine, is shown in Figure 1. The focus of this paper is on the new hybrid forecast engine module. However, before proceeding to this module, the applied preprocessor is first introduced to describe the performance of the proposed STLF strategy. The preprocessor receives the input data of the proposed strategy, normalizes the data to bring all inputs to the same range, refines the inputs by the feature selection process and feeds the hybrid forecast engine with its selected inputs. The input data of the STLF strategy, shown by S(t) in Figure 1, are as follows:

S(t) = {L(t−1), …, L(t−N_L), EX_1(t), EX_1(t−1), …, EX_1(t−N_1), …, EX_P(t), EX_P(t−1), …, EX_P(t−N_P)}    (1)

where L(t−1), …, L(t−N_L) are the historical values of load, since the electrical load is dependent on its past values. The output of the STLF strategy is the load forecast of the next time interval, denoted by L(t) in Figure 1. The time interval depends on the STLF forecast step; for instance, for an hourly load forecast, t is measured in hours. The electrical load also depends on exogenous variables (such as temperature and humidity), in addition to its past values. These exogenous variables are shown by EX_1 to EX_P in (1). Since the inputs of (1) have different ranges (such as load and temperature), we linearly normalize all inputs and the output to be within the range [0,1] to avoid the masking effect. Linear normalization is a simple and well-known mathematical transformation. Suppose that an input x (such as load, temperature, humidity, etc.) is in the range [x_min, x_max]. Linear normalization of x to the range [0,1] is as follows:

x_norm = (x − x_min) / (x_max − x_min)

The output of the proposed hybrid forecast engine is in normalized form and is returned to the actual range by the inverse of this transform. For each exogenous variable i, both its forecast value EX_i(t) and its past values EX_i(t−1), …, EX_i(t−N_i) (such as the temperature forecast and past temperatures) are considered as input data in (1). The choice of these exogenous variables depends on engineering judgment and the availability of data. For instance, while residential customers usually have high sensitivity to weather conditions (such as temperature), industrial loads are not so sensitive to weather parameters [13]. A discussion about this matter can be found in [2]. In (1), N_L and N_1 to N_P indicate the order of back shift for the load L and the P exogenous variables EX_1 to EX_P, respectively. From a data mining viewpoint, these orders should be considered high enough so that no useful information is missed. In [4], considering the short-run trend and the daily and weekly periodicity characteristics of hourly load time series, at least N_L = N_1 = … = N_P = 200 has been proposed. However, this results in too large a set of inputs S(t) in (1), which cannot be directly applied to a forecast engine. Moreover, this large set may include ineffective inputs, which complicate the construction of the input/output mapping function of the STLF (i.e., the mapping function S(t)→L(t)) for the forecast engine and degrade its performance. Thus, the set of inputs S(t) should be refined by a feature selection technique such that a minimum subset of the most informative inputs is selected and the other unimportant features are filtered out. For this purpose, the two-stage feature selection technique proposed in our previous work [16] is used here. This feature selection technique is based on the information-theoretic criterion of mutual information and can evaluate both the relevancy of each input to the output and the redundant information among inputs.
The preprocessor, by means of the feature selection technique, selects a subset of the most relevant and non-redundant inputs among S(t). Details of this feature selection technique can be found in [16]. The inputs selected by the preprocessor are given to the proposed hybrid forecast engine (Figure 1).
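A minimal sketch of the linear [0,1] normalization and its inverse described above; the variable names and example values are illustrative.

```python
import numpy as np

def normalize(x, x_min, x_max):
    """Map x linearly from [x_min, x_max] to [0, 1] to avoid the masking effect."""
    return (np.asarray(x, dtype=float) - x_min) / (x_max - x_min)

def denormalize(x_norm, x_min, x_max):
    """Inverse transform: map a normalized forecast back to the actual range."""
    return np.asarray(x_norm, dtype=float) * (x_max - x_min) + x_min

# Example: hourly loads (MW) and temperatures (degrees C) are brought to the
# same [0, 1] range before being fed to the forecast engine.
loads = [13250.0, 14890.0, 16010.0]
loads_n = normalize(loads, x_min=12000.0, x_max=17000.0)
print(loads_n)                                   # values in [0, 1]
print(denormalize(loads_n, 12000.0, 17000.0))    # back to MW
```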
The proposed forecast engine is a multi-layer perceptron (MLP) neural network trained by a new stochastic search technique, i.e., a modified harmony search (MHS) algorithm. In the following, the original harmony search (HS) algorithm is first presented, then the proposed MHS is introduced and, finally, its application to training of the NN-based forecast engine is described.
HS is an efficient population-based stochastic search method, which was first developed by Geem et al. [17]. The idea of HS is based on mimicking the improvisation process of music players, who improvise the pitches of their instruments to obtain better harmony [18]. The harmony in music is analogous to the optimization solution, and the musicians' improvisations are similar to local and global search schemes in optimization methods [19]. Thus, HS has good potential to be used as an optimization technique. Since its inception, HS has successfully been applied to several practical optimization problems [18-20]. However, to the best of our knowledge, no research work has so far formulated the application of HS (or any modified version of it) to NN training. The performance of HS as an optimization method can be summarized in the form of the following step-by-step algorithm:
Step 1. Parameter setup. HS has a few user-defined parameters or set points, like other stochastic search techniques (such as the genetic algorithm, particle swarm optimization, differential evolution, etc.). HS parameters include the harmony memory size (HMS), harmony memory considering rate (HMCR), pitch adjusting rate (PAR), number of improvisations (NI), and bandwidth (BW). As in other stochastic search techniques, the values of the HS parameters should first be set by the user before executing the HS.
Step 2. Initialization of harmony memory. Consider an optimization problem with ND decision variables x_1, …, x_ND. To solve this optimization problem, the harmony memory (HM) of HS is a matrix whose rows are the harmony vectors of the population:

HM = [x_1^1 … x_ND^1; x_1^2 … x_ND^2; … ; x_1^HMS … x_ND^HMS]    (4)

Each individual of the HS population is called a harmony vector (HV), which includes the decision variables x_1, …, x_ND. HMS indicates the number of harmony vectors in the HS population. In (4), the superscript of each HV represents its number, from 1 to HMS. In other words, the rows of the HM matrix are the individuals or harmony vectors of HS. To initialize the HM, the decision variables of each of its HVs are randomly initialized within their allowable limits. Then, the value of the objective function, denoted by OF(.), is computed for each HV. Moreover, an improvisation counter IC is set to zero. For training of the MLP neural network (the forecast engine), the decision variables x_1, …, x_ND are the weights of the MLP. Also, the objective function OF(.) is the error function of the training phase of the MLP neural network, which should be minimized. This error function will be introduced later.
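Steps 1-2 can be sketched as follows. The parameter values, the weight limits and the placeholder objective are illustrative assumptions; in the actual method, OF(.) is the validation error of the MLP introduced later.

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1: HS/MHS set points - illustrative values, not those used in the paper.
HMS, HMCR, PAR, NI = 30, 0.9, 0.3, 5000
ND = 210                        # number of MLP weights (cf. the 20-10-1 example later)
X_MIN, X_MAX = -1.0, 1.0        # assumed allowable range for each weight

def objective(hv):
    """Placeholder for OF(.): the validation error of the MLP whose weights are hv."""
    return float(np.sum(hv ** 2))   # stand-in so the sketch runs on its own

# Step 2: the harmony memory HM is an (HMS x ND) matrix; each row is one harmony
# vector, randomly initialized within the allowable limits, and OF(.) is
# evaluated for every row.
HM = rng.uniform(X_MIN, X_MAX, size=(HMS, ND))
OF = np.array([objective(hv) for hv in HM])
improvisation_counter = 0
print(HM.shape, OF.min())
```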
Step 3. Improvisation of a new harmony. To produce a new harmony vector, each of its decision variables is generated according to one of the following cases.
Case 1. Memory consideration without pitch adjustment: the values already stored for this decision variable in the HM are considered, and one of them is selected among them. As described in Step 1, HMCR and PAR are two user-defined parameters of HS in the interval [0,1]. Thus, the probability of this case is HMCR × (1 − PAR).
Case 2. Memory consideration with pitch adjustment:
Moreover, pitch adjustment is also executed in this case as follows: where Rand3 is a random number uniformly distributed in the range of [− Here, the BW parameter is adaptively fine-tuned along the improvisations of HS.In each improvisation, BW is set to the standard deviation of HS population in the HM, as proposed in [18].To convert this HS to the proposed modified HS (MHS), its improvisation and HM updating mechanisms (Steps 3 and 4 of the above algorithm) are improved.For this improvement, two points deserve more attention.Although HS can efficiently use the information content of its harmony vectors, the gradient of HVs is not considered in the improvisation, while the gradient information is usually effective in the search process illustrating further useful search directions in the solution space.Moreover, despite the high diversity of HS search process, it does not focus on the promising areas of the solution space.However, an effective stochastic search technique should also be able to concentrate on promising areas of the solution space and enhance the quality of the potential solution in the promising region.Considering these two points, a new mutation operator is added to the improvisation (Step 3) of HS.Suppose that the output of the mutation operator is shown by where best i x is the ith element (decision variable) of the best HV, owning the lowest OF(.) value, of the current HM; x indicate the ith element of two randomly selected harmony vectors j and k of the current HM, respectively; β is a scaling factor controlling the effect of the differential variation.The mutation operation of (6) has been inspired from the mutation operation of differential evolution (DE) algorithm [21].However, the random individuals j and k of DE mutation operation are selected one time for all elements of the new individual.On the other hand, in the mutation operation of (6), the random harmony vectors j and k are separately selected for each element
HV
. In this way, the proposed mutation operation can benefit from higher diversity in its search process and is also more compatible with the improvisation of HS that separately generates each element HV is compared with the newly found worst HV and replaces it provided that new m HV has a lower OF(.)value than it.The proposed MHS saves the positive characteristics of the original HS (such as its high exploration capability to search different areas of the solution space), since the new operator of the MHS is performed in addition to the improvisation of HS.At the same time, the MHS can remedy the two problems of HS.The proposed mutation operation of (6), by computation of difference between two randomly chosen harmony vectors from the HM, determines a function gradient in a given area (not in a single point), and so can effectively employ the gradient information in its search process.Moreover, this mutation operation searches around the best harmony vector of the HM ( best i x , 1 ≤ i ≤ ND) in each iteration.If a promising area is found along the iterations, it is represented by the best HV of the HM and thus the MHS can search this area and find good solutions within it.
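A minimal sketch of one MHS improvisation step is given below. It reuses HM, OF_values, objective, the bounds, and the random generator from the previous snippet; the HMCR, PAR, and β values are illustrative assumptions, not the authors' settings, and the mutation follows the DE-inspired form of (6) as reconstructed above.

```python
import numpy as np

HMCR, PAR, BETA = 0.9, 0.3, 0.5   # assumed illustrative parameter values

def improvise(HM, rng, lower, upper):
    """Standard HS improvisation: memory consideration, pitch adjustment, random selection."""
    hms, nd = HM.shape
    bw = HM.std(axis=0)                                  # adaptive bandwidth per variable
    new_hv = np.empty(nd)
    for i in range(nd):
        if rng.random() < HMCR:
            new_hv[i] = HM[rng.integers(hms), i]         # memory consideration
            if rng.random() < PAR:                       # pitch adjustment
                new_hv[i] += rng.uniform(-1.0, 1.0) * bw[i]
        else:
            new_hv[i] = rng.uniform(lower, upper)        # random selection
    return np.clip(new_hv, lower, upper)

def mutate(HM, OF_values, rng):
    """DE-inspired mutation of (6): search around the best HV, picking j and k per element."""
    hms, nd = HM.shape
    best = HM[np.argmin(OF_values)]
    j = rng.integers(hms, size=nd)
    k = rng.integers(hms, size=nd)
    cols = np.arange(nd)
    return best + BETA * (HM[j, cols] - HM[k, cols])

def update_hm(HM, OF_values, candidate, objective):
    """Replace the current worst HV if the candidate has a lower OF(.) value."""
    worst = np.argmax(OF_values)
    of_cand = objective(candidate)
    if of_cand < OF_values[worst]:
        HM[worst], OF_values[worst] = candidate, of_cand

# One MHS improvisation: both vectors are generated, then used in turn to update the HM.
for candidate in (improvise(HM, rng, LOWER, UPPER), mutate(HM, OF_values, rng)):
    update_hm(HM, OF_values, candidate, objective)
```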
After introducing the proposed MHS, its application to the training of the NN-based forecast engine is presented. For this purpose, as previously described, the decision variables x_1, ..., x_ND of (4) are the weights of the MLP neural network. For instance, if the MLP has 20 neurons in the input layer (corresponding to 20 inputs selected by the preprocessor), 10 neurons in the hidden layer, and one neuron in the output layer, it will have 20 × 10 + 10 × 1 = 210 weights. These 210 weights are considered as x_1, ..., x_ND of the MHS (ND = 210). We should also determine the OF(.) of the MHS, i.e., the error function of the MLP neural network. To train an MLP neural network, the error function can be chosen as the training error or the validation error. Here, the validation error is selected as the error function of the MLP, since it better evaluates the generalization performance of the NN (generalization is a measure of how well the NN performs on the actual problem once training is complete) [22,23].
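As an illustration of how a flat decision vector can be mapped onto the MLP weights and scored by a validation error, consider the following sketch. The 20-10-1 architecture mirrors the example above; the absence of bias terms, the tanh activation, and the use of mean squared error as the validation measure are assumptions made for this example only.

```python
import numpy as np

N_IN, N_HID, N_OUT = 20, 10, 1   # assumed architecture: 20 x 10 + 10 x 1 = 210 weights

def mlp_forecast(weights, X):
    """Forward pass of a bias-free 20-10-1 MLP with tanh hidden units."""
    W1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    W2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(X @ W1) @ W2

def validation_error(weights, X_val, y_val):
    """Candidate OF(.) for the MHS: mean squared error on a held-out validation set."""
    pred = mlp_forecast(np.asarray(weights, float), X_val).ravel()
    return float(np.mean((pred - np.asarray(y_val, float)) ** 2))
```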
Finally, the performance of the whole proposed STLF strategy, shown in Figure 1, can be summarized as the following stepwise procedure.
Step 1. The preprocessor normalizes the input data and selects the most informative inputs for the STLF. Each input x is separately normalized to the range [0,1] based on its own minimum and maximum values, x_n = (x − x_min)/(x_max − x_min); the normalized variable x_n can easily be returned to the original range [x_min, x_max] by the inverse transform x = x_min + x_n × (x_max − x_min).
Step 2. Using the selected inputs of the preprocessor, the hybrid forecast engine is trained by the proposed MHS. The decision variables x_1, ..., x_ND of the final solution of the MHS (the best HV of the last iteration) are taken as the weights of the NN-based forecast engine.
Step 3. After training the MLP neural network and determining its weights, it is ready to forecast the future hourly loads. The MLP has one neuron in its output layer for predicting the load of the next time interval, i.e., L(t) (Figure 1). Multi-period STLF (e.g., prediction of the load for the next 24 hours) is achieved via recursion, i.e., by feeding the input variables with the forecaster's outputs. For instance, the predicted load for the first hour is used as L(t−1) for the load forecast of the second hour, provided that L(t−1) is among the selected inputs of the forecast engine.
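The recursive multi-period forecast of Step 3 can be sketched as follows; it reuses the mlp_forecast function from the earlier sketch, and the feature layout (the first feature being L(t−1), the remaining 19 coming from known or forecast exogenous variables) is an assumption made purely for illustration.

```python
import numpy as np

def recursive_forecast(weights, history, exogenous, horizon=24):
    """
    Predict `horizon` hourly loads, feeding each prediction back as the lagged
    load input of the next step. `history` holds past hourly loads; `exogenous`
    is assumed to provide 19 known/forecast features per future hour.
    """
    loads = list(history)
    predictions = []
    for h in range(horizon):
        features = np.concatenate(([loads[-1]], exogenous[h]))   # L(t-1) plus exogenous inputs
        l_next = float(mlp_forecast(weights, features[None, :]))
        predictions.append(l_next)
        loads.append(l_next)                                     # recursion: forecast becomes L(t-1)
    return predictions
```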
Numerical Results
As in any research area, in STLF it is important to allow the reproduction of one's results, and the only way of doing that is to use public-domain data sets. Two real-life STLF test cases are considered in this paper to evaluate the performance of the proposed forecast strategy. The first STLF test case is related to the Pennsylvania-New Jersey-Maryland (PJM) power system, a well-established electricity market in the U.S. The employed data for load and weather parameters (including temperature and humidity) of this test case are publicly available and were obtained from the websites [24,25]. The set of inputs S(t), shown in (1), is constructed from the historical values of load and the historical and forecast values of the two exogenous variables. Then, the proposed STLF strategy, including the preprocessor and hybrid forecast engine, is executed on the constructed data following the step-by-step procedure of the previous section. Its results for day-ahead STLF of the PJM test case are shown in Table 1 and compared with the results of five other well-known forecast methods: multi-variate ARMA (Auto-Regressive Moving Average) time series, RBF (Radial Basis Function) neural network, MLP neural network trained by the BR (Bayesian Regularization) learning algorithm, MLP neural network trained by the BFGS (Broyden-Fletcher-Goldfarb-Shanno) learning algorithm, and MLP neural network trained by the LM (Levenberg-Marquardt) learning algorithm. The five benchmark methods of Table 1 have been frequently used in the literature for load forecasting of power systems, e.g., in [2,5,14,15,22]. The reported results in Table 1 are in terms of the well-known error criterion of MAPE (mean absolute percentage error), defined as MAPE = (100/NH) × Σ_{t=1}^{NH} |L_act(t) − L_for(t)| / L_act(t) (7), where L_act(t) and L_for(t) represent the actual and forecast values of load in hour t, respectively, and NH indicates the number of hours in the forecast horizon; here, NH = 24 for day-ahead STLF. Four test weeks corresponding to the four seasons of 2009 (the third weeks of February, May, August, and November) are considered for this numerical experiment, indicated in the first column of Table 1, so as to represent the whole year. The MAPE value for each test week, shown in Table 1, is the average of the seven MAPE values of its corresponding forecast days. Also, the average result of the four test weeks is shown in the last row of Table 1. For the sake of a fair comparison, all forecast methods of Table 1 have the same training period, comprising the 50 days prior to each forecast day. Also, all of these methods have the same set of inputs S(t) and the same preprocessor (Figure 1); thus, each forecast method is fed with the same selected inputs, since the purpose of this numerical experiment is to compare the efficiency of different forecast engines. We observe from Table 1 that the proposed hybrid forecast engine outperforms all the other forecast methods shown there: it has both the lowest average MAPE, shown in the last row of Table 1, and the lowest MAPE in each test week. The superiority of the proposed forecast engine over the multi-variate ARMA time series technique, the RBF neural network, and the MLP neural network trained by the BR, BFGS, and LM learning algorithms is attributable to its efficient training mechanism, i.e., the MHS, which can effectively search the solution space and optimally determine the values of the weights of the NN-based forecast engine, leading to its high STLF accuracy. To better illustrate this point, in Table 2 the proposed MHS is compared with five other stochastic search techniques, including simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), and HS, for training of the NN-based forecast engine. For instance, in the first benchmark method of Table 2, the proposed MHS is replaced by SA (i.e., SA trains the NN-based forecast engine instead of the MHS) and the STLF results obtained from the forecast engine are reported; the results of the other benchmark methods of Table 2 have been obtained similarly. All methods of Table 2 have the same training period, selected inputs, and four test weeks as the methods of Table 1. The reported results for each stochastic search method in Table 2 are the average results of ten trial runs with random initializations. The user-defined parameters of each method were fine-tuned by the search procedure proposed in [26], which is an efficient cross-validation technique. As seen from Table 2, the proposed MHS outperforms all the other stochastic search methods: using the MHS as the training mechanism of the NN-based forecast engine leads to the lowest MAPE in each test week, and the MHS has a considerably lower average MAPE than the other stochastic search methods (last row of Table 2). By combining the positive characteristics of HS with the new mutation operator, the proposed MHS benefits from both a high exploration capability, to avoid being trapped in local minima, and a high ability to efficiently search promising areas of the solution space. In Table 3, the STLF results of the proposed strategy are compared with the STLF results of the PJM independent system operator (ISO). Observe from this table that the STLF errors of the proposed strategy are considerably lower than those of the PJM ISO, indicating its forecast capability.
To also give a graphical view of the STLF accuracy of the proposed strategy, its results for the four test weeks are shown in Figures 2-5. From these figures, it is seen that the forecast curve accurately follows the real curve and only minor deviations appear in the predictions of the proposed strategy. These figures further illustrate the accuracy and robustness of the proposed STLF strategy.
To illustrate the performance of the proposed STLF strategy over a long run, its results for one year are shown in Table 4 and compared with the results of the PJM ISO. The reported MAPE value for each test month in this table is the average of its corresponding daily MAPE values. The better accuracy of the STLF results of the proposed strategy compared with those of the PJM ISO can also be seen from Table 4: for each test month, the proposed strategy has a considerably lower MAPE value than the STLF of the PJM ISO. Moreover, the results of the proposed STLF strategy for the 12 test months of year 2009 are close to its results for the four test weeks of year 2009; for instance, the average daily MAPE value for the whole year (1.46%) is close to the average daily MAPE of the four test weeks (1.39%). Thus, the proposed strategy produces good STLF results for one year as well. This numerical experiment further illustrates the STLF accuracy and robustness of the proposed strategy in the long run.
The hourly electricity load of New York City and the weather data observed at Central Park are considered as the second test case of this paper. The employed data for this test case were obtained from [27]. The proposed STLF strategy uses the same training period and the same cross-validation technique as in the previous numerical experiment. The obtained day-ahead STLF results for this test case are shown in Table 5. The error criterion of MAE (mean absolute error) is considered in addition to MAPE in this numerical experiment; it is defined as MAE = (1/NH) × Σ_{t=1}^{NH} |L_act(t) − L_for(t)| (8), where L_act(t), L_for(t), and NH are as defined for (7). The MAE and MAPE results of the New York ISO [27], the support vector machine (SVM) [3], the hybrid network (composed of a Self-Organized Map (SOM) for data clustering and groups of 24 SVMs for next-day load forecasting) [3], and the wavelet transform combined with a neuro-evolutionary algorithm [22] for this test case are also reported in Table 5 for comparison; the results of these benchmark methods have been quoted from their respective references. The same test period (January 2004 and July 2004) and the same error criteria (MAE and MAPE) of these references are also adopted for the proposed STLF strategy. The reported MAE and MAPE values for each test month in Table 5 are the averages of the corresponding daily MAE and MAPE values, respectively. Observe that the proposed strategy has the lowest MAE and the lowest MAPE among all forecast methods of Table 5 in both test months; this comparison illustrates the effectiveness of the proposed STLF strategy. Its results for the two test months are also graphically shown in Figures 6 and 7, respectively. As seen from these figures, the forecast curve is so close to the real curve that for most of the time the two curves cannot be discriminated, and the error curve has small values. This numerical experiment further validates the efficiency of the proposed hybrid forecast engine.
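The two error criteria used above can be computed directly from the hourly actual and forecast load series; a short sketch (assuming equal-length arrays over the forecast horizon, with strictly positive actual loads) is:

```python
import numpy as np

def mape(l_act, l_for):
    """Mean absolute percentage error over the forecast horizon, in percent."""
    l_act, l_for = np.asarray(l_act, float), np.asarray(l_for, float)
    return 100.0 * float(np.mean(np.abs(l_act - l_for) / l_act))

def mae(l_act, l_for):
    """Mean absolute error over the forecast horizon (same unit as the load, e.g., MW)."""
    l_act, l_for = np.asarray(l_act, float), np.asarray(l_for, float)
    return float(np.mean(np.abs(l_act - l_for)))
```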
Although the load patterns of public holidays usually differ somewhat from those of normal days, we treated these days like normal days in this work, and good STLF results were obtained for both test cases. However, if in a power system the STLF accuracy for public holidays is not satisfactory, they can be treated separately with specific schemes; more details about this matter can be found in [3,6].
The total computation time required for the setup of the proposed STLF strategy, including execution of the preprocessor, training of the hybrid forecast engine by the MHS, and fine-tuning of the user-defined parameters by the search procedure, is about 20 minutes for the test cases of this paper, measured on a simple hardware set (Pentium P4, 3.2 GHz, 4 GB RAM). This setup time is entirely reasonable within a day-ahead decision-making framework. The computer code of the proposed STLF strategy has been written in the MATLAB software package, version 7.8.0.347.
Conclusions
In this paper, a new STLF strategy composed of a preprocessor and a novel hybrid forecast engine is proposed. The preprocessor performs normalization and feature selection tasks. The hybrid forecast engine is a NN-based predictor equipped with a new learning algorithm, the MHS. The proposed MHS has both high local and global search abilities and can optimally determine the weights of the NN so as to minimize its validation error. Unlike traditional learning algorithms for neural networks (such as gradient-based techniques), which search the solution space in a specific direction, the MHS can widely search the solution space in various directions, thus avoiding being trapped in local minima. Based on the MHS, the proposed hybrid forecast engine can efficiently learn the input/output mapping function of the forecast process and predict the future values of the forecast feature (here, the hourly load) with high accuracy and robustness. The effectiveness of the proposed STLF strategy is extensively illustrated on two real-world test cases. Hybridization of the proposed forecast engine with other stochastic search techniques and the development of more efficient feature selection techniques will be considered in future research.
Figure 1. Structure of the proposed STLF strategy including the preprocessor and hybrid forecast engine.
Figure 2. Curves of real values, forecast values and forecast errors for the first test week of Table 1 (the forecast results are related to the proposed strategy).
Figure 3. Curves of real values, forecast values and forecast errors for the second test week of Table 1 (the forecast results are related to the proposed strategy).
Figure 4. Curves of real values, forecast values and forecast errors for the third test week of Table 1 (the forecast results are related to the proposed strategy).
Figure 5. Curves of real values, forecast values and forecast errors for the fourth test week of Table 1 (the forecast results are related to the proposed strategy).
Figure 6. Curves of real values, forecast values and forecast errors for the test month of January 2004 of Table 5 (the forecast results are related to the proposed strategy).
Figure 7. Curves of real values, forecast values and forecast errors for the test month of July 2004 of Table 5 (the forecast results are related to the proposed strategy).
Table 1. Comparison of the proposed hybrid forecast engine with five other prediction methods for day-ahead STLF of the PJM test case in the four test weeks of year 2009 (the reported results are in terms of the MAPE criterion).
Table 2. Comparison of the proposed MHS with five other stochastic search techniques for day-ahead STLF of the PJM test case in the four test weeks of year 2009 (the reported results are in terms of the MAPE criterion).
Table 3. Comparison of the STLF results of the proposed strategy with the STLF results of the PJM ISO for the four test weeks of year 2009 (the reported results are in terms of the MAPE criterion).
Table 4. Comparison of the STLF results of the proposed strategy with the STLF results of the PJM ISO for the 12 months of year 2009 (the reported results are in terms of the MAPE criterion).
Table 5. Day-ahead STLF results of New York City for the two test months of January 2004 and July 2004.
|
v3-fos-license
|
2020-09-19T13:05:59.476Z
|
2020-09-01T00:00:00.000
|
221789777
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/nano10091846",
"pdf_hash": "b7dc1be35b4e4e6643c070b06ad93366d9b6b751",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2956",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"sha1": "27bcb75e799bc3d16a44d4a736571b04bbb32837",
"year": 2020
}
|
pes2o/s2orc
|
Synthesis and Characterization of Graphene Oxide and Reduced Graphene Oxide Composites with Inorganic Nanoparticles for Biomedical Applications
Graphene oxide (GO) and reduced graphene oxide (RGO), due to their large active surface areas, can serve as a platform for the adhesion of biologically relevant molecules and particles (both organic and inorganic). In this work we describe methods of preparing composites consisting of GO or RGO and inorganic nanoparticles with specified biological properties: nano-Ag, nano-Au, nano-TiO2 and nano-Ag2O. The idea of this work was to introduce effective methods of producing these composites for future biomedical applications such as antibiotics, tissue regeneration, anticancer therapy, or bioimaging. To characterize the pristine graphene materials and the resulting composites, we used spectroscopic techniques (XPS and Raman), microscopic techniques (SEM with EDS, and AFM), and X-ray diffraction. We obtained volumetric composites of flake graphene with Ag, Au, Ag2O, and TiO2 nanoparticles; moreover, the Ag nanoparticles were obtained using three different approaches.
Introduction
In the last decade, the properties of graphene materials have been intensively studied in the scientific community in order to develop their potential in various biomedical applications. Graphene is a single layer (in thickness) of carbon atoms arranged in hexagonal rings of an aromatic electron structure. Graphene oxide and reduced graphene oxide are the graphene derivatives characterized by the presence of oxygen functional groups on the graphene surface (more in graphene oxide (GO) than in reduced graphene oxide (RGO)) and, therefore, by the presence of defects in the graphene hexagonal rings as well. Such disorder in the graphene structure leads to many unique properties of GO and RGO.
Due to the presence of oxygen groups bonded to the surface, GO is an electrical insulator (ideal graphene is an excellent semiconductor). GO is highly hydrophilic and therefore has great wettability. The surface of both GO and RGO is susceptible to modifications with different molecules, including biological ones, which allows composites with strictly controlled and desired properties to be synthesized. GO and RGO, due to their large active surface areas, can serve as a platform for biological molecules (both organic and inorganic) to be safely introduced into the organism without the risk of uncontrolled and undesirable spreading into the surrounding tissues. This is a great advantage, for example, in the case of anticancer therapy, where cytotoxic compounds are introduced into the tumor [1][2][3][4][5].
In many biomedical applications, it is crucial to use materials that are nontoxic to human cells; in terms of this requirement, GO and slightly reduced GO seem to be suitable materials. The biocompatibility of these two graphene derivatives has been confirmed by many reports in the literature [6]. This aspect of the GO and RGO materials presented in this study was also previously investigated by us with human umbilical cord mesenchymal stem cells (hUC-MSCs) [7]. The overriding conclusion from that study was that none of the tested materials (GO and RGO) had a negative impact on cellular viability, proliferation, morphology, or gene expression. It was also found that cytotoxicity depends on the size of the graphene flakes and the content of the oxygen functional groups: smaller flakes (about 0.2-2 µm in lateral size) and highly reduced graphene oxide (approximately 10% of remaining oxygen atoms) do have a toxic effect. Therefore, in this work we used RGO with a higher oxygen content and GO with larger flakes, both of which were nontoxic to hUC-MSCs.
There are two main synthesis methods for graphene oxide: Hummers and Offeman [8] and Marcano and Tour [9]. The first consists of adding potassium permanganate (KMnO4) as an oxidant to the reaction mixture in the presence of concentrated sulfuric acid and NaNO3 salt. Marcano and Tour proposed an improved graphite oxidation synthesis involving the elimination of sodium nitrate from the reaction mixture, an increased amount of potassium permanganate, and carrying out the reaction in the presence of the concentrated acids H2SO4 and H3PO4 in a volume ratio of 9:1. As a result, GO with a higher content of oxygen functional groups can be obtained. On the basis of these methods, numerous modifications to the synthesis of graphene oxide have been made: partial oxidation to prepare GO with a C/O atomic ratio between 12 and 3 [10]; double oxidation leading to a high concentration of carboxyl groups [11]; the use of different concentrations of nitric acid (from 50% to 98%), leading to GO structures with C/O atomic ratios between 6.5 and 2.8 [12]; and fast, 1-h oxidation [13].
In this work, we used Ag, Ag 2 O, Au, and TiO 2 nanoparticles to create composites with GO and RGO. Silver nanoparticles (AgNPs) were chosen because of their antibacterial properties-AgNPs have been shown to be effective against gram-positive and gram-negative bacteria, as well as fungi and viruses. Currently, nanosilver is used in many fields of industry, most widely in the production of dressings for accelerated wound healing and medical implants, as well as in cosmetics, dentistry, water filtration, or textile production. The mechanism of its action is based on its ability to interact with the thiol groups of bacterial cell walls and to disrupt the cell membrane. This leads to the denaturation and inactivation of enzymatic proteins that are a part of the respiratory chain, the formation of reactive oxygen species (ROS) and, consequently, the appearance of oxidative stress of the cell [14].
Gold nanoparticles (AuNPs) are widely used in such areas of medicine as diagnostics, cancer treatment and drug delivery systems. The ability of nanogold to enhance light scattering and absorption due to surface plasmon resonance is used in cancer diagnosis and therapy. Binding of nanogold to ligands allows the specific targeting of biomarkers for imaging and cancer detection. In addition, it can convert absorbed light into localized heat energy, which can be used in laser photothermal therapy. In addition, it has been shown that AuNPs can have a positive effect on the differentiation (osteogenesis) of hMSCs. The influence of AuNPs' shape and size on the survival, proliferation, and expression of osteogenic gene markers has been demonstrated [15][16][17].
Nano-titanium oxide, in addition to its antibacterial properties which have been used in decontamination preparations, is widely used in tissue engineering as part of biocomposites and surface coatings. The topography of titanium's surface plays a very important role in biomedical applications. As the rough and porous morphology of Ti imitates native bone architecture, it enhances osteoblasts adhesion, maturation and bone formation. The distribution of charge and the surface chemistry of titanium materials are also important and can be one of the key factors inducing stem cells differentiation to osteoblasts. Much work has focused on the use of nano-TiO 2 as a photosensitizer in the treatment of cancer due to its high photocatalytic activity, low toxicity, and high photostability [18,19].
Literature reports indicate the possibility of using nano-Ag 2 O in the treatment of venous ulceration. Silver oxide used as an ingredient in dressing ointment resulted in improved microcirculation and wound healing [20]. Considering the possible anti-inflammatory properties and accelerated wound healing, we also decided to prepare a nano-Ag 2 O composite with flake graphene as a potential material in tissue engineering.
The other approaches related to the fabrication of the antibacterial composites based on the graphene oxide contained the addition of graphitic carbon nitride (GO/g-C 3 N 4 ) [21], zinc oxide (ZnO) [22] and Ag/Cu bimetallic nanoparticles (NPs) [23]. Other kinds of GO composites are those with the anti-inflammatory properties. They were formed using Fe 2 (MoO 4 ) 3 nanorods [24], aerogel (GA)-supported metal-organic framework (MOF) particles [25] and polyoxotungstate [26]. There is also another great research area that uses graphene composites: regenerative tissue engineering. The examples can be calcium silicate-graphene composites [27] and graphene oxide-calcium phosphate nanocomposites [28] for osteogenic and angiogenic differentiation of human mesenchymal stem cells. For neural tissue engineering, RGO is commonly used due to its electrical properties and one of the described materials is RGO/TiO 2 for photo stimulation of neural stem cells [29].
Knowing that both morphology and chemical composition of composites for potential biomedical applications may result in different properties of the obtained materials, we decided to perform a detailed and systematic study addressing this issue. For this purpose, this manuscript is focused exclusively on the material aspect: the synthesis and characterization of graphene-nanoparticles composites. This work, for the first time, provides a comprehensive study concerning both GO and RGO composites with various types of inorganic nanoparticles of a specified biological activity. We plan to perform and discuss the biological assessment of the aforementioned composite materials in the future.
The morphology of the materials was examined by scanning electron microscopy SEM (Auriga CrossBeam Workstation, Carl Zeiss) and atomic force microscopy AFM (Dimension Icon, Bruker; with tapping mode and OTESPA R3 scanning probe, Bruker). The chemical structure was studied with Raman Spectroscopy (Renishaw Invia, excitation laser source: 532 nm; laser power: < 1 mW), XPS Spectroscopy (UHV Multichamber XPS System, Prevac; with Al Kα X-ray source (1486.6 eV)) and X-ray diffraction XRD (Rigaku Diffractometer, Japan; with Cu Kα anode of 8.038 keV, U = 40 kV, I = 30 mA, scanning speed: 2 deg/min, sampling density: 0.02 deg). For XRD, Raman and XPS measurements, the samples were prepared in the form of powders. AFM, SEM and EDS measurements were conducted on layers of the samples placed on a silicon substrate, without sputtering.
GO Preparation
The GO was prepared by the modified Marcano method [9]. In brief, 3 g of graphite flakes (with an average size of 300-425 µm) were added gradually to a reactor containing 360 mL of concentrated sulphuric acid and 40 mL of orthophosphoric acid. After that, 18 g of potassium permanganate were slowly added to the graphite. The oxidation process was conducted for a few hours and was stopped by the addition of deionized water and, finally, 3 mL of perhydrol (30% H2O2; Chempur, pure p.a.). The water suspension of the graphite oxide obtained in this way was left to sediment. The purification was carried out with a custom-made microfiltration system; due to the specific shearing forces acting on the GO flakes during purification, the flakes were exfoliated at the same time.
The chemical formula of GO was assumed as "C 2 O", where there is one oxygen atom per two carbon atoms, which gives a molar mass of 40 g/mol. Such structure is in agreement with the information obtained from the XPS measurement and with the literature as well [30].
RGO Preparation
The RGO was prepared via a "green" reduction process using L(+)-ascorbic acid (C6H8O6) as the reducing agent. An L-ascorbic acid solution was added to the previously prepared GO water suspension at a GO:L-ascorbic acid molar ratio of 1:4. The mixture was reduced for 3.5 h at 95 °C with constant stirring. The GO molar mass was assumed to be 40 g/mol with respect to the C2O chemical formula. The prepared RGO was then filtered with ultrapure water to remove the remaining ions. Gao et al. (2010) postulated that the reduction mechanism proceeds via a two-step SN2 nucleophilic reaction followed by a one-step thermal elimination [31].
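As a worked illustration of the 1:4 molar ratio (using the assumed C2O unit with M = 40 g/mol for GO and the standard molar mass of L-ascorbic acid; the batch size is an arbitrary example, not a value from this work), the required reducing-agent mass can be computed as follows:

```python
# Hypothetical example batch; only the molar masses and the 1:4 ratio come from the text.
M_GO = 40.0          # g/mol, assumed "C2O" unit of graphene oxide
M_ASCORBIC = 176.12  # g/mol, L-ascorbic acid

go_mass = 1.0                      # g of GO in the suspension (example value)
go_moles = go_mass / M_GO          # 0.025 mol
ascorbic_moles = 4 * go_moles      # 1:4 GO:ascorbic acid molar ratio
ascorbic_mass = ascorbic_moles * M_ASCORBIC
print(f"{ascorbic_mass:.1f} g of L-ascorbic acid per {go_mass} g of GO")  # ~17.6 g
```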
Composites with Nano-Ag[KA]
To prepare the composite of GO with nano-Ag obtained with L-ascorbic acid (GO Ag[KA]), a silver nitrate (V) solution was added to the aqueous suspension of GO while stirring and after a few minutes an aqueous solution of L-ascorbic acid was added dropwise. The reduction mechanism is shown in Figure 1. The molar ratio of GO to silver nitrate (V) was 1:0.08, while the molar ratio of silver nitrate (V) to L-ascorbic acid was equal to 1:2. Adding the compounds in this order resulted in a homogeneous coating of graphene oxide with silver particles. The reaction was carried out at room temperature to ensure that the reduction would involve silver ions only, without disturbing the graphene oxide structure. The mixture was left for 24 h with vigorous stirring and was then dialyzed for 72 h to remove the residual ions.
The RGO composite with Ag[KA] (RGO-Ag[KA]) was prepared using the above recipe for GO-Ag[KA], followed by the addition of L-ascorbic acid to finally reduce the GO. The reaction was performed for 3.5 h at 95 • C. The GO: L-ascorbic acid molar ratio was 1:4. After the reaction was completed, the material was purified by dialysis. The two-step process was carried out to obtain a good distribution of silver nanoparticles on RGO flakes. Because GO flakes are well dispersed, firstly, nano-Ag was precipitated on these non-agglomerated flakes. After that, the GO reduction causing also partial agglomeration of the flakes was conducted (with the use of ascorbic acid and the higher temperature) without negative influence on the distribution of the nanoparticles on RGO flakes.
Composites with Nano-Ag[BS]
A reaction between silver nitrate and sodium borohydride in a water solution was described by Sobczak-Kupiec et al. (2011) [32]; in brief, the borohydride reduces silver ions to metallic silver. To obtain the GO-Ag[BS] composite, silver nitrate(V) was added successively to the aqueous GO suspension, followed by sodium borohydride. The molar ratio of GO (M = 40 g/mol) to silver was 1:0.08, and the molar ratio of silver nitrate(V) to sodium borohydride was 1:2. NaBH4 creates a reductive and alkaline environment, causing deprotonation of the carboxylic groups, which renders the zeta potential more negative and therefore improves the stability of the GO flakes [33]. However, salt type and ionic strength have a significant effect on GO stability: here, the presence of Na+ ions could compensate for the effect of the decrease in zeta potential, finally leading to flake agglomeration. The hydrodynamic dimension depends on the type of ions present in the GO sample and is larger for multivalent ions than for monovalent ones; it also depends on the concentration of these ions [34].
The reaction was continued for 24 h at a room temperature with continuous stirring. The material was then dialyzed for 72 h to remove the remaining ions.
The RGO-Ag[BS] composite was obtained through the preparation of the GO-Ag[BS] composite followed by the addition of L-ascorbic acid to reduce the GO. The reaction was carried out for 3.5 h at 95 • C. The GO: L-ascorbic acid molar ratio was 1:4. After the reaction was completed, the material was dialyzed for 72 h.
Composites with Nano-Ag[PP60]
To obtain the GO-Ag[PP60] composite, an aqueous AgNO 3 solution was added to the aqueous suspension of GO at a molar ratio of 1:0.08. Then, the polyphenol PP60 solution was added dropwise in a 1:1 molar ratio with AgNO 3 . The reagents were vigorously stirred on a magnetic stirrer for 24 h at room temperature. The material was purified by dialysis.
The procedure for obtaining the RGO composite with Ag[PP60] nanoparticles consisted of producing the GO-Ag[PP60] composite (as described above) and adding L-ascorbic acid to reduce the GO. The reaction was carried out for 3.5 h at 95 • C. The GO:ascorbic acid molar ratio was 1:4. After the reaction was completed, the material was dialyzed.
Composites with Nano-Au
To prepare the GO-Au composite, an aqueous solution of HAuCl 4 ·3H 2 O was added dropwise to the GO water suspension at a GO:HAuCl 4 molar ratio of 1:0.08. The mixture was stirred on a magnetic stirrer for 30 min at room temperature (to obtain a homogeneous mixture of GO flakes and Au 3+ ions), after which the temperature was increased to 80 • C to provide the conditions needed for Au 3+ reduction. Then, an aqueous solution of sodium citrate (3-hydrated) was added dropwise at a molar ratio with HAuCl 4 of 1:0.17. The reaction continued for 1 h at 80 • C while being stirred. A composite with a purple glow was formed, proving the formation of gold nanoparticles. The material was dialyzed to remove the remaining ions. The mild reaction conditions did not reduce the GO, only Au 3+ to Au 0 .
To prepare the RGO-Au composite, L-ascorbic acid was added to the previously prepared GO-Au composite. The reduction was carried out on a magnetic stirrer for 3.5 h at 95 • C and purified by dialysis.
Composites with Nano-Ag 2 O
The GO-Ag 2 O composite was obtained by adding silver nitrate (V) solution followed by sodium hydroxide solution to the aqueous GO suspension. This order of adding compounds was used because by adding NaOH first would cause GO agglomerates to form (in the highly alkaline environment) and the Ag nanoparticles to be poorly distributed on the flakes. The addition of AgNO 3 to GO before NaOH provides less alkaline conditions due to the presence of strong acid salt. The molar ratio of GO to silver nitrate (V) was 1:0.08 and the molar ratio of silver nitrate (V) to sodium hydroxide was 1:1. The reaction was carried out at room temperature for 24 h with continuous stirring. The resulting material formed a stable aqueous suspension that was dialyzed to remove residual ions.
The RGO-Ag 2 O composite was produced by reducing GO with L-ascorbic acid as described in Section 2.2. An aqueous solution of AgNO 3 and NaOH was then added to RGO to precipitate Ag 2 O nanoparticles. The reactions were carried out at room temperature for 24 h followed by purification with dialysis.
Composites with Nano-TiO 2
To obtain the GO-TiO 2 composite, titanium isopropoxide was added dropwise to a small amount of absolute ethanol and then added slowly to the aqueous GO suspension under vigorous stirring. During the addition of GO, a white TiO 2 precipitate formed. Proper amounts of the compounds were used to obtain a molar GO:TiO 2 ratio of 1:0.08.
The RGO-TiO 2 composite was prepared by reducing the GO with L-ascorbic acid and purifying it by dialysis. The appropriate amount of titanium isopropoxide (standard 1:0.08 molar ratio) was then added dropwise to a small amount of absolute ethanol, after which it was added slowly to the RGO suspension under vigorous stirring. The sample was stirred for 24 h in order to precipitate out TiO 2 particles.
SEM and AFM Visualization
The SEM images (Figure 1) show the morphology and sizes of the resulting GO and RGO flakes, which were approximately 10-20 µm and a few micrometers, respectively. The flakes form thin "curtains" that wrinkle if they are large enough. The SEM images indicate the formation of small agglomerates of RGO, which is not observed for GO flakes. This is the result of the smaller number of oxygen functional groups on the RGO surface. These oxygen groups are responsible for stabilizing the material, both by forming hydrogen bonds between groups of adjacent flakes (protonated form) and through repulsive interactions between charged flakes. In RGO only a small number of such groups remain, so this stabilization does not occur as it does in GO. The π-π stacking (π-π interaction of the aromatic structure) of RGO flakes can also be responsible for their agglomeration. It is difficult to estimate the size of RGO flakes because they agglomerate and overlap.
The working principle of AFM, based on mechanical contact between the microscope's scanning probe and the tested surface, allowed quantification of the average thickness of the GO and RGO flakes. The AFM was operated in tapping mode, which enabled the acquisition of high-resolution topographical images from which the flake thickness was estimated. The average GO flake thickness was estimated at 1.2 ± 0.2 nm. Knowing that the thickness of pure graphene (exfoliated, CVD) is between 0.4-1.7 nm [35], and taking into account the presence of functional groups in the GO structure (which contribute to the total thickness), we can conclude that the GO we produced was characterized by almost complete exfoliation into stacks composed of one to three flakes. The measured thickness of the RGO material was in the range of 20-180 nm (Figure 1); the reduced graphene oxide was made from the GO whose thickness was determined by the AFM. It is worth underlining that, because of the strong tendency of RGO flakes to create agglomerates, it is difficult to estimate the thickness of individual flakes.
XPS Spectroscopy
The XPS survey spectra (Figure 2) and the information gathered in Table 1 describe the elemental composition of the GO and RGO samples. The most prominent difference is the oxygen content of the two materials, indicating the removal of oxygen groups and the reduction of the initial GO. The oxygen content in GO is 33%, with a C:O atomic ratio of 1.98; for RGO it is 17.8%, with a C:O atomic ratio of 4.47. The presence of other elements visible in the samples can be attributed to residual post-reaction ions. Figure 3 presents the XPS C1s spectra of the GO and RGO samples. The spectra can be deconvoluted into bands corresponding to carbon atoms with sp2 (C=C) and sp3 hybridization, as well as to carbon atoms bonded to the oxygen functional groups. Carbonate ions (CO3 2−) and defective carbon structures have also been detected. Table 2 presents detailed information on the functional groups and their percentage content in the tested samples.
The results indicate that epoxy groups were the most numerous among the functional groups on the GO surface (~25%). Hydroxyl (~13%) and ether (~17%) groups also had a large share. Carboxyl groups represented only 4% of the atoms, which is to be expected, as they are primarily found on the edges of the flakes. The spectra from the C1s region of GO and RGO display very significant differences, mainly concerning the ratio of C=C (sp 2 ) band intensity to those corresponding to oxygen groups-especially hydroxyl (C-OH), epoxy ([CC] > O), and ether (C-O-C). In RGO, as a result of the reduction, the C=C double bond structure was partially restored: from 10.22% for GO to 43.70% (4.3-fold increase). The XPS measurement shows that functional oxygen groups were less frequent in RGO than GO. Moreover, the sum of ether and epoxy groups in GO was 41.82%, while in RGO it was 10.40%. The results of XPS spectroscopy confirmed that GO flakes were successfully reduced using L-ascorbic acid.
X-ray Diffraction (XRD)
The GO and RGO powders obtained by freeze-drying were characterized by the XRD method. In Figure 4, the characteristic carbon (001) peak of GO at 2θ ≈ 9° is shown. GO shows ordered layers, with an interplanar distance (d) of 9.98 Å and an average number of layers of about 10. After GO reduction, the (001) reflection disappears and the (002) reflection appears at 2θ ≈ 24° [44,45]. This RGO band is broadened, which indicates that the carbon planes are poorly stacked. The interplanar distance (d) for RGO was 3.6 Å (an average value due to the wide (002) band), and the number of layers in a stack was 3.4. The higher d value in GO is due to the presence of numerous oxygen functional groups between the graphene layers.
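The interplanar distances quoted above follow from Bragg's law applied to the (001) and (002) peak positions. A small sketch is given below; the Cu Kα wavelength of about 1.54 Å is implied by the 8.04 keV anode stated in the methods, while the exact peak positions used here are assumed example values, not the measured ones.

```python
import numpy as np

CU_KALPHA = 1.5406  # Angstrom, approximate Cu K-alpha wavelength

def d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Interplanar distance from Bragg's law: d = lambda / (2 sin(theta))."""
    theta = np.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * np.sin(theta))

# Example peak positions close to the values discussed in the text (assumed, not measured here)
print(f"GO  (001): d = {d_spacing(8.9):.2f} A")   # roughly 9.9 A
print(f"RGO (002): d = {d_spacing(24.0):.2f} A")  # roughly 3.7 A
```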
Raman Spectroscopy
Raman spectroscopy was used to estimate the degree of GO reduction in the RGO sample and to find structural differences between them ( Figure 5). The main bands for the graphene derivatives appeared at about 1350 cm −1 (D band) and about 1600 cm −1 (G band), respectively. The G peak corresponds to the sp 2 hybridization of the carbon network and is attributed to the first order scattering from the E 2g phonon modes in the Brillouin zone. Moreover, it originates from the stretching of the sp 2 carbon pairs in both the rings and chains [46]. The D peak corresponds to the breathing mode of the aromatic rings (disorder-induced modes existing because of structural defects). Therefore, the intensity of the D peak is used to measure the degree of disorder. The positions of the D and G bands were similar for both the GO samples. The ratio of the intensities of the D and G bands (I D /I G ) determines the reduction degree of GO and provides information about the changes in the sp 2 and sp 3 hybridization domains in the carbon lattice [47]. The I D /I G values for the GO and RGO were 0.91 and 1.16, respectively. This parameter increases with the increase in the degree of disorder. A prerequisite for the presence of the D peak in the RGO is the presence of sp 2 domains, which are partially defected (broken structure of the conjugated π-π bonds). The increase in the intensity of this peak can be therefore related to the presence of a greater number of small sp 2 domains and is largely related to edge defects-the edge effect is higher for smaller flakes created during the reduction. Based on this information it can be stated that GO was slightly reduced after the reaction with ascorbic acid, which led to a partial restoration of the graphite structure.
X-Ray Diffraction (XRD)
Based on the XRD measurements (Figure 6), the average size of the Ag[KA] crystallites was 33 ± 6 nm; this is the size of the coherent regions of Ag, not of the grains themselves. A pure, single-phase Ag was obtained, with a determined lattice constant of 4.089 Å. The XRD measurements for Ag[BS] also indicate that pure, single-phase Ag was obtained; the lattice constant determined here was 4.088 Å, and the average crystallite size was 42 ± 12 nm. The average crystallite size for Ag[PP60] was 20 ± 4 nm, the smallest value obtained; this sample was also single-phase, with a lattice constant of 4.008 Å. The crystallite sizes were in each case calculated using the Scherrer method, and the h, k, l indexes marked in Figure 6 were compared with Vanaja et al. (2013) [48] and Yang et al. The XRD measurements indicate that nano-Au particles with a lattice constant of 4.079 Å were obtained; the average crystallite size calculated by the Scherrer formula was 17 nm. For the Ag2O nanoparticles, the lattice constant was 4.714 Å and the average crystallite size was about 59 nm, with a defective crystal structure. According to the XRD analysis of the nano-TiO2, an amorphous material was obtained in which traces of crystallization nuclei of anatase (A) and rutile (R) can be observed.
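For reference, a hedged sketch of the Scherrer estimate used for the crystallite sizes is shown below; the shape factor, peak width, and peak position are assumed example values chosen only to illustrate the order of magnitude, not the measured data of this work.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength=1.5406, K=0.9):
    """Crystallite size tau = K * lambda / (beta * cos(theta)), with beta (FWHM) in radians."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return K * wavelength / (beta * np.cos(theta))  # same unit as wavelength (Angstrom)

# Example: a 0.25 deg-wide Ag (111) reflection near 2-theta = 38.1 deg (illustrative values)
size_angstrom = scherrer_size(fwhm_deg=0.25, two_theta_deg=38.1)
print(f"~{size_angstrom / 10:.0f} nm")  # roughly 33 nm
```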
Raman Spectroscopy
The Raman spectra (Figure 7) for all the three of the silver samples show characteristic peaks in the range of 90 to 130 cm −1 (Ag mode), at approximately 240 cm −1 (Ag-O mode), and at approximately 1040 cm −1 (Ag mode). There is also a signal for Ag [KA] indicating the presence of carbonates, which is the result of exposing the material to atmospheric conditions [52,53]. The Raman spectrum for Ag [PP60] also indicates that the sample contains polyphenol groups, as this chemical was used as a silver ion reducer. This is demonstrated by the peaks in the range of 1350 to 1440 cm −1 , corresponding to the stretching vibrations of the COO groups and phenyl rings [54]. Based on the analysis, it can be concluded that the polyphenol present in the sample acts as an inhibitor of agglomerate formation and as a stabilizer of silver nanoparticles; this corresponds with the XRD and SEM measurements (Section 3.2.4). The Raman spectrum of the nano-Au shows peaks, which are characteristic of gold nanoparticles at 1360 and 1593 cm −1 [55]. The band in the Raman spectrum for nano-Ag 2 O at above 400 cm −1 can be attributed to the stretching vibrations (Ag-O) of the nanoparticles [56]. The Raman spectrum of the nano-TiO 2 particles indicates the presence of nuclei of the anatase (A) and rutile (R) phases; the respective bands are marked on Figure 7 [57,58].
EDS Spectroscopy
EDS measurements (Figure 8) made of each sample confirmed the chemical purity of the produced nanoparticles. The peaks which are marked in the spectra correspond to the atoms of which each material is composed. In addition to the proper atoms, there are signals from the silicon (Si) that is present in the substrate.
SEM Microscopy
To visualize the morphology of pristine nanoparticles, they were produced without the presence of GO/RGO. However, the synthesis conditions (molar ratios, time, temperature) were kept as for the respective synthesis conducted in the presence of GO/RGO.
The SEM images in Figure 9 show that ovally shaped Ag[KA] formed agglomerates about 2 µm in diameter. They consisted of smaller, nanometer-sized particles. The sizes of the roundish shaped Ag[BS] were about 200 nm; also, they did not form large agglomerates. The silver particles obtained by the reduction of silver salt with polyphenol (Ag[PP60]) were the smallest and of least uniform shape among those produced: approximately 100 nm. In addition, these nanoparticles did not form agglomerates. Ag[KA] particles deposited on GO flakes formed both small agglomerates and fine nanoparticles. The SEM images show a good distribution of silver nanoparticles on the RGO flakes, despite the appearance of agglomerates in some places. It can also be stated that the GO composite with Ag[BS] nanoparticles and a good distribution of flakes was obtained. The distribution of Ag [BS] was more homogeneous than that of Ag [KA]. The SEM images indicate that an RGO-Ag[BS] composite was obtained in which the silver nanoparticles were distributed evenly throughout the sample volume and did not form agglomerates. The SEM images also show the presence of Ag[PP60] particles on the GO flakes, which formed small agglomerates, comparable to those present in the GO-Ag [KA]. Similarly, in the RGO-Ag[PP60], Ag particles were present in the entire volume, though they formed larger agglomerates than in the case of Ag [BS]. The reason for the formation of agglomerates could be the high affinity of polyphenol molecules for graphene flakes, which meant that the polyphenol could not ensure effective stabilization of Ag nanoparticles alone, as it was observed for Ag[PP60] without the presence of flake graphene. The affinity of polyphenol to graphene flakes is due to the presence of the oxygen functional groups and carbon aromatic rings that occur in these two materials. The SEM images ( Figure 10) indicate that gold nanoparticles of small sizes-far below 100 nm-were obtained. The Au nanoparticles were evenly distributed on the GO flakes and did not form agglomerates. Very good, homogeneous distribution can also be seen on the RGO flakes. The particles are nanometer-sized, so it is difficult to determine their diameter based on SEM images.
The morphology of the nano-Ag 2 O was also examined by the SEM technique. The images indicate that the grain size of the material was 200-400 nm, but smaller dots can also be observed. The resulting SEM images of composites show that nano-Ag 2 O particles have good distribution on both the GO and the RGO flakes, creating small nanometer-sized dots and larger agglomerates, especially in the RGO. Based on the SEM images, it can be estimated that the diameter of the TiO 2 particles is well below 50 nm. The nano-TiO 2 particles created a very good distribution on the GO and RGO flakes-they did not form agglomerates and were uniformly present in the entire volume of composites.
Conclusions
In this paper we described in detail the effective methods of preparing composites consisting of graphene flakes (GO and RGO) and inorganic nanoparticles-Ag, Au, Ag 2 O, and TiO 2 -of a known biological activity. Moreover, three different routes of silver nanoparticle synthesis were compared in the creation of the desired homogenous composites with flake graphene. The forms of graphene used (graphene oxide and slightly reduced graphene oxide) are known to be nontoxic towards human cells, for example, mesenchymal stem cells, a finding that was confirmed in our previous work [7]. This manuscript provides a broad morphological and chemical description of each of the inorganic particles and composites produced in this study. Obtained results allow us to state that the described synthesis protocols enable the fabrication of GO and RGO composites, decorated with evenly distributed NPs of different origin. The idea of this work was to introduce effective methods of production of these composites that could be used for future biomedical applications such as antibiotics, tissue regeneration, anticancer therapy, or bioimaging. Our next work, connected to this paper, will concern a biological study of these composites with human mesenchymal stem cells.
|
v3-fos-license
|
2022-10-14T04:57:57.839Z
|
2023-03-22T00:00:00.000
|
252734647
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41597-023-02062-w.pdf",
"pdf_hash": "2c65a70a192ab03139e296d7b13a2c1044b3b8a8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2957",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7b60ae7530c7668c5515ce9db64a02332ac39218",
"year": 2023
}
|
pes2o/s2orc
|
Large scale crowdsourced radiotherapy segmentations across a variety of cancer anatomic sites
Clinician generated segmentation of tumor and healthy tissue regions of interest (ROIs) on medical images is crucial for radiotherapy. However, interobserver segmentation variability has long been considered a significant detriment to the implementation of high-quality and consistent radiotherapy dose delivery. This has prompted the increasing development of automated segmentation approaches. However, extant segmentation datasets typically only provide segmentations generated by a limited number of annotators with varying, and often unspecified, levels of expertise. In this data descriptor, numerous clinician annotators manually generated segmentations for ROIs on computed tomography images across a variety of cancer sites (breast, sarcoma, head and neck, gynecologic, gastrointestinal; one patient per cancer site) for the Contouring Collaborative for Consensus in Radiation Oncology challenge. In total, over 200 annotators (experts and non-experts) contributed using a standardized annotation platform (ProKnow). Subsequently, we converted Digital Imaging and Communications in Medicine data into Neuroimaging Informatics Technology Initiative format with standardized nomenclature for ease of use. In addition, we generated consensus segmentations for experts and non-experts using the Simultaneous Truth and Performance Level Estimation method. These standardized, structured, and easily accessible data are a valuable resource for systematically studying variability in segmentation applications.
Background & Summary
Since the advent of contemporary radiation delivery techniques for cancer treatment, clinician generated segmentation (also termed contouring or delineation) of target structures (e.g., primary tumors and metastatic lymph nodes) and organs at risk (e.g., healthy tissues whose irradiation could lead to damage and/or side effects) on medical images has become a necessity in the radiotherapy workflow 1 . These segmentations are typically provided by trained medical professionals, such as radiation oncologists. While segmentations can be performed on any imaging modality that provides sufficient discriminative capabilities to visualize regions of interest (ROIs), the current radiotherapy workflow prioritizes the use of computed tomography (CT) for ROI segmentation due to its ubiquitous nature and use in radiotherapy dose calculations. Subsequently, clinicians spend a large fraction of their time and effort generating ROI segmentations on CT imaging necessary for the radiotherapy workflow.
Interobserver and intraobserver variability are well-documented byproducts of the use of manual human-generated segmentations 2,3 . While consensus radiotherapy guidelines to ensure ROI segmentation quality have been developed and shown to reduce variability 4 , these guidelines are not necessarily followed by all practicing clinicians. Therefore, segmentation variability remains a significant concern in maintaining radiotherapy plan quality and consistency. Recent computational improvements in machine learning, particularly deep learning, have prompted the increasing development and deployment of accurate ROI auto-segmentation algorithms to reduce radiotherapy segmentation variability [5][6][7] . However, for auto-segmentation algorithms to be clinically useful, their input data (training data) should reflect high-quality "gold-standard" annotations. While research has been performed on the impact of interobserver variability and segmentation quality for auto-segmentation training [8][9][10][11] , it remains unclear how "gold-standard" segmentations should be defined and generated. One common approach, consensus segmentation generation, seeks to crowdsource multiple segmentations from different annotators to generate a high-quality ground-truth segmentation. While multi-observer public medical imaging segmentation datasets exist [12][13][14][15][16][17] , there remains a lack of datasets with a large number of annotators for radiotherapy applications.
The Contouring Collaborative for Consensus in Radiation Oncology (C3RO) challenge was developed to engage radiation oncologists across various expertise levels in cloud-based ROI crowdsourced segmentation 18 . Through this collaboration, a large number of clinicians generated ROI segmentations using CT images from 5 unique radiotherapy cases: breast, sarcoma, head and neck, gynecologic, and gastrointestinal. In this data descriptor, we present the curation and processing of the data from the C3RO challenge. The primary contribution of this dataset is unprecedented large-scale multi-annotator individual and consensus segmentations of various ROIs crucial for radiotherapy planning in an easily accessible and standardized imaging format. These data can be leveraged for exploratory analysis of segmentation quality across a large number of annotators, consensus segmentation experiments, and auto-segmentation model benchmarking. An overview of this data descriptor is shown in Fig. 1.
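To illustrate how a consensus segmentation can be derived from many annotators' masks, the following is a minimal NumPy re-implementation of the binary STAPLE expectation-maximization scheme. It is a simplified sketch for illustration only and is not necessarily the exact implementation or tooling used to produce the consensus segmentations in this dataset.

```python
import numpy as np

def staple_binary(masks, n_iter=30, prior=None):
    """
    Minimal binary STAPLE: estimates per-voxel consensus probabilities W and
    per-annotator sensitivity p / specificity q from stacked binary masks
    of shape (n_annotators, n_voxels).
    """
    D = np.asarray(masks, dtype=float)
    n_ann, n_vox = D.shape
    pi = D.mean() if prior is None else prior   # prior probability of foreground
    p = np.full(n_ann, 0.99)                    # initial sensitivities
    q = np.full(n_ann, 0.99)                    # initial specificities
    eps = 1e-12
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        log_a = np.log(pi + eps) + (D * np.log(p[:, None] + eps)
                 + (1 - D) * np.log(1 - p[:, None] + eps)).sum(axis=0)
        log_b = np.log(1 - pi + eps) + (D * np.log(1 - q[:, None] + eps)
                 + (1 - D) * np.log(q[:, None] + eps)).sum(axis=0)
        W = 1.0 / (1.0 + np.exp(log_b - log_a))
        # M-step: update annotator performance parameters
        p = (D * W).sum(axis=1) / (W.sum() + eps)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + eps)
    return W, p, q

# Usage sketch (hypothetical input): threshold the consensus probability at 0.5.
# masks = np.stack([m.ravel() for m in annotator_masks])
# W, sens, spec = staple_binary(masks)
# consensus = (W >= 0.5).reshape(annotator_masks[0].shape)
```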
Methods
Patient population. Five separate patients who had undergone radiotherapy were retrospectively collected from our collaborators at various institutions. Each patient had received a pathologically confirmed diagnosis of cancer of one of the following sites: breast (post-mastectomy intraductal carcinoma), sarcoma (malignant peripheral nerve sheath tumor of the left thigh), head and neck (oropharynx with nodal spread, [H&N]), gynecologic (cervical cancer, [GYN]), and gastrointestinal (anal cancer, [GI]). Clinical characteristics of these patients are shown in Table 1. Of note, these five disease sites were included as part of the C3RO challenge due to being among the most common disease sites treated by radiation oncologists; additional disease sites were planned but were not realized due to diminishing community participation in C3RO. Specific patient cases were selected by C3RO collaborators on the basis of being adequate reflections of routine patients a generalist radiation oncologist may see in a typical workflow (i.e., not overly complex). Further details on the study design for C3RO can be found in Lin & Wahid et al. 19 . Imaging protocols. Each patient received a radiotherapy planning CT scan which was exported in Digital Imaging and Communications in Medicine (DICOM) format. CT image acquisition characteristics are shown in Table 2. All images were acquired on scanners that were routinely used for radiotherapy planning at their corresponding institutions with appropriate calibration and quality assurance by technical personnel. The sarcoma, H&N, and GI cases received intravenous contrast, the GYN case received oral contrast, and the breast case did not receive any contrast. Of note, the H&N case had metal streak artifacts secondary to metallic implants in the upper teeth, which obscured anatomy near the mandible. No other cases contained noticeable image artifacts. Notably, the sarcoma case also received a magnetic resonance imaging (MRI) scan, while the H&N and GI cases received full body positron emission tomography (PET) scans. The sarcoma MRI scan was acquired on a GE Signa HDxt device and corresponded to a post-contrast spin echo T1-weighted image with a slice thickness of 3.0 mm and an in-plane resolution of 0.35 mm 20 . Removal of acquisition date and time metadata (if available in DICOM header tags) caused compatibility issues with ProKnow, so these tags were kept as is. Moreover, if institution name or provider name were available in the DICOM file, they were not removed as they were not considered protected health information. Select cases (breast, GYN, GI) were previously anonymized using the DICOM Import Export tool (Varian Medical Systems, CA, USA).
Participant details.
To register for the challenge, participants completed a baseline questionnaire that included their name, email address, affiliated institution, country, specialization, years in practice, number of disease sites treated, volume of patients treated per month for the designated tumor site, how they learned about this challenge, and reasons for participation. Registrant intake information was collected through the Research Electronic Data Capture (REDCap) system, a widely used web application for managing survey databases 21 ; an example of the intake form can be found at: https://redcap.mskcc.org/surveys/?s=98ARPWCMAT. The research conducted herein was approved by the HRRP at MSK (IRB#: X19-040 A(1); approval date: May 26, 2021). All subjects prospectively consented to participation in the present study, as well as to the collection, use, and disclosure of de-identified aggregate subject information and responses. Participants were categorized as recognized experts or non-experts. Recognized experts were identified by our C3RO team (EFG, CDF, DL) based on participation in the development of national guidelines or other extensive scholarly activities. Recognized experts were board-certified physicians with expertise in the specific disease site. Non-experts were any participants not categorized as an expert for that disease site. All non-experts had some knowledge of human anatomy; the majority were practicing radiation oncologists, but the group also included resident physicians, radiation therapists, and medical physicists. Worthy of note, a participant could only be considered an expert for one disease site, but could have participated as a non-expert for other disease sites. Out of 1,026 registrants, 221 participated in generating segmentations, which were used for this dataset; due to the low participation rate, participants may represent a biased sample of registrants. Of note, participants could provide segmentations for multiple cases. Additional demographic characteristics of the participants can be found in Lin & Wahid et al. 19 .
ProKnow segmentation platform. Participants were given access to the C3RO workspace on ProKnow (Elekta AB, Stockholm, Sweden). ProKnow is a commercially available radiotherapy clinical workflow tool that allows for centralization of data in a secure web-based repository; the ProKnow system has been adopted by several large scale medical institutions and is used routinely in clinical and research environments. Anonymized CT DICOM images for each case were imported into the ProKnow system for participants to segment; anonymized MRI and PET images were also imported for select cases as available. Each case was attributed a short text prompt describing the patient presentation along with any additional information as needed. Participants were allowed to utilize common image manipulation (scrolling capabilities, zooming capabilities, window leveling, etc.) and segmentation (fill, erase, etc.) tools for generating their segmentations. No auto-segmentation capabilities were provided to the participants, i.e., all segmentations were manually generated. Notably, for the sarcoma case, an external mask of the patient's body and a mask of the left femur was provided to participants. Screenshots of the ProKnow web interface platform for the various cases are shown in Fig. 2. Segmentation details. For each case, participants were requested to segment a select number of ROIs corresponding to target structures or OARs. Notably, not all participants generated segmentations for all ROIs. ROIs for each participant were combined into one structure set in the ProKnow system. ROIs were initially named in a consistent, but non-standardized format, so during file conversion ROIs were renamed based on The Report of American Association of Physicists in Medicine Task Group 263 (TG-263) suggested nomenclature 22 ; TG-263 was chosen due its ubiquity in standardized radiotherapy nomenclature. A list of the ROIs and the number of available segmentations stratified by participant expertise level is shown in Supplementary Table 1.
Image processing and file conversion. For each case, anonymized CT images and structure sets for each annotator were manually exported from ProKnow in DICOM and DICOM radiotherapy structure (RTS) format, respectively. The Neuroimaging Informatics Technology Initiative (NIfTI) format is increasingly used for reproducible imaging research 23-27 due to its compact file size and ease of implementation in computational models 28 . Therefore, in order to increase the interoperability of these data, we converted all our DICOM imaging and segmentation data to NIfTI format. For all file conversion processes, Python v. 3.8.8 29 was used. An overview of the image processing workflow is shown in Fig. 3a. In brief, using an in-house Python script, DICOM images and structure sets were loaded into numpy array format using the DICOMRTTool v. 0.4.2 library 30 , and then converted to NIfTI format using SimpleITK v. 2.1.1 31 . For each annotator, each individual structure contained in the structure set was separately converted into a binary mask (0 = background, 1 = ROI) and then saved as a separate NIfTI file. Notably, voxels fully inside and fully outside the contour are included and not included in the binary mask, respectively, while voxels that overlapped the segmentation (edge voxels) were counted as surface coordinates and included in the binary mask; additional details on array conversion can be found in the DICOMRTTool documentation 30 . Examples of random subsets of five expert segmentations for each ROI from each case are shown in Fig. 4.
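As a concrete illustration of the CT conversion step described above, the sketch below reads a DICOM CT series and writes it to compressed NIfTI using SimpleITK alone; the folder and output paths are illustrative assumptions, and the released pipeline additionally converts the DICOM RTS structure sets to per-ROI binary masks with DICOMRTTool, which is not reproduced here.

```python
# Minimal sketch: convert one DICOM CT series to compressed NIfTI with SimpleITK.
# Paths are placeholders; see the published pipeline (GitHub: kwahid/C3RO_analysis)
# for the full DICOM RTS (structure set) handling via DICOMRTTool.
import SimpleITK as sitk

dicom_dir = "breast/images/DICOM"                 # assumed folder holding one CT series
output_path = "breast/images/NIfTI/CT.nii.gz"     # assumed output name

reader = sitk.ImageSeriesReader()
series_files = reader.GetGDCMSeriesFileNames(dicom_dir)   # sorted slice file names
reader.SetFileNames(series_files)
ct_image = reader.Execute()                        # 3D volume with spacing/origin/direction

sitk.WriteImage(ct_image, output_path, useCompression=True)

# A per-ROI binary mask (numpy array of 0/1) produced elsewhere can be written the same way:
# mask_img = sitk.GetImageFromArray(mask_array.astype("uint8"))
# mask_img.CopyInformation(ct_image)               # reuse the CT geometry
# sitk.WriteImage(mask_img, "breast/segmentations/expert/<ID>/NIfTI/ROI.nii.gz", useCompression=True)
```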
Consensus segmentation generation.
In addition to ground-truth expert and non-expert segmentations for all ROIs, we also generated consensus segmentations using the Simultaneous Truth and Performance Level Estimation (STAPLE) method, a commonly used probabilistic approach for combining multiple segmentations [32][33][34][35] . Briefly, the STAPLE method uses an iterative expectation-maximization algorithm to compute a probabilistic estimate of the "true" segmentation by deducing an optimal combination of the input segmentations and incorporating a prior model for the spatial distribution of segmentations as well as implementing spatial homogeneity constraints 36 . For our specific implementation of the STAPLE method, we utilized the SimpleITK STAPLE function with a default threshold value of 0.95. For each ROI, all available binary segmentation masks acted as inputs to the STAPLE function for each expertise level, subsequently generating binary STAPLE segmentation masks for each expertise level (i.e., STAPLE expert and STAPLE non-expert ). An overview of the consensus segmentation workflow is shown in Fig. 3b. Examples of STAPLE expert and STAPLE non-expert segmentations for each ROI are shown in Fig. 5.
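A minimal sketch of this consensus step, using the SimpleITK procedural STAPLE interface and the 0.95 threshold stated above, is shown below; the input file layout, ROI name, and output file name are assumptions for illustration.

```python
# Sketch: combine several binary ROI masks into one consensus mask with STAPLE (SimpleITK).
# All input masks are assumed to share the same grid (they were drawn on the same CT).
import SimpleITK as sitk
from glob import glob

mask_paths = sorted(glob("HN/segmentations/expert/*/NIfTI/GTVp.nii.gz"))  # assumed layout/ROI
masks = [sitk.ReadImage(p, sitk.sitkUInt8) for p in mask_paths]

foreground_value = 1
probability_map = sitk.STAPLE(masks, foreground_value)  # voxel-wise estimate of the "true" ROI
consensus = probability_map > 0.95                       # threshold used in this data descriptor

sitk.WriteImage(sitk.Cast(consensus, sitk.sitkUInt8),
                "GTVp_STAPLE_expert.nii.gz", useCompression=True)
```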
Data Records
Medical images and multi-annotator segmentation data. This data collection primarily consists of 1985 3D volumetric compressed NIfTI files (.nii.gz file extension) corresponding to CT images and segmentations of ROIs from various disease sites (breast, sarcoma, H&N, GYN, GI). Analogously formatted MRI and PET images are available for select cases (sarcoma, H&N, GI). ROI segmentation NIfTI files are provided in binary mask format (0 = background, 1 = ROI); file names for each ROI are provided in TG-263 notation. All medical images and ROI segmentations were derived from original DICOM and DICOM RTS files (.dcm file extension) respectively, which for completeness are also provided in this data collection. In addition, Python code to recreate the final NIfTI files from DICOM files is also provided in the corresponding GitHub repository (see Code Availability section).
Consensus segmentation data. Consensus segmentations for experts and non-experts generated using
the STAPLE method for each ROI have also been provided in compressed NIfTI file format (.nii.gz file extension). Consensus segmentation NIfTI files are provided in binary mask format (0 = background, 1 = ROI consensus). Python code to recreate the STAPLE NIfTI files from input annotator NIfTI files is also provided in the corresponding GitHub repository (see Code Availability section).
Annotator demographics data. We also provide a single Microsoft Excel file (.xlsx file extension) containing each annotator's gender, race/ethnicity, geographic setting, profession, years of experience, practice type, and categorized expertise level (expert, non-expert). Geographic setting was re-coded as "United States" or "International" to further de-identify the data. Each separate sheet corresponds to a separate disease site (sheet 1 = breast, sheet 2 = sarcoma, sheet 3 = H&N, sheet 4 = GYN, sheet 5 = GI). Moreover, in order to foster secondary analysis of registrant data, we also include a sheet containing the combined intake data for all registrants of C3RO, including those who did not provide annotations (sheet 6). Folder structure and identifiers. Each disease site is represented by a top-level folder, containing a subfolder for images and segmentations. The annotator demographics Excel file is located in the same top-level location as the disease site folders. Image folders contain separate subfolders for NIfTI format and DICOM format images. Segmentation folders contain separate subfolders for expert and non-expert segmentations. Each expertise folder contains separate subfolders for each annotator (which contains separate subfolders for DICOM and NIfTI formatted files) and the consensus segmentation (only available in NIfTI format). The data have been specifically structured such that for any object (i.e., an image or segmentation), DICOM and NIfTI subdirectories are available for facile partitioning of data file types. An overview of the organized data records for an example case is shown in Fig. 6. Segmentation files (DICOM and NIfTI) are organized by anonymized participant ID numbers and can be cross-referenced against the Excel data table using this identifier. The raw data, records, and supplemental descriptions of the meta-data files are cited under Figshare doi: https://doi.org/10.6084/m9.figshare.21074182 37 .
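To show how the demographics workbook can be joined to the per-annotator segmentation folders, the hypothetical snippet below uses pandas and pathlib; the workbook file name, the column holding the anonymized participant ID, the "expertise" column name, and some folder-layout details are assumptions that would need to be adapted to the released files.

```python
# Sketch: read the per-disease-site demographics sheets and count each annotator's NIfTI masks.
from pathlib import Path
import pandas as pd

root = Path("C3RO")                                    # assumed top-level folder of the download
workbook = root / "annotator_demographics.xlsx"        # assumed file name
sheets = pd.read_excel(workbook, sheet_name=None)      # dict: sheet name -> DataFrame

hn = list(sheets.values())[2]                          # third sheet = head and neck per the description
for _, row in hn.iterrows():
    annotator_id = str(row.iloc[0])                    # assumed: first column holds the anonymized ID
    matches = list((root / "HN" / "segmentations").glob(f"*/{annotator_id}/NIfTI/*.nii.gz"))
    print(annotator_id, row.get("expertise", "n/a"), len(matches), "ROI files")
```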
Technical Validation
Data annotations. Segmentation DICOM and NIfTI files were manually verified by study authors (D.L., K.A.W., O.S.) to be annotated with the appropriate corresponding ROI names.
Segmentation interobserver variability.
We calculated the pairwise interobserver variability (IOV) for each ROI for each disease site across experts and non-experts. Specifically, for each metric all pairwise combinations between all available segmentations in a given group (expert or non-expert) were calculated; median and interquartile range values are reported in Supplementary Table 2. Calculated metrics included the Dice similarity coefficient (DSC), average surface distance (ASD), and surface DSC (SDSC). SDSC was calculated based on ROI-specific thresholds determined by the median pairwise mean surface distance of all expert segmentations for that ROI, as suggested in the literature 38 . Metrics were calculated using the Surface Distances Python package 38,39 and in-house Python code. For specific equations for metric calculations please see the corresponding Surface Distances Python package documentation 39 . Resultant values are broadly consistent with previous work in breast 40 , sarcoma 41 , H&N 35,42,43 , GYN 44 , and GI [44][45][46] IOV studies.
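For readers who want to reproduce the pairwise comparison in spirit, the sketch below computes pairwise volumetric DSC with NumPy for one ROI and one expertise group; surface DSC and ASD in the analysis above came from the referenced Surface Distances package and are not re-implemented here, and the file paths are assumptions.

```python
# Sketch: pairwise Dice similarity coefficient (DSC) across all annotators in one group.
from itertools import combinations
from glob import glob
import numpy as np
import SimpleITK as sitk

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric DSC between two boolean masks of identical shape."""
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else float("nan")

paths = sorted(glob("GI/segmentations/expert/*/NIfTI/CTV.nii.gz"))   # assumed layout/ROI
masks = [sitk.GetArrayFromImage(sitk.ReadImage(p)) > 0 for p in paths]

pairwise = [dice(m1, m2) for m1, m2 in combinations(masks, 2)]
print(f"pairs = {len(pairwise)}, median DSC = {np.median(pairwise):.3f}, "
      f"IQR = {np.percentile(pairwise, 25):.3f}-{np.percentile(pairwise, 75):.3f}")
```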
Usage Notes
The image and segmentation data from this data collection are provided in original DICOM format (where applicable) and compressed NIfTI format, with the accompanying Excel file containing demographic information indexed by annotator identifiers. We invite all interested researchers to download this dataset for use in segmentation, radiotherapy, and crowdsourcing related research. Moreover, we encourage this dataset's use for clinical decision support tool development. While the individual number of patient cases for this dataset is too small for traditional machine learning development (i.e., deep learning auto-segmentation training), this dataset could act as a benchmark reference for testing existing auto-segmentation algorithms. Importantly, this dataset could also be used as a standardized reference for future interobserver variability studies seeking to investigate further participant expertise criteria, e.g., true novice annotators (no previous segmentation or anatomy knowledge) could attempt to segment ROI structures on CT images, which could then be compared to our expert and non-expert annotators. Finally, these data may also be of use in line with the goals of the eContour collaborative 47 .
Figure 5. Examples of consensus segmentations using the simultaneous truth and performance level estimation (STAPLE) method for each region of interest (ROI) provided in this data descriptor. STAPLE segmentations generated by using all available expert segmentations (STAPLE expert ) and all available non-expert segmentations (STAPLE non-expert ) are displayed as green and red dotted outlines, respectively, and overlaid on zoomed-in images for each case. Subplots for breast, sarcoma, head and neck, gynecologic, and gastrointestinal cases are outlined in pink, red, blue, purple, and green borders, respectively.
Original DICOM format images and structure sets may be viewed and analyzed in radiation treatment planning software or select digital image viewing applications, depending on the end-user's requirements. Current open-source software for these purposes includes ImageJ 51 , dicompyler 52 , ITK-Snap 53 , and 3D Slicer 54 with the SlicerRT extension 55 .
Processed NIfTI format images and segmentations may be viewed and analyzed in any NIfTI viewing application, depending on the end-user's requirements. Current open-source software for these purposes includes ImageJ 51 , ITK-Snap 53 , and 3D Slicer 54 .
Code availability
Segmentations were performed using the commercially available ProKnow (Elekta AB, Stockholm, Sweden) software. The code for NIfTI file conversion of DICOM CT images and corresponding DICOM RTS segmentations, along with code for consensus segmentation generation, was developed using in-house Python scripts and is made publicly available through GitHub: https://github.com/kwahid/C3RO_analysis.
Figure 6 (caption, continued). Segmentation subfolders contain separate subfolders which stratify experts and non-experts, which are further divided into subfolders for each annotator's segmented ROIs in DICOM and NIfTI formats. Consensus segmentations for each ROI generated by the simultaneous truth and performance level estimation (STAPLE) method are also provided in expert and non-expert folders.
|
v3-fos-license
|
2020-05-16T13:05:31.804Z
|
2020-05-14T00:00:00.000
|
218649516
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/article/S0021925817489691/pdf",
"pdf_hash": "3292fbab34addd1689bc515e6ab4e9e1062d1832",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2959",
"s2fieldsofstudy": [
"Medicine",
"Chemistry",
"Biology"
],
"sha1": "c895ce2a2043f9994de66577d251eb1c224f2fcf",
"year": 2020
}
|
pes2o/s2orc
|
Cryo-electron microscopy structure and potential enzymatic function of human six-transmembrane epithelial antigen of the prostate 1 (STEAP1)
Six-transmembrane epithelial antigen of the prostate 1 (STEAP1) is an integral membrane protein that is highly up-regulated on the cell surface of several human cancers, making it a promising therapeutic target to manage these diseases. It shares sequence homology with three enzymes (STEAP2–STEAP4) that catalyze the NADPH-dependent reduction of iron(III). However, STEAP1 lacks an intracellular NADPH-binding domain and does not exhibit cellular ferric reductase activity. Thus, both the molecular function of STEAP1 and its role in cancer progression remain elusive. Here, we present a ∼3.0-Å cryo-EM structure of trimeric human STEAP1 bound to three antigen-binding fragments (Fabs) of the clinically used antibody mAb120.545. The structure revealed that STEAP1 adopts a reductase-like conformation and interacts with the Fabs through its extracellular helices. Enzymatic assays in human cells revealed that STEAP1 promotes iron(III) reduction when fused to the intracellular NADPH-binding domain of its family member STEAP4, suggesting that STEAP1 functions as a ferric reductase in STEAP heterotrimers. Our work provides a foundation for deciphering the molecular mechanisms of STEAP1 and may be useful in the design of new therapeutic strategies to target STEAP1 in cancer.
Since its discovery in 1999 as a multispan membrane protein highly expressed on prostate cancer cells (1), six-transmembrane epithelial antigen of the prostate 1 (STEAP1) emerged as a cancer antigen expressed in various human cancers, including prostate, bladder, colorectal, lung, ovarian, and breast carcinoma and Ewing sarcoma. Because its expression in physiological tissues is minimal and mainly confined to the prostate gland (2), STEAP1 represents a potentially attractive therapeutic tool as both a cancer biomarker and a target for anticancer therapies (2)(3)(4). Indeed, several strategies for targeting STEAP1 in cancer have been explored; in 2007, a study reported the production and characterization of two monoclonal antibodies (mAb120.545 and mAb92.30) that bind STEAP1 with nanomolar affinity on prostate cancer cells and inhibit the growth of prostate and bladder tumor xenografts in mice (5). More recently, clinical studies employing humanized variants of mAb120.545 that target STEAP1 were conducted, including 1) a phase I trial using an antibody-drug conjugate (termed DSTP3086S or Vandortuzumab Vedotin) to target prostate cancer (6)(7)(8) and 2) a combined phase I/phase II trial for the PET imaging of metastatic castration-resistant prostate cancer using Zr 89 -labelled antibody (termed [ 89 Zr]Zr-DFO-MSTP2109A) (9)(10)(11). Besides antibodybased strategies, several in vitro and in vivo studies revealed that STEAP1-derived peptides are immunogenic and thus suitable for recognition by cytotoxic T lymphocytes (12)(13)(14)(15)(16), indicating that STEAP1 could represent a potential candidate for the development of anticancer vaccines (4,17). STEAP1 belongs to a protein family that comprises three metalloreductases (18,19), STEAP2-STEAP4, also known as STAMP1-STAMP3 (20)(21)(22), which reduce iron(III) and copper(II) and are also associated with cancer progression (23)(24)(25). At the molecular level, the four STEAP proteins are predicted to adopt a common architecture with intracellular N and C termini, six transmembrane helices, and a single heme B prosthetic group bound in the transmembrane domain (TMD) (26). STEAP2-STEAP4 also contain an intracellular oxidoreductase domain (OxRD) that binds NADPH (27,28). The ferric and cupric reductase mechanism of STEAP2-STEAP4 is defined by electron transfer from intracellular NADPH through membrane-embedded FAD and heme cofactors to chelated metalion complexes at the membrane extracellular side (26,29). In contrast to STEAP2-STEAP4, STEAP1 does not exhibit metalloreductase activity when overexpressed on mammalian cells (19), suggesting that it may have a distinct yet unidentified function. However, a recent study revealed that dithionitereduced, purified STEAP1 retains heme and is capable of reducing metal-ion complexes and oxygen (30), indicating that the absence of a binding site for an electron-donating substrate like NADPH could explain the lack of reductase activity for STEAP1. It has been proposed that STEAP1 may have a functional role in heterooligomeric complexes with other STEAP paralogues (19,30). In support of this, its expression often correlates with the expression of STEAP2 in cancers (17) and both proteins co-purify in detergent (30), suggesting that they could form a functional complex. 
Further indications for a functional heterotrimeric STEAP complex emerged from the recent cryo-EM structures of homotrimeric human STEAP4 (29), which revealed a domain-swapped architecture, with the intracellular OxRD positioned beneath the TMD of the adjacent protomer. This arrangement supports a model in which the heme in STEAP1 receives electrons from NADPH bound to an adjacent STEAP2/3/4 subunit. However, the in vivo redox activity of STEAP1, in both the absence and presence of other STEAP paralogues, remains to be established. In addition, there are no high-resolution structures available to help distinguish a functional role for STEAP1 as a metalloreductase or, as previously proposed, a potential channel or transporter protein (1,2,5,31). Thus, although STEAP1 is a populous plasma membrane component of many different types of cancer cells and hence is a promising novel therapeutic target, its structure and function in both health and disease remain unknown.
Here, we present the cryo-EM structure of full-length, trimeric human STEAP1 bound to three Fab fragments of the therapeutically relevant mAb120.545. The Fabs dock on the extracellular helices of STEAP1 through an extensive polar interface. The TMD of STEAP1 resembles the architecture of the STEAP4 TMD and exhibits cellular ferric reductase activity when fused to the NADPH-binding OxRD of STEAP4.
Biochemical characterization of STEAP1
A previous pioneering study reported the biophysical and electrochemical characterization of N-terminally truncated rabbit STEAP1, purified from insect cells in lauryl maltose neopentyl glycol detergent (30). Our initial attempts to purify full-length, human STEAP1 from mammalian HEK cells using a similar protocol were hampered by the loss of the noncovalently bound heme B cofactor during the purification, suggesting that STEAP1 was not natively folded. Therefore, we screened several other detergents for the solubilization of STEAP1 and identified digitonin as a suitable replacement for lauryl maltose neopentyl glycol. In digitonin, the purified protein retained its heme cofactor (Fig. 1A) and eluted as a monodisperse peak in size-exclusion chromatography experiments (Fig. 1, B and D). In addition, thermostability assays revealed a melting temperature of ~55.5°C for STEAP1 in digitonin (Fig. 1, D and E), indicating that the protein was stable outside its native membrane environment. To assess whether the extracellular domains of purified STEAP1 adopted a conformation similar to those of membrane-embedded STEAP1, we tested the binding of STEAP1 to the Fab fragment of mAb120.545, which exhibits 1 nM affinity for STEAP1 on cells (5) and recognizes a conformation-dependent, nonlinear epitope (32). Size-exclusion chromatography assays revealed a smaller elution volume for STEAP1 when it was premixed with Fab120.545 (Fig. 1B), suggesting the formation of a complex, which was then confirmed by SDS-PAGE analysis of the eluted sample (Fig. 1C). Thus, the conformation of the STEAP1 epitope recognized by the Fab fragment on cells is preserved during the detergent solubilization and purification of STEAP1.
Cryo-EM structure determination
To gain insights into the molecular architecture of STEAP1, we set out to obtain a structural model of the protein using single-particle cryo-EM. Full-length, trimeric STEAP1 proved to be a challenging sample for EM due to its small size (<120 kDa) and the absence of folded domains protruding from the membrane region. To create a larger particle with more extramembrane features to facilitate EM image processing, we opted to determine the structure of STEAP1 purified in complex with Fab120.545. The complementarity-determining regions of Fab120.545 are identical to those present in STEAP1 antibodies used in clinical trials (Fig. S1), indicating that the structure of the STEAP1-Fab120.545 complex could also be useful in engineering antibodies and other molecules that target STEAP1 in cancer. Micrographs collected on a 200-kV Talos Arctica microscope showed nonaggregated particles distributed in vitreous ice (Fig. S2A). Subsequent 2D classification experiments yielded class averages with clear secondary structure elements and furthermore revealed that more than one Fab fragment is bound to micelle-embedded STEAP1 (Fig. 2A; Fig. S2B). Image processing in RELION (33) finally resulted in a reconstructed cryo-EM density map at ~3.0-Å resolution (Fig. 2B; Fig. S2, C-F). The map displayed well-defined side chain density for the TMD of STEAP1 and the variable regions of the Fab (Fig. S3). The model for STEAP1 was built with the TMD of STEAP4 as the template (29) (PDB code 6HCY), whereas the starting model for the Fab was generated through the PIGS homology server (34). The refined structure has acceptable stereochemistry and exhibits high correlation to the cryo-EM density map within the determined resolution (Fig. S3, G and H; Table 1).
Figure 1. STEAP1 purification and stability. A, UV-visible spectrum of the purified STEAP1-Fab120.545 complex used for EM sample preparation. The protein and heme absorbance peaks exhibit maxima at 280 and 412 nm, respectively. B, size-exclusion chromatography elution profile of STEAP1 in the absence (red) or presence (blue) of excess Fab120.545. Fractions collected from the STEAP1-Fab120.545 sample were used for EM sample preparation. C, SDS-PAGE of the purified STEAP1-Fab120.545 complex. D, heme absorbance size-exclusion chromatography elution profiles of STEAP1 after 10 min of incubation at several temperatures. E, melting curve for digitonin-purified STEAP1, generated by using the peak maxima from D. STEAP1 exhibits a melting temperature of 55.5°C.
Architecture
The cryo-EM structure reveals a 1:1 stoichiometry of the STEAP1-Fab120.545 complex, with 3 STEAP1 protomers interacting with 3 Fab molecules (Fig. 2, B-D). The Fabs bind at the extracellular region of STEAP1, consistent with the observation that the antibody targets STEAP1 expressed on intact cancer cells. The intracellular loops of STEAP1 extend ~18 Å from the membrane region into the cytoplasm, whereas the Fabs protrude up to ~75 Å into the extracellular space. STEAP1 adopts a trimeric arrangement that is similar to that of its family member STEAP4 (41% amino acid sequence identity; root mean square deviation (RMSD) of 0.8 Å for 640 Cα atoms) (Fig. 3). Each STEAP1 subunit contains six membrane-spanning α-helices (h1-h6) that define the TMD of the protein. A single B-type heme cofactor is surrounded by helices h2, h3, h4, and h5 at the extracellular membrane leaflet (Fig. S3B). Strictly conserved histidine residues H175 and H268 coordinate the central iron moiety of the heme prosthetic group, thereby resembling the hexacoordinated heme arrangement of STEAP4. At the intracellular membrane leaflet side of the TMD, we observed weak density not corresponding to any protein residues. An overlay with the structure of STEAP4 revealed that the observed density overlaps with the flavin ring of the FAD-binding site in STEAP4 (Fig. S3D). The FAD-interacting residues in the TMD of STEAP3 and 4 are conserved in STEAP1, and the STEAP1-Fab120.545 cryo-EM sample was supplemented with 1 mM FAD before grid freezing. However, STEAP3 and STEAP4 also interact with the adenine moiety of FAD via their intracellular OxRD (26, 29) (Fig. 3B; Fig. S3D), which is missing in STEAP1. In line with this, STEAP3 and STEAP4 exhibit a low micromolar affinity for FAD (Kd = ~1 μM) (26, 29), whereas the affinity of STEAP1 for FAD is much weaker (Kd = 34 μM) (30). The cryo-EM density in this region could therefore correspond to a loosely bound FAD cofactor, although the weak density does not allow for modelling of the complete cofactor. Instead of an OxRD of ~175 amino acids in length, STEAP1 contains a 69-residue N-terminal intracellular tail with no predicted domain architecture. Indeed, we observed no density for the first 65 intracellular amino acids of STEAP1, indicating that these residues are flexible, which is consistent with in silico disorder predictions using RONN (35). Thus, in contrast to other STEAP family members, the homotrimeric human STEAP1 structure does not harbor a folded N-terminal intracellular domain.
STEAP1-Fab120.545 interface
We next analyzed the interface between STEAP1 and Fab120.545. The epitope recognized by the Fab spans the first and second extracellular regions (EC1 and EC2, respectively) of STEAP1, which bridge membrane helices h1-h2 and h3-h4. The total interface formed between the STEAP1 trimer and three Fabs comprises ~5,730 Å2 of buried surface area and is arranged so that a single Fab molecule interacts with all three STEAP1 protomers. The interface is stabilized by a network of polar and hydrophobic interactions and can largely be described by two "interaction hot spots" on the EC2 of STEAP1 (Fig. 4, A and B). The first hot spot involves the extracellular helix of STEAP1 (residues 186-201), which extends from membrane helix h3. Y190, N194, W195, and Q198 interact with Fab heavy and light chain residues Y102 H , Y104 H , Y108 H , Y31 L , and S33 L (Fab chain identifier in subscript) (Fig. 4A). The carbonyl group of S33 L forms a hydrogen bond with the side chain of Y107 of the EC1, whereas Y104 H bridges two STEAP1 protomers by interacting with both Y190 and W195 from different chains. The second hot spot involves residues Q201, Q202, N203, Q205, and D206, which reside in the loop that connects the extracellular helix of STEAP1 to membrane helix h4. Hot spot 2 consists of numerous polar interactions, including a salt bridge between D206 and R32 L . Other Fab residues involved in binding to hot spot 2 are Y51 H , S57 H , T58 H , S59 H , Q27 L , S33 L , N99 L , and Y100 L (Fig. 4B). Besides the Fab residues in close proximity (<4 Å) to STEAP1, we identified three aspartate residues (D103 H , D105 H , and D106 H ) in the Fab heavy chain that are oriented toward the basic amino acid ring above the heme (Fig. 4C). These aspartates are at least 4.5 Å separated from any STEAP1 residues and could participate in long-range electrostatic interactions with the basic ring of STEAP1. Interestingly, in STEAP4, the corresponding basic amino acids constitute the substrate-binding site (Fig. 4D). To experimentally verify the STEAP1-Fab interface observed in the structure, we generated several mutants of Fab120.545 and tested their ability to bind purified STEAP1 by using size-exclusion chromatography assays (Fig. S4). Mutants R32 L E and N99 L D were designed to create charge repulsions in hot spot 2 between STEAP1 and Fab120.545. As expected, we did not observe binding events for these two mutants (Fig. S4, A-C). Fab mutants D103 H N, D105 H N, D106 H N (Fab-NNN) and D103 H A, D105 H A, D106 H A (Fab-AAA) similarly did not interact with purified STEAP1 (Fig. S4, A, D, and E), indicating that the long-range electrostatic interactions between the three Fab aspartates and STEAP1 are essential for maintaining a high-affinity antibody-antigen complex.
Generation of a functional STEAP4/1 fusion protein
The reductase-like architecture of STEAP1 (Fig. 3) and its heme redox potential of -114 to -118 mV (30) indicate that the protein could be functional in reducing metal-ion complexes in vivo. To the best of our knowledge, however, there are currently no experimental data available that show that STEAP1 is capable of reducing ferric iron in a physiological setting. Although STEAP1 may form relevant heterotrimeric complexes with other STEAP homologs (30), coexpressions of different STEAP family members will likely result in a mixed population of homo- and heterotrimers, making cellular ferric reductase experiments difficult to interpret. To overcome these hurdles and to assess whether the TMD of STEAP1 could direct electron transport across mammalian cell membranes, we aimed to design a construct in which the STEAP1 TMD was fused to the intracellular OxRD of another STEAP homologue.
To this end, a sequence alignment of all human STEAP proteins revealed that STEAP4 and STEAP1 share a common three-amino acid LFP motif at the start of membrane helix h1.
Additionally, an overlay of their cryo-EM structures did not show any obvious clashes between the OxRD of STEAP4 and the TMD of STEAP1. Thus, we generated a construct that spans residues M1-Q195 of STEAP4, the shared LFP motif, and residues Q69-L339 of STEAP1, which we termed STEAP4/1 chimera (Fig. 5, A and B). We then expressed STEAP4/1 chimera in HEK293 cells and compared its cellular reductase activity to cells expressing STEAP1 or STEAP4, using the physiologically relevant ferric citrate as a substrate. Consistent with a previous study (19), overexpression of STEAP1 did not result in measurable cell-surface ferric reductase activity, compared with the empty-vector control, whereas cells expressing STEAP4 reduced ~57 pmol Fe3+/min/well (Fig. 5C). Intriguingly, the STEAP4/1 chimera -expressing cells also showed highly significant reductase activity of ~43 pmol Fe3+/min/well (Fig. 5C). To verify that the observed activity depended on transmembrane electron transport through the TMD of STEAP1, we additionally tested STEAP4/1 chimera mutants R161E and H175A (STEAP1 numbering) in which the FAD- and heme-binding sites in the TMD, respectively, are abolished. Cells expressing STEAP4/1 chimera -R161E and STEAP4/1 chimera -H175A did not exhibit any ferric reductase activity (Fig. 5C), indicating that the STEAP1 TMD of the chimera is indeed crucial for cell-surface iron reduction. Confocal microscopy experiments subsequently confirmed that all expressed proteins except for STEAP4/1 chimera -H175A localized to the plasma membrane (Fig. S5). Taken together, our cell-based experiments reveal that STEAP1 adopts a conformation that facilitates transmembrane electron transport to reduce ferric citrate at the membrane extracellular side. Therefore, the lack of reductase activity of the protein can be explained by the absence of a binding site for an electron-donating substrate in homotrimeric STEAP1.
Fab120.545 inhibits ferric reductase activity of STEAP4/1 chimera
Because Fab120.545 binds close to the putative substrate-binding site in STEAP1 (Fig. 4, C and D), we assessed whether Fab binding could influence the ferric reductase activity of STEAP4/1 chimera . Indeed, the addition of 4 μM Fab120.545 to cells expressing STEAP4/1 chimera led to a significant decrease in iron citrate reduction, from ~43 to ~18 pmol Fe3+/min/well (Fig. 5C). Conversely, the activity of STEAP4 was unchanged, indicating that the Fab specifically recognizes an extracellular epitope of STEAP1. We further characterized the inhibitory properties of the Fab by incubating cells expressing STEAP4 or STEAP4/1 chimera with different Fab concentrations. This revealed a Fab-concentration-dependent effect on the inhibition of the ferric reductase activity of STEAP4/1 chimera , while the amount of iron(III) reduced by STEAP4 remained unaltered over the entire concentration range tested (Fig. 5D).
Discussion
Based on its amino acid sequence and subcellular localization, STEAP1 was previously predicted to function as a channel or transporter protein (1, 2, 5, 31). The cryo-EM structure of STEAP1 bound to antibody fragment Fab120.545 revealed a trimeric arrangement similar to that of its family member STEAP4 (Fig. 3), showing no obvious structural features of ion channels or transporters. Instead, the strictly conserved FADbinding residues and putative loosely bound FAD at the innermembrane leaflet of the TMD (Fig. S3D), the heme-binding site at the outer-membrane leaflet (Fig. S3B), and the basic amino acid ring above the heme (Fig. 4C) all imply that STEAP1 may function as a transmembrane oxidoreductase. In our previous work, we showed that substrate iron(III), complexed to a negatively charged chelator like citrate, binds in the basic ring of STEAP4 (Fig. 4D) and we proposed that this ring of positive amino acids may polarize the iron(III)-chelator complex to facilitate the iron reduction reaction (29). The presence of a comparable positively charged ring (Fig. 4, C and D) indicates that STEAP1 harbors a similar protein environment for the reduction of iron(III).
We investigated whether the TMD of STEAP1 is capable of directing transmembrane electron transport by generating a fusion construct between the intracellular region of STEAP4 and the TMD of STEAP1, termed STEAP4/1 chimera (Fig. 5, A and B). Cell-surface-expressed STEAP4/1 chimera catalyzed the reduction of iron citrate, providing evidence that STEAP1 is a functional reductase, albeit only when a binding site for the intracellular electron-donating substrate NADPH is available (Fig. 5, C and D). The lack of enzymatic activity of STEAP4/1 chimera -R161E (Fig. 5C), which localizes to the plasma membrane in HEK cells (Fig. S5D), confirmed that the TMD of STEAP1 enables transmembrane electron transport. We also showed that STEAP4/1 chimera -H175A exhibited no significant ferric reductase activity (Fig. 5C). However, fluorescence microscopy experiments revealed that this mutant resides almost exclusively in intracellular compartments (Fig. S5E), suggesting that the protein misfolds when the heme cofactor-binding site is abolished.
Figure 4. A and B, interactions between STEAP1 and Fab120.545 at hot spot 1 (A), corresponding to the extracellular helix of STEAP1, and hot spot 2 (B), the loop between the extracellular helix and membrane helix h4. Amino acid backbones are shown as sticks only when they contribute to the interface. The STEAP1 chains are colored blue, cyan and green; and the Fab molecules are colored magenta and light-gray. C, amino acid environment above the heme in the STEAP1-Fab structure. All charged residues are shown as sticks. The Fab light chain is omitted from the figure for clarity. STEAP1 helices are colored gray for clarity. STEAP1 residues are colored by chain in cyan or green. Fab heavy chain residues are colored magenta. D, amino acid environment above the heme in the STEAP4 structure (PDB code 6HCY, EMDB-0199). All charged residues are shown as sticks. The difference density (taken from EMDB-0199) that corresponds to substrate Fe(III)-nitrilotriacetic acid is depicted in mesh. STEAP4 residues are colored by chain in orange or gray.
The addition of Fab120.545 resulted in a concentration-dependent decrease of ferric reductase activity of STEAP4/1 chimera -expressing cells but not of STEAP4-expressing cells (Fig. 5, C and D). This demonstrates that the Fab likely does not interact with STEAP4, which can be explained by the observation that the glycan on STEAP4 residue N323 would clash with the Fab light chain. In contrast, the residue at the equivalent position in STEAP1 (N194) is not glycosylated and forms a hydrogen bond with S33 L (Fig. 4A).
How does Fab120.545 inhibit the cell-surface ferric reductase activity of STEAP4/1 chimera ? The conformations of the extracellular regions of STEAP4 and Fab-bound STEAP1 are similar (Fig. 3); therefore, the binding of Fab120.545 is not expected to induce large conformational changes in STEAP1. Instead, the Fab partially blocks access to the putative substrate-binding site in the basic ring above the heme (Fig. 4C). Alternatively, Fab residues D103 H , D105 H , and D106 H might neutralize the positively charged substrate-binding site and thereby prevent substrate polarization. However, mutagenesis of these aspartates to either asparagines (Fab-NNN) or alanines (Fab-AAA) resulted in a loss of binding of Fab120.545 to purified STEAP1 (Fig. S4, A, D, and E), and thus this hypothesis could not be tested. Nevertheless, our results suggest that antibodies can be employed as tools to inhibit the ferric reductase activity of STEAP enzymes. Additionally, the 3:3 STEAP1/Fab120.545 stoichiometry (Fig. 2, B and C) indicates that full-length antibodies may cross-link STEAP1 trimers into higher-order assemblies on cell membranes. A similar antibody-induced cross-linking mechanism has recently been reported for the therapeutic antibody rituximab binding to its dimeric target protein CD20 (36).
In conclusion, the study presented here describes the first structure-function analysis of the human cancer antigen STEAP1. Our results support a model in which STEAP1 forms heteromeric assemblies with partner proteins that recruit and orient intracellular electron-donating substrates toward the TMD of STEAP1, enabling transmembrane electron transport and the reduction of extracellular metal-ion complexes. This model warrants further investigations into the physiological function of STEAP1; for example, the incorporation of STEAP1 into STEAP heterotrimers might moderate iron(III) reduction rates locally and thereby prevent deleterious reactions associated with iron overload. Therefore, we envision that it will be of great interest to focus future research endeavors on these putative assemblies of STEAP1 with STEAP2-STEAP4 family members and other unidentified accessory proteins in relevant cancer tissues. Ultimately, understanding the molecular principles that underly the function of STEAP1 will guide the design of anti-STEAP1- focused cancer therapies, thereby exploiting the protein's high expression in cancer and minimal presence in healthy cells.
Chemicals
All chemicals were purchased from Sigma-Aldrich unless specified otherwise.
Constructs
Codon-optimized DNA coding for human STEAP1 (UniProtKB code Q9UHE8) and STEAP4 (UniProtKB code Q687X5) for mammalian cell expression was purchased from GeneArt. The full-length STEAP1 construct used for structure determination was cloned in a pUPE expression vector (U-Protein Express BV) with a C-terminal Strep-tag. The STEAP4/1 chimera construct was generated through Gibson assembly cloning (NEB). For functional assays in HEK cells, all constructs were cloned in a pUPE expression vector with a C-terminal GFP-Strep-tag with a tobacco etch virus protease site. Mutagenesis of STEAP constructs was performed using the Q5 site-directed mutagenesis kit (NEB). The primers used in this study are listed in Table S1. The amino acid numbering of STEAP1 was used for mutations introduced in the STEAP4/1 chimera (R161E and H175A) because the mutated amino acids reside in the STEAP1 domain of the chimera. These residues correspond to R290 and H304 in both STEAP4/1 chimera and STEAP4. All Fab120.545 variants were a kind gift from Genmab BV and were produced at our request.
Protein expression and purification for cryo-EM
The protein production protocol was adapted from the previously described protocol for STEAP4 (29). Full-length STEAP1 with a C-terminal Strep-tag was expressed in HEK293 GnTI− suspension cells (provided by U-Protein Express BV). Cells were grown at 37°C for ~96 h. All subsequent steps were performed at 4°C unless stated otherwise. After harvesting, cells were washed in PBS and solubilized for 2-3 h in lysis buffer containing 50 mM Tris (pH 7.8), 250 mM NaCl, 0.7% (w/v) digitonin (Calbiochem), 0.3% (w/v) n-dodecyl-β-D-maltoside (Anatrace), 0.06% (w/v) cholesteryl hemisuccinate, and protease inhibitor mixture (Roche). The sample was then subjected to ultracentrifugation at 100,000 × g for 45 min to remove insoluble membranes and cell debris. The supernatant was incubated with Strep-Tactin resin (GE Healthcare) for 2 h, and the resin was washed with 20 column volumes of buffer A (50 mM Tris, pH 7.8, 250 mM NaCl, 0.08% digitonin). Protein was subsequently eluted with buffer A supplemented with 3.5 mM desthiobiotin. STEAP1-containing fractions (which exhibited a red color due to the presence of the heme cofactor) were concentrated to ~1.6 mg/ml with a 100-kDa-cutoff concentrator device (Amicon). Subsequently, 330 μl of STEAP1 was mixed with a large excess of Fab120.545 (145 μl at 9.7 mg/ml in PBS). After 1 h of incubation, the STEAP1-Fab mixture was injected onto a Superdex 200 increase 10/300 column (column volume, 24 ml; GE Healthcare) that had been preequilibrated in buffer containing 20 mM Tris (pH 7.8), 200 mM NaCl, and 0.08% (w/v) digitonin. Fractions containing the STEAP1-Fab complex were concentrated to a final concentration of ~5.0 mg/ml. Sample purity was assessed with SDS-PAGE analysis and analytical size-exclusion chromatography.
Grid preparation
Concentrated STEAP1-Fab120.545 (5.0 mg/ml) was incubated with 1 mM FAD on ice for 1 h before grid freezing. Then, 2.8 μl of sample was pipetted onto glow-discharged R1.2/1.3 Au holey carbon grids (200 mesh; Quantifoil) and plunge-frozen in a liquid ethane/propane mixture with a Vitrobot Mark IV (Thermo Fisher Scientific), blotting with force 0 for 4 s at 20°C.
EM data collection
Movie collection was performed with a 200-kV Talos Arctica microscope (Thermo Fisher Scientific) equipped with a K2 summit detector (Gatan) and a postcolumn 20-eV energy filter. Using EPU (Thermo Fisher Scientific) in superresolution/counting mode (pixel size, 0.514 Å), movies were collected for 6.5 s in 26 frames with a dose of 1.905 e-/Å2/frame (measured in an empty hole without ice), corresponding to a total electron exposure of 49.5 e-/Å2. Defocus values for collection in EPU were set between -1 and -3 μm but varied between -0.4 and -3.5 μm during data collection.
Image processing
A total of 5,352 movies were imported into the RELION v3.0 pipeline. The movies recorded in superresolution mode were binned 2× (resulting pixel size, 1.03 Å) and motion corrected using UCSF MotionCor2 (37), followed by contrast transfer function (CTF) estimation using GCTF (38). A total of 695 movies were subsequently discarded, based on their poor CTF spectra, resulting in 4,657 movies (87% of total) for further processing; 1,791 particles were picked manually and 2D classified. The generated classes were used as templates for autopicking in RELION (39), resulting in 616,302 particles. The picked particles were 3D classified into six classes with no symmetry applied. The particles belonging to the class with the most protein-like features (263,939 particles) were then subjected to CTF refinement and Bayesian polishing, followed by 3D classification without image alignment into three classes. The highest populated class (172,724 particles) showed clear amino acid side chain features. Particles were then CTF refined for a second time, and 431 junk particles were removed through 2D classifications. Automatic 3D refinement (with C3 symmetry applied) of the remaining 172,293 particles yielded a map at a global resolution of 3.8 Å, based on the gold-standard criterion of Fourier shell correlation (FSC) = 0.143. A postprocessing step in which the constant region of the Fab was masked out improved the resolution to 3.5 Å. Following the release of RELION v3.1beta with high-order aberration and anisotropic magnification estimation (40), we performed four additional rounds of CTF refinement and Bayesian polishing. This iterative process was followed by a 3D classification without image alignment into three classes, removing 2,867 particles. The final 169,426 particles were 3D autorefined (C3 symmetry) and subjected to a postprocessing step, improving the map resolution to 3.0 Å (3.3 Å without masking), corresponding to ~1.44 times the Nyquist frequency.
Model building and refinement
To build the model for STEAP1, the TMD structure of human STEAP4 (residues 196-454) was rigid-body fitted into the cryo-EM map. For one chain, all STEAP4 residues were changed to the corresponding STEAP1 residue using the "mutate residue range" option in Coot (41), after which the model was manually inspected, adjusted, and refined in Coot. The model of this chain was copied and fitted in the density of the other two subunits. The starting model for the variable regions of Fab120.545 was obtained through the PIGS homology server (34) by uploading the heavy and light chain sequences. This model was rigid-body fitted in the cryo-EM map and the complementarity-determining regions were manually built in Coot. Then, the STEAP1-Fab120.545 model was iteratively refined using Coot (manually) and Phenix real-space refine (42) with geometric restraints and noncrystallographic symmetry constraints. Final refinements were performed using the nonsharpened cryo-EM map, in which the constant region of the Fab was masked out. The nonsharpened map revealed sufficient side chain detail for modelling. The final model uploaded to the PDB includes residues 67-312 of STEAP1, residues 1-112 of the Fab120.545 light chain, and residues 2-122 of the Fab120.545 heavy chain. Figures were prepared with Pymol (Schrödinger), UCSF Chimera (43), and UCSF ChimeraX (44).
Thermostability assays
Thermostability assays were performed as reported previously (29,45,46). Aliquots of purified STEAP1 (in 20 mM Tris, pH 7.8, 200 mM NaCl, 0.08% digitonin) were heated over a range of temperatures (20-75°C) in a thermocycler for 10 min, cooled down, and centrifuged to remove aggregates. The supernatant was subsequently injected onto a Superdex 200 increase 10/300 column that had been equilibrated in buffer containing 20 mM Tris (pH 7.8), 200 mM NaCl, and 0.08% digitonin and was connected to a HPLC system (Shimadzu). The heme absorbance of STEAP1 was monitored at 412 nm using a SPD-20A UV-visible detector. In order to determine the melting temperature, peak maxima were normalized to the sample incubated at 20°C and were fitted to a dose-response equation using GraphPad Prism 5.
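The melting-temperature fit described above was performed in GraphPad Prism; an equivalent sketch in Python, fitting a sigmoidal (dose-response-style) curve with SciPy, is shown below. All temperature and normalized-peak values are invented placeholders, not the paper's measurements.

```python
# Sketch: estimate an apparent melting temperature (Tm) from normalized heme-peak heights
# by fitting a sigmoidal curve; the data points below are illustrative placeholders only.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(T, top, bottom, Tm, slope):
    """Descending logistic: close to top well below Tm, close to bottom well above Tm."""
    return bottom + (top - bottom) / (1.0 + np.exp((T - Tm) / slope))

temps  = np.array([20, 30, 40, 45, 50, 55, 60, 65, 75], dtype=float)       # degrees C (placeholder)
signal = np.array([1.00, 0.99, 0.97, 0.92, 0.78, 0.49, 0.18, 0.06, 0.02])  # normalized 412-nm peak (placeholder)

popt, _ = curve_fit(sigmoid, temps, signal, p0=[1.0, 0.0, 55.0, 2.0])
print(f"Apparent Tm = {popt[2]:.1f} C")
```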
Size-exclusion chromatography-binding assays with Fab variants
All Fab variants were diluted to a concentration of 0.5 mg/ml in PBS supplemented with 0.08% digitonin. Twenty-five microliters of purified STEAP1 (0.28 mg/ml in 20 mM Tris, pH 7.8, 200 mM NaCl, 0.08% digitonin) was mixed with 35 μl of PBS plus digitonin or a ~2-fold molar excess of Fab (35 μl at 0.5 mg/ml). After incubation for several hours, the mixtures were injected onto a Superdex 200 increase 3.2/300 column (column volume, 2.4 ml) that had been equilibrated in buffer containing 20 mM Tris (pH 7.8), 200 mM NaCl, and 0.08% digitonin and was connected to an HPLC system (Shimadzu). The heme absorbance of STEAP1 was monitored at 412 nm using an SPD-20A UV-visible detector, whereas the tryptophan fluorescence (excitation at 275 nm and emission at 354 nm) emitted by both STEAP1 and Fab variants was detected using an RF-20Axs detector. STEAP1-Fab complex formation was assessed by comparing the peak elution profiles of mixtures with the profiles of individually injected proteins.
Ferric reductase assays
HEK293 GnTI− suspension cells (U-Protein Express BV) were transfected with GFP-tagged STEAP constructs; ~96 h after transfection, cells were washed in PBS, resuspended in iron uptake buffer (25 mM MES, 25 mM MOPS, pH 7.0, 140 mM NaCl, 5.4 mM KCl, 1.8 mM CaCl2, 0.8 mM MgCl2, 5 mM glucose, 400 μM ferrozine), and pipetted into a 96-well plate (~5 × 10^4 cells/well). Experiments were started by the addition of ferric citrate (Fisher Scientific) to each well (final concentration, 200 μM). The assay was performed in the dark at 37°C for 35 min. Fe2+-ferrozine formation was monitored using a model 680 microplate reader (Bio-Rad) at 550 nm. The Fe2+ formed was quantified using a standard curve that was generated as described (47). To assess the effect of Fab120.545 on the ferric reductase activity of the STEAP variants, cells were incubated with PBS or a range of Fab120.545 concentrations for 20 min prior to the addition of ferric citrate. Experiments were performed as technical triplicates by diluting cell stocks originating from single transfections three separate times. Error bars represent the S.D. Experiments in which activities were compared were carried out in parallel in the same 96-well plate. All statistical analyses were performed with GraphPad Prism 5.0. The ferric reductase activities of cells expressing different STEAP variants were compared for statistical significance using unpaired t tests, whereas paired t tests were employed to compare the same population of cells with or without the addition of Fab120.545.
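To make the quantification step concrete, the sketch below converts background-corrected ferrozine absorbance readings to pmol Fe2+ via a linear standard curve and expresses the result per minute per well; every numeric value is an invented placeholder, and only the 35-min assay duration is taken from the description above.

```python
# Sketch: convert A550 ferrozine readings into a ferric reductase rate (pmol Fe2+/min/well).
# Standard-curve and sample values are illustrative placeholders, not measured data.
import numpy as np
from scipy.stats import linregress

std_pmol = np.array([0, 250, 500, 1000, 2000], dtype=float)   # known Fe2+ amounts per well
std_a550 = np.array([0.00, 0.05, 0.10, 0.21, 0.41])           # corresponding A550 readings
fit = linregress(std_a550, std_pmol)                          # pmol as a linear function of A550

a550_sample = 0.32        # end-point reading for a STEAP-expressing well (placeholder)
a550_control = 0.02       # empty-vector control well (placeholder)
assay_minutes = 35.0      # assay duration stated above

fe2_pmol = fit.slope * (a550_sample - a550_control) + fit.intercept
print(f"~{fe2_pmol / assay_minutes:.0f} pmol Fe2+/min/well")
```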
Fluorescence microscopy
HEK293 GnTI− suspension cells (U-Protein Express BV) were transfected with GFP-tagged STEAP constructs; ~96 h after transfection, cells were washed in PBS and then imaged for GFP (excitation at 488 nm and emission at 509 nm) using a CorrSight spinning disk confocal microscope (FEI), at a magnification of ×40, at 20°C.
Data availability
Data supporting the findings of this manuscript are available from the corresponding authors upon reasonable request. The relevant cryo-EM density maps of the STEAP1-Fab120.545 complex have been deposited under accession number EMDB-10735. This deposition includes unfiltered half-maps, nonsharpened unmasked maps, and sharpened masked maps. Model coordinates of the structure have been deposited in the Protein Data Bank under accession number 6Y9B.
Acknowledgments-The Fab120.545 fragments were a kind gift from Genmab BV and were produced at our request. Furthermore, we gratefully thank W. Hemrika (U-Protein Express BV) for HEK cell cultures; L. S. van Bezouwen, S. C. Howes, and the Utrecht EM-

Conflict of interest-The authors declare no competing interests. mAb120.545 and its derivatives are patented by Genentech Inc.
Unfavorable outcomes to second-line tuberculosis therapy among HIV-infected versus HIV-uninfected patients in sub-Saharan Africa: A systematic review and meta-analysis
Background Drug resistance is a key obstacle to the global target set to end tuberculosis by 2030. Clinical complexities in drug-resistant tuberculosis and HIV-infection co-management could worsen outcomes of second-line anti-tuberculosis drugs. A comprehensive estimate of the risks of unsuccessful outcomes to second-line tuberculosis therapy in HIV-infected versus HIV-uninfected patients is mandatory to address such aspects in segments of the target set. Therefore, this meta-analysis aimed to estimate the pooled risk ratios of unfavorable outcomes to second-line tuberculosis therapy between HIV-infected and HIV-uninfected patients in sub-Saharan Africa. Methods We conducted a literature search of PubMed/MEDLINE, EMBASE, SCOPUS and Google Scholar. We screened the retrieved records by titles and abstracts. Finally, we assessed eligibility and quality of full-text articles for the retained records by employing the appraisal checklist of the Joanna Briggs Institute. We analyzed the data extracted from the included studies by using Review Manager software, version 5.3, and presented our findings in forest and funnel plots. The protocol for this study was registered on PROSPERO (ID: CRD42020160473). Results A total of 19 studies were included; 1,766 of 4,481 HIV-infected and 1,164 of 3,820 HIV-uninfected patients had unfavorable outcomes. The risk ratios we estimated between HIV-infected and HIV-uninfected drug-resistant tuberculosis patients were 1.18 (95% CI: 1.07–1.30; I² = 48%; P = 0.01) for the overall unfavorable outcome; 1.50 (95% CI: 1.30–1.74) for death; 0.66 (95% CI: 0.38–1.13) for treatment failure; and 0.82 (95% CI: 0.74–0.92) for loss from treatment. Subgroup analyses showed variably increased risks of unfavorable outcomes, which were significant in mixed-age patients (RR: 1.22; 95% CI: 1.10–1.36) and in the eastern region of sub-Saharan Africa (RR: 1.47; 95% CI: 1.23–1.75). Conclusions We found a higher risk of unfavorable treatment outcomes in HIV-infected than in HIV-uninfected drug-resistant tuberculosis patients, with death being the most markedly increased outcome. The risks of the unfavorable outcomes were significantly higher in mixed-age patients and in the eastern region of sub-Saharan Africa. Therefore, special strategies that reduce the risk of death should be discovered and implemented for HIV and drug-resistant tuberculosis co-infected patients on second-line tuberculosis therapy, with optimal integration of the two programs in the eastern region of sub-Saharan Africa.
Introduction

Tuberculosis (TB) is one of the top 10 causes of death and the leading cause of infectious disease-related mortality [1,2]. In 2018, there were an estimated 10 million incident TB cases worldwide with about 1.5 million deaths [3]. More than 95% of TB-related cases and deaths occurred in developing countries [4]. The World Health Organization's (WHO) regional report in 2016 indicated a quarter (i.e., 2.5 million) of new cases and 417,000 deaths related to the TB disease burden in Africa alone [5]. Drug resistance is a key obstacle to global efforts to end TB infection [6]. Remarkably, drug-resistant TB (DR-TB), including multidrug-resistant TB (MDR-TB), remains a public health crisis and a health security threat [2]. It is further complicated in the presence of human immunodeficiency virus (HIV) co-infection [6]. For instance, DR-TB was two times more likely to develop in HIV-infected TB patients than in HIV-uninfected ones [7]. According to the WHO report in 2013, 3.5% of new cases and 20.5% of previously treated TB cases had MDR-TB [8]. MDR-TB is an infection that is resistant to rifampicin and isoniazid, both of which are the most powerful drugs in the first-line regimen for TB therapy [4].
The DR-TB can be treated effectively by second-line anti-tuberculosis drugs, which are toxic and require a treatment follow-up of about 2 years [4,9]. The appropriate regimen involving four or more drugs commonly combined from the core second-line medicines to treat DR-TB includes group A (levofloxacin, moxifloxacin, gatifloxacin), group B (amikacin, capreomycin, kanamycin, streptomycin), and group C (ethionamide/prothionamide, cycloserine/terizidone, linezolid, clofazimine) plus one drug or none from the add-on agents in group D (pyrazinamide, ethambutol, high-dose isoniazid, delamanid, bedaquiline, p-aminosalicylic acid, imipenem-cilastatin, meropenem, and amoxicillin-clavulanate) [10]. However, the presence of HIV-coinfection influences the likelihood of successful outcomes of treatment with second-line anti-tuberculosis medicines; for this reason, the two infections are often described as a cursed duet that coexist and worsen each other's outcomes [11,12].
High mortality has been unfavorably associated with the treatment of DR-TB [13], and such an unsuccessful treatment outcome is more distressing to communities in resource-limited settings such as sub-Saharan Africa (SSA). Again, the rate of unfavorable treatment outcomes with second-line tuberculosis therapy is alarming and threatens global progress towards the End TB Strategy targets set for 2030 [9]. The WHO defines the unfavorable treatment outcome for DR-TB as the sum of the numbers of death, treatment failure, loss to follow-up and unknown outcome identified during the course of second-line tuberculosis therapy [14]. Prolonged therapy is required for the toxic second-line anti-tuberculosis treatment, which is burdensome for patient compliance [13]. This prolonged treatment with a less tolerated second-line regimen may lower patient compliance, which in turn could affect the outcome of therapy [15,16]. As a result, a lower rate of unfavorable outcomes during DR-TB treatment is a key indicator of a successful response to the epidemic [17]. More importantly, the clinical complexity linked to the co-management of DR-TB and HIV-infection demands comprehensive evidence to inform a successful treatment strategy for second-line anti-tuberculosis drugs in HIV-infection [18]. Accordingly, a comprehensive estimate of the risks of unfavorable outcomes to second-line TB therapy among HIV-infected compared to HIV-uninfected patients is mandatory to address such aspects in pillars of the target set. Therefore, this study aimed to pool the overall risk ratio (RR) for the unfavorable outcome to second-line TB therapy between HIV-infected and HIV-uninfected patients in SSA.
Study protocol
The method of this meta-analysis was reported as per the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement recommendations [19]. We performed selection of records, screening processes and eligibility evaluations against the predefined inclusion criteria following the PRISMA flow diagram [20]. We also strictly followed the PRISMA checklist during execution of this meta-analysis. The protocol for this meta-analysis was registered on the International Prospective Register of Systematic Reviews (PROSPERO) (ID: CRD42020160473).
Data search strategy
We conducted systematic searches of databases and legitimate indexing services to identify and include potential records. PubMed/MEDLINE (Ovid), EMBASE (Ovid), and SCOPUS were searched as major data sources from December 25, 2019, to February 15, 2020. Besides, we also searched Google Scholar and ResearchGate/directories to retrieve relevant records not covered by the legitimate databases visited. The records identified through ResearchGate/directories were individually saved and linked to Endnote via Google Scholar. Again, we searched for unpublished studies (grey literature) through Google Scholar. We considered the unpublished studies to reduce the impact of publication bias. The search strategy involved combinations of one or more of the following terms: "second-line*", "drug-resistant", multidrug-resistant (MeSH), tuberculosis (MeSH), "HIV-infection", "loss to follow-up", treatment failure (MeSH), death (MeSH), and Africa, South of the Sahara (MeSH). Moreover, we employed truncation of search terms and Boolean operators (AND, OR) as appropriate to expand and fine-tune the search strategy, thereby identifying and including more records.
Inclusion and exclusion criteria
We screened original articles that addressed treatment outcomes of second-line anti-tuberculosis drugs among HIV-infected compared to HIV-uninfected patients and were reported in the English language for inclusion in the meta-analysis. As such, we assessed the eligibility of studies reporting unfavorable outcomes (i.e., treatment failure, death, loss from treatment (i.e., either loss to follow-up or treatment default)) to second-line TB therapy among HIV-infected versus HIV-uninfected patients, with at least one of the outcomes in the definition of the unfavorable outcome, and conducted in Africa, South of the Sahara. However, we excluded articles with outcomes unrelated to the outcome of interest (i.e., unfavorable outcomes not reported as died, treatment failed or lost from treatment) during the screening and eligibility assessments. Again, we excluded articles that reported outcomes of mixed patients from extensively drug-resistant TB and MDR-TB with no separate outcome report for MDR-TB during the course of treatment. Moreover, we excluded articles that fulfilled the eligibility evaluation for inclusion but did not meet the quality requirements.
Screening and eligibility assessment
First of all, we identified, downloaded, and exported records retrieved through a systematic search of electronic databases, indexing services, and directories with a compatible format to Endnote reference software, version 8.2 (Thomson Reuters, Stamford, CT, USA). Secondly, we identified, registered and removed duplicate records from the shortlisted references by the use of Endnote. Following this, we manually identified and removed duplicates resulting from variation in citation styles of some databases and indexing services. Next, two authors, Dumessa Edessa (DE) and Mekonnen Sisay (MS), independently screened the retained records by their titles and abstracts based on the predefined inclusion criteria. Finally, two authors, DE and Yadeta Dessie (YD), independently collected and evaluated full-texts of the retained articles for eligibility assessments.
Quality assessment and data extraction
We performed methodological quality assessments of the retained articles for inclusion by using the Joanna Briggs Institute's (JBI) critical appraisal checklist for cohort studies [21]. Accordingly, we employed the appraisal scores of the two authors, in consideration of the third author's score in case of disagreement between the two. Next, we ranked the articles by overall scores of positive responses to questions of the JBI's critical appraisal checklist for methodological quality. Finally, we included in the meta-analysis all studies which fulfilled the eligibility requirements and had overall positive scores higher than 50%.
We prepared a data abstraction format in a Microsoft Excel sheet to extract all relevant data on study characteristics (name of the first author, year of publication, study setting/country, study design, category of study participants (children, adolescents, adults, mixed-age patients), sample size, months of follow-up, and events of interest (number of patients with second-line tuberculosis treatment failure, number of patients who died during the treatment, and number of patients lost from the treatment follow-up)) in both the HIV-infected and HIV-uninfected patients.
Outcome variables
The pooled RR estimate for the overall unfavorable outcome to second-line tuberculosis therapy among HIV-infected versus HIV-uninfected patients as defined by the WHO [9] was the primary outcome variable. We also conducted subgroup analyses for the overall unfavorable outcome based on categories of study participants and regions of the SSA. The separate pooled RR estimates for deaths, treatment failure and loss from treatment among HIV-infected versus HIV-uninfected patients treated with second-line tuberculosis therapy were secondary outcome variables we considered.
Data synthesis and analysis
We entered study identifiers and the dichotomous, discrete outcome data from the included studies for HIV-infected versus HIV-uninfected DR-TB patients into Review Manager (RevMan) software, version 5.3, to analyze the pooled RR estimate of the overall unfavorable outcome and to perform subgroup analyses. We employed the same software to estimate RRs for the outcomes of death, treatment failure and loss from treatment. Again, we conducted subgroup analyses for the overall unfavorable outcome by the age category of patients and by regions/settings of SSA. We also conducted a subgroup analysis to estimate the RR for the death outcome by regions of SSA. Considering the variation in true effect sizes across study subjects, we applied the Mantel-Haenszel random-effects method for the analyses at a 95% confidence level. We assessed the variation in study characteristics (heterogeneity) by using Tau², chi² and I² statistics. We also employed RevMan 5.3 for publication bias assessment by the symmetry of funnel plots of the standard error of the logit RR. We employed forest and funnel plots to present the results of our analyses.
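For readers who want to reproduce this kind of pooling outside RevMan, the sketch below implements a generic inverse-variance DerSimonian-Laird random-effects pooling of log risk ratios, with Cochran's Q and an I² estimate, in Python. The per-study counts are made up for illustration, and the method shown is a simplified stand-in rather than the exact Mantel-Haenszel random-effects computation used in this meta-analysis.

```python
import numpy as np

# Hypothetical per-study counts: events/total in HIV-infected vs HIV-uninfected.
studies = [  # (events_hiv, n_hiv, events_nohiv, n_nohiv)
    (120, 300, 90, 280),
    (45, 150, 40, 160),
    (200, 520, 130, 450),
]

log_rr, var = [], []
for e1, n1, e0, n0 in studies:
    rr = (e1 / n1) / (e0 / n0)
    log_rr.append(np.log(rr))
    # Approximate variance of log RR for a cohort study.
    var.append(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)
log_rr, var = np.array(log_rr), np.array(var)

# Fixed-effect (inverse-variance) pooling and Cochran's Q.
w = 1 / var
pooled_fe = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - pooled_fe) ** 2)
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100  # heterogeneity (%)

# DerSimonian-Laird between-study variance and random-effects pooling.
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (var + tau2)
pooled_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = np.exp([pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re])
print(f"Pooled RR = {np.exp(pooled_re):.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f}), I^2 = {i2:.0f}%")
```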
Results
From the systematic electronic search of legitimate databases and indexing services, we retrieved a total of 1134 records. After removing 280 duplicate records by the use of Endnote and manual screening, we were left with 854 records. Following this, we screened the retained records by their titles and excluded 346 records. Again, we excluded 415 records by screening abstracts. Of these 761 records excluded by screening titles and abstracts, 556 had an outcome unrelated to the outcome of interest; 180 were discussion papers; 18 were records from outside settings or mixed settings with no separate data for SSA; and 7 were published in non-English languages. Next, we conducted quality and eligibility assessments for 93 full-text articles against the predefined inclusion and exclusion criteria. Accordingly, we excluded 74 full-text articles with reasons (i.e., 51 with irrelevant outcomes, 20 with insufficient information, and 3 with mixed patients from extensively DR-TB and MDR-TB but no separate outcome reported for MDR-TB treatment). In line with this, we employed the JBI's critical appraisal checklist for methodological quality assessment of the retained articles (i.e., see the score for each article in S1 Table).
In the end, we included 19 articles with an overall positive score greater than fifty percent on the methodological quality assessment and with a report of at least the overall unfavorable outcome or any outcome that constitutes its definition (i.e., death, treatment failure, loss from treatment). The PRISMA flow chart illustrating the identification, screening, and eligibility assessment processes is shown in Fig 1 and S2 Table.
Study characteristics
The 19 studies included in this meta-analysis comprised 4,481 HIV-infected DR-TB patients and 3,820 patients with DR-TB infection only. Overall, the pooled RR estimate for the unfavorable outcome among HIV-infected versus HIV-uninfected patients was 1.18 (95% CI: 1.07-1.30; I² = 48%; P = 0.01) (Fig 2). Similarly, the pooled estimate of the RR for death among HIV-infected versus HIV-uninfected patients treated with second-line anti-tuberculosis treatment was 1.50 (95% CI: 1.30-1.74; Z = 5.55; P<0.00001; I² = 39%; P = 0.05). The RR estimates for the death outcome in the included studies ranged from 0.73 (95% CI: 0.45-1.16) to 2.18 (95% CI: 1.02-4.65) (Fig 3).
Sensitivity and subgroup analyses
We executed sensitivity analyses by excluding two outliers [22,24] and/or more studies [33, 39], but these did not significantly change the degree of heterogeneity among the included studies. Following this, we included all studies in the meta-analysis. The studies we considered as outliers for the sensitivity analyses were those with RR estimates far smaller and/or greater than the pooled RR estimate for the outcome of interest. We also performed subgroup analyses for the overall unfavorable outcome based on the age group of patients (adults and adolescents, mixed-age, children) and the regions of SSA in which the studies were carried out (western, eastern, southern) to reduce the degree of heterogeneity in the included studies. As a result, the pooled RR estimates for the unfavorable outcome in HIV-infected versus HIV-uninfected patients treated by second-line TB therapy were 1.22 (95% CI: 1.10-1.36) in mixed-age patients and 1.47 (95% CI: 1.23-1.75) in the eastern region of SSA, with no significant increases in the remaining subgroups. Similarly, we performed sensitivity analyses by excluding two outliers [24, 36] and/or more studies [27,29] for the pooled RR estimate of death between the two comparison groups. However, these did not significantly change the degree of heterogeneity in the included studies. Besides, we excluded outliers [33, 37] and one or more studies for sensitivity analyses of the RR estimate for treatment failure, but none significantly changed the degree of heterogeneity among the included studies. Moreover, we excluded outliers [35, 40] and/or more studies [27, 29], but these did not significantly change the degree of heterogeneity among the included studies. As a result, we included all the studies reporting deaths (n = 16), treatment failure (n = 10) and loss from treatment (n = 14) in their respective meta-analyses.
Publication bias
We assessed small-study effects (publication bias) on our estimates using funnel plots under the fixed-effects model, which allowed us to visually evaluate the symmetry of each funnel plot (i.e., a symmetrically inverted funnel in the absence of bias). However, we did not perform either Egger's regression test or Begg's correlation test, because two of the criteria required for the appropriateness of these quantitative tests were not fulfilled. That is, the ratio of extreme variances across studies was not greater than four, and the pooled RR estimate for the unfavorable outcome showed statistically significant heterogeneity (I² = 48%; P = 0.01), even though more than 10 studies were included and more than one study had significant results [42,43]. In principle, visual evaluation showing effect estimates from larger studies spreading narrowly at the top of the plot, with more widely scattered estimates from smaller studies at the bottom, could indicate the presence of bias. Again, the inclusion of unpublished studies in our analysis might have reduced the risk of publication bias. Accordingly, none of the funnel plots showed effect estimates that were scattered more widely at the bottom of the plots (Fig 7A-7D).
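A funnel plot of this kind can be drawn directly from the per-study effect sizes. The sketch below (hypothetical log risk ratios and standard errors, matplotlib) is a generic illustration of the visual symmetry check described above, not a reproduction of the plots in this study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-study log risk ratios and their standard errors.
log_rr = np.array([0.10, 0.25, -0.05, 0.40, 0.18, 0.05, 0.30])
se = np.array([0.08, 0.15, 0.20, 0.30, 0.12, 0.25, 0.18])

pooled = np.sum(log_rr / se**2) / np.sum(1 / se**2)  # fixed-effect pooled log RR

plt.scatter(log_rr, se)
plt.axvline(pooled, linestyle="--")
plt.gca().invert_yaxis()          # smaller SE (larger studies) at the top
plt.xlabel("log risk ratio")
plt.ylabel("standard error")
plt.title("Funnel plot (hypothetical data)")
plt.show()
```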
Discussion
The overall risk of unfavorable treatment outcome to second-line anti-tuberculosis drugs was 1.18 times higher among HIV-infected patients than among HIV-uninfected ones. This represents a significant 18% increase in HIV-infected compared with HIV-uninfected patients, both treated with second-line anti-tuberculosis medicines. Despite their varying magnitudes, several studies have reported higher rates of unfavorable outcomes to second-line TB therapy in HIV-infected patients than in those without HIV-infection. Accordingly, a study comparing the proportion of successful outcomes during second-line TB therapy indicated that 28% of HIV-infected and 16% of HIV-uninfected patients had unfavorable outcomes [44]. Another study also reported that the risk of unfavorable outcome was 1.14 times higher among HIV-infected patients than among HIV-uninfected ones [45]. Again, several other studies with different sample sizes that assessed outcomes of second-line TB therapy reported similarly elevated risks of unfavorable outcomes in HIV-infected patients. Importantly, a lower rate of favorable outcome (48%) than the global rate of successful outcome (54%) during second-line TB therapy among HIV-infected patients indicates the difficulty of attaining high rates of favorable outcome among TB/HIV-coinfected patients [55,56]. In line with this, TB treatment interruptions and more hospital readmissions due to adverse drug events linked to interactions and treatment complexities might also contribute to the increased rate of unfavorable outcomes in TB/HIV-coinfection [57,58]. Also, negative influences such as social discrimination and inattention were important barriers to effective treatment of TB/HIV-coinfection, which carries high mortality rates (i.e., death as a component of the unfavorable outcome) [59]. Again, drug interactions, overlapping toxicities, and immune reconstitution inflammatory syndromes are unique treatment challenges and part of the cursed duet of TB/HIV-coinfection [60,61]. Specifically, severe side effects and high levels of fatigue, stress, and stigma burden were critical barriers to patient adherence in MDR-TB/HIV-coinfection [62]. More remarkably, it has been hypothesized that TB/HIV interactions alter transcriptional patterns and gene expression in ways that increase inflammatory conditions and thereby contribute to the unfavorable outcome [63].
Significantly increased risks of an unfavorable outcome to second-line anti-tuberculosis treatment among HIV-infected versus HIV-uninfected patients were revealed for some patient groups (i.e., 1.22 times higher in mixed-age patients and 1.47 times higher in the eastern region of SSA, compared to their respective counterparts), but the increases in the rest of the subgroups were not significant. Consistently, reports of previous studies involving mixed-age patients indicated that older HIV-infected patients had 1.53 [64] and 4.8 [49] times higher hazards of unfavorable outcomes than older HIV-uninfected patients. Again, pediatric and elderly individuals were among the mixed-age patients, in whom immune status might not be as competent as in adult/adolescent counterparts. This could be related to immature and reduced immune function in children and elderly individuals, respectively. However, treatment success for both DR-TB and HIV-infection requires optimal function of immune cells. The immune function of such individuals is further weakened in the presence of HIV and TB co-infection compared with TB infection alone. The increased risks of unfavorable outcomes in mixed-age patients treated with second-line TB therapy among HIV-infected versus HIV-uninfected patients may reflect these differences, linked to the worsening impacts of both infections on immune competence and to age-related physiologic immaturity and/or changes. Alternatively, a study report also highlighted no significant association of the unfavorable outcome with age group, which aligns with the non-significant risk ratio in the adolescent/adult groups treated by second-line anti-tuberculosis drugs [65]. Again, a lack of efficiently integrated services for HIV and DR-TB units in the eastern region of SSA could contribute to the higher risks of an unfavorable outcome to second-line anti-tuberculosis treatment among HIV-infected than HIV-uninfected patients. In alignment with this finding, positive influences of interventions that effectively integrated TB/HIV control programs on the outcomes of the two infections were indicated in the southern region of SSA [48, 66,67].
Similarly, the risk of death during treatment with second-line anti-tuberculosis drugs was significantly increased by 50% (1.50 times higher) among HIV-infected patients compared with HIV-uninfected ones. Consistently, variable but positive associations have been reported between mortality during the course of the second-line anti-tuberculosis regimen and the presence of HIV-infection, with 1.46 (in a study with 3,802 participants) [68]; 1.7 (for ART initiated) and 2.3 (for no ART) (in a study with 3,566 participants) [69]; 2.35 (in a study with 147 participants) [70]; 4.2 (in a study with 1,768 participants) [71]; 5.6 (in a study with 2,097 participants) [72]; 5.6 (in a study with 1,209 participants) [73]; and 29.9 (in a study with 50 participants) [74] times higher risks or odds of death in HIV-infected patients than in HIV-uninfected counterparts. Again, reports of mortality rates during second-line TB therapy among HIV-infected versus HIV-uninfected patients showed 14% versus 6% in a study with 206 participants [75], 20% versus 9% in a study with 671 participants [76], and 72% versus 20% in a study with 173 participants [77]; all of these aligned with our study finding. Besides, early mortality and mortality adjusted after default were the most common reasons given for the higher risks of unfavorable outcomes during second-line TB treatment in HIV-infected versus HIV-uninfected patients [78,79].
In contrast to the risks of death and the overall unfavorable outcome, the rates of loss from treatment and treatment failure were relatively lower during second-line TB treatment in HIV-infected patients than in HIV-uninfected ones. A non-significant decrease in the pooled risk of loss from treatment was estimated, with the risk being 0.82 times as high in HIV-infected as in HIV-uninfected patients. In agreement with this finding, study reports indicated loss-to-follow-up rates of 11.8% versus 26.2% (P<0.001) and 14.2% versus 35.2% among HIV-infected versus HIV-uninfected patients, and loss to follow-up was 7.67 (95% CI: 1.00-59.0) times more likely in HIV-uninfected than in HIV-infected patients [80][81][82]. Also, reminder messages via cell phone as part of an integrated HIV/TB program enabled tracing of lost patients, and traced patients were twice as likely to return as untraced ones [83]. Such a program could reduce the number of losses to follow-up in HIV/TB co-infection. Again, a relatively higher number of treatment defaults reclassified as deaths in HIV-infected versus HIV-uninfected MDR-TB patients, through continuous tracing of patients lost from follow-up, could indicate a negative impact on the number counted as lost from treatment among HIV-infected patients [78]. Besides, patient-provider interactions were likely stronger, with more patient empowerment and support, during second-line TB therapy among HIV-infected patients compared with HIV-uninfected ones [84,85]. More importantly, higher rates of severe adverse drug events and hospital readmissions due to these events during second-line TB treatment in HIV-infected than in HIV-uninfected patients might reduce the rates of loss from treatment [86,87].
Despite inconsistencies in the reports of some studies regarding this finding, infection management strategies for both HIV and TB programs could have a synergistic effect that reduces the risk of loss from treatment during second-line TB therapy in patients with HIV-infection. We estimated a statistically non-significant reduced risk of treatment failure in HIV-infected compared with HIV-uninfected patients. In line with this, a study report also highlighted a non-significant increase in the odds (1.1 times) of treatment failure among HIV-infected patients [73]. Alternatively, another study indicated 1.6 times higher odds of treatment failure in HIV-uninfected than in HIV-infected patients [79]. Again, a previous study described more frequent support for compliance in HIV-infected patients (23%) than in HIV-uninfected ones (7%) [81]. Accordingly, good compliance with treatment is a key component of strategies that can reduce the rates of treatment failure [88][89][90][91][92][93][94].
Despite the large amount of individual patient data pooled for this meta-analysis, it is not without limitations. First, the studies considered for this meta-analysis were observational by nature. This selection might have resulted in a higher degree of heterogeneity with a range of potential biases. However, we employed a random-effects model of analysis, which is an appropriate method for such anticipated heterogeneity. Besides, we executed sensitivity and subgroup analyses to reduce the degree of heterogeneity. Second, we included only articles written in the English language, and this could under- or over-estimate the pooled RR estimates for unfavorable outcomes during second-line TB therapy in SSA. Therefore, these findings should be interpreted in the context of the aforementioned limitations.
Conclusions
We found that the risk of the overall unfavorable outcome to second-line TB therapy among patients treated in SSA was significantly higher in HIV-infected compared with HIV-uninfected patients. It was particularly increased in mixed-age patients and in the eastern region of SSA. The risk of death was alarmingly increased by 50%, whereas treatment failure and loss from treatment occurred at lower rates in HIV-infected than in HIV-uninfected patients. Therefore, special strategies that reduce the risk of death should be developed and implemented for HIV and DR-TB co-infected patients on second-line tuberculosis therapy. Besides, integrating the HIV and DR-TB treatment strategies in the eastern region of SSA could optimize outcomes of HIV-infected patients during their second-line TB therapy.
Supporting information S1
Dynamic Transcriptional Landscape of Grass Carp (Ctenopharyngodon idella) Reveals Key Transcriptional Features Involved in Fish Development
A high-quality baseline transcriptome is a valuable resource for developmental research as well as a useful reference for other studies. We gathered 41 samples representing 11 tissues/organs from 22 important developmental time points within 197 days of fertilization of grass carp eggs in order to systematically examine the role of lncRNAs and alternative splicing in fish development. We created a high-quality grass carp baseline transcriptome with a completeness of up to 93.98 percent by combining strand-specific RNA sequencing and single-molecule real-time RNA sequencing technologies, and we obtained temporal expression profiles of 33,055 genes and 77,582 transcripts during development and tissue differentiation. A family of short interspersed elements was preferentially expressed at the early stage of zygotic activation in grass carp, and its possible regulatory components were discovered through analysis. Additionally, after thoroughly analyzing alternative splicing events, we discovered that retained intron (RI) alternative splicing events change significantly in both zygotic activation and tissue differentiation. During zygotic activation, we also revealed the precise regulatory characteristics of the underlying functional RI events.
Introduction
For a long time, mRNA was the undisputed focus of RNA research. However, over the past 20 years, our understanding of RNA has substantially increased, and an increasing number of RNA types, functions, and regulatory mechanisms have been identified. Among the most intriguing are the broad roles of long non-coding RNAs (lncRNAs) and alternative splicing in biological processes.
A family of RNA molecules known as lncRNAs are those that are longer than 200 bp and do not encode proteins. They can take part in a variety of life activities, including gene expression regulation, cell differentiation and development regulation, disease occurrence, and other biological processes [1][2][3]. Expressed retrotransposons, for example, are selectively active during the process of mammalian zygotic gene activation (the process of epigenetic reprogramming) [4][5][6][7].
Alternative splicing is the mechanism by which cells generate numerous mature mRNA products from a single pre-mRNA. It is a common mechanism in eukaryotes and is involved in a variety of biological processes, including development, cell and tissue differentiation, and disease incidence [8][9][10]. Retained intron (RI) is one sort of alternative splicing pattern in which alternative introns are not spliced out but are retained in mature mRNA. RI is the most common type of alternative splicing event in plants and fungi, and it has been extensively researched [11,12]. It has also been gradually demonstrated to be functional in animals [13]. RI may regulate gene expression via a variety of ways, including nonsense-mediated mRNA decay (NMD), blocking mRNA export from the nucleus, and creating novel gene products. It is vital for plant development and stress resistance, and it has also been linked to a number of human disorders [13,14].
RNA-seq technology based on next-generation sequencing considerably simplifies transcriptome research. Because it does not rely on poly(A) tails for library construction, Illumina strand-specific RNA sequencing (ssRNA-seq) technology is frequently employed in systematic identification and expression profiling of lncRNAs [15][16][17]. Because its long sequencing reads offer particular advantages for resolving transcript structure, single-molecule real-time sequencing (SMRT)-based RNA-seq is frequently employed for full-length transcript assembly and alternative splicing analyses [18,19].
These sequencing technologies have made it much easier to construct a baseline transcriptome in a number of different species. As a result, our knowledge regarding the function of various RNAs and the alternative splicing events they participate in during the course of development has been significantly expanded [20][21][22][23][24][25][26][27]. Research in developmental biology can benefit from using fish embryos, since they are simple to observe and manipulate and are suitable material for study. Expression profiling across multiple developmental time points will therefore help us gain a deeper understanding of the molecular control of ontogeny and differentiation at the transcriptional level.
Single-molecule real-time (SMRT)-based RNA-seq has been widely used in fish studies to examine alternative splicing events. However, these studies have mostly focused on transcriptome assembly [28][29][30][31] and specific study subjects [32][33][34][35][36][37][38], and have not systematically examined how the expression of alternatively spliced transcripts changes over time.
At the moment, most research on fish baseline expression is done on zebrafish, the most famous model fish. RNA microarray technology was used by Mathavan et al. in 2005 to look at the expression of 14,904 genes at 12 different times, from when the eggs were still unfertilized to when they were two days old [39]. Later, when RNA-seq technology advanced, the dynamics of gene expression throughout zebrafish development were investigated in more depth [40][41][42][43][44]. In comparison, studies on other fish species are not only scarce, but also of poor quality (with fewer time points). For example, only three samples encompassing three developmental time periods were gathered in a study of the Siberian sturgeon (Acipenser baeri) [45]. No more than five samples were collected per species in a comparative transcriptome investigation of six cichlid fishes [46]. Only six time points were explored in the bighead (Hypophthalmichthys nobilis) study, primarily including its early stage development [47]. In addition, 7 time points spanning 2 to 10 days post-fertilization (2-10 dpf) were sampled in the investigation of transcriptome dynamics of channel catfish (Ictalurus punctatus). However, because developmental morphology was not used as a reference during sampling, crucial time points in embryonic development were not gathered [48].
With an annual output of 5.7 million tons in 2018, accounting for 6.9 percent of the world's total farmed fish production, grass carp (Ctenopharyngodon idella) is the world's most productive farmed fish species [49]. Currently, grass carp lncRNA research focuses mostly on the immunological response to infection, environmental stress response, feeding, and growth [50][51][52][53], with no systematic study describing lncRNAs. The availability of the draft grass carp genome has tremendously aided genetic breeding research and other basic grass carp research [52,[54][55][56][57]. However, the draft genome annotation lacks information on lncRNAs and alternatively spliced transcripts, making it challenging to meet the demands of increasingly diversified data processing. As a result, a thorough baseline expression investigation of grass carp is required to expand its gene functional annotation while also gaining basic knowledge of gene transcription dynamics in its development and differentiation.
In this study, we used SMRT-based RNA-seq and ssRNA-seq technology to systematically examine the essential transcriptional events in grass carp embryonic development to differentiated tissues. We collected 41 samples of 11 grass carp tissues at 14 time periods, built a transcriptome with a completeness of up to 93.98 percent (BUSCO), enhanced grass carp genome annotation, and recreated the entire grass carp development and differentiation landscape at the transcriptional level. We discovered a SINE family and its motif-related transcripts with unique expression patterns, and we detailed the specificity of RI alternative splicing events in embryonic development and tissue differentiation, as well as their possible regulatory elements.
Transcriptome Sequencing and Assembly
Given the spatiotemporal specificity of gene expression, we sampled as many time points and tissue types as possible. We obtained 41 samples for transcriptome sequencing in a whole sibling population of grass carp, from fertilized egg to 197 days following fertilization, covering 22 time periods and 11 tissues (embryos are considered one tissue) (Table 1). These developmental stages are separated into four stages: zygotic genome activation (ZGA, T2-T8), gastrula to hatching out (GH, T9-T16), larvae (T17-T20), and juveniles. After sequencing, we were able to extract a total of 621 Gb of Illumina short reads and 87 Gb of PacBio long reads. Following assembly (shown in Figure 1A), we obtained a total of 33,055 genes and 77,582 transcripts. In contrast to the annotation results of the grass carp genome (gc.v1) [54], the grass carp transcriptome (gc.v2) includes a greater number of alternative splicing events as well as more annotations for non-coding RNA (Figure 1B,C, Supplemental Table S1).

Figure 1. The gc.v2 transcriptome assembled in this study compared with the gc.v1 draft genome annotation [54]: gc.v2 contains more genes and transcripts than gc.v1; (C) statistics on the types of transcripts in the gc.v2 transcriptome; (D) BUSCO completeness analysis of gc.v2 against a ray-finned fish database, where gc.v2* additionally assumes that all clusters produced by SMRT sequencing can generate transcripts (i.e., gc.v2 plus the transcripts that could not be aligned to the grass carp genome in the SMRT data analysis). D. rerio, zebrafish transcriptome (GRCz11); M. anguillicaudatus, loach transcriptome based on SMRT sequencing [28]; O. brevibarba, Onychostoma brevibarba transcriptome based on SMRT data [36].
The completeness of the gc.v2 transcriptome reaches 93.98 percent, which is better than that of the loach (Misgurnus anguillicaudatus) and Onychostoma brevibarba transcriptomes, which are also based on SMRT sequencing, and only lower than that of the zebrafish transcriptome (GRCz11) (Figure 1D). When the SMRT transcripts that could not be matched to the grass carp genome are taken into consideration, the completeness of our assembled transcriptome is remarkably similar to that of the zebrafish transcriptome: a difference of 1.72 percent, or 79 out of 4584 total transcripts.
The Developmental Trajectory of Grass Carp
The expression levels were used in a principal component analysis (PCA) of all 41 samples, and it was discovered that three-dimensional PCA had the ability to recreate the developmental trajectory and tissue differentiation scenarios of the samples (Figure 2). Principal component 1 (PC1) is able to effectively differentiate embryos from differentiated tissues; the larva fish (T17-T20) within one month of hatching are closer to differentiated tissues, whereas the hatching fish (T16) lies between the embryos and the juveniles. There are a total of 4428 transcripts associated with PC1. According to the results of the GO enrichment analysis, they were primarily found in the nucleus (cellular component, CC), participated in biological pathways (BP) associated with the metabolism of nucleic acids, and had nucleic acid binding as their most important molecular function (Figure 3A). The majority of the considerably enriched KEGG pathways are those connected to immunological control and fundamental life regulation (Figure 3B). There are 4068 PC2-related transcripts, with more non-coding RNAs found than expected (p = 5.33 × 10⁻³⁸) (Supplemental Table S3). These transcripts were not enriched in any KEGG pathway; GO enrichment analysis shows that they were mainly development-related transcription factors (Supplemental Figure S1). There are 4884 PC3-related transcripts, with more mRNAs found than expected (p = 2.30 × 10⁻⁹). GO enrichment analysis shows that they were mostly related to ion transport (Supplemental Figure S2). The most significant pathways in the KEGG enrichment analysis were nicotine addiction, glutamatergic synapse, and neuroactive ligand-receptor interaction (Supplemental Figure S3).
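As a generic illustration of this kind of analysis, the sketch below runs a PCA on a small, made-up expression matrix (samples × genes, TPM-like values) with scikit-learn. The sample and gene dimensions are hypothetical, and the preprocessing (log transform) is an assumption for the sketch, not the exact pipeline used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical expression matrix: 6 samples (rows) x 100 genes (columns), TPM-like.
samples = ["embryo_T6", "embryo_T8", "larva_T18", "brain", "liver", "muscle"]
tpm = rng.lognormal(mean=2.0, sigma=1.0, size=(len(samples), 100))

# Log-transform to stabilize variance before PCA (an assumed preprocessing step).
log_expr = np.log2(tpm + 1)

pca = PCA(n_components=3)
coords = pca.fit_transform(log_expr)

for name, (pc1, pc2, pc3) in zip(samples, coords):
    print(f"{name:12s} PC1={pc1:7.2f} PC2={pc2:7.2f} PC3={pc3:7.2f}")
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
```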
Fish Brain Is Highly Differentiated
In order to explain changes in tissue composition in grass carp based on expression patterns, we carried out PCA analysis on all tissue samples ( Figure 4A). It was discovered that PC1 based on tissue samples was highly comparable to PC3 based on all samples, and that PC2 based on tissue samples was highly similar to PC4 based on all samples. (Supplemental Figure S4). The differences between the various tissues of grass carp were, on average, substantially greater than those between the several phases of development present in the same tissue ( Figure 4A). There was a significant difference between brain tissue and the other tissues, and the majority of this difference was explained by PC1 with an explanatory degree of up to 25 percent. PC1-related genes were mainly enriched in GO terms related to ion channels, ion transport, and synapses ( Figure 4B), which clearly reflected the characteristics of metabolic activity in the brain; the most significant pathways of KEGG enrichment analysis were nicotine addiction, glutamatergic synapse, and neuroactive ligand-receptor interaction ( Figure 4C).
Muscle Development and Basic Metabolic Regulation Describe Major Differences among Fish Organs
Except for the brain, PC2 and PC3 in the tissue PCA jointly described the origin of the tissues (Figure 4D). Muscle (HM) and heart (HH) were grouped together; liver (HL), spleen (HP), gut (HI), and kidney (HK) were grouped together to the left of PC2; and skin and gills were grouped together to the top right.
Significant GO terms in PC2 were largely connected to muscle, while significant KEGG pathways were mostly related to cardiac disease, according to enrichment analysis (Supplemental Figures S5 and S6). PC3-related genes were enriched in GO terms related to enzymes, amino acid metabolism, and organ development, while the significant KEGG pathways were mostly related to immunity (complement and coagulation cascades), regulation (ribosome), and amino acid metabolism (glycine, serine, and threonine metabolism) (Supplemental Figures S7 and S8). Coding RNAs were substantially more abundant in PC2-related genes, whereas non-coding RNAs were more abundant in PC3-related genes (Supplemental Table S4).
Expression Scenery of Grass Carp Transposon
According to the annotation of the grass carp genome, transposable elements (TEs) are widely distributed in the grass carp genome, accounting for 38.06 percent of total genome sequences (Supplemental Table S5). Moreover, based on the gc.v2 transcriptome, we discovered that the majority of transcripts are connected with repeat elements (52,686 TE-related transcripts), yet only 64 of these transcripts were classified as transposons (see Section 4). When compared to their distribution in the genome, the discovered expressed transposons were imbalanced ( Table 2, Supplemental Table S5). For example, the widely dispersed DNA transposons DNA/hAT (23.53 percent) were not found to be expressed, although DNA/TcMar (6.29 percent) were. Long terminal repeats (LTR) were a very prevalent retrotransposon (5.22 percent) in the grass carp genome, but only two of them were found to be expressed. The two most often expressed retrotransposons, long interspersed nuclear element (LINE) and short interspersed nuclear element (SINE), belonged to non-LTR that were relatively infrequent (4.8 percent) in grass carp total TEs. Unclassified SINE, on the other hand, accounted for 64.06 percent (41/64) of all expressed transposons, despite accounting for only 0.6 percent of total TEs. Even more intriguing, all of the discovered expressed SINE transposons are members of rnd-3_family-293.
SINE Element rnd-3_family-293 Is Specifically Expressed during Embryonic Development
The embryonic transcriptome of grass carp largely maintained the content of maternal RNA before the activation of the zygotic genome (T2-T6), and the amount of various RNA molecules was reasonably steady (Figure 5A). Transposon expression was rigorously restricted at the T2-T5 (ovum to 64-cell) stages compared to other genes, but was clearly detectable at the T6 stage (256-cell, Figure 5B). At this stage, all expressed transposon elements belonged to rnd-3_family-293. These transcripts were only expressed during embryonic development (T6-T16, Figure 5C), and their expression patterns were strikingly consistent.
Motif Analysis for Family rnd-3_family-293
The rnd-3_family-293 transposons have a tRNA-derived head sequence, a body with a 65 bp Ceph domain [58], a 70 bp CORE domain [59], and a 116 bp tail designated as unknown domain ( Figure 5D). We examined the motifs of the 41 expressed transposons in rnd-3_family-293, and the large majority of these sequences contained motif 1 (41/41) and motif 5 (30/41) ( Figure 5E,F), with the two motifs positioned in the upstream region of the tRNA header ( Figure 5G).
Sequences Possessing Both Motif 1 and Motif 5 Were Specifically Expressed during Embryonic Development
We analyzed the transcriptome for transcripts associated with motif 1 and motif 5 to see whether these motifs could be regulatory sequences. There was a strong correlation between the expression patterns of rnd-3_family-293 and those of motif-related transcripts (transcripts containing both motifs) (Figure 5C,H); transcripts containing only motif 1 or only motif 5 did not show this expression pattern (Figure 5I,J). We performed qPCR on eight motif-related genes and eight rnd-3_family-293 genes. All eight motif-related genes are lncRNAs; two of them, CIWT.15163.1 and CIWT.15487, were predicted as tRNAs using Rfam (Supplemental Table S6). As predicted by high-throughput sequencing, the qPCR data showed that the motif-related transcripts and the rnd-3_family-293 transposon showed differential expression across T6 to T8, with the maximum level of expression occurring at T7 and very low levels at other stages (T2, T5, T20) (Supplemental Figure S9).
The composition ratio of alternative splicing types of grass carp samples from gastrulation to juvenile stage (GH, larva, juvenile) is close to the overall ratio (all) ( Figure 6B). While the composition ratio in the ZA stage samples was distinct, the proportion of RI was much lower than in other stages of development ( Figure 6B). However, the compositional ratio of ZA resembled that of ZA.zf (zebrafish ZGA stage) ( Figure 6B).
RI Events Change Dramatically during T6-T7
The Ψ value was used to describe the expression preference for splicing events. When Ψ = 1, all expressed transcripts are form 1; when Ψ = 0, all expressed transcripts are form 2; when Ψ = 0.5, two types of transcripts are equally expressed ( Figure 6A, see Methods).
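To make the Ψ definition above concrete, the sketch below computes Ψ for a single splicing event from the expression levels of its two isoform groups. The isoform labels and expression values are hypothetical, and treating Ψ as form-1 expression divided by total expression is an assumption consistent with the verbal definition, not necessarily the study's exact estimator.

```python
def psi(form1_expr, form2_expr):
    """Fraction of a splicing event's expression contributed by form 1.

    Returns None if neither form is expressed, since Psi is then undefined.
    """
    total = form1_expr + form2_expr
    return None if total == 0 else form1_expr / total

# Hypothetical expression (e.g., TPM) of intron-retained (form 1) vs spliced (form 2)
# isoforms of one RI event.
print(psi(2.0, 8.0))   # 0.2 -> mostly the spliced form
print(psi(6.0, 6.0))   # 0.5 -> both forms equally expressed
```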
The distribution of Ψ values for AL and AF varied widely across all developmental time series samples (T2-T20), but these differences were not statistically significant due to the small number (<40) of detected alternative splicing events for both types (Supplemental Figure S10, Supplemental Table S7). The distribution of the Ψ value of splicing patterns A3, A5, AF, and SE altered from flat to narrow from before to after zygotic genome activation (T2-T6), indicating that the two kinds of alternatively spliced transcripts were expressed more uniformly in the embryos after zygotic activation. The most substantial changes in RI occurred during embryonic development, demonstrating a considerable rise in the mean value of Ψ during zygotic activation (Figure 6C). According to the p-value distribution of RI, its variation in embryonic phases was also substantially greater than that of other splicing types (Figure 6D). The most significant changes of RI events occurred during T6-T7 (p = 5.56 × 10⁻³⁷, the largest outlier in Figure 6D), i.e., from the 256-cell stage to the Dome stage, which corresponds exactly to the full activation of the zygotic genome.
Figure 6. (A) Schematic of the alternative splicing types: the two splicing forms are shown in color and black, colored open boxes indicate exons shared by both forms, and the colored form is taken as form 1 when the Ψ value is calculated. (B) The ratio of the various alternative splicing events expressed at different stages; ZA.zf indicates the ZGA stage of zebrafish [32], and ZA, GH, and larva indicate grass carp samples from the ZGA (T2-T8), gastrula-to-hatching (T9-T16), and larva (T17-T20) stages, respectively; the statistical analysis was carried out on all grass carp samples. (C) The variation in Ψ values of RI-type alternative splicing events during the ZA period. (D) The distribution of p-values; in the comparison classification, "tissue" indicates pairwise comparisons among the 9 tissue samples and "time" indicates comparisons between adjacent time points in the time-series samples from T2-T20.
Characteristics of RI Events during T6-T7
The difference obtained by subtracting the Ψ value in T6 (256 cells) from the Ψ value in T7 (Dome) for the same splicing event reflects the change in expression preference of that event during development. The distribution of these differences for RI alternative splicing events was significantly right-skewed compared with the other event types; that is, more intron-retained (RI) transcripts were expressed in the T7 period. Based on this distribution, we defined alternative splicing events with a Ψ difference greater than 0.07 as "over" events and the others as "normal" events (Figure 7A).
Figure 7. (A) The differences between the Ψ values of RI events (Ψ(T7)−Ψ(T6)) exhibited an obvious right-skewed distribution; events to the right of the red dotted line (0.07) were classified as "over" events (hereafter "over"), and those to the left as "normal" events (hereafter "normal"). (B) The number of RI events per gene. (C) The expression ratio of the two splicing forms in RI events; form "1" denotes transcripts with the intron retained, and form "2" denotes transcripts with the intron spliced out. (D) The position distribution of alternative introns (red) and constitutive introns (blue) within the gene (relative to the transcriptional start site). (E) Differences in GC content between alternative and constitutive introns. (F) Differences in length between alternative and constitutive introns. (G) Splice-site motifs. (H) Motifs of branch-point sequences (BPS). Alternative and constitutive introns of "over" transcripts are referred to as "Over_RI" and "Over_Cons", and those of "normal" transcripts as "Normal_RI" and "Normal_Cons", respectively.
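To make the classification rule above concrete, the following is a minimal sketch, not the authors' actual script, of how ΔΨ = Ψ(T7) − Ψ(T6) could be computed per RI event and thresholded at 0.07; the input file name and column names are hypothetical.

```python
import pandas as pd

# Hypothetical input: one row per RI splicing event with per-sample PSI values
# (columns "T6" and "T7" are assumed names), e.g. exported from SUPPA output.
psi = pd.read_csv("ri_events_psi.tsv", sep="\t", index_col="event_id")

delta = psi["T7"] - psi["T6"]                      # change in expression preference during MZT
labels = (delta > 0.07).map({True: "over", False: "normal"})

summary = pd.DataFrame({"delta_psi": delta, "label": labels})
print(summary["label"].value_counts())             # number of "over" vs "normal" events
summary.to_csv("ri_event_labels.tsv", sep="\t")
```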
Among the "over" events, 184 RI events occurred in 151 genes; the majority of genes contained only one RI event, and only 28 genes contained two or more (Figure 7B). In these genes, intron-retained isoforms tended to be expressed at modest levels during T2-T6, but their expression began to rise after T7, whereas the expression of transcripts with the intron spliced out did not vary much (Figure 7C). Three GO terms, nucleic acid binding, cellular metabolic process, and gene expression, were significantly enriched in the "over" event-related transcripts (hereafter "over" transcripts), indicating that these are functions specific to the "over" transcripts (Supplemental Tables S8 and S9).
In terms of intron length, GC content, branch-point sequence (BPS), and other factors that may affect intron splicing, we compared alternative introns (introns involved in RI events) with constitutive introns (introns that are always spliced out). Alternative introns in "over" transcripts tended to be located toward the 5′ end of the gene, whereas alternative introns in "normal" event-related transcripts (hereafter "normal" transcripts) tended to be located in the middle of the gene (Figure 7D); however, the positions of the two types of introns were not significantly different (Supplemental Table S10). In terms of length, the alternative introns of the "over" transcripts were longer than the constitutive introns of the "over" transcripts (p = 0.03) and also longer than the alternative introns of the "normal" transcripts (p = 3.04 × 10⁻⁹), whereas the alternative introns of the "normal" transcripts were shorter than the constitutive introns of the "normal" transcripts (p = 1.30 × 10⁻³⁴) (Figure 7E, Supplemental Table S10).
Constitutive introns showed similar GC content distributions in both "over" and "normal" transcripts (Figure 7F, p = 0.17, Supplemental Table S10). The GC content of alternative introns was higher than that of constitutive introns in both transcript types; among them, the GC content of the alternative introns of "normal" transcripts was the highest and the most dispersed (over: p = 2.56 × 10⁻¹⁶; normal: p = 6.56 × 10⁻³⁹). However, the GC content distributions of alternative introns did not differ significantly between "normal" and "over" transcripts (p = 0.05, Figure 7F, Supplemental Table S10).
The BPP software predicted the BPS of grass carp to be NNNYNAN (Y denotes a pyrimidine, N denotes any base). The BPS scores of alternative introns were lower than those of constitutive introns (p = 0.05). However, the BPS scores of alternative introns did not differ between "over" and "normal" transcripts (Figure 7H, Supplemental Figure S13, Supplemental Table S10).
Distribution of Splicing Isoforms in Nine Tissues and the Tissue Specificity of RI Alternative Splicing
The number of alternative splicing events observed in the 9 sequenced tissues of juvenile fish was positively correlated with the number of expressed genes (Supplemental Figure S14). The largest numbers of expressed genes and alternative splicing events were found in the brain and gills, and the smallest in the liver and muscle. The ratio of alternative splicing events expressed in each tissue was relatively stable; the four most abundant categories were A3, A5, RI, and SE (Supplemental Figure S15, Supplemental Table S7).
We conducted pairwise comparisons of the distribution of Ψ values for alternative splicing across juvenile tissues, and the results differed slightly from those in the developmental time series samples described above. The pairwise tissue comparisons revealed that both RI and SE show highly tissue-specific expression (Figure 6D). Among the tissues, muscle was the most distinct in terms of RI alternative splicing events (Supplemental Figure S16A,B). The expression tendency of RI alternative splicing events differed most significantly between muscle and skin (p = 2.95 × 10⁻²³, Supplemental Figure S16C). Compared with skin, the "upregulated" RI genes in muscle, which tended to express more intron-retained isoforms in T7 than in T6 (Ψ(T7)−Ψ(T6) > 0.07, 94/813), were functionally related to metabolism, especially energy metabolism, whereas the "downregulated" RI genes in muscle, which tended to express more intron-excluded isoforms in T7 than in T6 (Ψ(T7)−Ψ(T6) < 0.07, 262/813), were functionally related to the regulation of mRNA synthesis. Spleen showed the second-largest difference from muscle in the expression tendency of RI alternative splicing genes (p = 1.32 × 10⁻²¹, Supplemental Figure S16C). Compared with spleen, the "upregulated" RI genes in muscle (79/716) were related to lipid metabolism, while the "downregulated" RI genes in muscle (251/716) were associated with the regulation of amino acids and nucleic acids (Supplemental Table S11).
Discussion
The transcriptome can be evaluated in three ways: (a) comprehensiveness, the fraction of all gene loci that are included; (b) exhaustiveness, the fraction of all transcripts from each locus that are known; and (c) completeness, the fraction of transcript models that cover the entire length of the physical RNA molecule, from start to end [61]. The transcriptome (gc.v2) assembled in this study is more comprehensive (93.98%) than other non-model fish transcriptomes [28,32]. Compared with the original grass carp annotation (gc.v1), gc.v2 has a higher transcript/gene ratio (>2), indicating better annotation of alternative splicing events and higher transcriptome exhaustiveness. We also expected to obtain more complete transcript models owing to the longer reads of SMRT sequencing [19,62]. Indeed, if the median transcript length is used as a metric of completeness, gc.v2 (3281 bp) outperforms gc.v1 (1642 bp) (Supplemental Table S2).
Having ensured the quality of the transcriptome, we created, for the first time, a broad dynamic picture of the grass carp transcriptome, a valuable resource for better understanding fish development and tissue differentiation. The differences between samples in the developmental time series are primarily reflected in changes in the cell cycle, which was the most significantly enriched KEGG pathway among the PC1-related transcripts defined in the PCA of all samples. The underlying biology is that early fish embryos undergo rapid cell division with a very short cell cycle, whereas cells of late-stage embryos and adults have a longer cell cycle [63]. Second, in the same PCA, mid-developmental transition samples differed the most from differentiated tissues and cleavage-stage embryos along PC2 (Figure 2). This aligns well with the hourglass model and the ground-zero concept: from early to middle embryonic development, the embryo undergoes a process of de-differentiation similar to the induction of somatic cells into pluripotent stem cells. In the mid-embryonic stage, the expressed genes are more conserved, telomeres are longer, and the entropy of the embryonic structure is lower, indicating that cell stemness is stronger at this time and that this stage resembles the starting point of the organism's intergenerational rejuvenation [26,64,65].
Although fish are regarded as lower vertebrates, their brain is highly differentiated and distinct from other organs in terms of tissue differentiation. The links between a tissue and its germ layer of origin are readily apparent from a developmental standpoint [66], and this is reflected in overall gene expression profiling. Except for the highly differentiated brain, tissues derived from the same germ layer clustered together in the PCA based on all tissue samples. This suggests that tissues originating from the same germ layer may share a large number of similarly expressed genes/transcripts: the clustered muscle (HM) and heart (HH) tissues both contain a large amount of muscle components and are derived primarily from mesoderm; the liver (HL), spleen (HP), intestine (HI), and kidney (HK) are derived primarily from endoderm; and the skin in the upper right is derived from ectoderm. The origin of fish gills has been debated: the gills of jawless vertebrates are thought to originate from the endoderm, while those of jawed vertebrates (cartilaginous and bony fishes) are thought to come from the ectoderm [67]. Using in situ hybridization and fate mapping techniques, a recent study found that gill filaments in cartilaginous fish arise from the endoderm [68]. Our findings place the gill between the endoderm- and ectoderm-derived tissues, highlighting the complexity and specificity of this tissue's origin.
Transposons and transposon-derived sequences are common in vertebrate genomes [69-71]. Transposable elements, a major driving force in the production of new genes, are frequently found in expressed genes in the form of repeated sequences [72]. However, because of the potential harm to genome integrity, the transposition ability of transposons is normally blocked [73]. This can be seen in the large disparity between the broadly distributed transposons fixed in the genome and the modest number of expressed transposons. The DNA/hAT transposon, for example, is thought to be dormant in all species [74]; it accounts for 12.77% of repetitive elements in the grass carp genome, yet no transcription of it was detected. However, transposon transcription is not always suppressed. Recent research has revealed that retrotransposons are preferentially activated during mammalian embryonic development, particularly during zygotic gene activation (epigenetic reprogramming) [4-7,75]. In a study of mouse single-cell RNA sequencing, transposable elements were found to be involved in ZGA events, with LTRs associated with the minor wave of zygotic activation and SINEs associated with the major wave, during which a large number of essential genes begin to be transcribed [76].
The expressed transposons in grass carp were family-specific and unrelated to their abundance in the genome. Transposons of an unclassified SINE family (rnd-3_family-293) were preferentially expressed during embryonic development. More intriguingly, they were activated simultaneously during the 256-cell stage, when the majority of zygotic genes are not yet activated. This suggests that fish retrotransposons, like those of mammals, may function during the zygotic activation phase of embryonic development. Furthermore, we found that the majority of these transposons contained a combination of two motifs (motif 1 and motif 5) in their 5′ tRNA-derived head, and that genes containing the two motifs displayed an expression pattern similar to that of these transposons. Given that SINE transposons are transcribed by RNA polymerase III and that the binding site for promoter recognition is located within the sequence [77], we propose that this unique expression pattern of rnd-3_family-293 is regulated by a distinct mechanism, which may initiate expression via RNA polymerase III recognition of motif 1 and motif 5.
Gene splicing is involved in many biological processes, including development, cell and tissue differentiation, and disease occurrence [9,78]; systematic identification of splicing events is therefore important. Using SMRT RNA-seq, we identified a total of 5339 alternative splicing events, with SE (24.72%) and RI (21.08%) accounting for the largest proportions, consistent with the findings of two SMRT-based fish transcriptome studies [31,37]. The percentage of RI events fluctuated substantially during development and was quite low during the grass carp zygotic activation (ZA) stage, which corresponds to observations of the zebrafish transcriptome during zygotic activation [32]. RI splicing events occurred far more frequently during the other developmental stages (GH, larva, and juvenile). To our knowledge, such a pronounced fluctuation of RI events during development has not been reported in any other fish species. As RI events have been demonstrated to be functional, interest in this topic in animals is growing rapidly [13,14,79,80].
In our study, we found that RI may be important both in embryonic development and in the maintenance of tissue specificity in fish. The changes were most pronounced during the maternal-to-zygotic transition (MZT, 256-cell to Dome stage), when the number of RI alternative splicing events rose considerably and more intron-retained isoforms were produced. The "over" genes, i.e., the genes with an upregulated Ψ value during the zygotic activation stage (Ψ(T7)−Ψ(T6) > 0.07, genes expressing more intron-retained isoforms), exhibited some interesting features. Compared with the "normal" genes, the "over" genes were significantly enriched for the molecular function GO term nucleic acid binding; in general, nucleic acid binding acts as a regulator of life activities.
In addition to the function of RI events, we are particularly interested in their regulation. In general, alternative introns carry weaker splicing signals that prevent spliceosome formation, so they remain in the mature RNA molecule. Typical weak splicing signals include placement toward the 3′ end of the gene, shorter intron length (shorter GT-AG spacing), higher GC content, weaker splice-site strength, and weaker BPS strength (Table 3). However, the alternative introns we uncovered have some unique characteristics in terms of intron length and splice-site strength. In general, alternative introns are short, which makes intron skipping easier during spliceosome formation [80,81]; in a study of chicken embryos, for example, alternative introns were shorter in RI events [79]. The alternative introns of "normal" events fit this description in our study, whereas the introns of "over" events are longer than constitutive introns. Furthermore, the splicing strength of the alternative introns of the "over" transcripts was greater than that of the "normal" transcripts. These distinguishing characteristics suggest that the RI alternative splicing events occurring during the maternal-to-zygotic transition have distinct purposes and meanings, as well as a distinct regulatory mechanism.
Sample Collection
Each sample was taken from a population consisting entirely of the offspring of a single pair of grass carp parents (bred in Xingfu village, Huanggang, China, in 2017). During sample collection, information such as the time elapsed after fertilization, the tissue type, and the morphophysiological properties was recorded (Table 1). After artificial insemination, the fertilized eggs were observed continuously under a dissecting microscope (Olympus, Tokyo, Japan), and the embryonic developmental stages were identified by reference to the zebrafish developmental staging series [82].
For the samples taken between T2 and T19, a total of 20 fertilized eggs, embryos, or juveniles were collected at each time point. For T20, two biological replicates, each consisting of a single fish, were collected. From one month after fertilization onward, sampling continued every 30 to 60 days. During sampling, the grass carp were euthanized with a high dose of eugenol, followed by rapid dissection and collection of tissue samples, with two to three biological replicates at each time point.
TRIzol solution (Invitrogen, Carlsbad, CA, USA) was added to the samples, which were thoroughly homogenized in an OSE-Y40 homogenizer (Tiangen, Beijing, China) to release RNases; these were then inactivated by further contact with the TRIzol solution. The entire experiment was approved by the ethics committee.
Preparation of Sample for Library Construction
Total RNA was extracted with TRIzol reagent according to the manufacturer's instructions. RNA concentration was measured with the Qubit RNA Assay Kit on a Qubit 2.0 Fluorometer (Life Technologies, Gaithersburg, MD, USA), and RNA integrity was determined on the Agilent 2100 system (Agilent Technologies, Santa Clara, CA, USA). Samples with RIN values greater than 9 were used for the SMRT-based libraries and those greater than 7 for the ssRNA-seq libraries. Qualified samples were stored at −80 °C.
ssRNA-Seq
A library was generated and sequenced for each tissue sample. For each sampling point, qualified RNA from the biological replicates was mixed in equal amounts, and three micrograms of RNA was used for library construction for each sample.
First, ribosomal RNA was removed with the Ribo-Zero rRNA Removal Kit (Epicentre, San Antonio, TX, USA). The sequencing library was then built with the NEBNext Ultra Directional RNA Library Prep Kit for Illumina, following the manufacturer's instructions throughout. Finally, libraries of sufficient quality were sequenced on the Illumina HiSeq 4000 platform with paired-end reads.
Adapter sequences and low-quality reads were removed with IlluQC [83] according to the Q30 criterion. Paired reads were discarded when the proportion of ambiguous bases (N) in either read exceeded 10% of the read length, or when bases with a quality score of 5 or lower made up more than 50% of a read. FastQC and MultiQC were used to compile quality statistics for the filtered data [84].
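As an illustration of the read-filtering rules described above, the following is a minimal Python sketch, not the original pipeline; the quality values are assumed to be already decoded integers (e.g., Phred+33), and FASTQ parsing is omitted.

```python
def should_discard(seq, quals):
    """Apply the read-level filtering rules described in the text.

    Discard a read if ambiguous bases (N) exceed 10% of its length, or if
    bases with a quality score of 5 or lower make up more than 50% of it.
    """
    n_fraction = seq.upper().count("N") / len(seq)
    low_q_fraction = sum(q <= 5 for q in quals) / len(quals)
    return n_fraction > 0.10 or low_q_fraction > 0.50


def keep_pair(read1, read2):
    """A read pair is kept only if neither mate fails the criteria."""
    (seq1, quals1), (seq2, quals2) = read1, read2
    return not (should_discard(seq1, quals1) or should_discard(seq2, quals2))


# Example: a 100 bp read with 12 N bases is discarded (12% > 10%).
print(should_discard("N" * 12 + "A" * 88, [30] * 100))
```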
SMRT-Based RNA-Seq
Qualified RNA extracted from the embryonic development samples was mixed in equal amounts before library construction. For the samples collected after hatching, RNA obtained from the same tissue at different time points was first combined in equal amounts to create tissue pools, and these tissue pools were then combined in equal amounts to construct the sequencing library.
Approximately 1 µg of the mixed RNA sample was reverse-transcribed with the SMARTer PCR cDNA Synthesis Kit (PacBio, Menlo Park, CA, USA). The PCR product was then used to construct the SMRTbell library with PacBio's SMRTbell Template Prep Kit 1.0. The SMRTbell template was annealed to the sequencing primer, bound to polymerase, and sequenced on the PacBio Sequel platform using the Sequel Binding Kit 3.0. The ssRNA-seq data were processed with a standard workflow: reads were aligned with HISAT2 (2.0.4) and assembled with StringTie (1.3.1c) [85], and expression levels were quantified with Salmon (1.4.0) [86].
The SMRT sequencing data were analyzed with the SMRT Link package (5.1.0) following the Iso-Seq (isoform sequencing) procedure (https://www.pacb.com/support/software-downloads/, accessed on 17 September 2018). First, the "ccs" command in the SMRT Link toolkit was used to obtain circular consensus sequences (CCS); then, "pbtranscript classify" was used to obtain full-length non-chimeric (FLNC) sequences; finally, "pbtranscript cluster" was used to cluster the FLNCs into high-quality transcripts. GMAP (release of 10 June 2019) was used to align the high-quality transcripts back to the genome, and only results with a mapping rate of over 85% were retained [87]. The "collapse_isoforms_by_sam.py" script in the cDNA_Cupcake package was used to generate a GFF file from the alignment results, with the parameters -fq -dun-merge-5-shorter -c 0.95 -i 0.9 (https://github.com/Magdoll/cDNA_Cupcake, accessed on 18 September 2018). Transcripts supported by fewer than 2 full-length CCS reads and/or degraded from the 5′ side were removed from the generated GFF file.
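For orientation, a minimal sketch of this command chain is given below. It is not the authors' script: file names are placeholders, tool options are largely omitted, and the option spellings for the collapse step are copied from the text and may differ between SMRT Link and cDNA_Cupcake versions.

```python
import subprocess

def run(cmd, out=None):
    # Run one pipeline step; all file names here are placeholders.
    print(">>", " ".join(cmd))
    if out is None:
        subprocess.run(cmd, check=True)
    else:
        with open(out, "w") as fh:
            subprocess.run(cmd, check=True, stdout=fh)

# 1. Circular consensus sequences from raw subreads (SMRT Link "ccs").
run(["ccs", "subreads.bam", "ccs.bam"])
# 2. Classify CCS reads into full-length non-chimeric (FLNC) reads (options omitted).
run(["pbtranscript", "classify", "ccs.fasta", "flnc.fasta"])
# 3. Cluster FLNC reads into high-quality consensus transcripts (options omitted).
run(["pbtranscript", "cluster", "flnc.fasta", "hq_transcripts.fasta"])
# 4. Align high-quality transcripts to the genome with GMAP; alignments with a
#    mapping rate below 85% are filtered out downstream.
run(["gmap", "-D", "gmap_index", "-d", "grass_carp", "hq_transcripts.fasta"], out="aln.sam")
# 5. Collapse redundant isoforms with cDNA_Cupcake, parameters as quoted in the Methods text.
run(["collapse_isoforms_by_sam.py", "--input", "hq_transcripts.fasta",
     "-s", "aln.sorted.sam", "-o", "gc_collapsed",
     "-fq", "-dun-merge-5-shorter", "-c", "0.95", "-i", "0.9"])
```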
Merge ssRNA-Seq Transcriptome and SMRT Transcriptome
The ssRNA-seq transcriptome and the SMRT transcriptome were combined, and gffread (gffread -M -K) was then used to eliminate duplicated transcripts [88]. Using "gffcompare", we compared the merged transcriptome with the ssRNA-seq transcriptome and with the SMRT transcriptome separately, which provided supporting information on whether each transcript was detected by both technologies. Salmon (1.4.0) was used to quantify expression, and transcripts with values below 1 TPM were removed [86]. Two further filtering steps were carried out to improve the quality of the assembled transcripts:
1. Eliminate transcripts that might be fragmented. Fragmented mRNA transcripts are easily mistaken for newly assembled lncRNAs. To locate such transcripts, we first determined the distance between each transcript and the edge of its scaffold and then checked whether the transcript was supported by SMRT data. The length distribution of introns in all transcripts showed that more than 95% of introns were shorter than 7875 bp (Supplemental Table S12). Transcripts whose distance from the scaffold edge was less than this value and which were not supported by SMRT data were considered fragmented and were removed from the analysis.
2. Remove possible precursor mRNAs. RNA-seq is expected to capture a large number of mRNA fragments that have not been spliced or are in the process of being spliced. Unlike mature mRNAs, these fragments correspond exactly to a continuous region of the genome, so they are easily confused with long single-exon transcripts and are eventually misclassified as long non-coding RNAs. To remove the influence of such transcripts, we assumed that if a gene contains both multi-exon and single-exon transcripts, the single-exon transcripts are very likely precursors, and they were removed as well.
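The two filters can be summarized in a short sketch like the one below. This is an illustrative reconstruction, not the authors' code; the input table layout, the column names, and the has_smrt_support flag are assumptions.

```python
import pandas as pd

INTRON_95TH = 7875  # bp; 95% of introns are shorter than this (Supplemental Table S12)

# Hypothetical per-transcript table with scaffold-edge distance, SMRT support,
# exon count, and parent gene.
tx = pd.read_csv("transcripts.tsv", sep="\t")

# Filter 1: drop likely fragmented transcripts (close to a scaffold edge and
# not supported by SMRT full-length reads).
fragmented = (tx["dist_to_scaffold_edge"] < INTRON_95TH) & (~tx["has_smrt_support"])
tx = tx[~fragmented]

# Filter 2: drop likely precursor mRNAs (single-exon transcripts in genes that
# also contain multi-exon transcripts).
multi_exon_genes = set(tx.loc[tx["exon_count"] > 1, "gene_id"])
precursor = (tx["exon_count"] == 1) & tx["gene_id"].isin(multi_exon_genes)
tx = tx[~precursor]

tx.to_csv("transcripts.filtered.tsv", sep="\t", index=False)
```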
Annotation of Transcriptome
To improve the accuracy of lncRNA prediction, two tools, CPAT and CPC2, were used to predict coding potential; in particular, the CPAT zebrafish model was used for prediction [89]. A transcript was designated as lncRNA when both tools predicted it to be non-coding, as mRNA when both predicted it to be coding, and as TUCP (transcript of uncertain coding potential) when the predictions were inconsistent.
If all of a gene's transcripts belong to the same class, the gene was assigned that class; otherwise, the gene was labeled "misc".
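A minimal sketch of this consensus classification is given below; it assumes that the CPAT and CPC2 results have already been parsed into per-transcript coding/non-coding labels (the parsing itself is omitted, and the column names are hypothetical).

```python
import pandas as pd

def classify_transcript(cpat_coding, cpc2_coding):
    # Consensus rule from the text: agreement -> mRNA/lncRNA, disagreement -> TUCP.
    if cpat_coding and cpc2_coding:
        return "mRNA"
    if not cpat_coding and not cpc2_coding:
        return "lncRNA"
    return "TUCP"

# Hypothetical merged table: transcript_id, gene_id, cpat_coding, cpc2_coding (booleans).
preds = pd.read_csv("coding_predictions.tsv", sep="\t")
preds["tx_class"] = [
    classify_transcript(a, b) for a, b in zip(preds["cpat_coding"], preds["cpc2_coding"])
]

# Gene-level label: the shared class if all transcripts agree, otherwise "misc".
gene_class = preds.groupby("gene_id")["tx_class"].agg(
    lambda s: s.iloc[0] if s.nunique() == 1 else "misc"
)
print(gene_class.value_counts())
```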
A gene's functional annotation was assigned based on the annotation of its longest transcript. GO functions were annotated with Blast2GO (2.5.0), using the nr library as the BLAST database [90]. KO annotation was performed with the KAAS system [91] using the bidirectional best hit (BBH) method, and the species used for alignment were human (hsa), mouse (mmu), zebrafish (dre), common carp (ccar), medaka (ola), and Atlantic salmon (sasa). RNA classification (family) was annotated by searching the Rfam database (14.1) with cmscan (1.1.2) [92,93].
Completeness of Transcriptome
The loach SMRT-based transcriptome [28] was obtained from the links provided in the corresponding paper, and the SMRT-based transcriptome of Onychostoma brevibarba was obtained from that publication's supplementary material [36]. The completeness of all transcriptomes was assessed with BUSCO (3.1.0) using the OrthoDB Actinopterygii (ray-finned fish) dataset as the reference database [94].
Principal Component Analysis
Principal component analysis (PCA) was performed with the "prcomp" function in R. The factoextra package was used to calculate the contribution of each transcript to each principal component (PC), and the PC-related transcripts were defined as those with the highest contribution values for that PC. The plotly package was used to create interactive plots.
Overlap information between transcripts and annotated repeat elements (Supplemental Figure S17) was calculated [54]. Expressed transposons were defined as transcripts for which the overlap covered more than 90% of the repeat element and more than 90% of the transcript itself. TE-associated transcripts were those for which the overlap covered more than 90% of the repeat element but not more than 90% of the transcript. Transposons with N sequences within 20 bp upstream or downstream were excluded. The SINEBase database [95] was consulted to annotate the structure of short interspersed elements (SINEs), and EBI Clustal Omega [96] was used to analyze the relationships between sequences.
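The 90%/90% overlap rule can be expressed as a small helper like the following. This is an illustrative sketch only; the overlap lengths are assumed to have been computed beforehand (for example, from an intersection of transcript and repeat coordinates).

```python
def classify_repeat_overlap(overlap_len, repeat_len, transcript_len):
    """Classify a transcript-repeat pair by the overlap rule described in the text.

    Returns "expressed_transposon" if the overlap covers >90% of both the repeat
    element and the transcript, "TE_associated" if it covers >90% of the repeat
    but not of the transcript, and None otherwise.
    """
    repeat_frac = overlap_len / repeat_len
    transcript_frac = overlap_len / transcript_len
    if repeat_frac > 0.9 and transcript_frac > 0.9:
        return "expressed_transposon"
    if repeat_frac > 0.9:
        return "TE_associated"
    return None


# Example: a 350 bp overlap of a 360 bp SINE with a 2000 bp transcript
# is "TE_associated" rather than an expressed transposon.
print(classify_repeat_overlap(350, 360, 2000))
```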
Motif Analysis
Motifs were predicted de novo using MEME (5.3.3) [97]. FIMO (5.3.3) was then used to search all transcripts for the motifs, retaining occurrences with MEME-1 start coordinates below 50 and MEME-5 start coordinates below 40 [98]. MAST (5.3.3) was used to search for and visualize motif combinations [99].
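A minimal sketch of the positional filter applied to the FIMO output is shown below. It assumes FIMO's tab-separated output with motif_id, sequence_name, and start columns, and the motif identifiers "MEME-1" and "MEME-5" are placeholders taken from the text rather than guaranteed output labels.

```python
import pandas as pd

# FIMO writes a TSV with one motif occurrence per row (comment lines start with "#").
hits = pd.read_csv("fimo.tsv", sep="\t", comment="#")

# Keep only occurrences near the transcript 5' end, with motif-specific cutoffs.
max_start = {"MEME-1": 50, "MEME-5": 40}
near_5prime = hits[
    hits.apply(lambda r: r["start"] < max_start.get(r["motif_id"], -1), axis=1)
]

# Transcripts carrying both motifs within their respective windows.
by_tx = near_5prime.groupby("sequence_name")["motif_id"].agg(set)
both = by_tx[by_tx.apply(lambda s: {"MEME-1", "MEME-5"} <= s)].index
print(len(both), "transcripts carry both motifs near the 5' end")
```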
Validation Using qPCR
A poly(A) tail was added with the NEB E. coli Poly(A) Polymerase kit (#M0276S). Genomic DNA removal and first-strand cDNA synthesis were performed with the HiScript III RT SuperMix for qPCR (+gDNA wiper) kit (Vazyme, China). RT-qPCR was performed with the ChamQ SYBR qPCR Master Mix (Vazyme, China) on a CFX96 Real-Time PCR Detection System (Bio-Rad, USA). The RPS27 gene (small subunit ribosomal protein S27e) was used as the internal reference [100,101]. The annealing conditions for the reference and target gene primers are listed in Supplemental Table S6.
Alternative Splicing Analysis
The SUPPA software [102] was used to identify alternative splicing events and to calculate Ψ values; the transcripts participating in an event were required to have a total expression above 1 TPM. Let F1 be the set of transcripts spliced according to form 1 of a splicing event and F2 the set of transcripts spliced according to form 2 of the same event; the Ψ value was then calculated as Ψ = TPM(F1) / (TPM(F1) + TPM(F2)), where TPM(F) denotes the summed expression of the transcripts in F. Because only simple splicing events were analyzed (no combinations of splicing events were included), each splicing event had exactly two splicing forms. The Wilcoxon signed-rank test was used to compare the Ψ distributions of adjacent time points and of tissue pairs; consecutive time points from T2 through T20 were considered adjacent. Tissue comparisons were performed using samples from the latest two sampling time points (134 dpf and 197 dpf).
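The Ψ definition and the adjacent-time-point comparison can be illustrated with the sketch below; this is not the SUPPA implementation itself, and the expression table layout and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import wilcoxon

def psi(tpm_form1, tpm_form2):
    # Expression preference for form 1 of a simple splicing event.
    total = tpm_form1 + tpm_form2
    return tpm_form1 / total if total > 0 else float("nan")

# Hypothetical table: one row per event; columns "<sample>_f1" / "<sample>_f2"
# hold the summed TPM of form-1 and form-2 transcripts for that sample.
expr = pd.read_csv("event_tpm.tsv", sep="\t", index_col="event_id")

samples = [f"T{i}" for i in range(2, 21)]  # T2 ... T20
psi_table = pd.DataFrame({
    s: [psi(f1, f2) for f1, f2 in zip(expr[f"{s}_f1"], expr[f"{s}_f2"])]
    for s in samples
}, index=expr.index)

# Compare Ψ distributions between consecutive time points (paired by event).
for a, b in zip(samples, samples[1:]):
    paired = psi_table[[a, b]].dropna()
    stat, p = wilcoxon(paired[a], paired[b])
    print(f"{a} vs {b}: p = {p:.3g}")
```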
The WebLogo online application [103] was used to visualize splice-site patterns. The Maximum Entropy Model (MaxEnt) [104] was used to estimate splice-site strength, and the BPP software [105] was used to predict branch-point sequences (BPS). The Mann-Whitney U test was used to assess differences in the distributions of splice-site strength, BPS scores, intron length, and GC content.
Other Analysis
Fisher's exact test was used for KEGG enrichment analysis, and the R package "GOstats" was used for GO enrichment analysis [106]. Other data analyses were performed on a Linux (CentOS 7) platform using R (4.0.2), Python (3.6.11), and other software; the ggplot2 package was used for data visualization. Unless otherwise specified, the default parameters of the bioinformatics programs used in this work were applied.
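As an illustration of the enrichment test, a pathway's over-representation among a gene set can be assessed with a 2×2 contingency table and Fisher's exact test, as in the sketch below; the counts are made-up numbers, not results from this study.

```python
from scipy.stats import fisher_exact

def pathway_enrichment(hits_in_set, set_size, hits_in_background, background_size):
    """One-sided Fisher's exact test for over-representation of a pathway.

    Rows of the 2x2 table: in gene set / not in gene set;
    columns: annotated to the pathway / not annotated.
    """
    table = [
        [hits_in_set, set_size - hits_in_set],
        [hits_in_background - hits_in_set,
         (background_size - set_size) - (hits_in_background - hits_in_set)],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# Made-up example: 15 of 200 "over" genes vs 120 of 12,000 background genes in a pathway.
print(pathway_enrichment(15, 200, 120, 12000))
```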
Conclusions
Through continuous, high-frequency sampling, we obtained a high-quality reference transcriptome and a dynamic transcriptional map of grass carp, an important resource for studies of fish development. On this basis, we systematically analyzed the changes in lncRNAs and alternative splicing events during grass carp development.
Zygotic genome activation (ZGA) is an extremely important topic in fish developmental biology, but its mechanism has not been fully elucidated. Based on the reference transcriptome, we discovered a novel family of short interspersed nuclear elements and its potential regulatory motifs associated with fish ZGA. We also found that retained intron (RI) alternative splicing events change dramatically during ZGA and that these RI events differ from classical RI events in their regulatory characteristics. In addition, two types of alternative splicing events, RI and skipped exons (SE), are associated with the maintenance of tissue specificity in fish.
Institutional Review Board Statement:
This study was ethically approved (permission No. IHBLL2017035) and also approved by the local Animal Ethical and Welfare Committee (approval date 11 January 2017).
Data Availability Statement:
The sample information and sequencing data have been submitted to the National Genomics Data Center (NGDC, https://ngdc.cncb.ac.cn/, accessed on 10 June 2022) with the accession number PRJCA010000.
Corporate Social Responsibility, Internal Controls, and Stock Price Crash Risk: The Chinese Stock Market
: As the core of sustainable development strategy, corporate social responsibility (CSR) is a concept that influences business missions, management, operations, finance, and marketing. Studies of the economic consequences of CSR have focused on the theoretical and practical arenas. However, few studies have examined the impact of CSR on the market price fluctuations of company shares. The purpose of this study was to investigate the effect of CSR on stock price crash risk and its relationship with the role of internal controls in China. After empirical analysis, we found a significantly negative association between CSR and stock price crash risk. Furthermore, we determined that internal controls play a significant and partially mediating role between CSR and stock price crash risk. Internal controls have become an important system for Chinese companies to improve their social responsibility and reduce their operating risk, especially the risk of a stock price crash. We also found that internal controls had a significant and partial moderating effect on the relationship between CSR and stock price crash risk. In certain environments with higher levels of internal controls, CSR prominently reduced the risk of stock price crash. In theory, our study adds to the growing literature about CSR, expands the scope of CSR research, elaborates upon relevant CSR economic consequences, and complements the literature about the determinants of stock price crash risk. In practice, our conclusions provide a reference for Chinese managers, investors, and the related government departments to evaluate the effects of CSR and internal controls, and provides regulators with a method to help control abnormal fluctuations in the stock market. More importantly, the results of this study have reference value for scholars and practitioners in developing countries like China.
Introduction
Corporate social responsibility (CSR) is the responsibility of enterprises toward consumers, communities, and the environment while creating profit and maintaining legal responsibility to shareholders and employees.Corporate social responsibility requires enterprises to go beyond the traditional idea of profit being the sole goal, to emphasize the value of people in the production process and to contribute to the environment, consumers, and society.As the core of the sustainable development strategy, it plays an irreplaceable role in enhancing the reputation of the company and realizing the sustainable development of the company.The integration of CSR into the company's operations and decision-making can not only effectively improve the competitiveness of the company, but also protect the company's brand, discover new market opportunities, and achieve continuous profitability, thus ensuring the sustainability of the company.Excellent enterprises are not only able to exploit the needs of society to create economic value, but are also able to actively fulfill their responsibilities, give back to society, and create value for society in coordination with other parties.Thus, the positive interaction between the economic value and social value of enterprises is realized, and the sustainable development of enterprises and society is promoted.In the 1980s, the CSR movement began to emerge in Western developed countries.More recently, companies in China have also made progress in implementing and ensuring CSR.Many listed companies have created social responsibility committees, and an increasing number of public companies have taken the initiative to issue their own social responsibility reports.However, the real economic consequences of CSR for listed companies in China are unknown.Does CSR enhance or reduce shareholder wealth?Does it improve information transparency, thereby reducing the volatility of share prices and protecting investor interests?This remains an issue that should be addressed.The implementation of CSR will influence the accounting and financial behavior and market performance of a company.A large number of studies have investigated the link between CSR and company accounting, financial, or market performance [1][2][3][4][5][6].However, whether CSR creates positive or negative value is controversial, since CSR has two drivers: value and opportunity [7].A need for further research still exists about the environments in which CSR exerts either a positive value effect or a negative value effect.This is especially true for China, which is a country where CSR introduction has lagged and whose society is in a transition phase.After all, at the request of the China Securities Regulatory Commission (CSRC), it was only in 2009 that some Chinese-listed companies began to disclose their social responsibility reports together with annual reports, and some social institutions began to evaluate the social responsibility of listed companies.Although Chinese companies have made great progress in undertaking and fulfilling CSR in just a few years, they are still far from Western developed countries (for example, the MSCI ESG Stats, previously known as KLD, began to evaluate U.S. 
firms' CSR activities as early as 1990).At present, with the understanding that the Chinese government guides enterprises to rationally use social resources and achieve scientific development through institutional innovation, thereby improving the quality of the national economy, the economic consequences of the establishment of the CSR system (as an important micro-institutional design) have become the focus of attention for all sectors of society.
Unlike previous studies on CSR, which paid little attention to the impact of CSR on stock price volatility of listed companies in the securities market, in this study, we examined the relationship between CSR and firm-specific stock price crash risk in the Chinese stock market.More importantly, we examined the effects of internal controls on CSR and stock price.As stock price crash risk captures downside risk and is crucial for investment decisions and risk management, our research provides a reference for shareholders, company managers, and government regulators for understanding whether CSR reduces a company's share price risk.Simultaneously, our results provide evidence for all stakeholders to objectively evaluate the role of the internal enterprise control system.
In view of the fact that Chinese-listed companies have disclosed CSR reports since 2009, and that relevant agencies have also rated Chinese-listed companies' CSR performances since then, this paper used data from A-share listed companies in China from 2009 to 2015 to carry out research.Through empirical research, we found a significantly negative association between listed companies' CSR and stock price crash risk, and the results remained robust after considering potential endogeneity.Furthermore, we found internal control played a significant partial mediating role between CSR and stock price crash.Internal control is an important system for Chinese companies to ensure social responsibility and reduce operating risk, especially the risk of stock price crash.We also determined that internal control had a significant moderating effect on the relationship between CSR and stock price crash.In certain environments with higher levels of internal control, the effect of CSR was more obvious, and CSR more prominently reduced the company's stock price crash risk.
The contributions of this study are as follows: (1) Adding to the growing volume of CSR literature, expanding the scope of CSR research, and enriching the understanding of CSR-related economic consequences.Previous literature on the economic consequences of CSR rarely studied the impact of CSR on the volatility of company stock prices.Our study enriches and expands upon the existing research findings.(2) Our study complements the literature on the determinants of stock price crash risk.Based on the theory of management information hoarding, scholars have studied the factors that influence stock price crash risk from the aspects of financial information transparency, tax avoidance, CEO compensation, earnings management, and accounting conservatism.However, most of the studies did not analyze the stock price crash risk process from the perspective of socialized, non-financial behavior.This study combines CSR as a corporate social behavior with the internal control level as the internal environment, and analyzes their joint impact on the risk of stock price crash.Our study creates a better understanding about the causes of stock price crash risk.(3) Based on a developing capital market like China, the results of this study provide a reference for managers, investors, and relevant government departments in developing countries to evaluate the effect of CSR and internal controls, and helps regulators to control abnormal stock market fluctuations.The research conclusions have practical value.
The rest of the paper is structured as follows. Section 2 provides a literature review on CSR, internal controls, and stock price crash risk, and then puts forward the hypotheses. Section 3 discusses the sample, variable measurements, and research design. Section 4 presents the empirical results and discusses the findings. The study is concluded in Section 5.
Literature Review
There are two different views on the economic consequences of CSR.The first view is a social impact theory that CSR maximizes the interests of all stakeholders, including shareholders.According to this view, CSR enhances the competitiveness of the enterprise and builds a good reputation for the enterprise, thus improving the corporate brand, creating a good impression for all stakeholders, and increasing investor confidence.Through CSR investment, enterprises can attract responsible consumers, obtain financial resources from investors with strong social responsibility, facilitate financing, or help struggling companies to recover from financial distress [8][9][10][11].Scholars have found that CSR is positively related to financial performance and that CSR can improve stakeholder interests, ultimately leading to better financial performance [12][13][14].On the contrary, failure to meet the expectations of stakeholders creates fear in the market and ultimately a loss in profitability [15].In terms of influencing financial information disclosure and risk accumulation, previous studies reported that CSR construction effectively suppressed the opportunistic behavior of management, constrained management earnings, improved the transparency of accounting information, restricted management over-investment behavior, protected and improved the reputation of a company, and ultimately reduced corporate risk [2,4,16].The second view is the management balance theory, which states that CSR only serves the interests of management rather than shareholders.According to this view, enterprises cannot fully satisfy the interests of each stakeholder, so the interests must be weighed for each group.If enterprises assume too much social responsibility, then shareholders' interests are affected, thus affecting the business activities of enterprises, resulting in a higher cost rate and lower performance [17].Because of the agency problem, the benefits of CSR are enjoyed by management, but risks and costs are paid by shareholders.Therefore, an enterprise's CSR is centered on management and CSR is actually a type of agency cost.Also, because CSR can unnecessarily raise the cost and risk, it may weaken a company's competitiveness [18][19][20].Some scholars have found a negative correlation between CSR and financial performance.When the management compensation plan is closely related to short-term profit and stock price, a negative correlation between CSR and financial performance often develops due to managers pursuing their own interests [21].Some researchers even stated that CSR reports are issued to disguise the impact of some misconduct.Therefore, they oppose CSR and argue that business has only one social responsibility, which is to use its resources without violating the rules of the game and work to increase corporate profits.
Due to the instances of crashes in the global stock market, stock price crashes have become a hot issue in the field of finance.The management possession of a high level of private information about firm operations and reluctance to report bad news are considered to be important causes of stock price crashes, which leads to an accumulation of negative news until a breaking point is reached [22][23][24].Under certain internal and external conditions, conflicts of interest between managers and outside investors lead to the management hoarding bad news.Additionally, managers may retain bad projects or conceal bad performance for their own benefit at the expense of shareholders.Given this information asymmetry, outside investors may be overly optimistic about corporate performance growth.So, the value of the company is overestimated, which results in bubbles.Then, once the bad news is rapidly released, a stock price crash occurs.Many studies have examined the cause of stock price crashes based on the management hoarding bad news theory.Many papers provided firm-specific determinants of stock price crashes.Earlier literature used the management opportunistic behavior of hoarding negative information to explain the risk of stock price crash, and more current literature focused on the impact of information asymmetry caused by the agency problem as explaining the risk.For example, scholars have reported that the lower the quality of financial reporting, the higher the stock price crash risk [25], and the higher the accounting conservatism and real earnings management, the lower the stock price crash risk [26,27].In addition, other scholars found that tax avoidance, excessive perquisites, over-investment, equity incentives, and political incentives increased the stock price crash risk [28][29][30].However, a female CEO and mandatory International Financial Reporting Standards (IFRS) adoption reduced the stock price crash risk [31,32].Overall, existing empirical studies have provided strong support for the new hoarding theory of crash risk, and subsequent research should continue under the guidance of this doctrine.
As a special corporate management activity to control business risk, internal controls can restrict employee moral hazard behavior and irrational decision-making.These controls play an important role in improving operational efficiency, asset safety, developing corporate strategy, and reporting reliability, through containment, restraint, protection, guidance, supervision, and influence.With the implementation of the Sarbanes-Oxley Act in 2002, internal control has become an important aspect in strengthening corporate governance and a hot issue in academic circles.Research on the economic consequences of internal controls has included several aspects, including the improvement accounting information quality, the optimization of internal financial decisions, such as investments and cash holdings, and the evaluation of external stakeholders.In terms of the quality of accounting information, researchers generally found that high-quality internal controls reduced the level of listed companies' earnings management, and increased the earnings persistence and earnings response coefficients [33,34].As a result, companies with high-quality internal control systems often are charged lower audit fees by accounting firms.Conversely, companies with defective internal control systems are charged higher audit fees for the risk of accounting information [35][36][37].To improve internal financial decisions, researchers found that high-quality internal controls improved the efficiency of investments and the ability to manage uncertainty [38].Those companies with higher levels of internal control have lower equity capital and bank debt costs, and more commercial credit financing compared to others in the industry [39][40][41].In summary, internal controls are not the only management activity available to improve efficiency and prevent risk, but are important for value creation.
Based on the relevant research literature on CSR, stock price crashes, and internal controls, we make theoretical deductions and introduce our research hypotheses below. First, we discuss the impact of CSR on the risk of a stock price crash. Next, we discuss the role of internal controls in the relationship between CSR and stock price crash risk, including moderating and mediating effects.
CSR and Stock Price Crash
Considering that implementing CSR may affect a company's accounting information disclosure, and that inadequate information disclosure is an important cause of stock price crashes, an important relationship between CSR and stock price crashes may exist and needs to be investigated empirically. Such research can also determine whether CSR functions to protect the interests of investors or to attract the attention of investors.
Based on two opposing theories, social impact theory and management balance theory, we wanted to determine the relationship between CSR and stock price crash risk. We introduce two opposing hypotheses: a risk-reduction hypothesis and a risk-increasing hypothesis.
For the first hypothesis, if a company performing CSR complies with social impact theory, CSR means benefits to shareholders.As a result, the better the CSR, the more acceptable the company to investors, and the more likely investors will retain the company's shares long term, so there is less risk that the stock price will crash.Specifically, the role of CSR in increasing company information transparency reduces the risk of a stock price crash.Because CSR is being performed for the long-term development of the enterprise, managers will emphasize the relationship with investors, consumers, suppliers, etc., and provide transparent and reliable financial reports to meet their requirements.These actions will considerably reduce the degree of information asymmetry, supervising and restricting managers, thus reducing the probability of the negative manager information hoarding behavior.As previously discussed, negative management information hoarding is the main reason for the risk of stock price crashes [22].Since companies performing CSR emphasize the interests of stakeholders, the managers of such companies tend to have higher ethical standards, be more diligent and conscientious, and less motivated to hoard information.Therefore, as CSR enhances information transparency, and the information transparency reduces the likelihood of managers hoarding bad news, we hypothesized the following: H1a: if, in the Chinese capital market, the managers of a listed company perform CSR for the interest of shareholders, then CSR and the stock price crash risk should be negatively correlated.
For the second hypothesis, if a company performing CSR complies with management balance theory, CSR aims the benefit of management rather than the shareholder.As a result, the more CSR implemented, the more likely managers are to use CSR as a tool to engage in self-interested behavior to the detriment of shareholders, so stock prices will have a higher crash risk.Specifically, from the agency theory viewpoint, management often uses CSR to enhance personal reputation, thereby neglecting shareholder interests.In fact, managers have been shown to always be motivated to engage in CSR to hide unethical behavior [39].In this case, CSR is essentially a self-interest tool for managers.After using CSR to build personal reputation, managers prefer to hide their benefit expropriation behavior, decreasing the company's information transparency, and increasing earnings management.As information transparency decreases and earnings management increases, companies inevitably lag in the transmission of negative information, so the risk of a stock price crash increases.Therefore, as CSR provides managers with an opportunity to usurp the company's interests, CSR helps management hide gains while enhancing their personal reputation, so bad news hoarding and accumulation occurs.As such, we hypothesized the following: H1b: if, in the Chinese capital market, the managers of a listed company perform CSR for self-interested reasons, then CSR and the stock price crash risk should be positively correlated.
CSR, Internal Controls, and Stock Price Crashes
Internal controls are an important institutional arrangement for enterprises, and we wanted to determine their role in the relationship between CSR and stock price crashes. The existing literature suggests that internal controls can reduce the risk of stock price crashes, but these studies did not address the interrelationship between CSR, internal controls, and stock price crash risk. In fact, internal controls may have both a mediating effect and a moderating effect on the relationship between CSR and stock price crash risk.
Mediating Effect of Internal Controls on the Relationship between CSR and Stock Price Crash Risk
From the existing research, the establishment of an internal control system helps limit management risk-taking behavior, improve financial decision-making efficiency, and avoid the risk of improper investment, thereby reducing the accumulation of enterprise risk.At the same time, the internal control system will mitigate negative management information hoarding behavior, and then play a role in reducing the risk of stock price crash.As to the relationship between CSR and the risk of a stock price crash, CSR will likely influence the risk of stock price crash via internal control systems.
If H1a, the CSR value hypothesis, is supported by empirical evidence, and since the performance of CSR means that companies assume multiple social responsibilities to shareholders, creditors, governments, consumers, and the public, companies will create a public image of observing law and discipline, transparency, and sustainable development while maximizing profits.Therefore, companies will establish internal systems to achieve this goal.In China, government departments issued the Basic Standards of Internal Control, which requires enterprises to establish internal control systems.From this point, the establishment of an efficient internal control system will not only improve management and risk prevention capabilities and promote sustainable development, but also be an important aspect of observing law and discipline.In addition, ensuring information transparency, minimizing the risk of improper management decisions, and achieving sustainable development, thus achieving the objectives of the CSR, also requires efficient internal control systems.Overall, if H1a is supported, then the greater the CSR, the easier the establishment of an efficient internal control system.More managers will comply with internal control standards, which will facilitate the functioning of the internal control system, thus enabling the timely disclosure of negative information and reducing the risk of a stock price crash.
If H1b, the CSR tool hypothesis, is supported by empirical evidence, then managers will tend to implement CSR based on personal gain, in which case the corporate commitment is not based on management intent.Their inherent drive to create a public image of transparency, compliance, and sustainable development is insufficient.Therefore, they do not constrain their opportunistic behavior by establishing an efficient internal control system.Thus, under the tool hypothesis, the greater the CSR, the more self-interested the manager is, the harder the establishment of efficient internal controls, and the less willing the manager to abide by internal control rules.As a result, the level and function of internal controls are lower, and then the negative information hoarding of managers cannot be suppressed, so the risk of a stock price crash increases.
Thus, we hypothesized the following about the role of internal controls in mediating the relationship between CSR and the company's stock price crash risk: H2: In the Chinese capital market, CSR influences the risk of a stock price crash by influencing internal controls; H2a: If, in the Chinese capital market, the performance of listed companies' CSR is in the interest of the shareholders, then CSR will increase the internal control efficiency of the companies, thereby reducing the risk of a stock price crash; H2b: If, in the Chinese capital market, the performance of listed companies' CSR is in the interest of management, then CSR will reduce internal control efficiency and thus increase the risk of a stock price crash.
Moderating Effect of Internal Controls on the Relationship between CSR and Stock Price Crash Risk
According to the existing literature, as an important institutional design, internal controls restrict employee behavior at all levels of the enterprise through control activities based on risk assessment. After the internal control environment is formed, the opportunistic behaviors of enterprise managers are restrained and risk is prevented. As a result, the consequences of manager behavior differ across internal control environments, and companies with high levels of internal control tend to have higher investment efficiency, lower capital costs, and higher information transparency. Regarding the relationship between CSR and stock price crash risk, the internal control environment is likely to act as a moderating mechanism. Specifically, if H1a, the CSR value hypothesis, is supported by empirical evidence, the possible moderating effect of internal controls will be reflected as follows. In a more efficient internal control environment, maintaining good relations with shareholders is easier for management, because a stronger internal control environment allows shareholders to be confident that managers are being diligent, reducing the possibility of self-serving information hoarding. Moreover, some synergies may exist between CSR and internal controls, as their goals coincide in focusing on relationships with stakeholders and providing transparent and reliable financial reports. Thus, CSR will more easily improve financial information transparency and reduce the level of earnings management, which will objectively enhance the effective supervision of managers and improve the probability of discovering information hoarding by management. As such, if CSR is performed for the benefit of shareholders, the effect of CSR on reducing the risk of a stock price crash will be enhanced in a more efficient internal control environment.
Conversely, in a less efficient internal control environment, even if the manager is willing to implement CSR and shape the enterprise image for shareholders' benefit, shareholders may still question whether poor internal controls allow management to hoard information for its own benefit, because imperfect risk control mechanisms cannot prevent potentially risky management behavior. At the same time, a poor internal control environment also undermines the CSR objective of maintaining relationships with related parties through financial information transparency. Weak control and restraint mechanisms at all levels of the enterprise prevent the improvement of financial information transparency. The crash-risk-reducing effect of CSR then weakens, because it is information transparency that restricts management's hoarding behavior. As such, if CSR is performed for the benefit of shareholders, the effect of CSR on reducing the risk of a stock price crash will be weakened in a less efficient internal control environment.
If H1b, the CSR tool hypothesis, is supported by empirical evidence, the possible moderating effect of internal controls will be reflected as follows. In a more efficient internal control environment, management expropriation of shareholder benefits will be more difficult, even if managers have a strong motive to perform CSR for themselves rather than for shareholders. Because management behavior is supervised and restrained, violating the interests of the enterprise by using CSR as a cover-up tool becomes difficult. If risk is strictly controlled and information is transparent, managers have little opportunity to hoard bad news under the cover of CSR. As such, if CSR is performed for the benefit of managers, the effect of CSR on raising stock price crash risk will be reduced in a more efficient internal control environment.
In a less efficient internal control environment, violating shareholders' interests is easier if managers use CSR as a cover to maximize their own interests. Because the internal control system is weak, managers face little restraint or supervision, so the goal of creating a good enterprise image and attracting shareholder support in the process of CSR implementation would be easy to accomplish. With poor internal controls and extremely low levels of transparency, managers can easily hoard bad news under the cover of CSR, and the stock price crash risk increases accordingly. As such, if CSR is performed for the benefit of managers, the effect of CSR on raising stock price crash risk will increase in a less efficient internal control environment.
Thus, we hypothesized the following about the role of internal controls in moderating the relationship between CSR and the company stock price risk: H3: In capital markets in China, internal controls will constrain the ability of CSR to influence the risk of a stock price crash.
H3a: If, in Chinese capital markets, the performance of CSR by the managers of listed companies is in the interest of shareholders, then CSR is more effective at reducing the risk of a stock price crash with higher levels of internal controls and less effective at lower levels of internal controls.
H3b: If, in Chinese capital markets, the performance of CSR by the managers of listed companies is performed in their own interests, the role of CSR in increasing the risk of a stock price crash is weaker at higher levels of internal control and stronger at lower levels of internal control.
Sample
This analysis used sample data from Chinese A-share companies listed on the Shanghai and Shenzhen exchanges from 2009 to 2015. We excluded the following companies: (1) financial companies; (2) ST companies (listed companies under special treatment); (3) those with negative net assets; (4) those with incomplete data; and (5) those with fewer than 30 firm-specific weekly stock return observations per year. Crash risk calculations are based on a company's firm-specific weekly stock returns, which are obtained from the regression residuals of the index model; to ensure the reliability of the index-model estimates, at least 30 weekly return observations per year are required for the regression, which is why companies below that threshold were excluded. These data requirements yielded a final sample of 2747 firm-years from 2009 to 2015. Table 1 provides the sample distribution by year and industry. The data were mainly obtained from the China Stock Market and Accounting Research database (CSMAR), and the internal control index data were obtained from the DIB internal control database (http://www.ic-erm.com/). CSR report score data were obtained from Rankings CSR Ratings (RKS). To avoid the impact of extreme values on the empirical results, all continuous variables were winsorized at the top and bottom 1%.
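As a minimal sketch of the winsorizing step, the following Python snippet clips each continuous variable at its 1st and 99th percentiles; the column names are illustrative placeholders, and the actual variable set follows Table 2.

```python
import numpy as np
import pandas as pd
from scipy.stats.mstats import winsorize

def winsorize_columns(df: pd.DataFrame, cols, limits=(0.01, 0.01)) -> pd.DataFrame:
    """Clip each listed column at its 1st and 99th percentiles."""
    out = df.copy()
    for col in cols:
        out[col] = np.asarray(winsorize(out[col].to_numpy(), limits=limits))
    return out

# Illustrative column names only; the actual variable set follows Table 2.
continuous_vars = ["NCSKEW", "DUVOL", "CSR", "Lsize", "Mb", "Lev", "Roa", "Acc"]
# df = winsorize_columns(df, continuous_vars)
```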
CSR Measurement
Following prior studies [40], we chose the Rankings CSR Ratings (RKS) overall evaluation score to measure the performance and disclosure of CSR in social responsibility reports. Rankings CSR Ratings is a third-party, authoritative rating agency for CSR in China, established in 2007. It is committed to providing objective, scientific CSR ratings information for socially responsible investment, responsible consumers, and the public at large. In 2009, RKS issued 371 evaluation reports on the annual social responsibility reports of A-share listed companies and disclosed the evaluation scores to the public. The RKS responsibility rating index evaluates CSR reports in four dimensions using its original Macrocosm-Content-Technique (MCT) system, based on structured scoring methods. The RKS overall score of a CSR report ranges from 0 to 100; the higher the score, the better the CSR performance. Many studies on the CSR of Chinese listed companies have used the RKS index [40], so we used the same measure.
Crash Risk Measurement
Based on the findings reported by Hutton et al. and Kim et al. [16,28,30], we used two measures of firm-specific crash risk. Specifically, we performed the calculations as follows. First, we calculated the firm-specific weekly return of stock i in week t as W i,t = ln(1 + ε i,t), where ε i,t is the residual return from the index-model regression in Equation (1):
R i,t = α i + β i R m,t + ε i,t,  (1)
where R i,t is the return rate of stock i in week t considering cash dividend reinvestment, and R m,t is the average return rate of all stocks in the A-share market, weighted by the current market value of tradable shares in week t. Next, based on W i,t, we constructed two variables to measure the risk of a stock price crash. The first measure is the negative conditional skewness of firm-specific weekly returns over the fiscal year (NCSKEW):
NCSKEW i,t = −[ n(n − 1)^(3/2) Σ W i,t^3 ] / [ (n − 1)(n − 2)(Σ W i,t^2)^(3/2) ],  (2)
where n is the number of weekly returns during year t.
The other measure is the down-to-up volatility (DUVOL):
DUVOL i,t = ln{ [ (n u − 1) Σ_down W i,t^2 ] / [ (n d − 1) Σ_up W i,t^2 ] },  (3)
where n u and n d are the number of up and down weeks in year t, respectively, and the sums are taken over the down and up weeks as indicated. The larger the negative conditional skewness (NCSKEW), the greater the crash risk; likewise, the greater the down-to-up volatility (DUVOL), the greater the crash risk.
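To make the two measures concrete, here is a small Python sketch of how NCSKEW and DUVOL could be computed from one firm-year's firm-specific weekly returns; it follows the standard definitions in this literature and is an illustration, not the authors' own code.

```python
import numpy as np

def ncskew(w: np.ndarray) -> float:
    """Negative conditional skewness of one firm-year's weekly returns W."""
    n = len(w)
    num = n * (n - 1) ** 1.5 * np.sum(w ** 3)
    den = (n - 1) * (n - 2) * np.sum(w ** 2) ** 1.5
    return -num / den

def duvol(w: np.ndarray) -> float:
    """Down-to-up volatility of one firm-year's weekly returns W."""
    down = w[w < w.mean()]   # "down" weeks: below the annual mean
    up = w[w >= w.mean()]    # "up" weeks: at or above the annual mean
    n_d, n_u = len(down), len(up)
    return float(np.log(((n_u - 1) * np.sum(down ** 2)) /
                        ((n_d - 1) * np.sum(up ** 2))))
```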
Internal Controls
In our paper, we used the DIB internal control information disclosure index to reflect the quality of internal control information disclosure. The data were obtained from the DIB internal control and risk management database (http://www.ic-erm.com/). The DIB index draws on the research results of related internal control indexes in other countries, combined with the relevant Chinese institutional environment and the current situation of listed companies in China. The index is designed around the realization of the five internal control goals, and the basic internal control index is constructed to reflect the level of internal controls and the risk control capability of listed companies. Because of its relative objectivity and authority, many studies on internal control in China use it. Because the index ranges from 0 to 1000, we used its natural logarithm to measure the internal control quality of the companies, obtaining the internal control efficiency variable (IC).
Control Variables
Following prior studies [30-32,41], we used several control variables for the factors that have been shown to affect the dependent variables of stock price crash risk and internal control efficiency.
A summary of the variable definitions is shown in Table 2.
Empirical Models
First, based on hypotheses H1a and H1b, to investigate how CSR is associated with firm-specific stock price crash risk, we estimated the following model:
Crash i,t+1 = β0 + β1 CSR i,t + γ Controls i,t + Year + Industry + ε i,t,  (4)
where Crash i,t+1 denotes NCSKEW or DUVOL in year t + 1. Our unbalanced panel analysis regressed stock price crash risk in year t + 1 on the CSR variable in year t and other control variables. Following prior studies on stock price crashes [28-30], these control variables included crash risk in year t (Crash t), the stock turnover rate in year t (DTurnover t), the mean of firm weekly returns in year t (Return t), the volatility of firm weekly returns (Sigma t), firm size in year t (Lsize t), the market-to-book ratio in year t (Mb t), financial leverage in year t (Lev t), the rate of return on assets in year t (Roa t), and earnings management measured by discretionary accruals in year t (Acc t). In addition, the regression included year and industry fixed effects. Beyond the panel data regression, we further considered the model's potential endogeneity problems: we also estimated the model with the two-stage least squares instrumental variable method (2SLS) and the generalized method of moments (GMM).
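A minimal sketch of how the baseline crash-risk regression could be estimated in Python, using statsmodels with year and industry dummies to absorb the fixed effects; the variable names are illustrative, the data frame is assumed to be loaded elsewhere, and firm-level clustering is a simplifying assumption rather than the exact standard-error treatment used in the paper.

```python
import statsmodels.formula.api as smf

# df: one row per firm-year with variables named as in Table 2 (assumed loaded);
# "firm_id" identifies the firm and "NCSKEW_lead" is crash risk in year t+1.
controls = "NCSKEW + DTurnover + Return + Sigma + Lsize + Mb + Lev + Roa + Acc"
formula = f"NCSKEW_lead ~ CSR + {controls} + C(year) + C(industry)"

fit = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(fit.params["CSR"], fit.pvalues["CSR"])
```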
Second, based on hypotheses H2a and H2b, and using the method for testing mediating factors presented by Baron and Kenny [42], we investigated whether internal controls play a mediating role in the relationship between CSR and stock price crash risk. Building on Equation (4), we derived the following models:
IC i,t = β0 + β1 CSR i,t + γ Controls i,t + Year + Industry + ε i,t,  (5)
Crash i,t+1 = β0 + β1 CSR i,t + β2 IC i,t + γ Controls i,t + Year + Industry + ε i,t.  (6)
Equation (5) is the determination equation of internal control (IC t); the independent variable is CSR in year t (CSR t). Following prior studies on internal controls [43], the control variables in Equation (5) include firm size (Lsize t), financial leverage (Lev t), rate of return on assets (Roa t), ratio of fixed assets (Fixratio t), growth opportunity (Growth t), age of the company (Age t), property rights of controlling shareholders (Soe t), an indicator for the CEO also serving as chairman of the board (Ceochair t), and auditing quality (Audit t) in year t. Equation (6) is the result of adding the internal control variable (IC t) to Equation (4). Both equations include year and industry fixed effects.
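The Sobel statistic behind the Baron and Kenny mediation test is simple enough to compute directly. A sketch follows, assuming `a` is the coefficient of CSR in the internal-control equation (5) and `b` is the coefficient of IC in the crash-risk equation (6), with their standard errors; the numeric values are hypothetical placeholders.

```python
import math
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Sobel z statistic and two-sided p-value for the indirect effect a*b,
    where a is the CSR coefficient in Equation (5) and b is the IC
    coefficient in Equation (6)."""
    z = (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Hypothetical coefficients and standard errors, purely for illustration:
z, p = sobel_test(a=0.010, se_a=0.002, b=-0.150, se_b=0.050)
print(z, p)
```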
Third, based on hypotheses H3a and H3b, to investigate whether internal controls play a moderating role in the relationship between CSR and stock price crash risk, we derived the following model:
Crash i,t+1 = β0 + β1 CSR i,t + β2 IC i,t + β3 CSR i,t × IC i,t + γ Controls i,t + Year + Industry + ε i,t.  (7)
Equation (7) results from adding the internal control variable (IC) and the interaction term of internal controls and CSR (CSR × IC) to Equation (4). The interaction term CSR × IC is used to examine the moderating effect of internal controls on the relation between CSR and stock price crash risk.
Descriptive Statistics
Table 3 lists the descriptive summary of the variables. The mean values of the crash risk measures (i.e., NCSKEW and DUVOL) are 0.158 and −0.380, respectively. Although some differences were found between the two measures, they moved in the same direction. The standard deviations of the two variables were 0.935 and 0.677, respectively; these sizable standard deviations indicate that the risk of a stock price crash varied widely across companies in China. The sample firms had an average CSR score of 35.666. In our firm dataset, the average firm had an IC score of 6.560. The average turnover rate was −0.090, the average firm-specific weekly return was 0.005, and the weekly return volatility was 0.066. The average abnormal accrual value was −0.001. The average financial leverage was 0.501 and the return on assets was 0.052. The average company age was 11.74 years. Regarding ownership, 65% of the companies were state-owned, and in 16% the CEO also served as chairman of the board. In addition, 15.9% of the firms were Big 4 auditor customers.
Table 4 presents the correlation matrix of the variables employed in our empirical regressions. With the exception of the correlation coefficient between NCSKEW and DUVOL (0.607), the correlation coefficients between the other variables were below 0.5, so there was no serious multicollinearity in the models. At the same time, the correlation coefficients of CSR with NCSKEW and of CSR with DUVOL were both negative and significant at the 1% level, which provides initial support for hypothesis H1a. However, multivariate regression analysis is still needed to test these preliminary results.
Effect of CSR on Crash Risk
Table 5 reports the results of the regression analysis of the relationship between CSR and firm-specific crash risk after controlling for other potential determinants of crash risk. Because we used panel data, Hausman tests were employed to examine which model was more appropriate (i.e., the fixed effects model or the random effects model). The test results are shown in Table 5: the chi2(10) value in column 1 is 591.67, the chi2(10) value in column 2 is 715.47, and the Prob > chi2 values are both 0.000, so we used the fixed effects model for the regressions. The regression results suggest that CSR was negatively associated with one-year-ahead crash risk measured by both NCSKEW and DUVOL. Column 1 indicates that CSR was significantly and negatively associated with NCSKEW; the coefficient was −0.002, which means that an increase of one standard deviation in CSR in year t was associated with a decrease of 0.002 in NCSKEW in year t + 1. Column 2 suggests that an increase of one standard deviation in CSR in year t was associated with a decrease of 0.007 in DUVOL in year t + 1. As for the other variables in year t, NCSKEW, DTurnover, Return, Sigma, and Lev were significantly and negatively associated with both NCSKEW and DUVOL in year t + 1, whereas Lsize and Roa were significantly and positively associated with both crash risk variables. Overall, the results in Table 5 suggest that more socially responsible listed companies have a lower future stock price crash risk. The results support the assumption in H1a that CSR reduces the risk of a stock price crash. This suggests that CSR enhances shareholder interests in the Chinese stock market. Because CSR can improve information transparency, it can objectively reduce the likelihood of managers hoarding bad news, thus reducing firm-specific stock price crash risk.
Endogeneity
The above analysis suggests a negative relation between CSR and one-year-ahead crash risk.However, considering the endogeneity of the conclusions is necessary.For this reason, we used two approaches to control the endogeneity.We introduced an instrumental variable and used the 2SLS instrument variable method to control endogeneity.We also used the generalized method of moments (GMM).
The first approach used was the 2SLS instrumental variables method. As previously reported [5], we used the average CSR score of other firms in the same industry as the instrumental variable (CSR_HAT). Columns 1 and 2 in Table 6 report the results of the instrumental variables approach. Based on the coefficients, CSR_HAT was significantly and negatively associated with both NCSKEW and DUVOL. This suggests that the negative relation between CSR and future crash risk holds after controlling for endogeneity with the 2SLS instrumental variables method.
The second approach was the dynamic panel GMM approach. As previously reported [43], we used the GMM to control for endogeneity. As can be seen from columns 3 and 4 of Table 6, the Arellano-Bond test value for AR(1) was 0, while the test value for AR(2) was large, which supports the existence of first-order serial correlation in the residual term and the absence of second-order serial correlation. At the same time, the Sargan test shows that the instrumental variables used in the model were valid as a whole, there was no over-identification problem, and there was no reason to reject the validity of the instruments. Columns 3 and 4 in Table 6 also report the results from the dynamic GMM estimation: CSR is significantly and negatively associated with both NCSKEW and DUVOL. The results suggest that the negative relation between CSR and future crash risk holds after controlling for endogeneity with the dynamic GMM estimator.
Altogether, after considering possible endogeneity, both the 2SLS instrumental variables method and the GMM continue to support the finding, meaning that the conclusion that CSR reduces stock price crash risk in the Chinese stock market is statistically robust.
Mediating Effect of Internal Controls on the Relationship between CSR with Stock Price Crash Risk
Table 7 reports the results of the mediating effect test for internal controls. We used the Sobel test procedure described by Baron and Kenny [42] to confirm the mediating role of internal controls in the relationship between CSR and stock price crash risk. For the first step, columns
Moderating Effect of Internal Controls on the Relationship between CSR with Stock Price Crash Risk
Table 8 reports the results of the test of the moderating effect of internal controls. The aim of the test was to investigate whether the relationship between CSR scores and stock price crash risk varied with the level of internal controls. As noted in the reasoning behind H3, the establishment of a high-level internal control system provides a favorable environment for CSR to play a positive role: corporate information transparency improves and management opportunism is suppressed, so the role of CSR in reducing crash risk may be more pronounced. To test the hypothesis, we used two methods. First, we re-estimated Equation (4) after partitioning the sample based on the median value of the IC variable; the results are reported in columns 1-4 of Table 8. We found that CSR was more significantly and negatively associated with crash risk (both NCSKEW and DUVOL) when firms had a higher-quality internal control system. Second, we added the interaction term CSR × IC to Equation (4) and re-estimated the regression. The results are reported in columns 5 and 6 of Table 8. We found that the interaction CSR × IC was significantly and negatively associated with both NCSKEW and DUVOL, with coefficients of −0.004 and −0.127, respectively. The negative coefficient of the interaction between CSR and IC indicates that CSR had a stronger negative association with the risk of a stock price crash at higher levels of IC, consistent with the first approach. Altogether, the two methods lead to the conclusion that internal controls also have a significant moderating effect on the relationship between CSR and stock price crashes (NCSKEW and DUVOL). With higher levels of internal controls, CSR was more effective at reducing the risk of a stock price crash. The results support the H3a assumption that CSR reduces the risk of a stock price crash more significantly under an efficient internal control system.
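Both approaches, the median split on IC and the CSR × IC interaction, are straightforward to reproduce. A sketch follows, with illustrative variable names and an assumed pre-loaded data frame; it is not the authors' original code.

```python
import statsmodels.formula.api as smf

controls = "NCSKEW + DTurnover + Return + Sigma + Lsize + Mb + Lev + Roa + Acc"

# (1) Subsample regressions above and below the median internal-control score.
median_ic = df["IC"].median()
fit_high = smf.ols(f"NCSKEW_lead ~ CSR + {controls} + C(year) + C(industry)",
                   data=df[df["IC"] >= median_ic]).fit()
fit_low = smf.ols(f"NCSKEW_lead ~ CSR + {controls} + C(year) + C(industry)",
                  data=df[df["IC"] < median_ic]).fit()

# (2) Full-sample regression with the CSR x IC interaction (Equation (7)).
fit_int = smf.ols(f"NCSKEW_lead ~ CSR * IC + {controls} + C(year) + C(industry)",
                  data=df).fit()
print(fit_int.params["CSR:IC"])
```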
Conclusions
Sustainable development is an important element of scientific development and a major issue of our times. As the main actors in the economy and society, enterprises must actively perform their CSR while growing, and closely integrate their own development with social progress, in order to maintain competitiveness in fierce market competition and achieve sustainable development. Therefore, at the present stage, actively fulfilling CSR is of special significance for China. In the current period of economic and social transition, owing to an imperfect system and a lack of supervision, some Chinese companies do not have a strong sense of CSR. They pursue profits in a one-sided way, focusing on private interests rather than public interests. This is not conducive to the sound and rapid development of the Chinese economy and society, nor to the sustainable development of the enterprises themselves. The government should properly guide enterprises to fulfill their CSR so as to promote sustainable development in China.
This study examined the effects of CSR on stock price crash risk and the role of internal controls in the relationship between CSR and crash risk. In the Chinese capital market, determining whether companies undertake CSR to improve or harm shareholders' interests is of concern. If socially responsible companies maintain a high degree of financial reporting transparency, then bad news hoarding behavior will be restrained, so CSR should reduce the risk of a stock price crash. On the contrary, if managers use social responsibility for their own benefit, they will hide bad news, and CSR should increase the risk of a stock price crash. Simultaneously, as Chinese government departments have been promoting internal control systems for enterprises, the role of internal controls in mitigating the risk of stock price crashes should also be examined.
Our findings support the mitigating effect of CSR on stock price crash risk and the positive role of internal controls on the relationship between CSR and crash risk.Specifically, we found a significantly negative association between CSR and stock price crash risk, after controlling for other determinants of crash risk.In addition, our results remained robust after considering potential endogeneity, including using the 2SLS instrumental variables and GMM.Furthermore, we found that internal controls played a significant partial mediating role between CSR and stock price crash (NCSKEW and DUVOL).Internal controls have become an important method for Chinese companies to assume social responsibility and reduce their operating risk, especially the risk of a stock price crash.We also observed that internal controls have a significant moderating effect in the relationship between CSR and stock price crash (NCSKEW and DUVOL).In certain environments with higher levels of internal control, the positive effect of CSR was more obvious, and CSR more effectively reduced the company's stock price crash risk.
Our study adds to the growing CSR literature, expands the scope of CSR research, and enriches the understanding of the economic consequences of CSR. Our findings also complement the literature on the determinants of stock price crash risk. By combining CSR, as a corporate social behavior, with the level of internal control as an internal environment, our study analyzed their joint impact on the risk of a stock price crash. Our results help to better understand the causes of stock price crash risk. In practice, the conclusions of this study provide a reference for Chinese managers, investors, and relevant government departments in evaluating the effects of CSR and internal controls, and provide regulators with a means to address abnormal fluctuations in the stock market. The results have important reference value for investors, managers, and government departments in developing countries like China. For example, if corporate managers take the demands of stakeholders seriously and pay attention to improving relations with investors, suppliers, consumers, and other parties, then CSR can not only enhance the brand image of the company but also reduce the risk of a stock price crash. Investors can choose companies with higher CSR in order to reduce the damage a stock price crash would cause to their interests. For the government, regulatory focus should be on companies that perform CSR poorly, because they tend to cause volatility in the stock market.
Although our research contributes to the CSR literature in multiple ways, this study nevertheless has some limitations. First, CSR was measured by the evaluation index of a third-party rating agency. While it seems appropriate to use this relatively authoritative measure in the Chinese context, and academics are generally sympathetic to this method, the index still has certain defects in reflecting the actual social responsibility performance of listed companies in China. The reason is that the measure is based on the social responsibility reports disclosed by the listed companies themselves, and the rating agency does not evaluate the actual social responsibility activities of listed companies. In the future, it will be necessary to use more appropriate measurement methods in empirical research. Second, our study included 2747 firm-year observations from 2009 to 2015, which covers only a modest share of listed companies. The reason is that the proportion of Chinese listed companies that disclose social responsibility reports is less than 50%, and the RKS evaluation index is based on these reports, so the index cannot fully reflect the social responsibility performance of all listed companies in China. In the future, the research sample could be extended to all listed companies. Third, this study examined the role of internal controls in the relationship between CSR and the crash risk of a company's stock price and verified the relevant hypotheses, but socially responsible companies may do more than this to reduce stock price volatility. Therefore, it is necessary to further study the mechanisms linking CSR and stock price crash risk in the future, and to examine more external factors that affect the strength of this relationship.
In order to expand the CSR literature, future research needs to focus on various topics in the securities market.Future research needs to be based on long-term observation of the impact of CSR on the volatility of stock prices of listed companies, combined with China's special institutional and cultural background, to select more meaningful research topics.This will not only make an important contribution to understanding the economic consequences of CSR, but also reduce the volatility of Chinese capital markets.In addition, in future studies, we need to find more factors that affect the risk of a company's stock price crash, so as to increase the understanding of stock price crash and improve the model's interpretation of the crash risk.
Table 1 .
Sample distribution by year and industry.
Table 2 .
Summary of variable definitions.
Table 3 .
Summary of descriptive statistics.
Table 4 .
Correlation analysis of the variables.
Table 4 .
Cont. This table presents Pearson correlation coefficients between the variables. The sample included 2747 firm-year observations between 2009 and 2015. * Statistical significance at the 10% level. ** Statistical significance at the 5% level. *** Statistical significance at the 1% level.
Table 5 .
Regression analysis of the effect of corporate social responsibility on stock price crash risk.
Table 5 .
Cont. : This table presents the regression results of the effect of corporate social responsibility (CSR) scores on firm-level stock price crash risk.The sample included 2747 firm-year observations between 2009-2015.The two-tailed p-values, based on standard errors adjusted by a two-dimensional cluster at the firm and year levels, are reported in parentheses.* Statistical significance at the 10% level.** Statistical significance at the 5% level.
Table 6 .
Regression analysis to address endogeneity concerns.
Note: This table presents the regression results of the effect of corporate social responsibility (CSR) scores on firm-level stock price crash risk, the results of corporate social responsibility (CSR) on internal control (IC), and the results of corporate social responsibility (CSR) and internal control (IC) on firm-level stock price crash risk.The sample includes 2747 firm-year observations between 2009-2015.The two-tailed p-values, based on standard errors adjusted by a two-dimensional cluster at the firm and year levels, are reported in parentheses.* Statistical significance at the 10% level.** Statistical significance at the 5% level.*** Statistical significance at the 1% level.
Note: This table presents the regression results of the effect of corporate social responsibility (CSR) on firm-level stock price crash risk at different IC levels. The sample includes 2747 firm-year observations between 2009 and 2015. The two-tailed p-values, based on standard errors adjusted by a two-dimensional cluster at the firm and year levels, are reported in parentheses. * Statistical significance at the 10% level. ** Statistical significance at the 5% level. *** Statistical significance at the 1% level.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2006-01-01T00:00:00.000
|
268533644
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ij-healthgeographics.biomedcentral.com/counter/pdf/10.1186/1476-072X-5-56",
"pdf_hash": "18bda9e3a17f868f6692e5bab1da25731a02edd3",
"pdf_src": "Grobid",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2966",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"sha1": "18bda9e3a17f868f6692e5bab1da25731a02edd3",
"year": 2006
}
|
pes2o/s2orc
|
International Journal of Health Geographics: An Unsupervised Classification Method for Inferring Original Case Locations from Low-resolution Disease Maps
Background: Widespread availability of geographic information systems software has facilitated the use of disease mapping in academia, government and private sector. Maps that display the address of affected patients are often exchanged in public forums, and published in peer-reviewed journal articles. As previously reported, a search of figure legends in five major medical journals found 19 articles from 1994–2004 that identify over 19,000 patient addresses. In this report, a method is presented to evaluate whether patient privacy is being breached in the publication of low-resolution disease maps. Results: To demonstrate the effect, a hypothetical low-resolution map of geocoded patient addresses was created and the accuracy with which patient addresses can be resolved is described. Through georeferencing and unsupervised classification of the original image, the method precisely re-identified 26% (144/550) of the patient addresses from a presentation quality map and 79% (432/550) from a publication quality map. For the presentation quality map, 99.8% of the addresses were within 70 meters (approximately one city block length) of the predicted patient location, 51.6% of addresses were identified within five buildings, 70.7% within ten buildings and 93% within twenty buildings. For the publication quality map, all addresses were within 14 meters and 11 buildings of the predicted patient location. Conclusion: This study demonstrates that lowering the resolution of a map displaying geocoded patient addresses does not sufficiently protect patient addresses from re-identification. Guidelines to protect patient privacy, including those of medical journals, should reflect policies that ensure privacy protection when spatial data are displayed or published.
Background
Geocoding patient data -translating the plaintext addresses of patients into longitudes and latitudes -has become routine and enables display and analysis of disease patterns.Many public health surveillance systems and academic investigations rely on specific case locations for identifying patterns, correlates, and predictors of disease [1][2][3].Maps that display such geocoded health data are frequently presented publicly and published electronically and in print.
However, publishing patient address locations on maps also creates a risk of re-identification of individuals [4][5][6][7].We recently reported an inadvertent breach of privacy across five major medical journals, identifying 19 articles from 1994-2004 that include maps with patient addresses plotted as individual dots or symbols [4,5].From these publications, over 19,000 patient addresses are plotted on map figures.We demonstrated through a process of reverse identification that the home addresses of many of these patients could be discovered, despite the low resolution of the disease maps.
Here, we provide the details of that method.We rely on unsupervised classification of the spectral properties of the map image to identify case locations.Since we do not have available to us the original addresses of the patients represented in the published maps, we devised an indirect approach relying on simulation.
Methods
We sought to quantify the degree of re-identifiability of patient home addresses from published maps.To accomplish this, a hypothetical low-resolution map of geocoded patient addresses is produced and then the accuracy with which patient addresses can be resolved (reversely identified) through a five step process is measured.First, an original, prototypical patient map for an urban metropolitan area in Boston, MA was produced (Figure 1).Using building parcel outlines for the city of Boston, [8] we generated a synthetic or hypothetical set of patient addresses by randomly selecting buildings.Cases were assigned by a stratified sampling design of building parcels to achieve a distribution representative of all building and population densities in the city.Buildings were selected with equal spacing of 0.02 degrees.A total of 550 addresses were randomly selected.Centers of the selected building were then calculated and plotted on a county map of Boston to represent patient addresses [9].One important issue is that our use of the building footprint for geocoding does not mirror the reduced accuracy obtained from geocoding addresses.Address geocoding will have a series of associated errors that may be related to the underlying structure of a geographic area (e.g.: road length, parcel size, housing density) [10].
We created a JPEG image with a resolution of 50 dots per inch (dpi), 550 × 400 pixels, a file size of 129 kb and a scale of 1:100,000.This low resolution is typical for web display and is lower than generally used in slide presentations.Also the re-identification of patient addresses was evaluated using a higher-resolution map (266 dpi, 2926 × 2261 pixels, 712 kb, 1:100,000), often the minimum resolution for peer-reviewed publications.
There are several steps involved in reversely identifying a patient address. First, the sample map is scanned or imported into GIS software as an image file [11]. Second, the imported map is georeferenced. The cartographic projection of the map is used to set the coordinate system. Generally, the projection of a published map would be unknown, and the correct projection would need to be found by manually matching the image of the map to an image of a correctly georegistered map of the same area. In this case, we had a priori knowledge of the map projection. In either case, ground control points are selected on the image using a corresponding vector outline of the map area to re-project the image file of patient locations and reference it to a coordinate system. In this example, an outline of counties around Boston provided by the US Census Bureau was used to set the ground control points [9]. The process of scanning and georeferencing the disease map parallels the methodology detailed by Curtis et al [7]. Third, using image analysis software [12], unsupervised classification of the georeferenced map is performed. Given the spectral properties of the image file, pixels are classified so that pixels representing the patient points are aggregated together. Fourth, a reclassified raster map (an image composed of individual pixel elements arranged in a grid) that contains only patient points is extracted and converted to a vector file. Finally, coordinates of the patient points are calculated.
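As a rough illustration of the classification and georeferencing steps, the following Python sketch clusters the pixels of a map image by colour with k-means, keeps the cluster that matches the marker colour, and converts the surviving pixel coordinates to map coordinates via an affine transform. The image path, marker colour, number of clusters, and transform coefficients are placeholders; the original study used dedicated GIS and image-analysis software rather than this code.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = np.asarray(Image.open("disease_map.jpg").convert("RGB"))  # placeholder path
h, w, _ = img.shape
pixels = img.reshape(-1, 3).astype(float)

# Step 3: unsupervised classification of the spectral (RGB) values.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pixels)

# Step 4: keep the cluster whose mean colour is closest to the case-marker colour.
marker_rgb = np.array([255.0, 0.0, 0.0])  # assumed red point markers
centers = np.array([pixels[labels == k].mean(axis=0) for k in range(6)])
marker_cluster = int(np.argmin(np.linalg.norm(centers - marker_rgb, axis=1)))
rows, cols = np.divmod(np.where(labels == marker_cluster)[0], w)

# Step 5: affine georeferencing, pixel (col, row) -> (longitude, latitude).
# The six coefficients would come from ground-control-point registration;
# the values below are placeholders.
a, b, c, d, e, f = -71.19, 0.0005, 0.0, 42.40, 0.0, -0.0004
lon = a + b * cols + c * rows
lat = d + e * cols + f * rows
```

In practice, the marker pixels would also be grouped into connected blobs and each blob reduced to its centre point before georeferencing, which corresponds to the raster-to-vector conversion described in the fourth step.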
Accuracy of reverse geocoding was measured as (a) the number of correctly identified patient addresses (b) the distance between the reversely identified address coordinate and the boundary of the building of the patient home address and (c) the number of buildings in which the patient could reside, given the reversely geocoded address.To calculate (c), we estimated the minimum buffer size from the predicted location needed to contain the centroid of the correct address.Accuracy in this case is therefore defined as the number of incorrect addresses within this buffer.
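A sketch of how metrics (b) and (c) could be computed, assuming the predicted points and building locations are available as centroid coordinate arrays in a projected, metre-based system; the original study measured distance to the building boundary rather than the centroid, so this is an approximation for illustration only.

```python
import numpy as np

def reidentification_accuracy(predicted_xy, true_xy, all_building_xy):
    """For each predicted point, return (a) the distance to the true building
    centroid and (b) the number of buildings whose centroids fall inside the
    buffer needed to reach the true building."""
    dists, n_candidates = [], []
    for pred, true in zip(predicted_xy, true_xy):
        d_true = np.linalg.norm(pred - true)
        d_all = np.linalg.norm(all_building_xy - pred, axis=1)
        dists.append(d_true)
        n_candidates.append(int(np.sum(d_all <= d_true)))
    return np.array(dists), np.array(n_candidates)
```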
Results
Our reverse identification method correctly identified 26% (144/550) of patient addresses precisely from a sample map with low-resolution GIS output. We observed increased detection with the higher-resolution publication quality output, with 79% (432/550) of patient addresses identified exactly.
For the low resolution presentation quality map, reversely geocoded locations were on average within 28.9 meters (95% CI, 27.4-30.4) of the correct original address (Figure 2a). On average, the correct patient address was identified within eight buildings (95% CI, 7.0-8.3). Overall, 51.6% of addresses were identified as being at any of five buildings, 70.7% at any of ten, and 93% at any of 20 (Figure 2b). For the higher resolution publication quality map, all addresses were predicted within 14 m of the correct address. This distance is well within the footprint of most apartment buildings and even many single family residences. While most addresses (79%) could be identified to a single building, the maximum number of buildings in which the patient could reside, given the reversely geocoded addresses, was 11 buildings.
Predictions of patient location were accurate in both densely populated urban settings and suburban regions, as illustrated in Figure 3. Among those addresses precisely identified, there was no observed effect of housing density on the rate of patient address re-identification. However, given the variation in the number of individuals per housing unit, we expect that the anonymity of patients in suburban single family houses would be significantly reduced compared to urban areas. Locales with a high probability of living in large apartment buildings afford greater anonymity. In this study, we essentially controlled for the variability of geocoding accuracy by using building footprint data rather than address data. Previous research has shown that housing density may have a substantial impact on address geocoding accuracy [10].
Figure 1. Prototypical patient map for Boston, Massachusetts. The image displays 550 addresses selected by a stratified random sampling design. The original JPEG image used in the analysis had a resolution of 50 dots per inch (550 × 400 pixels), a file size of 129 kb, and a scale of 1:100,000. This would be a typical output for web display and usually lower resolution than would be shown in a slide presentation or in a peer-reviewed publication.
Discussion
Our results demonstrate that even lowering the resolution of a map displaying geocoded patient addresses does not sufficiently protect patient addresses from re-identification.Despite the low quality of output sources, these images -based on high precision input sources -preserve positional accuracy.Using a low quality map that would serve the purpose of web or presentation display, we were able to precisely identify more than one quarter of all randomly selected home addresses and on average patients could be identified to a city block or within one of eight buildings.Using a map with minimum resolution for peer-reviewed publication, we could identify almost all patient addresses and on average patients could be identified within 14 m.
The ultimate accuracy of the patient re-identification will no doubt depend on the number of individuals residing at these addresses. In the case of multi-family apartment dwellings, address identification may still afford a certain level of privacy protection. In the case of single family dwellings, re-identification becomes much more likely. However, even in the best case scenario of an urban multi-family apartment building, an additional concern is that individuals at these addresses can be fully re-identified when linked with other datasets or by using other characteristics supplied in the publication [13]. Previous research has shown that combinations of seemingly innocuous data are adequate to uniquely identify individuals with a high level of reliability [14]. For example, an experiment using 1990 U.S. Census summary data surprised the public health community by showing that datasets previously thought to be adequately de-identified, containing only 5-digit ZIP code, gender and date of birth, could be linked with other publicly available data (e.g., voting records) and used to uniquely identify 87% of the population of the United States [15]. Low-resolution maps of patient locations pose an additional risk to individual privacy, allowing considerably more precision in re-identification than might be expected. Although the Health Insurance Portability and Accountability Act Privacy Rule (Section 164.514) does not explicitly address the publication of such maps, certain formats of geographic data display most likely violate the spirit of that rule.
Figure 2. Accuracy of reversely identifying patient location from a hypothetical low-resolution patient map in Boston, Massachusetts. The accuracy of the reverse identification was determined by (A) the distance between the reversely identified and the original addresses and (B) the number of buildings in which the patient could reside, given the reversely geocoded address. The reversely geocoded location was on average within 28.9 meters (95% CI, 27.4-30.4) of the correct address. The mean number of buildings in which the patient might reside was 7.7 (95% CI, 7.0-8.3).
Curtis et al have also recently described a method to re-identify patients from published maps through manual outlining of case markers [7]. Though the vector-based approach of heads-up digitizing can be more accurate than raster-based unsupervised classification in certain circumstances, in this case it may be difficult to find the true border of the case markers from scanned paper-based maps (such as the newspaper article described by Curtis et al) or even low-resolution digital images. If the marker is not digitized accurately, then it follows that the centroid of this polygon will also less accurately reflect the original geocoded location. Our approach differs from the manual approach in that we rely on analyzing the spectral properties of the map image through unsupervised classification to automatically identify patient locations. The raster-based method based on the spectral properties of the image can provide a reliable means of re-creating the original vector file and systematically obtaining the center point of a low-resolution marker. This comparison, however, warrants further evaluation. Nonetheless, the results of the two papers are very similar in that they show that maps containing point data are vulnerable to patient address re-identification. These studies and our previous publication on this topic [5] should be viewed together when informing policy around the display of geographic data.
The main question that should be asked by both authors and editors is: what are the benefits and risks of point localization of patients? Is it necessary to publish maps of point locations for the presentation of relevant research results, or are they presented merely for illustrative purposes? The answers to these questions should guide decisions on how to report disease maps [16]. If just for illustrative purposes, there are techniques available to visualize spatial data without revealing patient information [17]. For instance, a common approach to de-identifying such data has been to use ZIP or postal code rather than home address to protect anonymity. While usually appropriate for the reporting of study results, aggregation of data to an administrative unit poses constraints on the analysis and visualization of disease patterns [17][18][19]. Other approaches are available for masking geographic data, such as spatial masking of cases by randomly relocating cases within a given distance of their true location [20][21][22][23] or the population-density adjusted 2D Gaussian blurring approach, which results in only a small reduction in sensitivity to detect clustering patterns [24]. These methods avoid the visualization constraints of data aggregation and afford sufficient privacy for publication without substantial loss to the visual display. Masking methods provide a more systematic and reliable means of de-identification than simply reducing map resolution. Spruill developed a measure of privacy protection for any mask, analogous to our measure of the number of addresses within which the patient could reside [25]. Such a measure could be used by journal editors as a rule for not publishing maps of individual cases unless a certain value of anonymity was attained. This measure, often referred to as K-anonymity, could help to establish guidelines for the safe publication of disease maps [13,24].
Our approach relies on simulation, rather than attempting to re-identify patients from published maps. We chose this approach to avoid propagating any prior inadvertent disclosures of patient identity, and to avoid impugning particular authors or journals. An advantage of our approach is that, because we know the original plotted location, we can precisely measure the accuracy of re-identification. Our analysis also does not address the geocoding method. Accuracy of re-identification will also depend on the method used to geocode the patient address. Use of a global positioning system (GPS) will provide greater accuracy than that of an address geocoder (automatic conversion from home address text to latitude and longitude using interpolation along street line data). When a geocoder is applied, the input data source will affect the accuracy of the estimated address coordinate. Many US-based studies rely on the freely available US Census TIGER line files as input to assign coordinates to addresses. Although TIGER line files differ in accuracy across the US, they rarely, if ever, approach the geometric accuracy of GPS coordinates or even of more detailed commercial datasets. In fact, geocoding based on the free Census data available to most health researchers increases patient anonymity, as the proportional placement of the address location can greatly affect geocoding accuracy [10,26]. Outside the US, street level data may not be available for address geocoding. Therefore, spatial analysis studies in these areas would rely on the more accurate GPS measures. By extension, greater positional accuracy is revealed in these studies. Our findings may therefore be highly pertinent for GIS-based studies in developing countries.
The issues we raise here have, of course, much wider implications than for just health data, including crime data, housing data (e.g.: Section 8 units, shelters for abused women, etc.), and other administrative data sets [20,27,28].New spatial data standards that protect confidentiality while still effectively communicating information about spatial patterns require immediate evaluation [29].
Conclusion
The publication of low-resolution disease maps poses an inherent jeopardy to patient privacy.Because the appropriate use of the patient address level data can bring real benefit to many areas of public health research that deal with spatial analysis, accidental disclosure of patient information from such maps may lead to constraints on obtaining geographically referenced health data.Thus, guidelines for the display or publication of health data are needed to guarantee privacy protection.Further, the editors of journals and textbooks should consider implementing policies to ensure the safe reporting of spatial data.
Figure 3
Results of reversely identifying patient addresses in Boston, Massachusetts.The green buildings are the randomly selected patient locations.The blue points are the predicted locations of the cases from the presentation quality map (50 dpi) and red points are predictions from the publication quality map (266 dpi).Proximities of the predicted to the actual location are displayed for both (A) a high density urban area and (B) a low density suburban area.
|
v3-fos-license
|
2024-04-01T15:28:53.176Z
|
2024-03-01T00:00:00.000
|
268812248
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.sleepx.2024.100109",
"pdf_hash": "6b563aad2ff9d4499199c7bd6d9bfac103e851f4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2967",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d35c5eb42a4088b003eb896823e4a36e4289ef25",
"year": 2024
}
|
pes2o/s2orc
|
Cataplexy response with extended-release once-nightly sodium oxybate: Post hoc responder analyses from the phase 3 REST-ON clinical trial
Background Once-nightly sodium oxybate (ON-SXB), an extended-release oxybate formulation, yielded significant (P < 0.001 at 6 g, 7.5 g, and 9 g) reductions in cataplexy episodes in participants in the phase 3 REST-ON clinical trial (NCT02720744). This post hoc analysis from REST-ON further characterized changes in cataplexy episodes in participants with narcolepsy type 1 (NT1). Methods Participants with narcolepsy aged ≥16 years received ON-SXB (1 wk, 4.5 g; 2 wk, 6 g; 5 wk, 7.5 g; 5 wk, 9 g) or placebo. Percentages of participants with NT1 who had ≥25%, ≥50%, ≥75%, and 100% reductions from baseline in mean number of weekly cataplexy episodes were determined. Two-sided P values comparing ON-SXB vs placebo were calculated with Fisher exact test. Results Participants with NT1 (ON-SXB, n = 73; placebo, n = 72; modified intent-to-treat population) had a baseline mean number of weekly cataplexy episodes of 18.9 (ON-SXB) and 19.8 (placebo). Of participants receiving the highest doses of ON-SXB (7.5 and 9 g), approximately half had a 50% reduction, one-third had a 75% reduction, and one-tenth had a 100% reduction in their cataplexy episodes vs placebo. Significantly greater proportions of participants receiving ON-SXB vs placebo had respective reductions in weekly cataplexy episodes of ≥25% at weeks 1 (4.5 g; P < 0.05), 3 (6 g; P < 0.001), 8 (7.5 g; P < 0.001), and 13 (9 g; P = 0.001). Conclusions A significantly greater proportion of participants receiving ON-SXB vs placebo experienced reductions in weekly cataplexy episodes at all tested doses. Approximately 10% of participants taking the 2 highest ON-SXB doses had complete elimination of their cataplexy.
Introduction
Narcolepsy is a rare, chronic sleep disorder whose primary symptoms include excessive daytime sleepiness (EDS) and cataplexy [1][2][3].People who have narcolepsy with cataplexy (narcolepsy type 1; NT1) describe their cataplexy episodes as embarrassing or, at worse, emotionally oppressive and having a substantial impact on their lives [4].There are limited options available to treat cataplexy [5,6].Until recently, pitolisant and twice-nightly oxybate therapy (immediate-release [IR] sodium oxybate [SXB] and calcium/magnesium/potassium/sodium oxybates) were the only approved therapies for the treatment of EDS and cataplexy in adults with narcolepsy in the US [7][8][9]; IR SXB and pitolisant are approved in the EU [10,11].The IR oxybate formulations are administered twice nightly, with the first dose taken at bedtime and the second taken 2.5-4 hours later [8,9].Awakening during the night to take the second dose may compromise treatment adherence and lead to safety concerns [12][13][14].
The extended-release, once-nightly formulation of SXB (ON-SXB; FT218 [LUMRYZ™ (sodium oxybate) for extended-release oral suspension], Avadel Pharmaceuticals, Chesterfield, MO) received US Food and Drug Administration approval for the treatment of cataplexy or EDS in adults with narcolepsy in May 2023 [15,16].
In the pivotal phase 3 REST-ON clinical trial (NCT02720744) of individuals with narcolepsy, ON-SXB treatment resulted in statistically significant, clinically meaningful improvements vs placebo on all 3 coprimary endpoints: mean sleep latency on the Maintenance of Wakefulness Test (MWT), Clinical Global Impression of Improvement rating, and number of weekly cataplexy episodes at all doses analyzed (6 g [week 3], 7.5 g [week 8], and 9 g [week 13]; all P < 0.001) [17].A post hoc analysis revealed a significant reduction from baseline in the mean number of cataplexy episodes vs placebo starting at week 1 (ON-SXB 4.5 g; P < 0.05).This post hoc responder analysis further assessed reductions in weekly cataplexy episodes in participants with NT1.
Study design and participants
REST-ON study design and eligibility criteria were previously described [17]. Briefly, REST-ON was a phase 3, multicenter, randomized, double-blind, placebo-controlled trial with a 3-week screening period, a 13-week treatment period, and a 1-week follow-up period. Eligible participants were aged ≥16 years, had a diagnosis of narcolepsy, and had an Epworth Sleepiness Scale score >10. Participants with NT1 were required to self-report continuing cataplexy for the last 3 months, with a mean of ≥8 weekly cataplexy episodes during screening.
Individuals were randomly assigned 1:1 to receive ON-SXB (1 wk, 4.5 g; 2 wk, 6 g; 5 wk, 7.5 g; 5 wk, 9 g) or placebo.Randomization was stratified by narcolepsy type.Concomitant stable stimulant therapy was permitted.All anticataplectic drugs were discontinued before study entry with a 3-week washout period to account for rebound cataplexy.
The study protocol was approved by the site's institutional review board, and each participant (or legally authorized representative for those aged <18 years) provided written informed consent before participation.Study conduct adhered to the ethical principles of the Good Clinical Practice guidelines, the International Council for Harmonisation guidelines, and the Declaration of Helsinki, in addition to applicable national and local laws and regulatory requirements.
Cataplexy assessments
Participants with NT1 recorded the number of cataplexy episodes as 0, 1, 2, 3, 4, or ≥5 per day in an electronic daily diary. At least 3 entries per week were required for the average to be considered an observation. The mean number of weekly cataplexy episodes was calculated as the number of episodes divided by the number of days with available diary data of valid weeks within that period, then multiplied by 7.
Statistical analysis
Efficacy was assessed in the modified intent-to-treat (mITT) population (all randomized participants with ≥1 efficacy measurement after receiving the 6-g dose). Two-sided P values were calculated using the Fisher exact test. A mixed-effects model for repeated measures was used to analyze percentage change from baseline in cataplexy frequency [17].
Participants
A total of 212 participants were enrolled in REST-ON and received medication (safety population); in the mITT population (n = 190), 145 participants had NT1 (ON-SXB, n = 73; placebo, n = 72). Baseline demographic and disease characteristics were well balanced between groups in the safety population (Table 1).
Discussion
In these post hoc responder analyses of the phase 3 REST-ON clinical trial, a significantly greater proportion of participants treated with ON-SXB (6, 7.5, and 9 g) experienced ≥25%, ≥50%, and ≥75% reductions in number of weekly cataplexy episodes vs placebo. At the 7.5-g (week 8) and 9-g (week 13) doses, a significantly greater percentage of participants receiving ON-SXB had complete resolution of cataplexy episodes vs placebo (both P < 0.05). All evaluated doses of ON-SXB reduced cataplexy ≥25% from baseline vs placebo, meeting the American Academy of Sleep Medicine (AASM) criteria for clinical significance [18]. These findings support the primary data from REST-ON, showing statistically significant improvements in number of weekly cataplexy episodes with ON-SXB (P < 0.001 vs placebo) [17].
In the primary REST-ON analyses, comparable efficacy and safety profiles were observed for the 2 highest doses of ON-SXB (7.5 and 9 g) vs placebo, with numerically greater improvement with the 9-g dose (weeks 9-13) [17]. Similar results were observed for the 7.5- and 9-g doses in this post hoc analysis: approximately 10% had complete elimination of cataplexy and approximately one-third and one-half had ≥75% and ≥50% reductions in weekly cataplexy episodes, respectively. REST-ON is the only controlled trial demonstrating efficacy with a 7.5-g dose of oxybate, thus providing evidence to assist clinicians in making dosing decisions for their patients.
REST-ON is also the first clinical trial to show significant reductions in the number of cataplexy episodes after 1 week of SXB treatment (4.5-g dose; P < 0.05 vs placebo) [17]. Further clinically meaningful reductions in cataplexy episodes early in therapy were seen in this post hoc analysis, with 44% of participants experiencing ≥25% reductions at week 1. SXB treatment is typically initiated at 4.5 g/night and uptitrated in 1.5-g/night increments weekly until the optimal dose is obtained. These data suggest that patients receiving ON-SXB can expect to have reductions in cataplexy episodes early in titration.
Cataplexy is a burdensome symptom to patients [19], and there are limited approved treatment options [7][8][9][10][11]. Although both SXB and pitolisant are strongly recommended by the AASM for the treatment of cataplexy in adults, only SXB is strongly recommended by the European Academy of Neurology; pitolisant is weakly recommended because of moderate quality of evidence due to attrition bias and short trial durations [5,6]. An additional treatment option, particularly one that offers clinically significant improvements in episode frequency earlier in treatment or complete elimination of cataplexy, would be beneficial for patients.
Trials of other treatments for cataplexy have varied in baseline levels of cataplexy. For immediate-release SXB, the first trial enrolled 136 participants with moderate to severe cataplexy and a median of 21 episodes per week [20], which is comparable to the REST-ON baseline value of 19-20 episodes [17].
There are limitations to consider, particularly the post hoc nature of this analysis. In REST-ON, daily episode frequency ≥5 was calculated as 5, which may lead to underreporting of events. The placebo group had an increasing reduction in the number of cataplexy episodes over the 13-week trial, likely reflecting participants' understanding that the dose in the active group would increase over time. Despite both these limitations, the observed differences from placebo with ON-SXB were statistically significant in this post hoc analysis, except for 100% reduction at week 3 with the 6-g dose. Additionally, the numbers of cataplexy episodes were not recorded as severe vs mild, or generalized vs local, similar to other prior trials. Therefore, it is not possible to determine if cataplexy severity decreased with ON-SXB treatment. As the impact on people with narcolepsy is greater when cataplexy is generalized, capturing these data in future studies of any cataplexy treatment is needed. Despite this, these findings demonstrate that treatment with ON-SXB significantly reduced the frequency of cataplexy episodes and led to the elimination of cataplexy episodes for some.
Conclusions
These post hoc analyses on cataplexy reduction corroborate the primary REST-ON results and provide unique insights into the efficacy of a once-at-bedtime dose of ON-SXB as a treatment for cataplexy.These data may provide a useful framework for clinicians when setting expectations of effectiveness for patients with NT1.
Table 1
Baseline characteristics of REST-ON participants with NT1 (safety population).
Richard Bogan reports a relationship with Suven Life Sciences Ltd that includes: funding grants. Richard Bogan reports a relationship with Balance that includes: funding grants. Richard Bogan reports a relationship with Vanda that includes: funding grants. Richard Bogan reports a relationship with Merck & Co., Inc that includes: funding grants. Richard Bogan reports a relationship with Eisai that includes: funding grants and speaking and lecture fees. Richard Bogan reports a relationship with Philips that includes: funding grants. Richard Bogan reports a relationship with FRESCA Medical that includes: funding grants. Richard Bogan reports a relationship with LivaNova that includes: funding grants. Richard Bogan reports a relationship with Roche that includes: funding grants. Richard Bogan reports a relationship with Sommetrics that includes: funding grants. Richard Bogan reports a relationship with Harmony Biosciences that includes: speaking and lecture fees. Akinyemi O. Ajayi reports a relationship with Avadel Pharmaceuticals that includes: consulting or advisory. Bruce C. Corser reports a relationship with Jazz Pharmaceuticals that includes: speaking and lecture fees. Bruce C. Corser reports a relationship with Merck & Co., Inc that includes: speaking and lecture fees. Bruce C. Corser reports a relationship with Eisai that includes: speaking and lecture fees. Bruce C. Corser reports a relationship with Harmony Biosciences that includes: speaking and lecture fees. Bruce C. Corser reports a relationship with Avadel Pharmaceuticals that includes: consulting or advisory and speaking and lecture fees. Jennifer Gudeman reports a relationship with Avadel Pharmaceuticals that includes: employment. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
v3-fos-license
|
2020-07-09T09:05:48.643Z
|
2020-01-01T00:00:00.000
|
226523007
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://thesai.org/Downloads/Volume11No6/Paper_86-Factors_Influencing_Practice_of_Human_Resource_Information_System.pdf",
"pdf_hash": "a90036b59ed69b20fcc77b57ad710b85dd08f522",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2968",
"s2fieldsofstudy": [
"Business"
],
"sha1": "43cddc14aabcc1c92fb13bc3f1bb98b80ad6c5fe",
"year": 2020
}
|
pes2o/s2orc
|
Factors Influencing Practice of Human Resource Information System in Organizations: A Hybrid Approach of AHP and DEMATEL
This paper blends the Technology-Organization-Environment (TOE) framework and the Human-Organization-Technology (HOT) fit model to identify the factors that influence management's choice to adopt a human resource information system (HRIS) in organizations. A hybrid Multi-Criteria Decision Making (MCDM) model combining the Decision Making Trial and Evaluation Laboratory (DEMATEL) and the Analytic Hierarchy Process (AHP) is used to achieve the objective of the study. In this study, the experts agree that staff IT skill is more significant than the other factors in the Human dimension. Similarly, IT infrastructure, top level support, and competitive pressure are the most vital factors for the Technology, Organization and Environment dimensions, respectively. Moreover, this paper will help managers attend to the factors that are vital for HRIS implementation in organizations. Keywords—Analytic Hierarchy Process (AHP); Decision Making Trial and Evaluation Laboratory (DEMATEL); factor; Human Resource Information System (HRIS); Multi-Criteria Decision Making (MCDM) Model
I. INTRODUCTION
Conventional human resource management (HRM) processes have been moved to HRIS in order to achieve organizational objectives [1]. Specialists have recognized the importance of HRIS applications and investigated a large number of influential factors for the selection and use of HRIS among business organizations [2]. Experts contend that the relative weight of the investigated factors may change along with gradual development and its settings. Moreover, researchers have shown that technological innovation is consistently unpredictable in a competitive setting, so it is crucial to understand the factors influencing the choice of HRIS selection and usage in organizations for a specific setting [3].
The concept of technology innovation adoption has evolved globally [4], though innovation models developed in Western countries still face constraints on their appropriateness in developing settings [5,6]. Compared to the West, Bangladesh, as a developing nation, is markedly different in its technological, cultural, and economic conditions. Thus, investigating an applicable model for HRIS adoption in developing countries is imperative. Past research indicates a gap concerning the potential factors influencing HRIS adoption in developing nations [7]. Therefore, the prime target of this study is to explore the most important factors of HRIS adoption among organizations in Bangladesh. Following this goal, the paper offers several novelties. Firstly, the study suggests an IT adoption model in the context of Bangladesh, as theories of IT adoption for developing nations are scarce. Secondly, it presents a thorough investigation of the level of interdependency among the basic variables behind HRIS adoption in developing nations. Moreover, it proposes an MCDM model combining the DEMATEL and AHP approaches to assess the significance of the determining elements for HRIS usage in Bangladesh.
The remaining part of the paper is organized as follows. Section II describes the factors of human resource information system adoption based on the Human-Organization-Technology (HOT) fit model and the technological, organizational, and environmental (TOE) model. The research model and the research methods and techniques adopted to achieve the objectives of the study are presented in Section III. Section IV includes the data analysis along with some interesting results. A brief discussion of the findings is provided in Section V, before concluding the paper in Section VI.
II. LITERATURE REVIEW
Researchers have identified a number of factors for technological innovation in different settings, and in this paper we examine several variables for IT innovation adoption. Both subjective and quantitative methodologies have been utilized in technology adoption research, the subjective being the more readily observable approach. It is apparent from past research that the fifteen most significant and widely used variables fall into four dimensions: human (characteristics of senior executives, staff IT skill, employee behavioral characteristics), technology (comparative advantage, perceived compatibility, perceived complexity, and IT infrastructure), organization (top level support, organizational culture, organizational structure, organizational size, perceived cost), and environment (competitive pressure, support from technology service provider, government support). The related research findings regarding these variables are summarized below. Remarkably, in most past studies, HRIS and electronic human resource management (e-HRM) are used interchangeably [8].
In this study, the three variables of the human dimension are characteristics of senior executives (innovativeness and IT knowledge), staff IT skill, and employee behavioral characteristics. Adoption of a new system is a strategic decision of an organization [9]. Therefore, the decision to adopt or reject an innovation depends on the personal attributes and IT skills of top managers [10]. On the contrary, a few studies have asserted that senior officials' IT knowledge and creativity do not influence IT adoption in organizations [11]. Previous research by Alam, Masum, Beh and Hong [5] revealed that organizations with staff from an IT background are more likely to adopt IT-enabled HRM applications. In line with previous research, Teo, Lim and Fedric [4] and Bian [12] showed that IT ability and employee intention to use IT applications are strong indicators of the choice of HRIS in organizations in China. However, a few scholars found that the IT abilities of staff were an unimportant factor for technological innovation across multiple settings [2,13].
The organization dimension includes five variables: top level support, organizational culture, organizational structure, organizational size, and perceived cost. Some researchers identified the support of top management as a vital factor for IT application use [2,4]. Conversely, a few studies on IT adoption have claimed that support from the top level does not always influence IT adoption in organizations [11,14]. Various scholars found organizational culture to be an important factor in the selection of IT applications [15,16]. In addition, Cooper and Schindler [17] observed that, in case of any clash with the information system (IS) of an organization, the software is either abandoned or customized so that it matches the existing culture of the business organization. Conversely, some researchers argued that organizational culture is not an essential divider between adopters and non-adopters of modern IT applications [18]. Moreover, an earlier study of 110 manufacturing firms in Singapore confirmed that organizational size is the only consistently accepted factor among the possible causes of HRIS use in organizations [4]. Scholars contended that, if expenses are perceived to be high, people will be less inclined to invest in the selection of HRIS [5]. A contemporary study of internet business adoption in Iranian SMEs found that the budgetary viewpoint (high expenses) is the sole issue behind not embracing web-based business applications [19].
The four variables of the technology dimension, namely perceived compatibility, perceived complexity, comparative advantage, and IT infrastructure, are widely established technological factors in IT innovation research. Teo, Lim and Fedric [4] indicated that the choice of HRIS adoption is strongly connected to a favorable impression of HRIS in the human resources (HR) department. Similarly, Al-Dmour [2] claimed the significance of perceived relative advantage for HRIS usage, whereas perceived compatibility was found to be the strongest influencer of HRIS use [4]. Likewise, Ghobakhloo, Arias-Aranda and Benitez-Amado [19] recommended perceived compatibility as a key component of IT application in the adoption stage. In this regard, researchers concluded that the successful adoption of new technology suffers from a high degree of uncertainty and risk due to perceived complexity. For this reason, Gutierrez, Boukrami and Lumsden [14] mentioned that, in the UK, perceived complexity has a strong influence on the implementation of distributed computing services. Conversely, Teo, Lim and Fedric [4] and Bian [12] found it to be a non-discriminating factor among firms, and a few researchers have claimed that compatibility does not influence innovation adoption [11]. In addition, researchers found that perceived complexity is not a vital factor in HRIS or related IT adoption choices in firms [4,11]. Recent studies indicated that IT infrastructure is the most common barrier to adopting IT-enabled applications in organizations of developing countries [1,20]. Conversely, some current research showed that IT infrastructure is an insignificant factor for IT innovation adoption [5,21].
Lastly, the environmental dimension consists of three variables: competitive pressure, support from technology service providers, and government support. Ghobakhloo, Arias-Aranda and Benitez-Amado [19] recommended competitive pressure as one of the most important variables to take into account when deciding to implement IT strategies within an organization. Additionally, Bian [12] discovered that pressure from competitors was a critical factor for using HRIS in China. In Jordanian firms, Al-Dmour [2] stated that the availability of IT providers and their sponsorship is the most important environmental factor for using HRIS. Conversely, various studies on IT adoption have concluded that support from technology service providers is not a major factor [11,22]. A recent study of HRIS adoption in public organizations in Australia found that government rules and regulations are very important determinants of HRIS adoption and implementation [23]. Similarly, Al-Dmour [2] claimed that government policy regarding the adoption of IT applications is one of the significant factors for HRIS implementation in firms in Jordan.
In contemporary research on organizations in Bangladesh, researchers claimed that organization, environment, and technology are the most persuasive dimensions for HRIS adoption [5]. The authors identified five critical factors for HRIS adoption in organizations: IT infrastructure, staff IT capabilities, top level support, perceived cost, and competitive pressure. Likewise, Masum [1] stated that the foremost obstructions to implementing HRIS in developing countries are the lack of management knowledge, experience, and training, and the fear of expense, but the advantages of HRIS outweigh its limitations [24]. Past studies indicate that research on HRIS in the Bangladesh setting is still at an early stage [25].
Amalnick, Ansarinejad and Nargesi [26] adopted DEMATEL and ANP to evaluate important factors for the successful implementation of ERP systems. They found that ERP vendor selection, the project team, the project/business plan and business model, management, and budgeting made up the top five success factors for ERP implementation in organizations. In recent research, scholars employed AHP with Fuzzy AHP (FAHP) and group decision-making (GDM) to identify diverse factors from different dimensions of a web-based e-learning system in a university context [27]. In a study of an Iranian steel company, Rouhani, Ashrafi and Afshari [28] evaluated CSFs in ERP using a hybrid model based on fuzzy DEMATEL and fuzzy AHP. The results showed that a project champion, a clear project plan, project team competence, training and education, and organizational culture were among the eight most significant aspects to be considered.
Interestingly, most past studies relied on descriptive statistics, lacked robust inferential statistics, and did not use advanced artificial intelligence (AI) tools and techniques. This visible research gap can be addressed with AI techniques such as the MCDM model approach. MCDM is a well-known family of techniques for managing complex issues that involve high uncertainty, conflicting goals, different interests, and numerous points of view [29]. Moreover, MCDM approaches are effective for decision making, weighing options, and choosing the most fitting one. Data were collected using a questionnaire survey, which sought information relevant to the research such as the factors behind the HRIS adoption decision and the extent of HRIS application usage. We selected 15 human resource managers from 15 reputed manufacturing organizations in Bangladesh who have adequate knowledge of and expertise in HRIS. To ensure anonymity and freedom from bias, a self-administered structured questionnaire was employed. The survey method was used because the sampled elements and variables were observed without any attempt to control, influence, or manipulate them. AHP, one of the best-known MCDM techniques, is used to analyze complex decisions across broad application areas. In the literature, DEMATEL has been adapted to various subjective and factor-related problems, including industrial planning, decision making, sustainable development, and other global issues [30]. Recognizing the benefits of hybrid MCDM, especially the integration of DEMATEL and AHP, scholars have used these techniques in different settings such as product selection, supplier selection, and software selection [31]. DEMATEL is a helpful strategy for analyzing cause-effect relationships between factors and dimensions, but DEMATEL itself cannot determine the weights of individual criteria, which is where AHP proves useful. The two techniques together help in managing complex issues, as decision makers can gain a better understanding of the problems to be solved. As indicated by Horng, Liu, Chou, Yin and Tsai [32], combining the DEMATEL and AHP techniques can provide a supportive tool for recognizing the basic attributes of policy implementation as well as computing the weights of business decision criteria. Thus, the DEMATEL and AHP strategies are adopted to fulfill the goals of this study.
IV. RESULTS
As discussed in the previous section, a conceptual research strategy has been developed in this study. Consequently, in this section, we construct a combination of MCDM models for the procedure of HRIS adoption selection. The proposed MCDM model includes two principal stages, namely AHP and DEMATEL.
A. Analytic Hierarchy Process (AHP)
Based on the inputs taken from the respondents, AHP is used to derive ratio scales from pairwise assessments. The consistency ratio is the parameter used to check the accuracy of the respondents; in case of inconsistency, the respondents were asked to submit their responses again. The stepwise procedure is given below. Step 1: Respondents provide pairwise comparisons of dimensions and variables. The ranking uses a scale of 1 to 9, where 1 indicates equal preference and 9 represents intense preference of one dimension or variable over another, while rankings of 3, 5, and 7 indicate little, strong, and very strong preference. Based on the input given by the respondents, an n×n reciprocal matrix is constructed, where n is the number of dimensions of HRIS adoption. Sample input for a respondent and the matrix are given in Fig. 1. The dimensions are represented in short form as Org, Hum, Tech, and Env for organization, human, technology, and environment, respectively. All elements in the reciprocal matrix should be greater than zero, i.e., a_ij > 0, where a_ij represents the element of the matrix in row i and column j.
Step 2: Sum each column of the reciprocal matrix and divide every component of the matrix by the sum of its column, which yields the normalized relative weight matrix. Averaging the rows of this matrix gives the normalized principal eigenvector, or priority vector. The priority vector can be expressed as a percentage of preference by multiplying it by 100.
Step 3: Find out the consistency index (CI) of the respondents: CI = (λ_max − n) / (n − 1), where λ_max is the principal eigenvalue, determined by summing the products between each element of the priority vector and the corresponding column sum of the reciprocal matrix: λ_max = Σ_i w_i x_i, where x_i is the sum of column i of the comparison matrix and w_i is the corresponding element of the priority vector, with i = 1, …, n.
Step 4: If n > 2, calculate the consistency ratio (CR) using CI and the Random Consistency Index (RCI) proposed by Prof. Saaty, who derived it using 500 matrices as the sample size. The formula is CR = CI / RCI. Step 5: If CR < 10%, the judgment is considered consistent; otherwise the respondent is requested to reconsider the preferences. Step 6: Average the priority judgments of the 15 respondents (Expert 1 … Expert 15) and rank the dimensions and respective variables from the highest value (rank 1) downwards, as reported in Table I. Thus, the highest and lowest priorities of the dimensions as well as the variables can be found; the rankings run from higher to lower. It can be clearly seen that the organization dimension receives the highest average value (57.83%) whereas the environment dimension receives the lowest (5.54%). Thus, organization is the most preferred dimension (ranked 1st) whereas environment is the least preferred (ranked 4th) among the respondents. Moreover, human is ranked 2nd with 26.05% and technology is ranked 3rd with 10.56%.
B. Decision Making Trial and Evaluation Laboratory (DEMATEL)
The DEMATEL algorithm is used in the next section to find the most important causes as well as the correlations among variables; it reveals the influence between the variables. The steps are described as follows. Step 1: Respondents provide influence ratings for the factors (dimensions and variables), based on the assumptions given in Step 1 of Section 3.1. Based on these ratings, answer matrices (denoted x) are constructed: if there are H respondents and n factors, there are H answer matrices, each composed of n×n elements.
Step 2: Construct an initial direct-relation matrix, also known as the average influence matrix. Each component is computed as a_ij = (1/H) Σ_k x_ij^(k), where i and j represent the row and column of the answer matrices (i, j = 1, …, n) and H is the number of respondents. Sum all the components of each row and take the largest row sum as G; the value of G is then used to generate the normalized direct-relation matrix D.
Step 3: The normalized direct-relation matrix D is computed by dividing all the elements of A by G, i.e., D = A / G.
Step 4: The total relation matrix is worked out using the formula T = D(I − D)^(-1), where I is the identity matrix. The total relation matrix for the four variables of the technology dimension is shown below.
Step 5: Let r_i be the sum of row i and c_j the sum of column j of T. When i = j, r_i + c_j indicates the total effect given and received by factor i, and r_i − c_j indicates the net effect factor i has on the system. If r_i − c_j > 0, factor i is a causer; otherwise factor i is a receiver, also known as an effect.
Total relation matrix for the variables of the technology dimension (matrix values not reproduced here).
C. Net Effect and Total Effect of Variables
In the technology dimension, the value of r-c is greater than 0 for V3 and V4. More specifically, the values for perceived complexity (V3) and IT infrastructure (V4) are 0.4409 and 1.2113, respectively, so these are the major causes within the technology dimension and should not be overlooked. A higher value of r+c indicates higher prominence. The value of r+c is highest for variable V2, perceived compatibility, revealing that this variable interacts with the other variables more than any other variable does.
The same rule applies to the variables of the organization, human, and environment dimensions. In the organization dimension, variable V5 (top level support) is the most important cause, while variable V6 (organizational culture) has a stronger correlation with the other criteria than any other variable in that dimension. Variable V4 (IT infrastructure) is the most important cause in the technology dimension. In the human dimension, variable V11 (staff IT skill) is the most significant cause influencing the other variables, whereas V12 (employee behavioral characteristics) has major correlations with all other factors in the system. Turning to the environment dimension, competitive pressure is the significant causer affecting the other variables.
In a nutshell, top level support is the most crucial factor in the organization dimension and staff IT skill is the most significant in the human dimension. Moreover, IT infrastructure is the most vital factor in the technology dimension. Turning to the environment dimension, competitive pressure is considered the most prominent factor.
V. DISCUSSIONS
This research adds new ideas to the existing HRIS literature. The investigation explores the variables influencing the decision to adopt HRIS in the industries of Bangladesh. The exploration of factors based on the theoretical model overcomes the limitations of the HOT-fit model and the TOE model. Analyzing the respondents' opinions, the research found that organization is ranked as the most important dimension (first) whereas environment is the least important (fourth). Moreover, human is ranked second and technology third.
In the human dimension, the IT expertise of staff is recognized as the most significant driver inducing HRIS usage in organizations. This outcome is consistent with past studies [5,12,33]. To sustain consistent growth and a long-term focus in business organizations, the availability of HRIS expertise is a vital factor in IT-dependent HRM applications. In a contemporary study, researchers stated that personnel who have both technical and managerial skills are considered the most valuable asset of a firm [25]. It also shows the importance of technology readiness for any technological acceptance. HR professionals should therefore be equipped with multidisciplinary knowledge of application-based IT and HR functions to ensure effective HRIS operations in organizations. However, this result contradicts some contemporary studies in which the IT expertise of staff was identified as an insignificant driver for IT innovation adoption [2,13].
In the organization dimension, this study finds that top level support is a statistically critical determinant of HRIS utilization. Support from higher management is vital for allocating resources and motivating people to use the system, a finding that also supports past studies [2,4,5,12]. Active support from top management speeds up HRIS adoption in organizations. Furthermore, for an adoption project to succeed, top management must provide sufficient resources such as people, materials, capital, and related support. The outcome of this study indicates that adopting a new technology is easier for organizations when senior executives support the innovation. Moreover, innovation adoption sometimes encounters barriers within organizations, and top management helps to overcome these problems. This study therefore strongly recognizes the importance of top level support, which significantly influences incumbents to use HRIS in routine HRM tasks. Conversely, this result contradicts some contemporary studies in which top level support was identified as a non-significant driver for HRIS implementation and usage [11,14].
In the technology dimension, the outcome demonstrates that IT infrastructure is the highest-ranked factor for HRIS selection in the organizations of Bangladesh, a finding consistent with previous studies [34,35]. Likewise, Masum [36] also identified IT infrastructure as a critical success factor in organizations of Bangladesh. Contemporary research shows that it is increasingly important to ensure that the HRIS fits the existing IT infrastructure of the information systems used in an organization, and that HRIS modules need an easy-to-understand, user-friendly interface. In almost all developing countries, IT-related costs, the necessary infrastructure, and the quality of that infrastructure hinder the adoption of IT applications in organizations. Nevertheless, this result contradicts some current studies in which IT infrastructure was found to be an insignificant factor for IT innovation adoption [5,21].
From the environment dimension, competitive pressure is shown to be a dominant driver of HRIS use in organizations. This implies that organizations feel pressure to use new technologies such as HRIS to gain competitive advantage and achieve their goals. The results of previous research support this finding [14,37,38]. This study therefore suggests that organizations should adopt proper strategies to handle competitive pressure since, nowadays, adopting new technology is essential for survival in the industry. In earlier research, it was stated that a company cannot achieve competitive advantage without properly managing its human resources and IT applications [25]. Currently, organizations are adopting HRIS to help get the best results from their employees, make better-informed decisions, streamline HR processes, and better allocate human resources. These benefits encourage other organizations to adopt and use HRIS and related IT applications in order to remain competitive across the whole spectrum of business. However, the present research contradicts the findings of Ahmad, Abu Bakar, Faziharudean and Mohamad Zaki [39], Ahmadi, Nilashi and Ibrahim [11], and Teo, Lim and Fedric [4], who stated that competitive pressure is an insignificant factor for HRIS adoption and usage.
VI. CONCLUSIONS
This paper combines two adoption theories, the HOT-fit model and the TOE framework, to recognize the factors influencing organizations in an emerging country like Bangladesh to implement and use HRIS for managing human resources efficiently and effectively. The findings of the research will help managers identify the issues related to adoption, implementation, and usage. As far as our knowledge goes, organization-level adoption behavior of HRIS using MCDM techniques has never been examined in Bangladesh. Thus, this paper adds value to the HRM discipline by improving the present understanding of HRIS adoption issues, an unexplored field in Bangladesh. This study applied only 15 determinants to inspect the decision to use HRIS; other relevant factors may also influence HRIS usage, such as pressure from trading partners, data security, government policy, and information intensity. For further research, the findings of the study might be used as a generalized model for developing countries taking decisions on adopting HRIS in their organizations.
|
v3-fos-license
|
2018-01-25T18:11:29.988Z
|
2018-01-25T00:00:00.000
|
3559995
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2017.00739/pdf",
"pdf_hash": "21a6e584a291ff5935e41777bb2ec4e98529dbdb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2969",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"sha1": "21a6e584a291ff5935e41777bb2ec4e98529dbdb",
"year": 2017
}
|
pes2o/s2orc
|
The Pathoconnectivity Profile of Alzheimer’s Disease: A Morphometric Coalteration Network Analysis
Gray matter alterations are typical features of brain disorders. However, they do not impact the brain randomly. Indeed, it has been suggested that neuropathological processes can selectively affect certain assemblies of neurons, which typically are at the center of crucial functional networks. Because of their topological centrality, these areas form a core set that is more likely to be affected by neuropathological processes. In order to identify and study the pattern formed by brain alterations in patients with Alzheimer's disease (AD), we devised an innovative meta-analytic method for analyzing voxel-based morphometry data. This methodology enabled us to discover that in AD gray matter alterations do not occur randomly across the brain but, on the contrary, follow identifiable patterns of distribution. This alteration pattern exhibits a network-like structure composed of coaltered areas that can be defined as a coatrophy network. Within the coatrophy network of AD, we were able to further identify a core subnetwork of coaltered areas that includes the left hippocampus, left and right amygdalae, right parahippocampal gyrus, and right temporal inferior gyrus. By virtue of their network centrality, these brain areas can be thought of as pathoconnectivity hubs.
This is particularly true in the case of Alzheimer's disease (AD). So far great efforts have been made in order to identify a prototypical pattern of gray matter atrophy due to AD, and to put it into correlation with clinical symptoms (24). It is now known that cortical thinning of specific brain sites can be already detected even before the appearance of the symptomatology and that the atrophy tends to increase when the condition worsens (25). Although the cortical reduction is commonly found in normal aging (26,27), the pathological fingerprints of AD are mainly observed in a temporoparietal set of brain areas, including hippocampus, entorhinal cortex, precuneus, and posterior cingulate cortex (28,29). The involvement of these regions has been repeatedly confirmed by meta-analytical studies, which have additionally found the alteration of the right superior frontal gyrus (30). According to Ferreira et al. (31) the left medial temporal lobe is the most impaired area in AD, even in the preclinical phases of the disease, so much so that the impairment of this area can be a good predictor of the clinical worsening of AD. A study of the relationship between the cortical thinning in AD and large-scale structural organization of the brain has revealed that AD reduces both the nodal centrality of temporal and parietal heteromodal association cortices and the positive correlation of thickness values normally found bilaterally between the parietal regions. In contrast, authors reported an increase of positive correlation among brain areas that are part of the default mode network (DMN) (32).
Recently, investigations into the cognitive deficits caused by AD have taken advantage of the methodology of network analysis (33,34). According to this approach, altered brain areas can be represented by means of a set of nodes, linked together by means of edges representing different statistical values. Studies in this line of research have found that AD increases the correlation between the values of cortical thickness of the fusiform gyrus, temporal pole, parahippocampal gyrus, and cingulum, which are all in proximity to each other. Conversely, a decrease of the correlation has been observed between distant areas (35). Of note, it has been suggested that, by combining different sources of information: (i) large-scale structural network data, (ii) values of cortical thickness, and (iii) the pace of cortical thinning along time, it could be possible to distinguish patients with AD from healthy controls with an accuracy of 96.1%, as well as predict the conversion of mild cognitive impairment (MCI) into AD 6 months before its clinical onset (36). These studies raise the issue of moving from group analysis to single-subject results, which is an essential aspect when dealing with potential biomarkers for diagnostic purposes and surrogate endpoints for disease-modifying clinical trials. Recent methods of single-subject graph measurements have made it possible to link network alterations and cognitive decline. For instance, it has been shown that the more the network becomes disorganized, the worse the clinical condition is (37). Moreover, even in healthy subjects, an association has been found between low CSF Aβ42 levels and alteration of network properties, which might be interpreted as a very early indication of an underlying pathological process (38). All these results provide evidence that the approach based on network analysis can bring valuable insight to clinical practice (33).
So far, at least four important mechanisms have been proposed to account for the distribution of brain alterations: transneuronal spread, nodal stress, shared vulnerability, and trophic failure (4,5).
The first mechanism suggests that misfolded proteins (native peptides with an incomplete or incorrect folding, as well as de novo polypeptides that become prone to self-aggregation) can diffuse along neuronal pathways (18,(39)(40)(41). Increasing evidence indicates that the spread of misfolded proteins presents several similarities to the intercellular transfer of the plasma membrane prion protein along axonal fibers, potentially contributing to disease progression (42). This mechanism has been demonstrated in neurodegenerative diseases, such as Alzheimer's, Parkinson's, Huntington's, amyotrophic lateral sclerosis, and tauopathies (43,44); more recently it has also been generalized to other brain disorders (45).
The second mechanism hypothesizes that the functional stress of the network hubs may result in a greater vulnerability of these areas (1,4,14,46). This susceptibility has been supported in human beings with in vivo neuroimaging techniques and voxel-based meta-analyses (14).
The third mechanism, shared vulnerability, proposes that certain brain regions are affected together because they share structural, metabolic, or gene-expression characteristics that make them similarly susceptible to the pathological process (4,5). Finally, the fourth mechanism hypothesizes a disruption in the production of trophic factors, which could bring about the deterioration of neural wiring (4,5,53,54).
If we consider the case of AD, neuropathological signatures, namely amyloid-β (Aβ) plaques and neurofibrillary tangles, are already present in the preclinical phase of the disease, with further spreading during progression. In fact, several years before the clinical onset of AD, Aβ and tau progressively accumulate in the brain with a certain degree of spatial specificity as well as a partial overlap among the two deposits (55). The relationship between tau and amyloid deposits in the cerebral cortex seems to have a hierarchical organization, with tau and Aβ clusters exhibiting distinctive intramodal and intermodal characterizations (56). These findings would support the view of AD as an amyloid-facilitated tauopathy (57). Furthermore, Aβ and tau propagation and the subsequent deposition and cytotoxicity effects appear to occur mainly between anatomically interconnected areas, thus affecting the functional communication among them (58).
The concept of a gradual spread of pathological signs is a crucial aspect put forward by recent theoretical models. Raj et al. (3) have proposed a network diffusion model of disease progression in dementia, according to which the propagation of pathogenic proteins follows the regional concentration gradients under the spatial constraints defined by brain connectivity. Other authors have proposed a stochastic epidemic spreading model to describe intra-brain Aβ propagation and deposition processes, according to which regions with a higher connectivity degree are the main target of Aβ, thus suggesting that brain hubs are the more exposed to the negative effects of these aberrant proteins (40). Finally, in addition to focusing on misfolded proteins and propagation pathways, a further interesting approach suggests the need to investigate the relationship between these two factors (18). This model considers molecular nexopathies as conjunctions of pathogenic protein and brain networks.
Key factors are therefore supposed to be structural/functional developmental factors and differential vulnerability of neural connections. Accordingly, long-range axonal connections may be more vulnerable to Aβ, so that functional and structural alterations could occur within the large-scale distributed frontotemporoparietal network, such as the one that supports the DMN processing.
In order to identify and study the coatrophy network of AD, we devised an innovative meta-analytic method for analyzing voxel-based morphometry (VBM) data. This methodology enabled us to address the following issues: a. How do gray matter alterations distribute across the brain affected by AD? b. Is it possible to recognize a network-like structure in the pattern formed by these coaltered areas? c. Can specific clusters of coaltered areas be identified within the coatrophy network of AD?
Selection of Studies
In March 2017, we performed with the software Sleuth an extensive meta-analytic search in the BrainMap VBM database (www.brainmap.org) (59)(60)(61). All the studies that fulfilled the following criteria were retrieved: "Contrast is Gray Matter"; "Context is Disease Effect"; "Observed Changes is Controls > Patients" and "Diagnosis is Alzheimer's Disease." Results were controlled so as to keep only experiments comparing subjects diagnosed with AD against healthy controls. Our search focused on decreased gray matter values only, as the development of AD is strongly characterized by axonal deterioration and neuronal loss that result in brain atrophy (62). Furthermore, thus far just a few VBM studies have investigated gray matter increase in AD, so that these data were not sufficient for obtaining reliable results with our meta-analytical methods.
To ensure a transparent description of the selection process, we followed the "PRISMA Statement" international guidelines (63, 64) (Figure S1 in Supplementary Material). The characteristics of the sample can be viewed in Table 1.
Anatomical Likelihood Estimation (ALE) and the Creation of Modeled Activation (MA) Maps
Voxel-based morphometry data were statistically elaborated with the ALE procedure. ALE is a voxel-based meta-analytical technique that models the spatial coherence of different results (101)(102)(103). A three-dimensional Gaussian probability distribution is then centered on each focus of every experiment, using the following formula: p(d) = exp(−d²/(2σ²)) / ((2π)^(3/2) σ³), in which d refers to the Euclidean distance between a voxel and the considered focus, while σ refers to the spatial uncertainty.
The SD can be obtained by means of the full-width at half-maximum (FWHM), such that σ = FWHM / (2 √(2 ln 2)). The combination of these Gaussian distributions constructs an MA map for each experiment. The definitive ALE map is finally generated by uniting the MA maps. ALE maps were thresholded at a voxel-level FWD p < 0.05, in line with Eickhoff et al. (102,104,105). Given a specific threshold for cluster forming, a null distribution of cluster sizes was derived by simulating a long series of experiments with the same characteristics of real data and then by generating an ALE map. The score histogram so obtained was eventually employed to assign a threshold p-value.
Construction of the Morphometric Coatrophy Network
To identify the distribution of gray matter alterations, we have developed a novel methodology capable of constructing the morphometric coalteration networks associated with brain disorders. Our analysis can in fact discover whether an altered brain area, say A, is statistically related to the alteration of one or more other brain areas (B, C, etc.). Thus, our analysis can construct the morphometric coatrophy network composed of the areas occurring to be altered together and, subsequently, investigate within the coatrophy network (i) how an altered region is statistically associated with other altered regions and (ii) which regions are likely to be involved in a more widespread net of alterations.
Node Creation and Labeling
We superimposed the ALE map on the Talairach atlas so as to distinguish automatically the anatomical regions identified by the ALE algorithm. If (at least) 20 voxels of the ALE map were found to be inside a certain area of the atlas, then this area was considered to be altered. We chose this cluster threshold so that less relevant regions could be excluded. We employed a peak detection algorithm to identify the local maxima of the ALE map, and we subsequently selected only those peaks that were greater than the 90 percentile of the value distribution. This set was further reduced by applying a minimum interpeak distance of 10 mm. Finally, we positioned a node, labeled on the basis of the Talairach atlas, in correspondence of every survived peak.
Thresholding Values Applied during Node Creation and Their Rationale
As described in the previous paragraph, three thresholds were applied during the node creation procedure.
The first threshold regulates the minimum number of voxels (i.e., 20 voxels) necessary to consider a brain area as altered. The rationale behind this threshold is to exclude from the coatrophy network nodes representing minimally (or, from a meta-analytical point of view, rarely) altered brain areas, thus improving and simplifying the interpretability of the results without losing highly relevant information. However, even considering brain areas in which only one voxel is altered, the results would not have been spurious, since ALE maps were voxel-level thresholded, which implies that each single voxel contains statistically significant information (104) (see Figure S2 in Supplementary Material for the visualization of the network obtained with different threshold values). This choice, however, would have unnecessarily increased the complexity of the coatrophy network. The second threshold, applied to the peak-value distribution, allowed us to include in the network only nodes representing those areas for which there is a very high consensus between different experiments (i.e., high ALE value) (104). Even in this case, this threshold could have been removed; all the nodes that can be created with the present methodology represent statistically significant effects, since they can only lie inside the anatomical regions identified by the ALE algorithm, which already has its own statistical thresholding step (see Figure S3 in Supplementary Material for the visualization of the network obtained with different threshold values).
Finally, the inter-peak distance was chosen considering the mean value (10.2 mm; SD = 0.4 mm) of the uncertainty in spatial location associated with the reported coordinates, as discussed in Eickhoff et al. (101).
Therefore, the only effect of those thresholds on our data is to decrease the redundancy of the network, so as to obtain clearer results to be visualized and further analyzed, minimizing the information loss.
Coatrophy Distribution
From the set of the nodes as defined in the previous paragraph, we constructed an N × M coalteration matrix, in which each row referred to an experiment, whereas each column referred to a network node. On the basis of a Bernoulli data-generation model, we constructed a probability distribution of joint alteration values for each pair of nodes. In other words, for any couple of nodes (a and b), we were able to describe their four conjoint states of alteration by means of two binary variables: (1) a and b both altered; (2) a and b both unaltered; (3) a altered and b unaltered; and (4) a unaltered and b altered. Consequently, the four corresponding probabilities, namely the probability that a and b are both altered, the probability that both are unaltered, the probability that only a is altered, and the probability that only b is altered, were obtained from the frequencies of these combinations across all experiments.
These probabilities refer to the conjoint frequencies of a couple of nodes (a and b) in all their four possible combinations. Table 2 shows the marginal probabilities for each couple of nodes.
On the grounds of these four probabilities, we have applied the Patel's k index (106), which has been validated with simulated data by Smith et al. (107), in order to calculate the degree of coalteration between nodes. This index measures the probability that two nodes (a and b) are actually coaltered against the probability that node a and node b are altered independently of each other. The numerator of Patel's k is the difference between the probability that a and b are altered together and the probability expected if a and b were altered independently of each other. The denominator is a weighted normalizing constant, in which max(ϑ1) refers to the maximum value that the conjoint probability P(a,b) can attain given P(a) and P(b), whereas min(ϑ1) refers to the minimum value of P(a,b) given P(a) and P(b). Patel's k index has values that range from −1 to 1; a value of |k| that is close to 1 indicates a high degree of connectivity between nodes. The statistical significance of this index was assessed with a Monte Carlo algorithm that simulated a multinomial, generative model, which took into consideration the alteration of all nodes. This statistical procedure obtained an estimation of p(k|z) by sampling a Dirichlet distribution and by calculating the proportion of samples for which k > e, where e was the threshold of statistical significance set at p < 0.01.
Topological analysis
We defined our system of interconnected nodes as a network of coatrophy areas and examined it with the network analyzer included in Cytoscape 3.5.1 (108,109). We were therefore able to achieve a good and reliable description of the net formed by the coatrophy areas under both the aspects of brain structure and functional organization.
Node Degree and Edge Betweenness
The node degree was defined as the number of edges linked to a node. We employed this parameter in order to detect the nodes that were more connected within the network, which are commonly considered as brain hubs. In turn, the parameter of edge betweenness was defined as the number of the shortest routes that go through an edge in a graph or a network (110). Thus, edges exhibiting high values of betweenness are supposed to be involved in a large number of shortest routes, so that their elimination is likely to have an impact on communication between many couples of nodes.
RESULTS
Common Patterns of Morphometric Alterations
The ALE performed on all the data retrieved by our search (57 experiments, 883 subjects, and 691 foci) showed that gray matter alterations caused by AD are mainly located in the right medial frontal gyrus, the right inferior frontal gyrus, the left inferior parietal lobule, the right midcingulate gyrus, the left supramarginal gyrus, the right angular gyrus, the bilateral fusiform gyrus, the right precuneus, the bilateral insula, the right thalamus, the bilateral superior temporal gyrus, the bilateral superior temporal pole, the bilateral hippocampus, the bilateral parahippocampal gyrus, the bilateral amygdala, and the left caudate nucleus (Figure 1).
Morphometric Coatrophy Network
The left panel of Figure 2 illustrates the 40 nodes used to build the coatrophy network, while the heat map in Figure 2 shows the relationship between the elements of each possible couple of nodes measured by the Patel's k index. Figure 3 illustrates the whole coatrophy network: the color scale ranges from blue to red for the 146 edges and indicates an increase in k values. Edges are to be assumed as undirected. Many densely interconnected nodes characterize the temporal lobe, especially the hippocampus and the parahippocampal gyrus. In contrast, only one node characterizes other brain areas, such as the cingulate cortex and precuneus. Although all the edges that are shown are statistically significant, the ones with the highest k value are those involving the left hippocampus, bilateral amygdala, right parahippocampal gyrus, and right inferior temporal lobe (Tables S1 and S2 in Supplementary Material).
Network Clustering
Given the great number of nodes as well as the high density of edges within the coatrophy network, we used the k-core decomposition algorithm (111,112), as implemented in the clusterMaker plugin for Cytoscape, to detect a central subnetwork of highly interconnected nodes. This algorithm iteratively eliminates all the nodes showing a degree that is lower than a user-defined threshold k.
As many nodes populate the hippocampi, we projected them on a 2D template in order to better clarify their spatial localization (Figure 6). Five out of the six nodes in the left hippocampus were found to be located in the anterior part, while the remaining one was found to be located in the posterior section. In contrast, the right hippocampus exhibits a more uniform pattern, with two anterior nodes and one posterior.
We also analyzed the connectivity profile of the hippocampi within the coatrophy network so as to better understand their relationship with the other nodes of the network (Figure 7). Even though hippocampi have a lot of connections, they are scarcely interconnected (red edges) and, in particular, between the nodes of the right hippocampus there are no direct paths linking them to each other. What is more, the left hippocampus presents a greater number of edges (45) than the right hippocampus (15); however, these edges are generally characterized by a low degree of edge betweenness. In contrast, the 15 edges linking the right hippocampus to the other nodes of the coatrophy network are characterized by a high degree of edge betweenness. Overall, considering the anatomical topology of nodes (Figure 6), the left anterior hippocampus appears to be the most densely connected. Given the great number of nodes and the high density of edges of the coatrophy network, we used the k-core algorithm to identify the most connected components of the network. The analysis reported a core subnetwork formed by eight interhemispheric nodes (Figure 8), including the left and right amygdalae, left hippocampus, right parahippocampal gyrus, and right temporal inferior gyrus. The bilateral presence of nodes within this core subnetwork is consistent with the finding that the coatrophy network is characterized by a large number of interhemispheric edges.
Discussion
With an innovative voxel-based meta-analytic method, this study aimed to find out whether gray matter decreases caused by AD distribute throughout specific and identifiable areas rather than affect randomly the whole brain. After constructing a morphometric coatrophy network, we intended to identify which brain areas are more likely to be altered in conjunction with other ones rather than alone. Finally, we examined the potential existence of relevant subcomponents within the coatrophy network.
The gray matter decreases evaluated by ALE involve limbic and temporal areas, in particular the hippocampus and parahippocampal gyrus. This finding is in accordance with most previous research (30,113). Nine out of 40 nodes of the coatrophy network are localized within the hippocampus. Specifically, six nodes are in the left hippocampus (five in its anterior part, one in its posterior part) and three in the right one (two anterior, one posterior). This is consistent with the neuropathological studies suggesting that AD is characterized by an earlier and greater involvement of anatomical structures (including the hippocampus) in the left hemisphere (114)(115)(116). Although there is still debate about the exact functional organization of the hippocampus (117), the neuroscientific community has achieved a substantial consensus on its role in learning and memory (118), which are both cognitive functions that deteriorate in AD. According to Thal et al. (119), the hippocampus (in particular the subfields CA1 and subiculum), along with the amygdala, is affected early by Aβ plaques during AD evolution (120). In line with AD diagnostic criteria (121), hippocampal and mesial temporal lobe atrophy have been considered as biomarkers of neuronal degeneration, potentially increasing the probability of an underlying AD pathophysiological process. Currently, however, the routine use of hippocampal atrophy in clinical practice is not fully standardized; it is preferentially applied in investigational studies and clinical trials. Furthermore, the hippocampal atrophy rate may be better regarded as a sensitive marker of disease progression (122,123), being able to trace the natural development of AD and potentially representing an interesting surrogate marker for disease-modifying clinical trials (124,125). Interestingly, increased asymmetry in the shape of the hippocampus and amygdala during the development of AD has recently been demonstrated, with a significant correlation to cognitive impairment (126). According to our analysis, the gray matter coatrophy network of AD appears to be densely interconnected, as suggested by the presence of 146 edges and 40 nodes, 39 of which have at least one connection. The existence of a set of nodes (altered areas) is not proof per se that the disease is spreading. In fact, generally speaking, Patel's k is not always able to identify edges between nodes, which means that, even though some areas are altered, there may be no apparent temporal coherence in their involvement in the disease. The fact that our analysis was able to discover a significant number of edges between nodes therefore supports the reliability of our results, pointing out that the alteration co-occurrence really happens, as well as the consistency of our sample.
Our analysis suggests that AD tends to target a somewhat limited set of brain regions, rather than randomly affecting distinct sites. Furthermore, the left hippocampus, bilateral amygdala, right parahippocampal gyrus, and right inferior temporal lobe seem to follow a very similar pace of degeneration ( Figure S4 in Supplementary Material).
In order to evaluate the likelihood of each node of the coatrophy network to be coaltered with other ones rather than as an individual spot we calculated their node degree. The highest value pertains to the node of the left amygdala, which is reached by 17 edges, but we found other 13 nodes with at least 10 edges. These nodes are localized in the temporal lobes, right amygdala, parahippocampal gyrus, left hippocampus, and right thalamus. The high degree of pathoconnectivity of these nodes suggests that, when gray matter alteration affects one of them, it is highly probable that many other regions are also found to be altered. It is also true the other way round, that is, when nodes characterized by low degree show atrophy, it is very likely that this process cooccurs in one of the high-degree nodes, rather than in another low-degree node. These results, as well as the k-core decomposition, provide evidence that in the coatrophy network of AD certain nodes have the characteristic of being pathoconnectivity hubs. Furthermore, the values of the edge betweenness distribution indicate the existence of a dense subnetwork, which is composed of the nodes with the higher degree of pathoconnectivity.
The paucity of connections linking the two hippocampi suggests a limited cooccurrence of alterations between them. The hippocampus is known to be greatly affected by AD, and the MRI volume estimation of this structure is currently considered one of the most reliable in vivo biomarker of this disease (62). Our results suggest that both the hippocampi are substantially altered, albeit somewhat independently. According to previous studies, certain molecular alterations typical of AD are more evident in the left hippocampus compared to the right one (127,128). This discovery might explain the abundance of edges connecting the nodes in the left hippocampus, as well as support the transneuronal spread mechanism in AD. The nodal stress hypothesis could also play a role in virtue of the intense functional activity of this region. Finally, our finding that the anterior part of the hippocampus exhibits a greater number of edges than the posterior part seems consistent with the suggestion that the deterioration of CA1 and subiculum appears to be more correlated with the development of AD than the deterioration of CA3, which appears to be more correlated with healthy aging (11,120). Recently, the presubicular-subicular complex has been described as one of the earliest site of atrophy in AD, with a significant correlation with memory performances (even in MCI phase), potentially reflecting the ongoing degenerative process through the subiculum passing from entorhinal cortex to dentate gyrus (129,130).
In addition to the interpretation of the coatrophy network as a whole, some specific aspects deserve a detailed consideration. The first is the relationship between hippocampus and precuneus. In the coatrophy network of AD these regions are linked through an edge exhibiting a very high degree of edge betweenness, which reveals a direct interaction. According to the "hippocampus disconnection hypothesis" proposed by Tahmasian et al. (131), the disruption of functional connectivity between hippocampus and precuneus could induce the characteristic alterations in the hippocampus that we find in AD. Tahmasian et al. (131) have in fact demonstrated that in AD the hippocampus is much less inhibited, and this disinhibition may result in its hypermetabolism. A similar situation could induce neurotoxicity, which might be one of the causes behind gray matter decrease measured with VBM, thus explaining the identification of a significant number of nodes in the hippocampus.
A second interesting aspect is the relationship between the left hippocampus and the right inferior temporal gyrus, which was highlighted by the k-core decomposition. This result is in agreement with the study of Wang et al. (132), which found that the interaction between these two areas is typical of AD. Of note, Wang et al. (132) examined 80 pathological subjects using Bayesian network analysis and prior-defined regions of interest, while the present study has applied a meta-analytical approach to a substantially bigger VBM database of 883 patients diagnosed with AD. This agreement supports the sensitivity of our novel methodology. Furthermore, the slight prevalence of interhemispheric connections in the coatrophy network of AD (see Figure S5 in Supplementary Material) is consistent with the deterioration of white matter bundles in AD, in particular concerning the corpus callosum (133)(134)(135)(136)(137). Callosal atrophy has been associated with the rate of cognitive decline as well as with disease progression (138,139).

Gray matter alterations found in the hippocampus, precuneus, and inferior parietal cortex can be ascribed to the general disruption of the DMN in patients with AD (58,140). Recently, a study has shown that the DMN dysfunction, as well as the disruption of the interaction between different resting-state functional networks, can be attributed to amyloid burden (58). What is more, Chang et al. (141) have found that amyloid burden in the cingulate cortex might promote gray matter atrophy in the other areas constituting the DMN.

Overall, the crucial role played by pathological proteins in AD supports the transneuronal spread hypothesis as the basis of the distribution of gray matter alterations (4,5,39,40,42,45). However, the complex relationship among different factors (such as amyloid burden, Tau deposition, gray matter atrophy, and disrupted functional connectivity) and the presence of several hub nodes within the coatrophy network of AD suggest that the nodal stress mechanism could also be involved in the development of the disease (142). Therefore, it is extremely likely that different spreading mechanisms, which are not mutually exclusive, may be involved in the etiology of AD.

Limitations and Future Directions

The present investigation and the methodology on which it is based aim to better understand the nature of AD by examining its pathological fingerprints over the brain. To do so, we were able to access a very large sample of patients. If this is an advantage on the one hand, it can also be a limitation on the other, as within this sample it was not possible to determine the average duration of disease, owing to the unavailability of this information in the original studies. This aspect makes it difficult to associate the coatrophy network with a specific stage of AD progression. However, the methodological procedure for defining the areas to be included in the coatrophy network considers primarily the frequency with which every single area is found to be altered. In the case of a neurodegenerative condition such as Alzheimer's disease, we could imagine, generally speaking, a group of patients with a recent diagnosis exhibiting alterations in area A, another group with an intermediate development of the disease exhibiting alterations in areas A-B, and another group with an advanced development of the disease exhibiting alterations in areas A-B-C. Since our methodology privileges the frequency with which each area is found to be altered, area A will be more likely to be represented in the final network, while area C may even be excluded. Moreover, even if the group of patients exhibiting alterations in A-B-C were larger than the other groups, the pattern A-B-C would be less likely to be represented than the sole area A. For this reason, even if our input data contained an overrepresented sample of patients at a specific stage of the disease, the resulting coalteration network would not represent the pattern of altered areas that is typical of that stage.

Future studies on longitudinal data analyzed by different methods are needed in order to investigate the sequential formation of the coatrophy network identified in this study, so as to achieve a more detailed picture of the temporal evolution of AD.

Conclusion

This meta-analysis was able to address the following important issues.

a. In AD, gray matter alterations do not occur randomly across the brain but, on the contrary, follow identifiable patterns of distribution.
b. This alteration pattern exhibits a network-like structure composed of coaltered areas that can be defined as a coatrophy network.
c. Within the coatrophy network of AD, certain brain areas, in virtue of their node degree and values of edge betweenness, can be considered as pathoconnectivity hubs. The alteration of these areas is supposed to imply a wider distribution of gray matter abnormalities across the brain.
d. Within the coatrophy network we can identify a core subnetwork of coaltered areas that includes the left hippocampus, left and right amygdalae, right parahippocampal gyrus, and right inferior temporal gyrus.

The innovative methodological analysis developed in this study for constructing the morphometric coatrophy network of an important neurodegenerative disease such as AD opens a new window onto the comprehension of the pathological brain. Increasing evidence supports the idea that brain alterations distribute according to a network-like structure. The analysis carried out in this study not only provides support for this hypothesis but also puts forward the significant finding that certain nodes of the coatrophy network may play the role of pathoconnectivity hubs. What is more, our methodology is equally applicable to the study of the morphometric coalteration network of any other neuropathological condition. Future investigations along this line of research on databases of different diseases promise to provide valuable insight into the dynamics of brain disorders, so as to achieve better predictive diagnostic power as well as to improve medical care and treatment.

Author Contributions

JM and AN implemented data collection, analyzed the data, drafted, and revised the article. EP, BB, and KT drafted and revised the article. TC designed the analysis tool, supervised data analysis, drafted, and revised the article. DL retrieved information on the sampled population and implemented the bibliographic research. SD revised the article. FC conceived the experiment, supervised data collection, supervised data analysis, drafted, and revised the article.

Funding

This study was supported by the Fondazione Carlo Molo (FC, PI), Turin, and by CSP UNITO Excellent Young PI grant CSTO162182 (FC, PI), Turin.

Supplementary Material

The Supplementary Material for this article can be found online at http://www.frontiersin.org/articles/10.3389/fneur.2017.00739/full#supplementary-material.
|
v3-fos-license
|
2020-03-26T10:52:36.750Z
|
2020-03-24T00:00:00.000
|
214643862
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://msystems.asm.org/content/msys/5/2/e00124-20.full.pdf",
"pdf_hash": "4c0b0fb5b3ae66269418bf5acb88a9c621570eb8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2972",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"sha1": "01918c745cd48811a46cb89df52f716aa4a26010",
"year": 2020
}
|
pes2o/s2orc
|
Shotgun Metagenomics of Gut Microbiota in Humans with up to Extreme Longevity and the Increasing Role of Xenobiotic Degradation
The study of longevity may help us understand how human beings can delay or survive the most frequent age-related diseases and morbidities. In this scenario, the gut microbiome has been proposed as one of the variables to monitor and possibly support healthy aging. Indeed, the disruption of host-gut microbiome homeostasis has been associated with inflammation and intestinal permeability as well as a general decline in bone and cognitive health. Here, we performed a metagenomic assessment of fecal samples from semisupercentenarians, i.e., 105 to 109 years old, in comparison to young adults, the elderly, and centenarians, shedding light on the longest compositional and functional trajectory of the human gut microbiome with aging. In addition to providing a fine taxonomic resolution down to the species level, our study emphasizes the progressive age-related increase in degradation pathways of pervasive xenobiotics in Western societies, possibly as a result of a supportive process within the molecular continuum characterizing aging.
L ongevity has been described as the result of a complex combination of variables, deriving from genetics, lifestyle, and environment (1,2). In this context, the intestinal microbiome has been proposed as a possible mediator of healthy aging that preserves host-environment homeostasis by counteracting inflammaging (3,4), intestinal permeability (5), and deterioration of cognitive and bone health (5,6). Correlations have been previously found between age-related gut microbiota dysbioses and levels of proinflammatory cytokines, hospitalization, poor diet, and frailty in the elderly (7). More recently, the longest human gut microbiota trajectory with aging has been built by comparing the fecal bacterial taxa from healthy adults and older individuals, including semisupercentenarians, i.e., people aged 105 to 109 years (8,9). However, the functional changes that occur in the gut microbiome along with aging are still largely unexplored. In an attempt to provide some glimpses in this direction and to advance our knowledge on whether and how the gut microbiome may support the maintenance of health in extreme aging, we here characterized the fecal microbiome of 62 individuals, with ages ranging from 22 to 109 years, by shotgun metagenomics. According to our findings, aging is characterized by an increased number of genes involved in xenobiotic degradation, as well as by rearrangements in metabolic pathways related to carbohydrate, amino acid, and lipid metabolism. These microbiome features are boosted even more in semisupercentenarians, probably representing the result of a lifelong remodeling response to progressive changes in diet and lifestyle.
RESULTS
We previously found considerable age-related variability in the fecal microbiota composition of 69 people, including centenarians and semisupercentenarians, from the Emilia Romagna region of Italy and the surrounding area (8). In an attempt to go further, unraveling the functional and species-level taxonomic links between the gut microbiome and extreme aging, we applied shotgun metagenomics to a subset of 62 DNA samples derived from the same data set previously analyzed (8). Specifically, we characterized the gut microbiome of 11 young adults (group Y), the younger elderly (group K), centenarians (group C), and semisupercentenarians (group S). We first confirmed that the fecal microbiota in all age groups is dominated by a few bacterial families (i.e., Bifidobacteriaceae, Bacteroidaceae, Lachnospiraceae, and Ruminococcaceae) whose relative abundance decreases with age (mean relative abundance ± SD: group Y, 73% ± 3%; group K, 65% ± 4%; group C, 62% ± 4%; group S, 58% ± 6%). When focusing our attention at the species level, we found that these contributions were mainly accounted for by 13 bacterial species: Bifidobacterium adolescentis, Bifidobacterium longum, Bacteroides uniformis, Faecalibacterium prausnitzii, Ruminococcus bromii, Subdoligranulum sp., Anaerostipes hadrus, Blautia obeum, Ruminococcus torques, Coprococcus catus, Coprococcus comes, Dorea longicatena, and Roseburia sp. Bray-Curtis principal-coordinate analysis (PCoA) of species-level relative abundance profiles provided evidence of an age-related trajectory (P < 0.05, permutation test with pseudo-F ratios), involving the establishment of age group-specific topological patterns in the taxonomic and functional microbiome structure, as shown by network plots (Fig. 1) and bar plots (see Fig. S1 in the supplemental material). However, the species-level compositional structure of the gut microbiota from the younger elderly group overall matches that from young adults (P = 0.2), suggesting that the physiology of the aging process may not involve gross changes in gut microbiome species and their relative abundance. On the other hand, gut microbiota from centenarians and semisupercentenarians feature a distinctive rearrangement in their taxonomic configurations (Fig. 2A). In particular, compared with younger individuals, long-lived people show a decreased contribution of B. uniformis, Eubacterium rectale, C. comes, and F. prausnitzii, along with a progressive increase of Escherichia coli, Methanobrevibacter smithii, Akkermansia muciniphila, and Eggerthella lenta (P < 0.05, Kruskal-Wallis test). These trends have already been reported in previous 16S rRNA gene-based microbiome works in the same subjects (3,8), as well as in Chinese centenarians (10), further strengthening the view that the observed gut microbiome variations may be part of the extreme aging process, regardless of environmental variables, such as geographical origin and cultural habits (i.e., diet and lifestyle) (11).
Interestingly, when we focused our analysis at a functional scale, we found a progressive age-related increase in the number of reads for genes devoted to xenobiotic biodegradation and metabolism, and a simultaneous decrease in genes involved in carbohydrate metabolism ( Fig. 2B and C; Fig. S2). This functional rearrangement is even more pronounced in the gut microbiome of centenarians and semisupercentenarians, where we observed a reduced contribution of pathways for starch and sucrose (KEGG pathway no. ko00500), pentose phosphate (ko00030), and amino sugar and nucleotide sugar (ko00520) metabolism and a concomitant increase in toluene (ko00623), ethylbenzene (ko00642), caprolactam (ko00930), and chlorocyclohexane and chlorobenzene (ko00361) degradation pathways. While the changes related to carbohydrate metabolism have already been reported in previous studies and suggested to be associated with age-related changes in dietary habits (7,9), the increase in genes for xenobiotic metabolism is reported here for the first time and appears particularly intriguing.
Ethylbenzene, chlorobenzene, chlorocyclohexane, and toluene are pervasive chemicals mainly deriving from industrial manufacturing and municipal discharges and are under monitoring all over the world as part of the main environmental contaminants of the atmosphere, due to their toxic effects (12)(13)(14). The primary man-made sources of these molecules are indeed the emissions from motor and exhaust vehicles, as well as cigarette smoke. Furthermore, they are known to be generated during the processing of refined petroleum products, such as plastics, and to be contained in common consumer products, such as paints and lacquers, thinners, and rubber products (14). As regards caprolactam, it is the raw material of nylon, used for the production of many indoor products, such as synthetic fibers, resins, synthetic leather, and plasticizers. Previous studies have demonstrated the higher indoor burden of these molecules than in the outdoor environment and emphasized the exceptional importance of indoor exposure on human health (15,16). It is a matter of fact that living in environments under strong anthropic pressures, such as the Emilia Romagna region in Italy (17,18), results in the continuous and constant exposure to these pervasive xenobiotic substances, favoring their maintenance and progressive accumulation in body tissues, including the gut (19)(20)(21)(22). We believe that this could create the appropriate conditions for the human host to select for gut microbiome components capable of detoxifying such chemical compounds, with a mutual benefit in terms of microbiome and host fitness in anthropic environments. Indeed, recent works have shown that the humanassociated microbial communities of urban Western populations are functionally suited to the degradation of xenobiotic molecules, including caprolactam (23)(24)(25). Further supporting the importance of human microbiomes in providing a response to xenobiotic exposure, in another recent work the upper airway microbiome of nonasthmatic individuals has been found to possess greater ability to metabolize caprolactam than that of asthmatic people (25). According to the authors, the selection of caprolactamdegrading microbes in the airway microbiome would decrease host exposure to indoor air pollutants, providing an ultimate impact on human health. Centenarians and semisupercentenarians are long-lived individuals who, as such, may boast an important history of exposure to xenobiotic stressors. Furthermore, as they have reduced mobility, these subjects tend to spend more time in their own houses than younger people (Fig. S3), with increased exposure to indoor pollutants. It is thus tempting to speculate that their microbiome is better equipped for the degradation of these xenobiotics as a result of a process driven by the more lasting and assiduous exposure to these chemicals. It is also worth noting that these metabolic functionalities are possessed by commensal bacteria belonging to the human core microbiome, i.e., microbial taxa that have been found to be shared by the microbiome of all human populations sampled to date (26-30) (Fig. 3). This raises important open questions on the biological mechanisms that lead to the consolidation and enrichment of xenobiotic-degrading abilities in centenarian and semisupercentenarian gut microbiomes. 
Here, we speculate that the highest contribution to xenobiotic degradation by commensals in long-lived people might be the result mainly of a top-down selection process related to the lifestyle habits of these exceptionally old individuals, i.e., stable and constant living settings within their own homes, together with a longer exposure and consequent accumulation of these chemicals in the host tissues due to their longer life.
Besides xenobiotic-degrading genes and those involved in carbohydrate metabolism, we also found age-related differences in other metabolic pathways, including those associated with lipid metabolism. In particular, centenarians and semisupercentenarians show more reads for alpha-linolenic acid (KEGG pathway no. ko00592) and glycerolipid (ko00561) metabolism; on the other hand, younger people show a greater contribution of genes involved in sphingolipid (ko00600) and glycerophospholipid (ko00564) metabolism. Given that glycerophospholipids and sphingolipids are known to be more abundant in animal-derived foods (31,32), while alpha-linolenic acid is derived mainly from plant foods (33), these profiles may be related to eating habits and, in particular, to a higher intake of plant-derived fats relative to animal fats by long-lived individuals compared with younger people (Fig. S4). Moreover, when looking at functional pathways involved in amino acid metabolism, we found a progressive increase with age in genes for the metabolism of tryptophan (ko00380), tyrosine (ko00350), and glycine, serine, and threonine (ko00260). On the other hand, genes for alanine, aspartate, and glutamate metabolism (ko00250) were found to be more abundant in younger individuals. These findings are in agreement with our previous study (9), in particular with regard to the metabolism of tryptophan and tyrosine as an indicator of enhanced proteolytic metabolism. Furthermore, they fit with metabolite measures in the centenarians of our cohort, i.e., the decreased bioavailability of tryptophan in serum (34), as well as the increased urinary levels of phenolic metabolites deriving from the metabolism of tyrosine (35). Finally, we found a progressive increase with aging of genes for lipopolysaccharide biosynthesis (ko00540), which can be associated with the presence of pathobionts (i.e., members of the Enterobacteriaceae family) and with the low-grade chronic inflammation (i.e., inflammaging) previously demonstrated in long-lived people (3,8,9).
DISCUSSION
Here we described-as far as we know, for the first time-the metagenomic changes of the human gut microbiota that occur with aging, up to extreme longevity, by characterizing the microbiome of semisupercentenarians, i.e., demographically very uncommon subjects who reach the extreme limit of the human life span (Ͼ105 years of age). In addition to confirming the known taxonomic features of an aging microbiota, we extended the definition of the human core gut microbiota down to the species level and provided an accurate depiction of the functional changes occurring along with aging. In a sort of continuum line with our previous study, where we demonstrated that the intestinal microbiome of Italian adults is equipped for the degradation of xenobiotics, probably as a functional response to exposure to these compounds (24), we here advance the fascinating hypothesis that aging in Western urban environments progressively selects for commensal microbiome strains with metabolic abilities toward specific xenobiotics. We speculate that this could represent an adaptive response of the human holobiont to the increased exposure to, and accumulation of, xenobiotic substances along the aging process. As recently discussed (36), future studies should be aimed at better understanding the complex interplay between xenobiotic exposure and the human gut microbiome. The individual gut microbiome structure will have to be matched with the personal exposure level, with the latter being dissected by monitoring xenobiotics in feces and body fluids. Long-term longitudinal studies must be conceived, with the aim of highlighting the mechanisms underlying this potential microbiome adaptive variation, as a result of a top-down selection process of microbiome functions for xenobiotic detoxification and the ultimate impact in terms of host health protection. Given that the xenobiotics that emerged in the present study are now ubiquitous in modern urban areas, it would also be interesting to assess the xenobiotic degradation capacity of ancient microbial communities by analyzing samples from the preindustrial era, in order to fully understand the effects of these molecules on the evolutionary history of the human holobiont. Studies of this type would help to shed light on whether the peculiar functional profiles of the gut microbiome of extremely long-lived hosts, as found in our work, are the result of an adaptive and remodeling process inherent to the physiology of human aging in modern urban societies and thus capable of supporting a new homeostasis.
MATERIALS AND METHODS
Subjects and study groups. The study used genomic DNA from 62 fecal samples collected for a study by Biagi et al. (8). Subjects were enrolled in the Emilia Romagna region (Italy) and categorized as follows: 11 young adults (group Y, 6 females and 5 males, aged 22 to 48 years [mean age, 32 years]); the remaining subjects were assigned to the younger elderly (group K), centenarian (group C), and semisupercentenarian (group S) groups previously described by Biagi et al. (8).

Evaluation of the time spent indoors and outdoors by the elderly. Elderly participants signed the informed consent before undergoing the questionnaires with an interviewer, as previously described (37). The participants were asked how often they left their homes (daily, weekly, monthly, etc.) and, based on seven different answers, were assigned a score: those who never went out, the lowest frequency, were given a score of 1, while those who left their homes "daily," the highest frequency, were given a score of 7. The answers, treated as a continuous scale (arbitrary scores of 1 to 7), were used to determine the frequency of movement outside home (FMOH) score.
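As a sketch of how the FMOH scoring could be encoded, the snippet below maps answer categories onto the 1 to 7 scale; only the endpoints ("never" = 1, "daily" = 7) are stated in the text, so the intermediate category labels here are assumptions.

```python
# Hypothetical mapping of the seven questionnaire answers to the FMOH score.
FMOH_SCORES = {
    "never": 1,
    "less than once a month": 2,   # intermediate categories assumed, not from the paper
    "monthly": 3,
    "2-3 times a month": 4,
    "weekly": 5,
    "several times a week": 6,
    "daily": 7,
}

def fmoh_score(answer: str) -> int:
    """Return the arbitrary 1-7 frequency-of-movement-outside-home score."""
    return FMOH_SCORES[answer.strip().lower()]

print(fmoh_score("daily"))   # -> 7
```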
Library preparation and shotgun sequencing. DNA libraries were prepared using the QIAseq FX DNA library kit (Qiagen, Hilden, Germany) in accordance with the manufacturer's instructions. Briefly, total microbial DNA was quantified by a Qubit fluorometer (Invitrogen, Waltham, MA, USA), and 100 ng of each sample was fragmented to a 450-bp size, end-repaired, and A-tailed using FX enzyme mix with the following thermal cycle: 4°C for 1 min, 32°C for 8 min, and 65°C for 30 min. Samples were then incubated at 20°C for 15 min in the presence of DNA ligase and Illumina adapter barcodes for adapter ligation. After two purification steps with Agencourt AMPure XP magnetic beads (Beckman Coulter, Brea, CA, USA), a 10-cycle PCR amplification and a further step of purification as described above, the final library was obtained by pooling the samples at equimolar concentrations of 4 nM. Sequencing was performed on an Illumina NextSeq platform using a 2 ϫ 150-bp paired-end protocol, in accordance with the manufacturer's instructions (Illumina, San Diego, CA, USA). High-quality paired-end sequences were uploaded to the SRA repository.
Bioinformatics and biostatistics. The functional annotation of the sequences deriving from the 62 genomic DNA samples (8) was conducted as previously described (9). In brief, shotgun reads were first filtered by quality and human sequences. This last step was achieved using the human sequence removal pipeline and the WGS read processing procedure of the Human Microbiome Project (HMP) (38). The obtained reads were taxonomically characterized at the species level by MetaPhlAn2 (39) and assigned for functionality at different levels of the KEGG database (40), using Metagenome Composition Vector (MetaCV) with default parameters (41). The resulting table consisted of multiple matrices, with sample identification numbers (IDs) in the columns and annotations at the species level or at different levels of the KEGG database in the rows.
PCoA analysis was carried out using vegan (https://cran.r-project.org/web/packages/vegan/index.html) in R. Significance testing and permutation analysis were performed using the R packages stats and vegan. Data separation in the PCoA was tested using a permutation test with pseudo-F ratios (function adonis in the vegan package). When appropriate, P values were adjusted for multiple comparisons using the Benjamini-Hochberg correction. A false discovery rate (FDR) of <0.05 was considered statistically significant.
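The analysis itself was run in R with vegan; for readers working in Python, an approximately equivalent sketch with scikit-bio and statsmodels could look as follows. The abundance table, sample IDs, and group labels are synthetic placeholders, and this is not the authors' code.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.ordination import pcoa
from skbio.stats.distance import permanova
from statsmodels.stats.multitest import multipletests

# Hypothetical species-level relative abundance table: 20 samples x 50 species.
rng = np.random.default_rng(1)
abund = rng.dirichlet(np.ones(50), size=20)
sample_ids = [f"S{i}" for i in range(20)]
groups = ["Y"] * 5 + ["K"] * 5 + ["C"] * 5 + ["S"] * 5

# Bray-Curtis dissimilarities, ordination, and permutation test (pseudo-F ratios).
dm = DistanceMatrix(squareform(pdist(abund, metric="braycurtis")), ids=sample_ids)
ordination = pcoa(dm)
test = permanova(dm, grouping=groups, permutations=999)
print(test["p-value"])

# Benjamini-Hochberg correction for a set of p values from multiple comparisons.
pvals = [0.01, 0.04, 0.20]
reject, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
print(p_adj)
```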
Network plots were determined as previously described (24). In brief, associations between KEGG pathway abundances were evaluated by the Kendall correlation test, displayed with hierarchical Ward linkage clustering based on the Spearman correlation coefficients, and then used to define pathway groups (circles with the same color). Significant associations were verified for multiple testing using the q value method (http://www.bioconductor.org/packages/release/bioc/html/qvalue.html) (P < 0.05). Permutational multivariate analysis of variance was used to determine whether the pathway groups were significantly different from each other. The network plots were created using Cytoscape software (42). Circle size represents the normalized overabundance of the pathway relative to the background. Connections between nodes represent significant positive Kendall correlations between KEGG pathways (FDR < 0.05).
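A hedged Python sketch of the edge-selection step (pairwise Kendall correlations between pathway abundances, filtered by FDR) is shown below; it substitutes the Benjamini-Hochberg procedure for the q-value method used in the paper and runs on synthetic data.

```python
from itertools import combinations
import numpy as np
from scipy.stats import kendalltau
from statsmodels.stats.multitest import multipletests

# Hypothetical pathway abundance table: each KEGG pathway maps to values over 30 samples.
rng = np.random.default_rng(2)
pathways = {f"ko{500 + i:05d}": rng.random(30) for i in range(6)}

pairs, taus, pvals = [], [], []
for a, b in combinations(pathways, 2):
    tau, p = kendalltau(pathways[a], pathways[b])
    pairs.append((a, b)); taus.append(tau); pvals.append(p)

# Keep only positive associations that survive FDR control, as edges of the plot.
reject, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
edges = [(a, b, t) for (a, b), t, keep in zip(pairs, taus, reject) if keep and t > 0]
print(edges)
```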
Assignment of functions for xenobiotic degradation to commensal bacteria. Reads with assignment to xenobiotic degradation functions were further inspected for taxonomy. Where present, the species-level classification of MetaCV (41) was retrieved, and the taxon ID in the NCBI taxonomy database was obtained using the web interface of the NCBI Taxonomy Browser tool (https://www.ncbi.nlm.nih .gov/Taxonomy/TaxIdentifier/tax_identifier.cgi). In order to retrieve the entire phylogeny of the assignment, we transformed the NCBI taxonomy IDs into the full lineage by using the ETE3 toolkit (43). Hits for xenobiotic degradation were then split based on their taxonomy and collected in a new table containing the values for each sample. We finally identified the proportion of functions assigned to commensal bacteria of the human core gut microbiome, i.e., microbial taxa that have been found to be shared by all human populations sampled to date (26)(27)(28)(29)(30), by specifically looking for their abundance across samples and visualizing them by box plots using the R software.
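A minimal sketch of the lineage-retrieval step with the ETE3 toolkit could look like the following; the example taxid (562, Escherichia coli) is illustrative, and the first call to NCBITaxa() builds a local copy of the NCBI taxonomy database.

```python
from ete3 import NCBITaxa

# Downloads/uses a local copy of the NCBI taxonomy database on first use.
ncbi = NCBITaxa()

def full_lineage(taxid: int) -> list[str]:
    """Return the named lineage (root to species) for an NCBI taxonomy ID."""
    lineage_ids = ncbi.get_lineage(taxid)
    names = ncbi.get_taxid_translator(lineage_ids)
    return [names[t] for t in lineage_ids]

# Example: 562 is the NCBI taxid of Escherichia coli.
print(full_lineage(562))
```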
Analysis of nutritional data. Dietary information for the elderly subjects of groups K, C, and S were provided and discussed in our previous publications (1,8). As regards group Y, the subjects were asked to compile 24-h dietary recalls to retrieve information on the composition of their diet, as previously reported by Barone and colleagues (44). Dietary data for semisupercentenarians (8) were converted to a numeric frequency, in order to infer the daily consumption of each food category. Total daily calorie intake as well as macro-and micronutrient contributions for individuals in groups Y and S were estimated through the MètaDieta software version 3.7 (Meteda, Rome, Italy).
Data availability. High-quality paired-end sequences were uploaded to the SRA repository under BioProject number PRJNA553191.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only.
|
v3-fos-license
|
2019-04-22T13:12:56.181Z
|
2018-11-11T00:00:00.000
|
126351033
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/mpe/2018/3268962.pdf",
"pdf_hash": "2dc020f6148da908b699cd308401c4cf2f1a11ed",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2976",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "2dc020f6148da908b699cd308401c4cf2f1a11ed",
"year": 2018
}
|
pes2o/s2orc
|
Rehabilitation Effect Evaluation of CFRP-Lined Prestressed Concrete Cylinder Pipe under Combined Loads Using Numerical Simulation
Prestressed concrete cylinder pipe (PCCP) has been widely used for water transfer and transit projects. However, prestressing wire breaks may result in the rupture of pipes and cause catastrophes. Carbon fiber reinforced polymer (CFRP) liners adhered to the inner concrete core can provide an effective method of internal repair and strengthening of PCCP. To evaluate the rehabilitation effect of CFRP-lined PCCP under combined loads, two contrasting three-dimensional finite element models that investigated the visual cracking of concrete and the yielding of steel cylinders were developed. A conceptual zone was introduced to analyze the different states of the pipe during the phase of wire break. In particular, the complex CFRP-concrete bonded interface was simulated by a cohesive element layer with a bilinear traction-separation response. The results show that CFRP has a good rehabilitation effect on the inner concrete core and steel cylinder but only a slight effect on the outer concrete core, prestressing wire, or mortar. A one-hoop CFRP layer diminishes the area of the yielding steel cylinder over 4.72 m. In addition, CFRP works more effectively along with an increase in the number of broken wires. This research can provide a basis for strengthening distressed PCCP pipelines.
Introduction
Prestressed concrete cylinder pipe (PCCP) has been widely used in many areas, including municipal, industrial, and water diversion projects, because of its structural advantages such as a large diameter, low water head loss, and strong earthquake resistance. PCCP generally consists of a concrete core, high-tensile steel wires spirally wound around the concrete, a steel cylinder encased in concrete, and a mortar coating layer. The prestressing wires are typically designed to withstand all of the hydrostatic pressure. Owing to corrosion and hydrogen embrittlement, prestressing wires may break. If many wires break, the rupture of the pipe can lead to a catastrophe. Therefore, it is necessary for water supply engineers to understand the wire-break process of PCCP and determine a feasible repair plan.
Combined with former theories and technologies, some methods were proposed to rehabilitate PCCP.However, all these methods require excavation of the cover soil and processing of the complicated construction technologies by heavy machinery, which are costly and time-consuming [1][2][3][4].Installing carbon fiber reinforced polymer (CFRP) on the inner concrete to strengthen PCCP is an effective way to minimize these negative effects.
Lee and Karbhari [5] conducted an extensive test program with prefabricated FRP and wet layup of fabric-strengthened PCCP sections to verify the feasibility of this method.Zarghamee et al. [6] reported that the water tightness of CFRP liner can be improved by introducing an impervious glass layer in the CFRP laminate, painting polyurethane coating on the inside surface of the laminate, and increasing the number of CFRP layers.Gong et al. [7] conducted a full-scale experiment of new-type jacking PCCP to evaluate the structural performance of pipe.
Lee et al. [8] proposed a simplifying function for the horizontal deflection of buried PCCP considering the pipesoil interaction.Xiong et al. [9] took the contribution of the wire stiffness of PCCP into account and developed a wirewrapping model to simulate the process of wrapping wire.Hajali et al. [10] analyzed the structural performance of PCCP with broken wires at the spigot joint, bell joint, and barrel using a numerical model.
To study the nonlinear behavior and ultimate resisting capacity of a CFRP-repaired concrete structure, many numerical modeling methods have been adopted [11][12][13][14][15], providing a basic understanding of the FRP-concrete interface.For PCCP, Zarghamee et al. [16] designed a bundle of composite four connector elements to simulate the bonding properties of an FRP-concrete interface in a two-dimensional model.Similarly, Lee et al. [17] used spring elements to connect CFRP liner to concrete.
Installing CFRP liner, a completely new method for PCCP renewal, has a distinctive failure pattern [18,19] and is still under investigation.In this study, two contrasting models for PCCP and CFRP-lined PCCP were developed to evaluate the rehabilitation effect.The article focused on a CFRP-concrete interface modeled by a cohesive element layer.The main evaluation indexes of the rehabilitation effect were the visual cracking of concrete and mortar and the yielding of steel cylinder.
Model Geometrical Characteristics
The main function of PCCP is to divert water resources under internal loads plus external soil pressure.Therefore, maintaining a safe service state under combined loads is the essential level that the pipe must reach.For the finite element (FE) models, a three-dimensional model A of PCCP and another model B of CFRP-lined PCCP with one longitudinal CFRP and one hoop CFRP under combined loads are developed by using the ABAQUS/Standard program.Obviously, the hoop CFRP is designed to improve the hoop structural performance of distressed pipe.In the distressed pipe, bending between the broken wire zone without any prestress and the adjacent zone where it is affected by the broken wire zone will cause uneven radial displacement.This displacement may result in transverse gaps between waves of CFRP.As a result, water can bypass the CFRP liner through gaps, rendering the liner ineffective [20].The longitudinal CFRP is designed to meet this longitudinal strain demand.To this end, CFRP laminate is the only difference between model A and model B.
2.1. PCCP Size. CFRP-lined PCCP consists of various FRP layers and PCCP components. Figure 1 illustrates a typical pipe. The size parameters of the pipe for this application are given as follows.
The internal working pressure P w = 0.8 MPa.The geometrical parameters of PCCP were the inner diameter of the pipe D i = 4000 mm, outer diameter of the cylinder D y = 4183 mm, the total concrete thickness of including cylinder h c = 350 mm, the thickness of the steel cylinder t y = 2 mm, the wire diameter d s = 7 mm, the wire spacing d = 14.3 mm, the thickness of coating mortar h m = 32 mm, the length of the pipe l = 5000 mm, and the height of earth cover over pipe H = 2.8 m.After absorbing certain epoxy resin, carbon fiber waves then thicken.For CFRP, the mean thickness of each layer h CFRP = 0.75 mm.A schematic and detailed drawing of CFRP-lined PCCP are shown in Figure 2.There were no CFRP and cohesive element layer in model A.
Various Backfilling Areas.
To prevent the PCCP from cracking under different embankment conditions, the backfilling soil and bedding soil of the pipe can be divided into several zones.In these models, the surrounding soil of PCCP was partitioned into six different areas: the in situ soil, bedding soil zone, backfilling soil zone A, the foundation of the pipe, buffering soil zone, and backfilling soil zone B, as shown in Figure 3(a).Each zone had a particular property.According to Lotfi et al. [21], Katona [22], and Alzabeebee et al. [23], the width of the calculating model was determined to be five times the outside diameter of the pipe.
Mesh and Boundary Conditions.
In these two models, every component was meshed based on the geometric characteristics. Soil, concrete, and mortar were simulated using three-dimensional eight-node brick elements (C3D8R). An eight-node three-dimensional cohesive element (COH3D8) layer was used to model the CFRP-concrete interface. According to the property of the cohesive element, CFRP could be meshed with the C3D8R element. The prestressing wire was simulated using a three-dimensional truss element (T3D2). A four-node, quadrilateral, stress/displacement shell element with reduced integration (S4R) was used to represent the steel cylinder. Furthermore, the C3D8R and S4R elements adopted the total stiffness approach as the hourglass control approach. The concrete core, mortar, prestressing wires, steel cylinder, and soil were each assigned their own element counts, with 9,600 elements for the concrete core. With regard to the boundary conditions, the nodes on the bottom surface of the model were fully fixed, the nodes on the top surface of the model were free, the nodes on the left and right surfaces of the model were specified with no horizontal displacement or rotation because of the vertical consolidation settlement, and the nodes on the front and back surfaces of the model were restricted in the pipeline longitudinal direction. Typical loads such as the weight of soil, the weight of the pipe, and the internal/surge pressure were considered in the model.

Modeling Approach

The uniaxial stress-strain relationships of concrete in tension and compression are shown in Figures 4(a) and 4(b), respectively. After reaching the failure stress σ_t0 or the ultimate stress σ_cu, the stress-strain curves tend to show a strain-softening tendency accompanied by damage to the elastic stiffness. Thus, two damage variables d_t and d_c are introduced to characterize the degradation of the elastic stiffness for tension and compression behavior, respectively. The strain-softening branch is described by (1) for tension behavior and by (2) for compression behavior, where σ_t and σ_c are the stresses in the softening branch, ε_t and ε_c are the strains in the softening branch, and E_0 is the undamaged elastic modulus. On the other hand, strain softening also decreases the elastic modulus. The reduction of the elastic modulus E is calculated by (3), where d is the scalar degradation variable, which is a function of the uniaxial damage variables d_t and d_c, and 0 ≤ d ≤ 1.
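The equations referred to as (1) to (3) did not survive extraction; the standard forms used by the ABAQUS concrete damaged plasticity (CDP) model, which the symbol definitions above match, are reproduced here as a reconstruction rather than a verbatim copy of the original:

$$\sigma_t=(1-d_t)\,E_0\left(\varepsilon_t-\tilde{\varepsilon}_t^{\,pl}\right),\qquad
\sigma_c=(1-d_c)\,E_0\left(\varepsilon_c-\tilde{\varepsilon}_c^{\,pl}\right),\qquad
E=(1-d)\,E_0,$$

where \(\tilde{\varepsilon}_t^{\,pl}\) and \(\tilde{\varepsilon}_c^{\,pl}\) are the tensile and compressive equivalent plastic strains.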
The weight factors w_t and w_c are parameters of the stiffness recovery effects associated with stress reversals [24].
In terms of the effective stresses, the yield function takes the standard form of the CDP model [25,26], in which q is the Mises equivalent effective stress; p is the hydrostatic pressure stress; σ_max is the maximum principal effective stress; ⟨x⟩ is the Macauley bracket, defined by ⟨x⟩ = (|x| + x)/2; σ_b0/σ_c0 denotes the ratio of the initial equibiaxial compressive yield stress to the initial uniaxial compressive yield stress; K_c represents the ratio of the second stress invariant on the tensile meridian q(TM) to that on the compressive meridian q(CM) at initial yield for any given value of the pressure invariant p; σ_c(ε_c^pl) and σ_t(ε_t^pl) are the effective compressive and tensile cohesion stresses, respectively; and ε_c^pl and ε_t^pl are, respectively, the compressive and tensile equivalent plastic strains. The compressive stresses and strains are positive, and the tensile stresses and strains are negative. The yield surfaces in the deviatoric plane for different values of K_c are shown in Figure 5.
The plastic flow rule determines the direction and magnitude of plastic deformation. The CDP model utilizes a nonassociated Drucker-Prager hyperbolic function to define the potential function G, where e is the eccentricity that controls the rate at which the plastic potential function approaches its asymptote, σ_t0 is the uniaxial tensile stress, and ψ is the dilation angle measured in the p-q plane.
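The yield and potential functions themselves were likewise lost in extraction; the standard CDP forms consistent with the definitions above, as given in the ABAQUS documentation, are reproduced here as a reconstruction (overbars denote effective quantities):

$$F=\frac{1}{1-\alpha}\left(\bar{q}-3\alpha\bar{p}+\beta\left\langle\hat{\bar{\sigma}}_{\max}\right\rangle-\gamma\left\langle-\hat{\bar{\sigma}}_{\max}\right\rangle\right)-\bar{\sigma}_c\!\left(\tilde{\varepsilon}_c^{\,pl}\right)\le 0,$$

$$\alpha=\frac{\sigma_{b0}/\sigma_{c0}-1}{2\,\sigma_{b0}/\sigma_{c0}-1},\qquad
\beta=\frac{\bar{\sigma}_c\!\left(\tilde{\varepsilon}_c^{\,pl}\right)}{\bar{\sigma}_t\!\left(\tilde{\varepsilon}_t^{\,pl}\right)}(1-\alpha)-(1+\alpha),\qquad
\gamma=\frac{3\,(1-K_c)}{2K_c-1},$$

$$G=\sqrt{\left(e\,\sigma_{t0}\tan\psi\right)^{2}+\bar{q}^{\,2}}-\bar{p}\tan\psi.$$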
The gross wrapping stress of the prestressing wire is 75% of the specified minimum tensile strength of the wire, as shown in (10). The yield strength of the wire is 85% of the specified minimum tensile strength of the wire, as shown in (11). The constitutive relationship for the prestressing wire after wrapping at the gross wrapping stress is given in (12) [27], expressed in terms of the strain of the prestressing wire and the specified minimum tensile strength of the wire. The steel cylinder adopts the von Mises elastoplastic stress-strain relationship, with a linear elastic branch and a constant stress after reaching the yield strength.
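As a quick check of the stated ratios against the wire strength reported in the material parameters below (1,570 MPa), the corresponding stresses work out as follows:

```python
# Worked numbers from the stated ratios and the wire strength given in Section 3.2.
f_su = 1570.0                      # specified minimum tensile strength, MPa
sigma_wrap = 0.75 * f_su           # gross wrapping stress -> 1177.5 MPa
sigma_yield = 0.85 * f_su          # wire yield strength   -> 1334.5 MPa
print(sigma_wrap, sigma_yield)
```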
CFRP is treated as an orthotropic elastic-brittle material. In the fiber direction, once the stress reaches the ultimate tensile strength, CFRP will fracture. The unidirectional fiber laminate thickness, each fiber orientation, and the constitutive constants are required as input for the numerical model of the CFRP liner. Regarding the surrounding soils, a modified Mohr-Coulomb model is used to calculate earth loads on the buried PCCP.
Material Parameters.
The concrete core was modeled with a compressive strength of 44.00 MPa and a tensile strength of 3.86 MPa.The mortar coating had a compressive strength and tensile strength of 47.5 MPa and 3.58 MPa, respectively.The uniaxial stress-strain relationships of the concrete and mortar were based upon a current standard [28].The prestressing wire had an ultimate strength of 1,570 MPa.The steel cylinder was a mild steel with a yield strength of 225 MPa.The detailed parameters of PCCP and the soil properties are listed in Table 1 [29].The CFRP was a composite material with a high tensile strength of 600 MPa in the longitudinal direction of the fiber but a low tensile strength in the transverse direction.The material properties of CFRP are listed in Table 2.
Interaction inside Pipe and Pipe-Soil Interaction.
All components in model A and model B were modeled as independent parts. Then, all parts were assembled together as an entire model. The interactions between components were vital to the composite pipe. The steel cylinder was embedded into the concrete core. Meanwhile, the prestressing wire was also embedded into the mortar coating. Concrete and mortar were completely tied together without considering delamination. The interface between the pipe and soil was modeled by surface-to-surface contact with small sliding, in which the value of the interface friction was 0.35. By utilizing the temperature-drop method, the prestress was applied to the prestressing wire.
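The temperature-drop method converts the target wrapping stress into an equivalent cooling of the wire elements. A minimal sketch of that conversion is given below; the elastic modulus and thermal expansion coefficient are assumed typical values rather than the ones used in the study.

```python
# Equivalent temperature drop that produces the wrapping prestress in the wire:
# sigma = E_s * alpha * dT  =>  dT = sigma / (E_s * alpha).
sigma_wrap = 1177.5e6      # wrapping stress, Pa (0.75 x 1,570 MPa)
E_s = 2.05e11              # elastic modulus of the wire, Pa (assumed)
alpha = 1.2e-5             # thermal expansion coefficient, 1/degC (assumed)
dT = sigma_wrap / (E_s * alpha)
print(round(dT, 1), "degC of cooling applied to the wire elements")
```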
Cohesive Element in CFRP-Concrete Interface.
The CFRP-concrete interface has complex relationships, involving rupture, debonding, and shearing behaviors [30][31][32]. A cohesive element layer with a bilinear traction-separation response was introduced to simulate the bonded interface. The traction-separation model includes three components: initial linear elastic behavior, a damage initiation criterion, and a linear damage evolution law. A typical bilinear traction-separation model is shown in Figure 6(a).
The initial elastic behavior relates the nominal stresses to the nominal strains across the interface through an elastic constitutive matrix. The nominal stresses are the force components that represent the traction variables, and the nominal strains represent the separation variables. In a three-dimensional model, the nominal traction stress vector consists of t_n, t_s, and t_t, which represent the stress when the deformation is either purely normal to the interface or purely in the first or the second shear direction, respectively. The corresponding strains are ε_n = δ_n/T_0, ε_s = δ_s/T_0, and ε_t = δ_t/T_0, where δ_n, δ_s, and δ_t are the corresponding separation displacements and T_0 is the original thickness of the cohesive element. Thus, the elastic behavior can be described as t = Kε, with K the elastic stiffness matrix of the interface.
A scalar damage variable D is used in the linear damage evolution law (like the CDP model for concrete) to denote the overall damage in the interface.After meeting the initiation of damage, D monotonically evolves from 0 to 1 with an increasing load.When the damage variable D reaches 1 at all of its material points, the cohesive elements are removed.For linear damage evolution, D is defined as follows: is the effective displacement at complete failure, is the effective displacement at the initiation of damage, and is the maximum value of the effective displacement attained during the loading history.The stress components at the softening branch that accompany the material stiffness descent can be described as in the following equation: , , and are the stress components predicted by the elastic traction-separation behavior for the current strains without damage.
In practice, the interface will often fail in a mixed mode along the three directions, in which the relative proportions of the normal and shear deformation are quantified. In the mixed-mode damage evolution law, G_n, G_s, and G_t are the work done by the tractions and their conjugate relative displacements in the normal, first, and second shear directions, respectively; G^C is the mixed-mode critical fracture energy; G_n^C and G_s^C refer to the critical fracture energies required to cause failure in the normal and first shear directions; and η is a material parameter.
The fracture energy is dissipated as a result of the damage process. It is equal to the area under the traction-separation curve. Figure 6(c) shows a three-dimensional map of the mixed-mode behavior [34].
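To make the bilinear law and the linear damage evolution concrete, the following Python sketch evaluates the single-mode response for a monotonically increasing separation; the stiffness and separation values are illustrative only, not the calibrated interface properties of the models.

```python
def cohesive_traction(delta, K, delta0, delta_f):
    """Bilinear traction-separation law with linear damage evolution (one mode).

    Assumes monotonic loading, so the current separation equals the maximum one.
    delta   : current separation
    K       : initial (undamaged) stiffness
    delta0  : separation at damage initiation
    delta_f : separation at complete failure
    """
    if delta <= delta0:                     # initial linear elastic branch
        return K * delta, 0.0
    if delta >= delta_f:                    # cohesive element fully failed
        return 0.0, 1.0
    # Linear damage evolution: D = delta_f*(delta - delta0) / (delta*(delta_f - delta0))
    D = delta_f * (delta - delta0) / (delta * (delta_f - delta0))
    return (1.0 - D) * K * delta, D

# Illustrative numbers only.
for d in (0.005, 0.02, 0.06):
    t, D = cohesive_traction(d, K=1.0e4, delta0=0.01, delta_f=0.05)
    print(f"delta={d:.3f}  traction={t:.1f}  damage={D:.2f}")
```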
3.4.Simulated Procedure.The simulated procedure covered two phases: progressively pressurizing to 1.12 MPa in 0.1-MPa intervals, followed by a stepwise wire break under the former steady internal load.The maximum design internal load was 1.12 MPa when considering the working pressure and transient pressure.With an increase in the number of broken wires, the internal load bearing capacity of the pipe decreases continuously.When the number of broken wires is high, the distressed pipe can no longer withstand the original pressure.According to previous experience, the steady pressure should be decreased.
In the second phase, a wire break started at the central location in the longitudinal direction of the pipe.Then, the break extended to the spigot end and bell end alternately with five broken wires at a time.The detailed scenario of the broken wires is shown in Figure 7.When the number of broken wires was from 0 to 60, the internal load was 1.12 MPa; from 65 to 95, 1.00 MPa; from 100 to 135, 0.9 MPa; and from 140 to 175, 0.8 MPa.
Results and Discussion
During a gradual increase in the number of broken wires, the full composite zone without prestress loss can be converted into a broken wire zone. The pipe was divided into three zones along the longitudinal length to describe the distinct states during the wire-break phase. As shown in Figure 8, these zones were the full composite zone, the transition zone, and the broken wire zone [35]. The broken wire zone was defined as the zone in which all wires are broken. The transition zone was the zone affected by the broken wires, and the full composite zone was the undamaged zone far from the broken wire zone.
For each component, strains at the pipe crown, pipe springline, and pipe invert in four sections (S1, S2, S3, and S4 stand for monitoring sections 1, 2, 3, and 4, respectively) were acquired, as shown in Figure 9. The strain curves during the wire-break phase are shown in Figures 11 to 17. The legend XX Y ZZ in these figures designates a result for model XX, hoop location Y, and monitoring section ZZ.
Phase of Internal Load Increase

Without any broken wires, all components of the pipe remain in the elastic domain. The internal load is mainly withstood by the prestressing wires. The strains of the concrete, steel cylinder, wire, and mortar in model A are remarkably close to those in model B. CFRP reaches a peak stress of 11.8 MPa under an internal flow pressure of 1.12 MPa, which is much lower than its ultimate strength, so CFRP does not yet play a role in relieving the deformation of the pipe. Owing to the earth pressure, the deformation at the crown, invert, and springline differs. For CFRP, the inner concrete core, and the steel cylinder, the strains at the springline are greater than those at the crown and invert. By contrast, the strains at the springline are lower than those at the crown and invert for the outer concrete core, wire, and mortar. The reason for this phenomenon is that the external part of the pipe wall at the springline and the interior part of the pipe wall at the crown and invert are in tension, while the opposite parts at these hoop locations are in compression under external loads [36], as shown in Figure 10. The compressive behavior can cause a larger deformation than the tensile behavior.
Phase of Wire Break
4.2.1. Deformation of CFRP

S1 is always in the broken wire zone. Figure 11 shows the strain of CFRP increasing linearly with the number of broken wires at the beginning of the simulation (0-45 broken wires). Subsequently, the strain grows rapidly as the number of broken wires increases, which indicates that the concrete core is damaged and CFRP begins to take effect. The larger the strain, the greater the contribution of CFRP. Figure 12 shows that the stress of CFRP at the crown for S1 is 151.7 MPa at the end of the wire-break phase, with CFRP acting as the primary bearing component. S2 is in the transition zone when the number of broken wires is less than 70 and in the broken wire zone when the number exceeds 75. The influence of the broken wires on the transition zone decreases with increasing distance. Like the strain of CFRP at S1, the strain at S2 also rises rapidly, though at a lower rate. The strains at the crown, invert, and springline for S3 stay at a relatively low level, indicating only light damage at S3. S4 is in the full composite zone before 160 wires are broken. The strain starts to grow slightly as the number of broken wires increases further, which indicates that S4 turns into a transition zone. During the entire wire-break phase, S4 experiences little damage.
In the seriously damaged sections S1 and S2, the strain of CFRP at the crown is greater than that at the invert and springline, indicating that the crown would fail first as the number of broken wires increases. In other words, the crown is the most seriously damaged part, and it is where CFRP contributes most. The three peaks on each curve can be attributed to the stepwise decline of the internal flow pressure.
Deformation of Inner Concrete Core

The inner concrete core at S1 experiences compressive plastic damage at 45 broken wires. The higher the number of broken wires, the greater the damage. Since the CDP model does not represent cracks in the concrete directly, 11 times the tensile strain of concrete is taken as the onset of a visual crack [37]. The corresponding hoop strain is 1524 με. By comparing the strains in Figures 13(a) and 13(b), the number of broken wires required to cause the onset of a visual crack is obtained, as shown in Table 3. Regarding S1, Table 3 shows that the onset of a visual crack at the crown, springline, and invert for PCCP occurs at 125, 130, and 125 broken wires, respectively. For CFRP-lined PCCP, the corresponding numbers of broken wires are 130 and 165 at the crown and springline, with no visual crack at the invert.
These results indicate that CFRP causes a stress redistribution and decreases the strain of the inner concrete core. This effect can be seen as protecting the concrete. Regarding the proportion of improvement, the invert shows the strongest increase, followed by the springline and the crown. This demonstrates that CFRP has the best rehabilitation effect on the invert of the inner concrete core, followed by the springline and the crown.
Regarding S2, Table 3 indicates that the number of broken wires required to cause the onset of a visual crack at the crown in model A is 3.13% higher than that in model B. During the wire-break phase, the springline and invert do not reach the hoop strain corresponding to the onset of a visual crack. It is also clear that CFRP relieves the deformation of the springline and invert. Both S3 and S4 remain in the full composite zone or transition zone without macrocracks.
Deformation of Steel Cylinder

The strain of the steel cylinder at S1 and S2 rises rapidly when the number of broken wires exceeds 45, which indicates that the cylinder takes over part of the internal pressure. Owing to the compressive prestrain provided by the prestressing wires, the yield strain of the cylinder is 1377 με. Table 4 lists the number of broken wires required to yield the steel cylinder. Regarding S1, the table shows that yielding of the cylinder at the crown, springline, and invert for the damaged PCCP occurs at 115, 130, and 170 broken wires, respectively. Meanwhile, the crown and springline in CFRP-lined PCCP show improvements of 4.35% and 23.08%, respectively, and the cylinder at the invert for S1 in model B never yields. The proportion of improvement at the springline is much higher than that at the crown, demonstrating that CFRP has a better rehabilitation effect on the springline. Meanwhile, the steel cylinder at the crown yields the earliest, followed by the springline and the invert. For S2, the crown and springline require a higher number of broken wires than S1 to yield the cylinder. The steel cylinder at S3 and S4 does not yield.
At the end of the second phase, the yielded areas of the cylinder were counted, as shown in Table 5. The table indicates that the ultimate area of the yielding cylinder in PCCP is 4.72 m² larger than that in CFRP-lined PCCP. The proportion of the cylinder area in the broken wire zone that yields is 60.12% for PCCP and 40.51% for CFRP-lined PCCP. Evidently, a one-hoop CFRP layer can diminish the yielding range of the cylinder under the same conditions.
Deformation of Outer Concrete Core

The outer concrete core at S1 also exhibits large tensile plastic damage at 45 broken wires. Tensile plastic damage commences at the springline and spreads to the crown and invert as wires break. The strain curves in Figure 15 show that the strains of the two models at the same location are very close; even when the number of broken wires is high, there is only a slight gap between the two curves. The results are also reflected in Table 6, which shows that the number of broken wires required to crack the outer concrete core visually in model A is the same as that in model B. This is completely different from the inner concrete core: CFRP appears to have no effect here. All results indicate that CFRP has little rehabilitation effect on the outer concrete core. The greater the distance from a component to the CFRP, the weaker the rehabilitation effect provided by CFRP.
Deformation of Prestressing Wire

The broken wires are no longer included in the numerical analysis after breakage, which results in no strain in the broken wires, as shown in Figure 16. According to the constitutive relationship in (10) and (11), the actual yield strain of the prestressing wire after wrapping around the concrete can be calculated as 1455 με.
Figure 16 illustrates that no prestressing wire yields. The slope of the strain curve at S2 is relatively small when the number of broken wires is less than 50. Subsequently, these curves grow rapidly when the number of broken wires exceeds 50, indicating that the broken wire zone has a more serious influence on the adjacent wires. The behavior at S3 resembles that at S2. In addition, the strain of the prestressing wire in model A is remarkably close to that in model B, so CFRP does not relieve the deformation of the wire.
Deformation of Mortar Coating

Similar to the concrete core, the mortar has a strain that corresponds to the onset of a visual crack; 8 times the tensile strain of mortar is taken, which works out to 1133 με [37]. The visual crack in the mortar is illustrated in Figure 18. Table 7 shows the number of broken wires required to crack the mortar visually. The results show that S1 and S2 in model A and model B have the same cracking index at the crown and springline, and an offset of only five broken wires at the invert. This indicates that there is little distinction between PCCP and CFRP-lined PCCP in this respect.
Before the onset of a visual crack, CFRP has little effect on the mortar. As shown in Figures 17(a) and 17(b), the gaps in the strain curves between PCCP and CFRP-lined PCCP at the same monitoring point increase with the number of broken wire wraps after the onset of a visual crack. This indicates that CFRP starts to relieve the deformation of the mortar. The higher the number of broken wires, the more serious the damage level in the pipe, and the better the CFRP effect. Moreover, CFRP only has an obvious effect on the distressed PCCP, which is also verified for the concrete core and the steel cylinder.
Conclusions
In this paper, two contrasting three-dimensional FE models were established to evaluate the rehabilitation effect of CFRP-lined PCCP during a phase of internal load increase and a phase of wire break, considering combined loads. A conceptual zoning was introduced to analyze the structural performance of all components in different zones. The state of the pipe was investigated by examining two critical indicators: visual cracking of the concrete and mortar, and yielding of the steel cylinder. The following conclusions were drawn from this study:
(1) For the PCCP during the wire-break phase, components at the crown fail first, followed by the springline and the invert.
(2) CFRP has a distinct rehabilitation effect on the various components. The farther a component is from the CFRP, the weaker the rehabilitation effect provided by CFRP. Namely, CFRP clearly reduces the development of cracking in the inner concrete core and delays the yielding of the steel cylinder, while having little effect on the outer concrete core, prestressing wire, or mortar.
(3) For the same component, CFRP also has a different rehabilitation effect at different locations. The deformation relieved by CFRP is largest at the invert, then at the springline, and smallest at the crown. Meanwhile, a one-hoop CFRP layer reduces the area of the yielding steel cylinder by 4.72 m².
(4) CFRP only has a clear effect on distressed PCCP. Moreover, CFRP works more effectively as the number of broken wires increases. In the full composite zone and the transition zone, CFRP has no effect.
Figure 5: Yield surface in the deviatoric plane for different values of the parameter.
Figure 6: Traction-separation behavior of the cohesive interface: normal deformation mode and first or second shear deformation mode, with the unloading/reloading line.
Figure 7: Sequence of wire break in the second phase.
Figure 8: Zoning of the pipe during the wire-break phase (full composite zone, transition zone, and broken wire zone).
Figure 9: Arrangement of monitoring points on each component.
Figure 11: Strain curves of CFRP during the phase of wire break.
Figure 12: Stress in CFRP at the end of the phase of wire break.
Figure 13: Strain curves of the inner concrete core during the phase of wire break: (a) strain at S1, (b) strain at S2, and (c) strain at S3 and S4.
Figure 14: Strain curves of the steel cylinder during the phase of wire break: (a) strain at S1 and (b) strain at S2.
Figure 15: Strain curves of the outer concrete core during the phase of wire break: (a) strain at S1 and (b) strain at S2.
Figure 17: Strain curves of the mortar coating during the phase of wire break: (a) strain at S1 and (b) strain at S2.
Table 1: Material properties of PCCP and soil.
Table 2: Material properties of CFRP.
Table 3: Number of broken wires required to cause onset of visual crack for inner concrete core.
Table 4: Number of broken wires required to yield the steel cylinder.
Table 5: Area of yielding cylinder and relevant proportions.
Table 6: Number of broken wires required to cause onset of visual crack for outer concrete core.
Table 7: Number of broken wires required to cause onset of visual crack for mortar coating.
|
v3-fos-license
|
2019-04-21T13:12:38.530Z
|
2016-04-01T00:00:00.000
|
124768108
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.nuclphysbps.2015.09.458",
"pdf_hash": "bd4af5061e2bc7e1dc73dea2828b2105a1b8b572",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2977",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "44af76420baa3c21ae819ad008f27cad7bbd5d6b",
"year": 2016
}
|
pes2o/s2orc
|
COMPASS Polarized Target for Pion-induced Drell–Yan Experiment
The first ever polarized Drell–Yan (DY) measurement is under preparation at the COMPASS experiment at CERN. One of the key parts is the low-temperature polarized target. Modifications are required to cope with the intense pion beam that will be used. Solid NH3 will serve as a transversely-polarized target. Polarization is expected to be up to 90%. Two 55 cm long target cells give a target volume of about 690 cm³. The data taking is expected to start in fall 2014 and to continue in 2015 (approximately 180 days in total). The current status of the target, the modifications and future plans are presented.
Introduction
COMPASS [1] is a fixed-target experiment situated at CERN Super Proton Synchrotron (SPS) North Area. For physics data taking it uses either hadron or muon beams 1 . Either normal or polarized targets are used. COMPASS detector is a universal spectrometer with good particle tracking and identification capability.
COMPASS physics program focuses on spin structure studies and hadron spectroscopy [2]. Among other experiments, the first ever measurement of a single-polarized Drell-Yan (DY) process using a pion beam and a transversely-polarized proton target was proposed by the COMPASS Collaboration [3]. Its goal is to test some crucial predictions of QCD, namely a change of sign of the Sivers and Boer-Mulders TMDs when measured in Semi-Inclusive Deep-Inelastic Scattering (SIDIS) and in DY processes.
As the DY cross section is small, the luminosity should be as high as possible. In the case of COMPASS this corresponds to a beam intensity of about 10⁸ pions/s. That is the highest hadron beam intensity COMPASS has used so far, which leads to several challenges for the detection, data acquisition and the polarized target.
Drell-Yan program at COMPASS
The Feynman graph in Fig. 1 shows the Drell-Yan (DY) process, which has lately attracted much attention as a tool for polarized hadron structure studies. The reason is that the cross section does not involve any fragmentation function, but only a convolution of the structure functions of both hadrons. That is an advantage with respect to SIDIS.
The disadvantage of the DY process is its small cross section. To acquire reasonable statistics, an intense beam will be used, resulting in a large flux of secondary hadrons. To avoid flooding the spectrometer, a hadron absorber was designed [3]. The non-interacting beam and all secondary particles except muons will be stopped inside it. It is made of stainless steel and alumina, with a tungsten beam plug in the centre.
The high intensity pion beam together with the hadron absorber will cause a slightly higher radiation dose in the experimental building than in previous runs. Because of that, the control room will be moved to another building.
Polarized target
The low-temperature polarized target [1,4] is an essential tool for COMPASS spin structure studies. It is one of the biggest polarized targets in the world. The target material occupies a cylindrical space that is about 120 cm long and about 4 cm in diameter. The degree of polarization exceeds 80% in the case of H in NH3 and 50% in the case of D in ⁶LiD [5].
Polarization of the target material is reached using the Dynamic Nuclear Polarization (DNP) method [6] at about 0.5 K. During the physics runs the target is kept in a frozen spin mode: the material is cooled down to about 50 mK, where the spin-lattice relaxation time is very long (of the order of 10³ hours), allowing a reasonably efficient experiment. Two microwave systems for DNP make it possible to have target cells with opposite polarization. That helps to reduce systematic errors in the measured asymmetries.
The degree of polarization is measured by continuous-wave NMR. The cooling is provided by a dilution refrigerator, which has a cooling power of about 5 mW at 75 mK [7]. A large-aperture superconducting magnet provides a field of up to 2.5 T parallel and 0.64 T perpendicular to the beam axis. The homogeneity of the longitudinal field, which is important for DNP, is about 10⁻⁵ T. The combination of the two magnetic fields allows measurements with transverse polarization and polarization rotation.
Modifications of the target for the DY program
The intense hadron beam and the presence of the absorber brought the need for modifications of the target. New target cells were made. There are two of them (4 cm in diameter, 55 cm long) with a 20 cm long microwave stopper in between. The gap between the oppositely-polarized cells ensures a proper assignment of events to them. The gap is wider than in the SIDIS runs, since the hadron absorber introduces significant multiple scattering, worsening the vertex resolution.
The NMR system for polarization measurement has 10 coils. Three coils are placed outside of each cell and are oriented for measurement in the longitudinal field. Two coils are placed inside each cell near the ends and serve for polarization homogeneity monitoring.
The target superconducting magnet was refurbished by CERN magnet group. In addition to the fixed trim coils it got various upgrades, e. g. better thermal insulation and new control and safety system.
Since the control room was moved from the experimental building, remote control is necessary. COMPASS uses a centralised Detector Control System (DCS) for monitoring (slow control) of most systems. The dilution refrigerator, however, was only partly monitored by it. It was decided to abandon the old LabVIEW system [8] for refrigerator monitoring and to develop a new, more robust, Linux-based software package called ptread instead. Its main advantage is that it is modular and easily adjustable. The goal of enabling remote monitoring was met. The ptread package can communicate with the DCS using the DIM library [9] and insert data into MySQL and SQLite databases. Figure 2 shows how various subsystems of the target (including the refrigerator) are monitored.
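As an illustration of the kind of periodic logging such a monitoring package performs, the following self-contained sketch stores sensor readings in an SQLite database; the table layout, sensor names, and the read_sensor stand-in are hypothetical, and the actual ptread implementation and its DIM/DCS communication are not reproduced here.

```python
import sqlite3
import time
import random

def read_sensor(name: str) -> float:
    """Hypothetical stand-in for a hardware readout (e.g., a thermometer channel)."""
    return random.gauss(0.075, 0.001)  # dummy value, kelvin

def log_readings(db_path="target_monitor.sqlite",
                 sensors=("mixing_chamber_T", "still_T"),
                 n_cycles=3, period_s=1.0):
    """Periodically read all sensors and store the values in an SQLite table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, sensor TEXT, value REAL)")
    for _ in range(n_cycles):
        now = time.time()
        for name in sensors:
            con.execute("INSERT INTO readings VALUES (?, ?, ?)", (now, name, read_sensor(name)))
        con.commit()
        time.sleep(period_s)
    con.close()

if __name__ == "__main__":
    log_readings()
```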
In addition to the ptread PC there is a Programmable Logic Controller (PLC) that monitors the most important parameters of the refrigerator [8]. It is powered from a source insensitive to power failures.
Conclusion
The magnet was refurbished and is being cooled down and commissioned. The dilution refrigerator was tested and mounted in place. Its sensors are connected to the new Linux-based monitoring system ptread. The new target cells are ready. The target will be prepared for the physics data taking, which is scheduled to start at the beginning of November 2014. A second run is planned for 2015. In total there should be about 180 days of data taking.
|
v3-fos-license
|
2021-04-04T06:16:31.378Z
|
2021-03-26T00:00:00.000
|
232773086
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2227-9032/9/4/370/pdf",
"pdf_hash": "e9f94d397dac7ddda42a7c094dda1f04fde16018",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2978",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "31637b9d7c5fce24fb75c2b90e13a25b5a6e364a",
"year": 2021
}
|
pes2o/s2orc
|
Close Association between Awareness of Teeth-Alignment Disorder and Systemic Disorders in Late Adolescence
Background: Oral diseases are associated with various systemic disorders. Our previous research revealed new insights into the close relationship between occlusal disorder (functional disorder) and systemic disorders (allergic rhinitis, asthma, and arrhythmia) in late adolescence. Here, we investigated whether there was an association between the awareness of teeth-alignment disorder (morphological disorder) and common systemic disorders. Subjects and Methods: We retrospectively reviewed the data of the mandatory medical questionnaire that is required for the freshman medical checkup in Japan. We collected the data of all students who completed the questionnaire between April 2017 and April 2019. The data were analyzed using the χ2 test, and a multivariate analysis was performed with a binomial logistic regression model. Results: The subjects were 8903 students aged 17–19 who had no awareness of occlusal disorder. The rate of awareness of teeth-alignment disorder was 20.43% (1819 of 8903 eligible subjects), and the aware students had significantly greater rates of gum bleeding (p < 0.001), pollinosis (p = 0.007), and atopic dermatitis (p = 0.042). The multivariate analysis revealed significant rates of gum bleeding (odds ratio (OR) 1.540, 95% confidence interval (CI): 1.386–1.711, p < 0.001), pollinosis (OR 1.197, 95% CI: 1.040–1.378, p = 0.012), and female gender (OR 1.141, 95% CI: 1.002–1.299, p = 0.046) among the students with awareness of teeth-alignment disorder. Conclusion: We identified close associations between the awareness of teeth-alignment disorder and both gum bleeding and pollinosis in a late-adolescent population. The systemic disorders that are targeted by teeth-alignment disorder were found to be different from those targeted by occlusal disorder.
Introduction
Oral diseases that are typified by dental caries and periodontal disease are associated with various systemic diseases and disorders including heart disease, diabetes, respiratory disease, rheumatism, metabolic syndrome, systemic infection, and malignant tumors [1][2][3][4][5][6][7][8][9][10]. However, the association between malocclusion and systemic diseases/disorders has not been focused on. Malocclusion is defined as a deviation from normal occlusion [11,12]. In this study, we classified the malocclusion into two types: a functional abnormality (occlusal disorder) and a morphological abnormality (teeth-alignment disorder) to assess the effect of malocclusion on general health.
Our previous research obtained new insight into the close relationship between an oral occlusal disorder (a functional abnormality) and common systemic diseases/disorders during adolescence including allergic rhinitis, asthma, and arrhythmia. These results not only reinforced the associations between occlusal disorder and both allergic rhinitis and asthma; they also demonstrated a new association between occlusal disorder and arrhythmia [13].
On the other hand, the impact of teeth-alignment disorder (a morphological abnormality) on systemic health has not been established. We conducted the present study to determine the association between the awareness of teeth-alignment disorder and common systemic disorders in late adolescents, even though the impact of teeth-alignment disorder on systemic diseases was considered to be smaller than that of occlusal disorder. Potential subjects who were aware of occlusal disorder were excluded from the analyses in order to remove the functional impact of occlusal disorder on the results.
Study Design and Population
The completion of a medical questionnaire is a legal requirement of the freshman medical checkup (students cannot have medical checkups without completing the questionnaire). We retrospectively reviewed the data of this medical questionnaire completed between April 2017 and April 2019. The questionnaire is self-administered and consists of closed- and open-ended questions. The questionnaire was distributed to a total of 9376 students aged 17-19 during the specified period.
Questionnaire to the Students
The questionnaire was distributed to all freshmen aged 17-19 prior to the beginning of their medical checkups. The presence of the awareness of having teeth-alignment disorder was assessed by the question "Are you concerned about alignment of your teeth (appearance of teeth alignment)?" In the case of a "Yes" answer, the subject was categorized as having an awareness of teeth-alignment disorder (Supplementary Material Table S1). The presence of the awareness of occlusal disorder and the medical history were assessed by previously reported questions [13,14]. After evaluating the responses, the associations between the awareness of teeth-alignment disorder and systemic disorders were analyzed. Acute disorders and relatively rare diseases/disorders (i.e., those identified in <50 subjects) were excluded from the analysis [13].
Statistical Analyses
The data were analyzed using the χ² test. We performed a multivariate analysis with the use of a binomial logistic regression model. A p-value < 0.05 (two-sided) was accepted as significant. We used the statistical software program SAS ver. 9.4 (SAS, Cary, NC, USA) and SPSS Statistics 25 (IBM, Armonk, NY, USA) for the analyses.
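For readers who wish to reproduce this type of analysis, the sketch below runs a χ² test and a binomial logistic regression on simulated data; the variable names mirror the questionnaire items, but the data are synthetic, since the original records are not public.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Synthetic data with the same structure as the questionnaire variables
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gum_bleeding": rng.integers(0, 2, n),
    "pollinosis":   rng.integers(0, 2, n),
    "female":       rng.integers(0, 2, n),
})
logit_p = -1.0 + 0.4 * df["gum_bleeding"] + 0.2 * df["pollinosis"] + 0.1 * df["female"]
df["alignment_awareness"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Chi-squared test of association for a single disorder
chi2, p, _, _ = chi2_contingency(pd.crosstab(df["alignment_awareness"], df["gum_bleeding"]))

# Binomial logistic regression; exponentiated coefficients give odds ratios
model = smf.logit("alignment_awareness ~ gum_bleeding + pollinosis + female",
                  data=df).fit(disp=False)
print(p, np.exp(model.params))
```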
Ethical Approval
This study was approved by the research ethics committee of the University of Tokyo in 2018, approval no. (revised as no. 19-324 in 2019).
Frequency of Awareness of Teeth-Alignment Disorder
We retrospectively analyzed the data from 9098 students (aged < 20 years) of the 9376 who received the questionnaire [14]. Of these, we excluded 195 students who had awareness of occlusal disorder [13]. The final study population was 8903 students aged 17-19 years (mean 18.3 years) and included 7156 males and 1747 females. The rate of awareness of teeth-alignment disorder was 20.43% (1819 of the 8903 eligible subjects, 1438 males and 381 females). The rate of awareness of teeth-alignment disorder was higher among the females than the males, although the difference was not significant (p = 0.111) (Table 1).
Associations of Awareness of Teeth-Alignment Disorder with Systemic and Oral Disorders
The associations between the awareness of teeth-alignment disorder and 18 disorders in adolescence were analyzed. The disorders were as follows: 17 systemic disorders (pollinosis, food/drug allergy, inhaled antigen allergy, allergic rhinitis, otitis media/externa, sinusitis, pneumothorax/mediastinal emphysema, asthma/cough-variant asthma, atopic dermatitis, urticaria, scoliosis, spondylosis/spondylolisthesis/hernia, strabismus, myopia/hyperopia/astigmatism, arrhythmia, abnormal ECG other than arrhythmia, anemia) plus an oral disorder (gum bleeding) (Table 2). The awareness of teeth-alignment disorder was associated with a significantly greater incidence of a history of gum bleeding (p < 0.001), pollinosis (p = 0.007), and atopic dermatitis (p = 0.042).
Table 2. The association between awareness of teeth-alignment disorder and medical history.
The incidence of gum bleeding was significantly associated with the awareness of teeth-alignment disorder in both the male and female subjects (p < 0.001 in both genders). Pollinosis was significantly associated with the awareness of teeth-alignment disorder only in the female subjects (p = 0.007). Atopic dermatitis did not show a significant association in either gender.
Discussion
The oral cavity and maxillofacial region are vulnerable to various diseases and disorders. In particular, dental caries and periodontal diseases are known to be associated with various systemic diseases [3][4][5][6][7][8]. In a previous study, we focused on an oral functional disorder, i.e., occlusal disorder, in late adolescence and observed close associations between occlusal disorder and systemic disorders including allergic rhinitis, asthma, and arrhythmia [13]. However, the relationships between teeth-alignment disorder, a morphological abnormality of the dental arch, with systemic disorders have not been focused on. Teeth-alignment disorder is sometimes accompanied by occlusal disorder. We therefore excluded the subjects with awareness of occlusal disorder from the present analyses in order to remove the functional impact on the results. Our findings revealed close associations between the awareness of teeth-alignment disorder with gum bleeding and pollinosis among late adolescents in Japan. Interestingly, the systemic disorders that are targeted by teeth-alignment disorder were found to be different from those targeted by occlusal disorder.
Several studies suggest that teeth-alignment disorder poses a risk of periodontal diseases, although this has not been established [15][16][17]. The present results clearly demonstrated a close association between the awareness of teeth-alignment disorder and bleeding gums, which implies that orthodontic treatment could improve periodontal health status and/or prevent the onset of periodontal diseases [18]. Certainly, adequate daily brushing is important to prevent periodontal diseases [19].
Allergic diseases/disorders are considered to be associated with malocclusion [13]. In this study, pollinosis was closely associated with teeth-alignment disorder in the present population. Although it is well known that allergic rhinitis (another common otorhinolaryngologic disease in adolescence) causes occlusal disorder due to mouth breathing [13,[20][21][22], the effects of pollinosis on dental disorders have not been clarified. As a perennial/chronic disorder, allergic rhinitis could cause severe morphological disorder leading to functional disorder through continuous mouth breathing [20][21][22]. In contrast, pollinosis (which is a seasonal allergic disorder) would not cause functional disorder but might induce a mild morphological disorder of the dental arch through temporary mouth breathing. Interestingly, we observed the significant association between pollinosis and teeth-alignment disorder only in the female subjects. Further research is required to examine this gender specificity.
Atopic dermatitis (which is an allergic disorder) was slightly but significantly associated with teeth-alignment disorder in our population. Perugia C et al. reported a higher prevalence of atopic dermatitis in pediatric dentistry patients compared to the general population; of the patients with atopic dermatitis, 64.4% had occlusal or alignment disorders [23]. Hannuksela et al. reported that atopic hyper-reactivity was a predisposing factor for posterior crossbites [24]. Recently, mouth breathing, which would cause malocclusion, has been reported as a risk factor for atopic dermatitis, even though the underlying mechanism has not been elucidated [25].
In the present multivariate analysis, female gender was an independent factor associated with teeth-alignment disorder. A difference in self-awareness and/or genetic or epigenetic backgrounds could be involved in this gender gap [26].
In summary, our retrospective analyses of the data of 8903 students aged 17-19 in Japan revealed close associations between the awareness of teeth-alignment disorder and both gum bleeding and pollinosis. Interestingly, the associated disorders were different from those of occlusal disorder. These findings provide a foundation for obtaining new evidence on the relationship between malocclusion and systemic disorders. Orthodontic therapy may thus contribute to maintaining and improving the general health condition, even though these results were based on a self-report questionnaire completed by university students, and further research with clinical examinations is therefore warranted.
Conclusions
We identified close associations between the awareness of teeth-alignment disorder and both gum bleeding and pollinosis in a late-adolescent population. The systemic disorders that are targeted by teeth-alignment disorder were found to be different from those targeted by occlusal disorder. Although further study is warranted to confirm the associations which were found here and explore the mechanisms underlying these associations, our results suggest that orthodontic therapy could contribute to maintaining and improving the general health condition. Institutional Review Board Statement: This study was approved by the Research Ethics Committee of the University of Tokyo in 2018, approval no. (currently revised as no. , "Retrospective analyses of medical and health record information retained by the division for health service promotion, the University of Tokyo."
Informed Consent Statement:
We abided by all relevant laws, regulations, and university rules related to privacy. Our privacy policy was provided online (http://www.hc.u-tokyo.ac.jp/ accessed on 1 February 2021). In accordance with the conditions stated in the ethics approval, we posted a notice on the website of the department for health service promotion to announce the privacy policy and opt-out (https://www.lifescience.mext.go.jp/files/pdf/n2181_01.pdf accessed on 1 February 2021).
Data Availability Statement:
Data is contained within the article or supplementary material.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2018-12-12T12:13:05.928Z
|
2016-01-01T00:00:00.000
|
112244801
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.19026/ajfst.10.1809",
"pdf_hash": "196b672d06a2343013b443e59bd12b3167332be2",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2981",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "196b672d06a2343013b443e59bd12b3167332be2",
"year": 2016
}
|
pes2o/s2orc
|
Fuzzy Evaluation on Students Practice Project Quality in Food Enterprise
This study puts forward a hierarchical model for evaluating the quality of students' practice in food enterprises and designs the corresponding evaluation algorithm. Test results show that it can improve the quality of evaluation in food enterprises. The study also proposes a dynamic fuzzy evaluation model of learning effect and designs the algorithm of the model; test results show that it is effective for evaluating learning efficiency. This model can effectively avoid the shortcomings of traditional evaluation methods.
INTRODUCTION
The quality of students' practice in food enterprises is at the core of students' practice work. Evaluating and monitoring students' practice helps reveal the actual practice situation, improve the quality of practice in food enterprises and the practice environment, and promote the development of school practice training through analysis of the teachers' practice work and the students' learning (Zhang et al., 1998).
Assessment of practice, as well as evaluation of practice, is an important part of the whole practice process. However, treating the examination as the whole of evaluation hinders the overall development of students and the improvement of their practice level (Petrich and Stilwell, 2010). In the new practice-oriented mode, the standard of practice effect is transformed from test scores to the ability to apply practical knowledge, which includes practice motivation, interest and other non-intellectual factors (Cao et al., 2007). Therefore, it is difficult to use precise numerical data to represent and reflect the students' situation; instead, natural language descriptions are preferred. In natural language, a large number of statements are fuzzy, and evaluation standards and descriptions are no exception: their characteristic feature is fuzziness. Examples include: very strong practice ability, strong practice interest, little practice motivation, not very good at cooperating with others, and so on. Although such an evaluation is clear and specific, it is fuzzy at the same time (Li et al., 1996).
Theory of fuzzy system:
The theory of fuzzy systems is a widely used theory created in 1965 by the control theory expert Zadeh of the University of California, the United States. This theory can describe fuzzy concepts and phenomena that have no clear boundaries or extension. By establishing a one-to-one correspondence between such uncertain phenomena and membership functions, it provides a favorable mathematical tool for analyzing many inexact, fuzzy phenomena in nature (Sprague, 1994).
Fuzzy concept:
In ordinary set theory, the relation between an object and a set can take only two forms: the object is either in the set or not in the set; there is no other possibility. That is to say, ordinary set theory can only describe "either/or" phenomena (Oh, 1995). However, in real life there are common concepts, such as "middle-aged people" and "young people", whose meanings are not exact and clear; we call such concepts fuzzy concepts. The concepts people form in their minds in order to deal with natural phenomena are often fuzzy, and the judgment and reasoning based on these concepts are fuzzy as well.
Fuzzy set:
A fuzzy set is the means of representing a fuzzy concept and is an extension of ordinary set theory: in an ordinary set, the degree of membership of an element u in a set A has only two values, namely 0 and 1, whereas a fuzzy set extends the degree of membership of an element u in a set A from {0, 1} to the whole interval [0, 1].
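A small illustration of this extension is given below, with an assumed membership function for the fuzzy concept "young"; the breakpoints are arbitrary and serve only to show membership degrees varying continuously between 0 and 1.

```python
def membership_young(age: float) -> float:
    """Degree to which a person of a given age belongs to the fuzzy set 'young'.
    Fully young below 25, not young above 40, linear in between (assumed breakpoints)."""
    if age <= 25:
        return 1.0
    if age >= 40:
        return 0.0
    return (40 - age) / 15.0

print([round(membership_young(a), 2) for a in (20, 30, 38, 45)])  # [1.0, 0.67, 0.13, 0.0]
```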
Students' practice situation and fuzzy evaluation on practice:
Traditional models for evaluating students' practice situation and practice include the itemized evaluation method, the weighted comprehensive evaluation method, the expert scoring method and so on. These models have the advantages of simple calculation, easy modeling and wide applicability. However, their basic premise is that the evaluation factors are linearly related and that the evaluation results can be superimposed; as a result, their application is often unsatisfactory and the credibility of the evaluation results is low, which cannot meet the needs of practice.
The evaluation of students' practice situation and practice is a multi-level, multi-objective evaluation involving many aspects. The evaluation factors are directly affected by the evaluators' level of knowledge, cognitive ability and personal preferences, so it is difficult to completely eliminate the bias caused by human factors. Moreover, the evaluation factors are generally described qualitatively, which gives them a fuzzy character and makes the concrete operation difficult. On the other hand, the evaluation needs to be carried out from different perspectives, which requires an objective multi-level comprehensive evaluation; therefore, evaluating the quality of students' practice in food enterprises is a hierarchical fuzzy comprehensive evaluation problem, as shown in Fig. 1.
The traditional evaluation of practice effect can be simply reduced to a quantitative examination at the end, which makes it difficult for students to give full play to their abilities and hence to obtain fair and impartial evaluation results; this is not conducive to developing students' abilities and blocks the comprehensive improvement of practice quality in food enterprises. At the same time, it is also very difficult for teachers to obtain comprehensive and reliable evaluation feedback from the students in the traditional way.
Based on the above understanding of traditional practice evaluation, we construct a new fuzzy evaluation model for practice effect, so that fuzzy linguistic descriptions can be analyzed quantitatively with conventional mathematical methods, as shown in Fig. 2.
Using a fuzzy evaluation model for multi-level evaluation of students' practice situation and practice:
When the allocation of weight values is determined, for example by a statistical iterative method, the normalization condition on the weights (the weights must sum to 1) inevitably makes some weights relatively small, so that wi ≤ rij. As a consequence, a great deal of information is "flooded and covered" during the composition operation, and the evaluation results become vague. Of course, a layered (multi-level) approach can be adopted to solve this kind of problem.
The fuzzy comprehensive evaluation method is a comprehensive evaluation method based on fuzzy mathematics. According to the membership theory of fuzzy mathematics, the method translates qualitative evaluation into quantitative evaluation; that is, it uses fuzzy mathematics to make an overall evaluation of things or objects that are subject to many factors. Its results are clear and systematic, it can deal well with fuzzy problems, and it is suitable for solving all kinds of uncertain problems.
RESULTS AND DISCUSSION
The evaluation factors: A student's practice situation and the quality of practice in a food enterprise are influenced by many factors. Each factor can be represented by a symbol ui; if there are n factors, then i = 1, 2, ..., n. All the factors that influence the students' practice situation and practice form a factor set, denoted by U:

U = {u1, u2, ..., un}

First, the factor set U = {u1, u2, ..., um} is divided into subsets Ui = {Si1, Si2, ..., Sin}, i = 1, 2, ..., s, according to some property, so as to meet the conditions that the subsets are pairwise disjoint and that their union is U. A comprehensive evaluation is then made on each factor subset. Let the evaluation result set be V = {v1, v2, ..., vn}, j = 1, 2, ..., n, and let the weight distribution of the factors in Ui be Wi = {wi1, wi2, ..., win}. If Ri is the single-factor evaluation matrix of Ui, the first-level evaluation vector is

Bi = Wi ∘ Ri, i = 1, 2, ..., s.

Then, taking each Ui as a single factor, µ = {U1, U2, ..., Us} is itself a factor set, whose single-factor evaluation matrix is formed by stacking the first-level results:

R = [B1; B2; ...; Bs]

Each Ui is a part of U reflecting one attribute of U, and can be assigned a weight according to its importance, W = {w1, w2, ..., ws}; the second-level evaluation vector is then

B = W ∘ R

If a factor subset Ui contains many factors, it can be further decomposed into a three-level, four-level or higher model. In this way a hierarchical structure model is established, as shown in Fig. 3.
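A minimal numerical sketch of this two-level procedure is given below, using the weighted-average composition operator; the weights, evaluation matrices, and grade labels are illustrative assumptions only.

```python
import numpy as np

def fuzzy_evaluate(W, R):
    """One level of fuzzy comprehensive evaluation, B = W ∘ R, realized here with
    the weighted-average operator followed by normalization."""
    B = np.asarray(W, dtype=float) @ np.asarray(R, dtype=float)
    return B / B.sum()

# Two factor subsets U1, U2; evaluation grade set V (illustrative values)
grades = ["excellent", "good", "fair", "poor"]
R1 = [[0.5, 0.3, 0.2, 0.0],   # single-factor evaluation matrix of U1
      [0.2, 0.5, 0.2, 0.1]]
R2 = [[0.3, 0.4, 0.2, 0.1],   # single-factor evaluation matrix of U2
      [0.1, 0.3, 0.4, 0.2]]
B1 = fuzzy_evaluate([0.6, 0.4], R1)   # first-level evaluation of U1
B2 = fuzzy_evaluate([0.5, 0.5], R2)   # first-level evaluation of U2

R = np.vstack([B1, B2])               # matrix assembled from the first-level results
B = fuzzy_evaluate([0.7, 0.3], R)     # second-level evaluation of U
print(B, "grade:", grades[int(np.argmax(B))])  # grade chosen by maximum membership
```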
CONCLUSION
In recent years, comprehensive fuzzy evaluation has developed rapidly as a new method, and its scope of application is also expanding quickly. The more complex a thing is, the more difficult it becomes to describe it accurately and meaningfully, and the weaker the descriptive ability becomes; beyond a certain threshold, accuracy and meaningfulness begin to exclude each other. Generally speaking, the more complex things are, the fuzzier people's understanding of them will be, and that is when fuzzy mathematics is needed. Comprehensive fuzzy evaluation can fully take into account the fuzziness of complex objects and the internal relationships of the value system; it can not only rank the evaluated objects according to their comprehensive evaluation values, but also assign each object a grade from the fuzzy evaluation set in accordance with the principle of maximum degree of membership.
The realization of this subject has important implications for both theory and students' practice in food enterprises. In theory, under the guidance of the existing evaluation administration system, it explores how to combine advanced information technology, management theory, fuzzy theory and practice management in order to improve the efficiency of evaluating students' practice quality in food enterprises and to realize fair, efficient and rational teaching evaluation.
Fig. 1: Evaluation of students' practice situation and practice.
Fig. 3: The overall structure of the evaluation of students' practice situation and practice based on AHP fuzzy theory.
|
v3-fos-license
|
2022-11-05T16:02:31.634Z
|
2022-10-31T00:00:00.000
|
253330500
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://figshare.com/articles/journal_contribution/Taxation_and_Migration_of_Peasants_Evidence_from_the_Tax-for-Fee_Reform_in_Rural_China/21435293/2/files/38040707.pdf",
"pdf_hash": "55fedb4e82a9d0e540be9b6afe7c8e81714aa78d",
"pdf_src": "TaylorAndFrancis",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2983",
"s2fieldsofstudy": [
"Economics"
],
"sha1": "df7fa3cacbf2e3a74d3c684508e2f74db5ad83bf",
"year": 2023
}
|
pes2o/s2orc
|
Taxation and Migration of Peasants: Evidence from the Tax-for-Fee Reform in Rural China
Abstract This paper studies the effect of rural taxation on Chinese peasants’ mobility towards urban areas. We find that the tax alleviation due to the tax-for-fee reform significantly reduces rural-to-urban migration, with a 10% decline in tax leading to a 10% reduction in migration. We provide compelling evidence that the result is not driven by unintended policy effects of the tax-for-fee reform in fostering rural non-agricultural job opportunities. The effect of taxation is more pronounced and precisely estimated on migration across provinces or at older ages. To the best of our knowledge this is the first study to identify the tax-induced migration among low-skilled and low-income laborers.
Introduction
Tax-induced mobility of human capital has become a central public policy issue, especially since economic integration lowers barriers to migration. Many European countries have introduced preferential tax schemes for highly skilled foreign workers, creating a de facto tax competition for talent. Some subnational governments regularly advertise their favorable tax environments in an explicit attempt to attract investments and skilled laborers, indicating that tax competition occurs not only across but also within national borders. In addition to the mobility of the talented and the rich, tax policies may also influence the migration of low-skilled and low-income laborers. For example, in order to promote outflow of labor in return for foreign exchange remittances, in 1995 the Philippine government exempted Filipinos working overseas from income tax, most of whom are low-skilled laborers working in manufacturing or service industries for low wages.
Although anecdotes suggest that people may vote with their feet for low taxes, there remains a frustrating consensus among scholars that little empirical evidence is available to support this claim (Moretti & Wilson, 2017). Existing literature on the influence of taxation on the labor market focuses primarily on how taxes discourage labor supply, reporting high elasticities among low-income workers and retiring older workers (see OECD, 2011, and the extensive references therein). Regarding mobility effects, although many studies find that corporations and investments move in response to tax policy (Giroud & Rauh, 2019), scarce literature has explored the effects of taxation on individual labor mobility, yielding little conclusive evidence (Feldstein & Wrobel, 1998;Young, Varner, Lurie, & Prisinzano, 2016).
Drawing upon a unique dataset constructed from the Chinese Household Income Project in 2002, and a series of county level statistics, our investigation of the tax-for-fee reform in rural China yields three main empirical findings in regards to migration. First, we uncover compelling evidence of a negative effect of the reform-induced tax cut on migration, with a 10% decline in tax leading to a 10% reduction in migration; Second, we show that as the tax-for-fee reform relieves rural households from the formerly burdensome tax obligations, rural laborers tend to farm more and migrate less, with their local non-farm employment unaffected; Third, we find that migration bearing higher costs, such as those across provinces or at older ages, are more responsive to rural tax reduction.
Our study adds to a longstanding literature that examines migration and its drivers (see surveys by Massey et al., 1993). In particular, our finding of the vital impact of taxation sheds new light on the neoclassical equilibrium explanation of migration, which sees laborers as rational actors whose mobility decisions are made based on cost-and-benefit calculations of migration relative to non-migration, according to the changing 'push and pull' factors (Hare, 1999;Harris & Todaro, 1970). Our study also contributes to a small but emerging literature on the sensitivity of domestic migration to taxation, which mostly concerns the developed countries in Europe and America, and uncovers generally mixed results. Complementary to this existing literature, our research considers the internal mobility of low-skilled and low-income peasants within a developing country such as China.
The tax-for-fee reform
Rural tax burden was one of the deepest social problems facing Chinese society in the early 1990s (Bernstein & Lü, 2003). Before the tax-for-fee reform peasants were required to pay a wide array of taxes and fees including: (i) agricultural tax, (ii) township-and-village-retained fees (santi wutong) which were collected essentially to finance rural public goods, and (iii) ad hoc fees that local cadres imposed in the name of village governance (Bernstein & Lü, 2003). While the tax was levied according to the amount of land that a rural household contracted, the fees were levied upon simple headcount. Local cadres were given great autonomy to determine the amount of 'head tax', namely, the fees that each rural resident should pay. This tax scheme was criticized harshly for being both arbitrary and excessive, impairing the economic interests of peasants.
Against this background, the central government implemented the tax-for-fee reform experimentally in Anhui Province in 2000. The basic rationale of the reform was to streamline the tax structure to reduce the scope of local officials' discretion. In particular, the 'one-issue-one-meeting' (yishi yiyi) system regulated that any public project that required financial contribution from the villagers had to be discussed thoroughly in a village meeting, and only the liability that was approved by the majority was considered legitimate. After expansion to 20 provinces in 2002, 1 the reform was finally nationally completed in 2003. Although the tax-for-fee reform slightly increased the agricultural tax rate, it substantially decreased fees. In fact, the reform was well-acknowledged as a great success in relieving rural households from excessive taxation and improving their economic well-being (Kennedy, 2007;Yep, 2004).
Rural-to-urban migration
Since the mid-1980s rural laborers have increasingly flowed to urban labor markets to pursue non-agricultural employment. As the tax-for-fee reform tremendously reduced rural tax, we speculate whether improvements in rural income opportunities would undermine the relative advantages of urban employment and therefore pose a negative effect on migration. Note that a unique characteristic of Chinese rural-to-urban migration is its temporary nature. On the one hand, as rural cadres may redistribute the right to use collectively owned land among rural households, migrants can be deprived of their use rights due to the long absence from the village (Zheng, Gu, & Zhu, 2020); On the other hand, rural migrants are de facto second-class citizens in urban areas, because the household registration system does not entitle them to the same social benefits as their urban native counterparts (Solinger, 1999).
This temporary nature of Chinese rural-to-urban migration bears two implications on the generalizability of our study. First, the large mobility response we uncover in the Chinese context does not necessarily carry over to other institutional settings, as it could be easier for people to change their decision about temporary migration than about permanent migration. Second, unlike permanent migrants who can flee from high tax rates in their places of origin, Chinese migrant workers still have to pay rural tax, in order to maintain their usage rights to collectively-owned land, as well as other benefits associated with their registration statuses in the countryside (Zheng & Gu, 2021). Therefore, taxation differentials between migration and local employment stay unchanged under the new rural tax scheme. However, the tax cut still deters labor outflows from Chinese villages, mostly because the marginal returns to migration become much less attractive to peasants, relative to the rising rural income levels.
Chinese Household income project
To substantiate our assertion that the formerly burdensome taxes and fees levied upon rural households functioned as an important push factor for migration, we analyze the rural portion of the data from the 2002 Chinese Household Income Project. Jointly conducted by the Rural Survey Group of the National Bureau of Statistics of China (NBSC) and the Institute of Economics at the Chinese Academy of Social Sciences, CHIP is arguably one of the most authoritative data sources on household financial conditions in China. Its survey instruments were carefully tailored to capture detailed information on the labor force activities of all household members, as well as a wide variety of family expenditures including miscellaneous fees that rural households paid to the government. Following a multistage sampling strategy, altogether CHIP 2002 investigated 9,200 rural households scattered throughout 961 villages in 22 provincial level administrative units, offering us a large sample reasonably representative of rural China.
Measures of migration
Our dependent variable is the proportion of laborers who were migrant workers in 2002, for each rural household. To construct this variable, we first define the labor force as people aged from 16 to 60, excluding the retired, disabled and students. We then carefully examine the labor force activities of each rural laborer, determining migrants as individuals whose primary job in 2002 was nonagricultural work outside their native township. In order to ensure that our empirical results are not sensitive to alternative definitions of migration, we further divide migration behavior into three categories based on the administrative borders they cross: (i) cross-township migration of which the destination is beyond the native township but within the native county; (ii) cross-county migration beyond the native county but within the native province; and (iii) cross-province migration outside of the native province. 2 We will conduct seemingly unrelated regressions to distinguish the impacts of financial burdens on these three types of migration, respectively.
Measures of decrease in financial burden
The explanatory variable of interest is the amount of decrease in tax liability for each rural household between 1998 and 2002, a time span that witnessed the beginning and the extensive expansion of the tax-for-fee reform. We recognize that in order to assess the responsiveness of migration to tax scheme differentials, a more ideal research setting would be a difference-in-differences analysis, which would compare migration behavior before and after the tax-for-fee reform, between households that were affected and unaffected by the new scheme. However, because the migration status of rural laborers before the tax-for-fee reform is not available in CHIP 2002, we instead focus on the impact of changes in financial burdens on migration, while controlling for the migration rate at the village level in 1998, when the tax-for-fee reform had not yet begun. Details on the construction of this independent variable are presented in the Supplementary materials.
Control variables
The effect of the reduction in financial burdens could be confounded by some overarching socioeconomic characteristics that impact decisions to migrate. Therefore, we control for a wide array of village characteristics including population size, per capita land endowment, total sales revenue of TVEs, and the distance of the village from the nearest transit station. Three dummy variables are incorporated, measuring respectively whether the village collective organized the rural labor force to migrate out for work, whether the proportion of the largest group of villagers sharing the same family name exceeded 50%, and whether there was an occurrence of natural calamities (one if yes and zero otherwise). As the crux of our research is the impact of the tax-for-fee reform on reducing rural financial burdens and decelerating rural-to-urban migration, we also include a dummy variable indicating whether the village had experienced any financial burden alleviation policies other than the tax-for-fee reform (one if yes and zero otherwise).
Household and labor characteristics may also affect migration behavior. We thus include the share of males, the average age, the average years of schooling, and the dependency ratio of household laborers in our equations. Moreover, we employ a set of dummy variables indicating whether a given household belonged to the largest surname group in the village, whether any household member was a Communist Party member, and whether any member was a rural cadre. We also control for the amount of land that a rural household contracted and for household size. Descriptive statistics can be found in the Appendix (Supplemental Material).
Baseline estimates
We begin the analysis by estimating the relationship between the decrease of financial burdens from 1998 to 2002 and rural household migration behavior, using the following baseline estimating equation:

Migration_{h,v,p} = b_0 + b_1 Decrease_{h,v,p} + b_2 Migration98_{v,p} + \gamma' X_{v,p} + \delta' X_{h,v,p} + \eta_p + e_{h,v,p}    (1)

where h indexes rural households, v villages, and p provinces. The dependent variable Migration_{h,v,p} is measured by the proportion of migrant workers among laborers for each rural household. Decrease_{h,v,p} is a proxy for the amount of taxes and fees that had been relieved in 2002, relative to what was paid in 1998, varying across rural households. Therefore, b_1 is the coefficient of our research interest, denoting the impact of the reduction in a rural household's financial obligations on its strategic decision-making pertaining to laborers' migration.
Migration98_{v,p} indicates the labor migration rate of village v in 1998, serving as a rough baseline given that household-level migration in 1998 is not available. The vector X_{v,p} denotes a set of village-level covariates that are likely to correlate with migration, while X_{h,v,p} is the series of household-level covariates discussed in Section 3.4. \eta_p stands for province fixed effects, included to capture province-specific characteristics that may influence migration, such as economic development and local governance, and to take into account regional disparities in population outflows. e_{h,v,p} is the error term. We use OLS to estimate the equations, although our empirical results hold when tobit models are employed to account for the truncation of the dependent variable between 0 and 100.
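A minimal Python sketch of this baseline specification is given below, assuming a household-level DataFrame built as in the earlier sketch; the file name, the column names, and the abbreviated control list are hypothetical, and province fixed effects enter simply as categorical dummies.

```python
# Hedged sketch of Equation (1): OLS with province fixed effects via C(province).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chip2002_households.csv")   # assumed merged household-level file

controls = ["migration98_village", "village_pop", "land_per_capita", "tve_sales",
            "dist_station", "male_share", "avg_age", "avg_schooling",
            "dependency_ratio", "contracted_land", "hh_size"]   # abbreviated list
formula = "migration_rate ~ decrease + " + " + ".join(controls) + " + C(province)"

baseline = smf.ols(formula, data=df).fit()
# b1: change in the household migration rate (percentage points) per 1 RMB of burden reduction
print(baseline.params["decrease"])
```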
We report the baseline estimates of Equation (1) in Table 1. Column (1) reports the estimated correlation between the financial burden decrease and migration for a specification that includes only province fixed effects. The estimate is negative, as expected, but small in magnitude and statistically insignificant. Column (2) shows that the estimated coefficient for the financial burden decrease remains statistically insignificant after controlling for the baseline village-level migration rate in 1998.
The estimated coefficient for the decrease in financial burdens is negative and statistically significant at the 10% level in columns (3) and (4), whose specifications additionally control for the household- and village-level covariates. This is consistent with our hypothesis that the proportion of labor that rural households allocate to migrant work correlates negatively with the alleviation of the tax burden. In terms of magnitude, reducing financial burdens by 100 RMB from 1998 to 2002 decreases the rural household labor migration rate in 2002 by 0.2 percentage points, holding the village-level migration rate in 1998 as well as province, village, and household characteristics constant. To assess the magnitude of this baseline estimate, note that the sample mean of rural household financial burdens is approximately 362 RMB in 1998 and the average rural household migration rate is 16.81% in 2002. Therefore, for a typical rural household bearing the sample-mean financial burden in 1998, our baseline estimate implies that reducing its financial burden by 10% from 1998 to 2002 leads to a decrease in the proportion of migrant workers among household laborers of approximately 0.07 (362 × 10% × 0.002) percentage points, which equals approximately 0.42% (0.07/16.81) of the sample mean in 2002.
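The back-of-envelope calculation above can be checked directly; the snippet below simply reuses the numbers quoted in the text.

```python
# Magnitude check using the figures reported in the text.
mean_burden_1998 = 362.0       # RMB, sample-mean household financial burden in 1998
mean_migration_2002 = 16.81    # %, sample-mean household migration rate in 2002
effect_per_rmb = 0.002         # pp of migration per RMB of burden relief (baseline estimate)

cut = 0.10 * mean_burden_1998              # a 10% reduction of the 1998 burden, in RMB
drop_pp = cut * effect_per_rmb             # implied drop in the migration rate, in pp
print(round(drop_pp, 2))                              # ~0.07 percentage points
print(round(100 * drop_pp / mean_migration_2002, 2))  # ~0.4% of the 2002 sample mean
```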
Instrumental evidence
Some concerns arise when the association between changes in financial burdens and migration behavior based on the estimates of Equation (1) is interpreted as a causal relationship. Specifically, we are concerned about possible measurement errors in our proxies for both tax-and-fee expenditures and migration behavior, which would likely attenuate our estimates toward the null. Moreover, the residual term in Equation (1) might not be exogenous to the reduction in financial burdens, giving rise to an omitted variables problem. A notable case is the quality of local governance, which is likely to promote the reduction in tax-and-fee burdens on the one hand and to deter rural-to-urban migration through higher-quality public goods provision on the other, in which case our estimate of Equation (1) would be biased upwards. Moreover, village leaders might treat rural households under their governance discriminatorily, based on their economic status, political capital, and personal relations with village cadres, among other factors. If rural households that receive preferential treatment tend to migrate less and also tend to bear lower taxes initially, and therefore enjoy smaller reductions in their financial burdens, the resulting omitted variables problem could bias our estimate of Equation (1) downwards. Furthermore, if a better local financial situation facilitates public goods provision and dampens migration, it could bias our baseline estimates in directions that are ambiguous ex ante. On the one hand, estimates based on Equation (1) may be biased upwards if affluent local finances facilitate the alleviation of financial burdens; on the other hand, a downward bias emerges if, in villages with large public revenue, the financial burden on rural households was not heavy in the first place in 1998, leaving little space for alleviation. The latter possibility arises when the village committee considers radical fundraising from rural households unnecessary and politically costly. In sum, we should be cautious about the endogenous nature of our key independent variable, the financial burden decrease, as it may yield biased estimates from Equation (1).
Although the reduction in financial burdens is endogenous, the tax-for-fee reform offers a policy intervention in the economic opportunities of Chinese rural households, for the following reasons. First, the tax-for-fee reform was a top-down campaign in which provincial governments made essentially all decisions regarding the depth and scope of the reform, leaving grassroots cadres and rural residents very little room for negotiation. Second, the reform applied to all: as long as it was adopted in a village, the new tax-and-fee scheme applied to every rural household, regardless of household or individual characteristics. Third, the expansion of the tax-for-fee reform was largely, if not entirely, unanticipated by rural households. As described in Section 2, after a whole year of stagnancy due to the State Council's decision to postpone the experiment, the central government unexpectedly extended the tax-for-fee reform to 20 provinces in 2002. All in all, the tax-for-fee reform is plausibly uncorrelated with most other factors that may affect the labor allocation of rural households, and it is unlikely that rural households changed their migration behavior in advance in anticipation of the reform's implementation. We thus employ the implementation of the tax-for-fee reform between 1998 and 2002 as the instrumental variable for the decrease in financial burdens, to investigate the policy impact of the reform on alleviating the financial burdens levied upon rural households, and then examine its effect on rural-to-urban migration.
A widespread consensus that helps strengthen the validity of this instrumental variables strategy is that the tax-for-fee reform achieved little beyond the relief of peasants' fiscal burdens, neither overturning the country's structural policy bias against the peasantry nor fundamentally improving rural governance (Yep, 2004). Previous studies even found that the reform eroded village-level public revenue and undermined public services (Kennedy, 2007; Luo, Zhang, Huang, & Rozelle, 2007). In this regard, our instrumental variables strategy probably yields a lower bound on the effects of the rural tax cut, if peasants tend to migrate out when public services decline.
We begin our instrumental variables approach by comparing household-level financial burdens in 2002 to those in 1998, in villages that experienced the tax-for-fee reform relative to villages with no such policy change. Panel A of Table 2 shows that the tax-for-fee reform moderately raises rural taxes in exchange for a tremendous reduction in fees, resulting overall in a substantial alleviation of rural financial burdens. As reported in Column (3) of Panel A, the fees paid by an average household in villages that experienced the tax-for-fee reform fell by approximately 194 RMB, although taxes increased slightly, by approximately 31 RMB. Altogether, the tax-for-fee reform relieves the financial burden by approximately 163 RMB, which corresponds to a reduction in the household taxation rate (the share of financial obligations in household yearly income) of approximately 3.22 percentage points. Panel B of Table 2 shows that financial burdens also declined from 1998 to 2002 in villages without the tax-for-fee reform, again as a result of decreasing fees, though to a much lesser extent. This is probably due to the central government's efforts to mitigate local fees through means other than the tax-for-fee reform. As shown in Column (3) of Panel B, the financial burden of an average rural household in villages that did not experience the tax-for-fee reform decreased by approximately 49 RMB, corresponding to a reduction in the household taxation rate of approximately 1.32 percentage points. Taken together, Panels A and B of Table 2 illustrate that the tax-for-fee reform leads to a greater mitigation of financial burdens, 4 namely, there is a positive relationship between the tax-for-fee reform and the decrease in financial burdens.

(Notes to Table 2: Columns (1) and (2) report means, with standard deviations in parentheses; Column (3) reports changes between 1998 and 2002, with standard errors in parentheses; total financial burden is the sum of taxes and fees; the household taxation rate is the share of household yearly income paid as taxes and fees. *** p < 0.01, ** p < 0.05, * p < 0.1.)
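The comparison summarized in Table 2 amounts to averaging the changes in taxes, fees, and total burden separately for reform and non-reform villages. A hypothetical Python sketch (file and column names assumed, not the authors' code) is:

```python
# Sketch of the Table 2 comparison: mean burden changes by reform status.
import pandas as pd

df = pd.read_csv("chip2002_households.csv")   # assumed file with 1998/2002 burden items
df["burden_1998"] = df["tax_1998"] + df["fee_1998"]
df["burden_2002"] = df["tax_2002"] + df["fee_2002"]
df["decrease"] = df["burden_1998"] - df["burden_2002"]   # positive = burden relieved

summary = (df.groupby("reform")   # reform: 1 if the village adopted tax-for-fee
             [["tax_1998", "tax_2002", "fee_1998", "fee_2002", "decrease"]]
             .mean().round(1))
print(summary)   # reform villages should show the larger relief (~163 vs ~49 RMB in the text)
```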
Conceptually, our identification strategy compares migration behavior in villages where the tax-for-fee reform was implemented between 1998 and 2002 with villages where it was not. Therefore, before turning to the instrumental estimation, we present estimates of the reduced-form effects of the tax-for-fee reform on rural household migration outcomes. The independent variable here is a dummy coded as one if the village carried out the tax-for-fee reform between 1998 and 2002, and zero otherwise, based on information that CHIP 2002 gathered from village cadres. The results presented in Panel A of Table 3 reveal that, with or without the control variables, there is a strong negative relationship between the tax-for-fee reform and rural households' migration behavior. As shown in Column (4), in the model that includes the full set of controls, the implementation of the tax-for-fee reform is associated with a reduction in the household migration rate of 4.67 percentage points, which translates into 28% when evaluated at the mean household labor migration rate in 2002. As our explanatory variable of interest does not vary across individual households but only at the village level, and given the potential for within-village correlation of the residuals, we report in square brackets standard errors adjusted for clustering of observations within the same village. Although this method produces larger standard errors, the effect of the tax-for-fee reform on migration decisions stays statistically significant at the 5% level across all columns.

We then simultaneously estimate a system of two equations using 2SLS, instrumenting for the financial burden decrease with the tax-for-fee reform. The first-stage equation is:

Decrease_{h,v,p} = a_0 + a_1 Reform_{v,p} + a_2 Migration98_{v,p} + \gamma' X_{v,p} + \delta' X_{h,v,p} + \eta_p + u_{h,v,p}    (2)

where Reform_{v,p} is a dummy variable indicating whether village v implemented the tax-for-fee reform during 1998-2002, and all other variables have the same definitions as in Equation (1), our second-stage regression. To address the concern of whether the exclusion restriction is satisfied, we run a number of regressions predicting a comprehensive set of village and household characteristics, including the village-level labor migration rate and household financial obligations in 1998, using the implementation of the tax-for-fee reform, as shown in the Supplementary materials. None of these potential confounders appears to correlate with the reform, providing further confidence in the validity of our instrumental variable strategy. The first-stage estimates in Panel C of Table 3 provide evidence of a strong positive correlation between the instrument, namely the tax-for-fee reform, and the decrease in financial burdens between 1998 and 2002. The Kleibergen-Paap F-statistic for the excluded instrument ranges from 37 to 41 across the models, indicating that our estimates are very unlikely to suffer from a weak-instrument problem. In terms of magnitude, the estimated coefficient in Column (4) suggests that the reduction in rural household financial burdens is significantly more pronounced, by approximately 99 RMB, in villages that experienced the tax-for-fee reform.
Evaluated at the sample mean of financial burdens in 1998, the tax-for-fee reform is predicted to alleviate total taxation by approximately 27%. To gauge the plausibility of this effect, it is useful to compare its magnitude to estimates from other studies. According to Yep (2004), after only the first year of the tax-for-fee reform, the financial burden on rural households throughout Anhui province was trimmed by 31%. It appears from this comparison that, as intended by the central government, the tax-for-fee reform did achieve tremendous success in relieving rural tax-and-fee burdens. For the same reasons described when presenting the reduced-form estimates, we also report standard errors adjusted for clustering in square brackets. This adjustment inflates the standard errors; however, the positive relationship between the tax-for-fee reform and the decrease in financial burdens remains statistically significant across all regressions.
Panel B of Table 3 presents the 2SLS estimates of Equation (1). Unlike the baseline estimates reported in Table 1, the 2SLS estimates remain fairly stable in magnitude and statistically significant as we introduce a large set of controls across Columns (1) to (4). According to the estimates using the full set of covariates reported in Column (4), a 100 RMB decrease in financial burdens reduces the proportion of migrant workers among rural household laborers by 4.7 percentage points, an effect that is statistically significant at the 5% level. This 2SLS estimate implies that, for a rural household bearing the average financial burden in 1998, a 10% decrease in its financial burden (36.2 RMB) reduces its migration rate by approximately 1.70 percentage points, which is approximately 10.11% of the sample-mean rural household migration rate in 2002. 5 Thus the changes in migration behavior associated with the alleviation of rural financial burdens are not only statistically significant but also substantively meaningful.
Compared with the baseline estimates reported in Table 1, the 2SLS estimates are more than 20 times larger, suggesting that our baseline estimates are possibly biased downwards by measurement errors in our calculations of the financial burden decrease and household migration. A plausible alternative explanation is that rural households treated more favorably by village leaders may have been both less likely to benefit from the reform and less likely to migrate out; or that affluent local finances may constrain the room for burden mitigation on the one hand and discourage migration on the other, as discussed earlier.
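For readers wishing to reproduce this kind of estimation on comparable data, the following is a hedged sketch of the 2SLS specification using the third-party linearmodels package: the reform dummy instruments for the burden decrease and standard errors are clustered by village. File and column names and the shortened control list are assumptions, not the authors' code.

```python
# Hedged 2SLS sketch: instrument `decrease` with the tax-for-fee reform dummy.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("chip2002_households.csv")   # assumed household file with village IDs

controls = ["migration98_village", "male_share", "avg_age", "avg_schooling",
            "dependency_ratio", "hh_size", "contracted_land"]   # abbreviated list
formula = ("migration_rate ~ 1 + " + " + ".join(controls) +
           " + C(province) + [decrease ~ reform]")

iv_res = IV2SLS.from_formula(formula, data=df).fit(
    cov_type="clustered", clusters=df["village_id"]
)
print(iv_res.params["decrease"])   # second-stage coefficient on the burden decrease
print(iv_res.first_stage)          # first-stage estimates and weak-instrument diagnostics
```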
Seemingly unrelated regressions
A natural concern over the validity of our estimation strategy is that rural households' migration decisions may be correlated with other options for family labor allocation. As rural migrants suffer from inferior working conditions and unequal pay, not to mention being denied access to public services in the cities because of the hukou system, Chinese peasants usually consider migration a second-best choice that they settle for only after failing to obtain comparable local employment (Song & Knight, 2003; Zhao, 1999). Specifically, rural laborers' willingness to migrate may be offset by the availability and attractiveness of local non-farm economic opportunities. To address this concern, we employ three simultaneous equations to investigate alternative rural household labor allocation choices: migration, local non-farm employment, and farming. Given that the disturbance terms are likely to be highly correlated across equations, we use the seemingly unrelated regressions (SUR) method to estimate the following system:

Y_{j,h,v,p} = b_{j,0} + b_{j,1} Dec_{h,v,p} + \theta_j' C_{h,v,p} + e_{j,h,v,p},    j = 1, 2, 3    (3)

where Y_1, Y_2, and Y_3 stand for the shares of rural household laborers who were migrant workers, local off-farm workers, and farmers in 2002, respectively. Dec abbreviates Decrease_{h,v,p}, the amount of financial obligations reduced during 1998-2002, while C refers to the full set of covariates included in Equation (1). We again use the tax-for-fee reform as an instrument to generate policy-induced variation in the reduction of rural financial burdens between 1998 and 2002. Instrumental estimates of the SUR model based on Equation (3) are reported in Columns (1), (5), and (6) of Table 4, respectively. We omit the first-stage regression results from Table 4, as they are identical to the estimates in Panel C of Table 3. We report the SUR estimates only for the most extensive specification, although our results are robust to dropping any of the controls. The negative and statistically significant coefficient in Column (1) confirms the braking effect of financial burden alleviation on rural-to-urban migration. However, as reported in Column (5), we observe no relationship between the financial burden decrease and rural laborers' engagement in local non-farm employment: the coefficient is negative and statistically insignificant. In contrast, as the tax-for-fee reform mitigates financial burdens, rural laborers flow back to work on their family farm plots, which is manifested in the strong positive correlation between the financial burden decrease and farming, as shown in Column (6). 6 The estimates of our SUR model help relieve the concern that the tax-for-fee reform may have had unintended policy effects that fostered rural non-farm job opportunities and thereby decelerated migration, in which case our instrumental estimation strategy would violate the exclusion restriction. They also suggest that our finding is robust to the concern that the slowdown of rural-to-urban migration may correlate with the development of rural non-farm employment (Lai, 2002).
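A rough sketch of the three-equation system follows, again with hypothetical file and column names. For brevity it omits the instrumenting of the burden decrease that the paper applies; it is meant only to show the mechanics of estimating the share equations jointly.

```python
# Hedged SUR sketch for the migration / local off-farm / farming share equations.
import pandas as pd
import statsmodels.api as sm
from linearmodels.system import SUR

df = pd.read_csv("chip2002_households.csv")   # assumed household-level file

X = sm.add_constant(df[["decrease", "migration98_village", "male_share",
                        "avg_age", "avg_schooling", "dependency_ratio", "hh_size"]])
equations = {
    "migration":     {"dependent": df["share_migrant"],       "exog": X},
    "local_offfarm": {"dependent": df["share_local_offfarm"], "exog": X},
    "farming":       {"dependent": df["share_farming"],       "exog": X},
}
sur_res = SUR(equations).fit(cov_type="robust")
print(sur_res)
```

Note that with identical regressors in every equation, SUR point estimates coincide with equation-by-equation OLS; the system framework is still useful here because it reports the cross-equation residual correlations discussed in note 6.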
Heterogeneous effects
Motivated by the concern that our household-level measure of the migration rate could be unstable owing to small denominators (the numbers of household laborers), we change the unit of analysis to the individual rural laborer, to assess whether the decrease in a rural household's financial burden influences an individual laborer's migration behavior. We estimate the following equation using a binomial probit model:

Pr(Migration_{i,h,v,p} = 1) = \Phi(b_0 + b_1 Decrease_{h,v,p} + b_2 Migration98_{v,p} + \gamma' X_{v,p} + \delta' X_{h,v,p} + \lambda' X_{i,h,v,p} + \eta_p)    (4)

where Migration_{i,h,v,p} is an indicator variable that equals one if rural laborer i was a migrant worker and zero otherwise, and \Phi denotes the standard normal cumulative distribution function. The vector X_{i,h,v,p} denotes a set of laborer characteristics that may correlate with individual migration behavior, including gender, age, years of schooling, and Communist Party membership. All other variables are defined as before. We again exploit the tax-for-fee reform to instrument for the financial burden decrease, with the second-stage results shown in Column (1) of Table C1 in the Supplementary materials. The negative coefficient on the financial burden decrease is statistically significant at the 1% level, even after correcting standard errors for within-household clustering of observations, suggesting that our empirical finding is not sensitive to the household-level measure of the migration rate.
Recall that, according to our instrumental SUR estimates, rural households transfer laborers from migrant work back to agriculture after the tax-for-fee reform. This is plausibly because rural households reevaluate whether the marginal returns to migration are still worth the cost when the reform alleviates their financial burdens and improves post-tax income. While the existing literature suggests that the tax-induced migration effect is most pronounced when migration costs are low (Kleven et al., 2013), we conjecture that in the case of rural China, the dampening effect of relief in financial burdens should be more noticeable for rural laborers with lower marginal returns in the urban labor market. To the extent that the data allow, we test this conjecture by examining the heterogeneous effects of the financial burden decrease on migration by rural laborers' age, given that migration costs rise with age (Bodvarsson, Hou, & Shen, 2014; Schwartz, 1976). Allowing for this heterogeneity, Equation (4) becomes:

Pr(Migration_{i,h,v,p} = 1) = \Phi(b_0 + b_4 Decrease_{h,v,p} + b_5 (Decrease_{h,v,p} \times Age_{i,h,v,p}) + b_2 Migration98_{v,p} + \gamma' X_{v,p} + \delta' X_{h,v,p} + \lambda' X_{i,h,v,p} + \eta_p)    (5)

where all other variables take the same meanings as in Equation (4), except that Age_{i,h,v,p} refers to the age of rural laborer i. We then instrument for Decrease_{h,v,p} and Decrease_{h,v,p} \times Age_{i,h,v,p} with the tax-for-fee reform and the interaction between the reform and the individual's age, respectively, to establish causal inferences for b_4 and b_5. The results are plotted in Figure 1, in which the horizontal axis indicates the age of rural laborers and the vertical axis measures the change in migration behavior corresponding to the financial burden decrease, conditional on province fixed effects and a series of village, household, and individual characteristics (namely, b_4 + b_5 \times Age_{i,h,v,p}). Our estimation of Equation (4) suggests that the overall treatment effect of the financial burden decrease under the homogeneous-effect assumption is -0.002, statistically significant at the 1% level (Table C1, Supplementary materials). However, the downward trend of the solid line in Figure 1, together with the dotted lines indicating the 95% confidence intervals (with standard errors clustered at the household level), shows that the homogeneous-effect assumption does not hold: the inhibiting effect of financial burden alleviation on migration is strongly contingent on the age of rural laborers. For rural laborers aged 35 or below, the effect of the financial burden decrease remains statistically insignificant, although it turns negative from age 24 onwards. In contrast, the impact of reducing financial obligations is pronounced exclusively among rural laborers over the age of 35, and the older the laborer, the larger the negative effect. Overall, the findings in Figure 1 confirm that the effect of this rather modest relief in financial burdens is driven largely by rural laborers with lower marginal returns to migration, in this case older laborers, distinguishing our research context from the tax-fleeing migration of high-skilled, high-income laborers widely discussed in the existing literature.
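The age-varying effect b_4 + b_5 × Age can be traced with a simple interacted probit, sketched below. Unlike the paper, this simplified version does not instrument the decrease terms, and the file and variable names are hypothetical.

```python
# Hedged sketch of the heterogeneity check: probit with a Decrease x Age interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

labor_df = pd.read_csv("chip2002_laborers.csv")   # assumed individual-level file

probit = smf.probit(
    "migrant ~ decrease + decrease:age + age + male + schooling + party_member + C(province)",
    data=labor_df,
).fit()

ages = np.arange(16, 61)
effect = probit.params["decrease"] + probit.params["decrease:age"] * ages  # b4 + b5*Age
for a, e in zip(ages[::10], effect[::10]):
    print(a, round(e, 4))   # index-scale effect; in the paper it turns negative at older ages
```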
In light of this finding, we speculate that the effect of financial burden relief may vary across types of migration with unequal economic and psychological costs. To explore these potential heterogeneous effects, we further disaggregate rural-to-urban migration, based on the administrative borders that laborers cross, into three categories: migration to nonnative townships (within native counties), migration to nonnative counties (within native provinces), and migration to nonnative provinces. We then extend the SUR model in Equation (3) to distinguish the impacts of the financial burden decrease on the three categories of migration, as well as on local non-farm employment and farming. The estimates reported in Columns (2)-(4) of Table 4 show that the decelerating effect of financial burden alleviation is significant only for migration across provincial boundaries, whereas the impact is muted for migration within native provinces. Assuming the cost of migration increases as it spans higher-level administrative boundaries (Carrington, Detragiache, & Vishwanath, 1996; Lucas, 2001), this analysis confirms that the negative effect of financial burden mitigation during the tax-for-fee reform is confined to migration behaviors that are economically or psychologically more costly, namely, those bearing lower marginal returns. Taken together, our findings on heterogeneous effects across age groups and migration types support our earlier conjecture that the tax cut deters migration mainly because, relative to rising rural income levels, the marginal returns to migration become less attractive to peasants. This is especially true for migration with higher costs, such as migration across provinces or at older ages.
Concluding remarks
In this paper, we analyze the effects of taxation on the rural-to-urban migration of Chinese peasants. We place particular emphasis on the tax-for-fee reform, which constituted a policy intervention on the rural tax burden, to examine its impact on rural households' migration decisions. Our investigation yields three main results. First, the rural tax cut resulting from the tax-for-fee reform significantly reduces rural-to-urban migration; in terms of magnitude, a 10% alleviation of the rural tax burden leads to roughly a 10% decline in migration. Second, it is unlikely that the reduction in migration is driven by unintended effects of the tax-for-fee reform in fostering rural nonagricultural job opportunities: as the government mitigates rural taxes, peasants tend to farm more and migrate less, leaving their local non-farm employment unaffected. Third, migration across higher-level administrative boundaries, or at older ages, is more responsive to taxation.
To the best of our knowledge, our study is the first to show that tax-induced migration exists among low-skilled, low-income laborers in developing economies. The results suggest that future research should carefully examine the impact of tax policies not only on labor supply but also on labor mobility. Joining a small and recent literature, our study shows that the migration behavior of laborers at both ends of the income distribution is highly sensitive to changes in tax liability. Estimating the elasticity of migration with respect to taxation is therefore important for policy debates on optimal tax formulas and their impacts on the redistributive power of governments, as well as on the healthy development of labor markets.

Notes

1. They are Anhui, Hebei, Inner Mongolia, Heilongjiang, Jilin, Jiangxi, Shandong, Henan, Hubei, Hunan, Chongqing, Sichuan, Guizhou, Shaanxi, Gansu, Qinghai, Ningxia, Jiangsu, Zhejiang, and Shanghai. Provincial governments were given the autonomy to determine whether to conduct the reform in the entire province or only in part of the jurisdiction.
2. Our results remain robust if we treat rural laborers who work across townships but live away from their households for fewer than 180 days in 2002 as local non-farm workers. Our definition of migration does not involve the duration of stay at the destination, since the crux is that the formerly predatory taxation in rural China pushed peasants out to search for better income opportunities.
3. 1998 is chosen as the benchmark mostly because of data availability. In addition to rural households, the CHIP 2002 survey instrument was also administered to the leaders of all 961 villages to gather information on the implementation of the tax-for-fee reform, as well as on some basic village characteristics in 1998, including the village-level labor migration rate.
4. The difference is 114 RMB (163 - 49) and is statistically significant at the 1% level.
5. To put these numbers into perspective, the elasticities with respect to the net-of-tax rate of the number of foreign soccer players in Kleven et al. (2013), and of the number of foreign superstar inventors in Akcigit et al. (2016), are both around one.
6. The baseline SUR model without an instrumental variable produces qualitatively identical results. Moreover, the correlation of residuals is -0.41 between migration and local non-farm work, and -0.37 between migration and farming, suggesting that whatever unobserved factors lead to migration tend to negatively influence local employment, especially off-farm occupations. This verifies our assertion that rural households make labor-allocation decisions by comparing the marginal gains and losses of migration and its local alternatives.
|
v3-fos-license
|
2017-06-21T23:19:14.209Z
|
2013-06-06T00:00:00.000
|
16518742
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-13-548",
"pdf_hash": "fde9d220a22326a7c65b6f7ef20d6c8d42adae1a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2985",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "32c35e9f024c1d9c00f4a14ab1e1faf223060c3a",
"year": 2013
}
|
pes2o/s2orc
|
Prevalence of health-risk behaviours among Canadian post-secondary students: descriptive results from the National College Health Assessment
Background It is important to understand health-risk behaviours among young adults, as modifying these behaviours can enhance health and lessen the risk of chronic illness later in life. The purpose of the current study was to determine the prevalence of a broad range of health-risk behaviours among post-secondary students from across Canada, and to determine whether institutional variability exists in the prevalence of these behaviours. Methods Data were collected from 8,182 undergraduate students enrolled in one of eight Canadian post-secondary institutions during the fall or spring of 2009, using the National College Health Assessment (NCHA). The NCHA consists of 60 questions assessing student health status and engagement in various health behaviours. Results Findings show relatively low prevalence of smoking (13.1%), marijuana use (17.5%), other illicit drug use (3.5%), and risky sexual behaviour (12%). Binge drinking, however, was much more common, with nearly 60% of students consuming 5 or more alcoholic drinks on a single occasion during the past 15 days. Similarly, prevalence rates for physical inactivity (72.2%), inadequate sleep (75.6%) and low fruit and vegetable intake (88.0%) were all high among the student population. Results also found that students in smaller institutions exhibited higher rates of inactivity, binge drinking, and marijuana and illicit drug use compared to institutions with a larger student body. Conclusion Overall, findings point to the need for more concentrated health promotion campaigns, specifically targeting sleep, fruit and vegetable intake, and greater participation in physical activity. Given evidence of some institutional variability, future efforts are warranted in exploring how best to increase institutional commitment for collecting surveillance data on Canadian post-secondary students.
Background
Despite health promotion efforts, young adults continue to engage in high rates of health-risk behaviours [1,2]. Many risk factors such as smoking, excessive alcohol consumption, inadequate nutritional status, and low levels of physical activity have direct behavioural links to chronic disease, yet are amenable to change. Modification of these health-risk behaviours can enhance health and lessen the risk of chronic illnesses later in life [2,3].
Evidence suggests that the onset of diseases related to physical inactivity, such as atherosclerosis, obesity, and diabetes, is emerging more frequently in the second and third decades of life [4]. Young adults, including post-secondary students, however, do not attribute such health-risk behaviours to the development and progression of these diseases [5]. The transition out of high school is a critical period during which individuals begin to take definitive steps towards independence, and it is considered the first major transition an individual faces [6].
Together, there is growing recognition that post-secondary students should be a target population for health promotion efforts, including suggestions that their health is an 'important and neglected public health problem' [1,7,8].
Across Canada and the United States, epidemiological evidence indicates that the prevalence of smoking and binge drinking rapidly increases as the adolescent population moves towards early adulthood [5,9-13]. While the initiation of smoking tends to occur prior to early adulthood [13], there is research to suggest that the collegiate years are a critical time when many students begin to experiment with smoking. Approximately 25% of post-secondary students are either daily or occasional smokers [14,15]. Likewise, entry into college/university also appears to lend itself to problem drinking, affecting almost all post-secondary campuses [10]. Nearly 70% of post-secondary students reported alcohol use within the last month [9], with a vast majority of those students reporting at least one session of binge drinking (defined as ≥ 5 drinks in a single occasion) during that time [14,15]. Significant declines in physical activity participation during the transition between high school and post-secondary education have also been evident [16-18].
Given the public health implications of these health-risk behaviours, greater efforts must be made to prevent their occurrence as Canadian adolescents make the transition into early adulthood. The Canadian Campus Survey [1,11] has been the only focused attempt at estimating the prevalence of health behaviours in the Canadian post-secondary population; however, this survey was restricted to alcohol and drug use. In 2004, 6,282 full-time university undergraduates from 40 universities completed questionnaires by mail or online. Overall, 18.5% and 6.6% of the students reported consuming 5+ and 8+ drinks on a single occasion, 12.7% were current smokers, and 16.7% had smoked cannabis in the previous 30 days.
Unfortunately, there is no coordinated system for collecting health-related data on post-secondary students in Canada. This is a significant gap because, to inform interventions in this setting, a mechanism is required to assess the prevalence and correlates of health behaviours. This information can then, in turn, guide intervention prioritization, selection, implementation, and ongoing evaluation and program/health service refinement. There have been some efforts to address this gap. Formed in 1973, the Canadian Association of College and University Student Services (CACUSS) is a professional bilingual association representing and serving individuals who work in Student Affairs and Services at Canadian post-secondary institutions. The Canadian Organization of University and College Health (COUCH), a division of CACUSS, is dedicated to improving the health and wellness of college and university communities. In the absence of a Canadian surveillance mechanism, COUCH has advocated for institutions to subscribe to the National College Health Assessment service of the American College Health Association (NCHA-ACHA; for more detail see [13]). Data from one of these institutions have been used to address a range of research questions [19,20].
In 2009, eight Canadian institutions participated in the NCHA-ACHA. The primary purpose of the current study was to examine the prevalence of a broad range of health-risk behaviours among Canadian post-secondary students from these institutions. The secondary purpose was to test for institutional variability in the prevalence of these health-risk behaviours.
Database and sample
Data were collected from 8 English-speaking institutions across 5 provinces in Canada during the fall or spring of 2009, using the NCHA-ACHA. The NCHA consists of 60 questions and approximately 300 items, including assessments of student health status and engagement in a variety of health behaviours, and has been evaluated extensively for reliability and validity in US post-secondary students (for further information, see ACHA [21]). Approximately 10% of the student body from each institution was randomly chosen to receive an invitation by e-mail to participate in the survey. Over a 30-day period, each potential participant received three such invitations to complete an online survey, which was maintained by the ACHA. As incentives, participants were entered into draws to receive gift cards. To obtain a more homogeneous group, reflective of the majority of post-secondary students in Canada, only full-time undergraduate students were included in the current analyses. All institutions received approval for administering the survey from their respective University Research Ethics Boards, and the authors received approval from the American College Health Association to conduct the analysis with data from individual institutions.
Student-demographics
Participants provided demographic information such as age, gender, ethnicity, living situation (e.g., on/off campus/ parental home), and year of study.
Smoking, marijuana, illicit drugs
Questions related to cigarette smoking, marijuana use, and illicit drug use asked: "Within the last 30 days, on how many days did you use the following…" Response options ranged from never used and have used but not in the last 30 days to used all 30 days, and were subsequently dichotomized to user (in the past 30 days), or non-user (have not used in the past 30 days).
Binge drinking
The question relating to binge drinking asked, "Within the last 15 days, how many times did you have 5 or more drinks in one sitting?" Respondents indicating that they engaged in one or more sessions of binge drinking during the past 15 days were considered 'binge drinkers', and those who did not engage in binge drinking within the past 15 days were considered 'non-binge drinkers'.
Risky sexual behaviour
Participants were asked, "Within the last 12 months, have you experienced the following as a consequence of your drinking…had unprotected sex?" Responses were categorized as yes (they engaged in this health-risk behaviour) or no (because they do not drink or did not engage in this health-risk behaviour).
Physical inactivity
Moderate-to-vigorous physical activity (MVPA) behaviours were assessed by two items: "On how many of the past 7 days did you: Do moderate-intensity cardio or aerobic exercises (caused a noticeable increase in heart rate, such as brisk walk) for at least 30 minutes?"; and "Do vigorous-intensity cardio or aerobic exercises (caused large increase in breathing or heart rate such as jogging) for at least 20 minutes?" Participants answered on a scale from 0 days to 7 days. Consistent with the former Canadian physical activity guidelines, which specified a minimum of four days of either moderate or endurance activity, scores on the two items were summed and reclassified to reflect insufficiently active (students who engaged in 3 days or less of MVPA per week) or sufficiently active (students who engaged in MVPA on 4 or more days of the week).
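A minimal recode of the two MVPA items, assuming a DataFrame of NCHA responses with hypothetical file and column names, could look like this:

```python
# Hypothetical sketch: sum the two MVPA items and classify activity status.
import numpy as np
import pandas as pd

ncha = pd.read_csv("ncha_2009.csv")   # assumed response file

ncha["mvpa_days"] = ncha["moderate_days"] + ncha["vigorous_days"]
ncha["activity_status"] = np.where(ncha["mvpa_days"] >= 4,
                                   "sufficiently active",
                                   "insufficiently active")
```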
Lack of fruit and vegetable intake
A single item asked participants to indicate: "How many servings of fruits and vegetables do you usually have per day? (1 serving = 1 medium piece of fruit; ½ cup fresh, frozen, or canned fruits/vegetables; ¾ cup fruit/vegetable juice; 1 cup salad greens; ¼ cup dried fruits)." Response options were 0 servings per day, 1-2 servings per day, 3-4 servings per day, and 5 or more servings per day. Consistent with previous research [22], participant responses were dichotomized to reflect insufficient fruit and vegetable intake (consuming < 5 servings of fruit and vegetables each day) or sufficient fruit and vegetable intake (consuming ≥ 5 servings of fruits and vegetables).
Inadequate sleep
Participants were asked "On how many of the past 7 days did you get enough sleep so you felt rested when you woke up in the morning?" Responses ranged from 0 days to 7 days and were subsequently dichotomized to reflect either insufficient sleep (feeling rested on < 4 nights per week) or sufficient sleep on most nights of the week (i.e., ≥ 4 nights).
Results
A total of 10,778 students from across the eight institutions completed the NCHA survey. The final sample size decreased to 8,182 following listwise deletion of participants not meeting the inclusion criteria (see Table 1 for sample characteristics). Participants in the current study were similar to the representative sample included in the 2004 Canadian Campus Survey [1]. Overall, respondents were predominantly female, with most living either off-campus with family or on their own, while year of study was evenly distributed (i.e., 29% first year, 23% fourth year).
Overall, the findings show low prevalence of smoking and drug use. Results indicate that only a small proportion of the student population smoked cigarettes (13.1%), used marijuana (17.5%), used other illicit drugs (3.5%), and/or had unprotected sex as a consequence of being intoxicated (12.0%) over the past month. The prevalence of binge drinking, however, was much higher, with nearly 60% of students reporting consumption of 5 or more alcoholic drinks on a single occasion during the last 15 days. Results also indicated that 72.2% of students were physically inactive (engaged in < 4 days of MVPA per week) and 75.6% were not getting enough sleep to feel rested on at least 4 nights each week. Even less encouraging, 88.5% of the student population consumed < 5 servings of fruits and vegetables each day. It is important to note that the measures of both physical activity and fruit and vegetable intake likely underestimate the actual prevalence of not meeting current physical activity or fruit and vegetable intake guidelines [23].
In testing for institutional variability, whilst trying to maintain school anonymity, comparisons in the prevalence of health behaviours were made between large institutions (campuses with ≥ 20,000 students; n = 4) and small institutions (campuses with < 20,000 students; n = 4). Overall, the results indicate that physical inactivity, binge drinking, marijuana and other illicit drug use, and risky sexual behaviours were all significantly higher among students on smaller campuses (see Table 2). Significant differences also emerged in fruit and vegetable intake, smoking, and insufficient sleep. Further analyses were conducted to determine whether there were significant differences between individual institutions within the larger and smaller categories of schools. With the exception of illicit drug use, which appears to be consistent across all schools, there appear to be significant differences in health behaviours across individual institutions.
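As an illustration of the kind of campus-size comparison reported here, a two-sample test of proportions could be run on each behaviour; the counts below are invented placeholders, and the specific test used in the study may have differed.

```python
# Illustrative comparison of one prevalence (e.g., binge drinking) between
# large and small campuses using a two-sample proportions z-test.
from statsmodels.stats.proportion import proportions_ztest

binge_counts = [2600, 2300]    # hypothetical numbers of binge drinkers (large, small)
sample_sizes = [4600, 3582]    # hypothetical group sizes

z_stat, p_value = proportions_ztest(count=binge_counts, nobs=sample_sizes)
print(round(z_stat, 2), round(p_value, 4))
```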
Discussion and conclusion
Overall, prevalence estimates of health-risk behaviours across the eight Canadian post-secondary institutions ranged from 3.5% for use of illicit drugs to 88.5% for consuming less than 5 servings of fruits and/or vegetables each day. Traditionally, there has been a public health focus on reducing "risky health behaviours", with particular interest directed towards risk factors associated with non-communicable diseases (i.e., smoking, illicit drug use, and binge drinking) [24]. Among these, binge drinking had the highest prevalence (59.4%) in our sample. Although assessed differently, results from the Canadian Campus Survey [1] reported a lower prevalence of binge drinking, with 18.5% of their undergraduate sample consuming 5 or more drinks on a single occasion at least twice a month. The prevalence of smoking (12.7% versus 13.1%), cannabis use (16.7% versus 17.5%), and illicit drug use (2.2% versus 3.5%), however, was comparable to the findings of the population-based study by Adlaf and colleagues [1]. This may suggest that campuses as a whole are doing a good job of dissuading the use of these substances among post-secondary students.
Despite the low prevalence of traditional health-risk behaviours, the majority of students were not engaging in healthy behaviours either. The absence of these positive health behaviours poses its own health risk [25], and was thus also characterized as health-risk behaviour. Findings show that the vast majority of the student population was physically inactive (72.2%), lacking sleep (75.6%), and consuming inadequate servings of fruits and vegetables each day (88.5%). Perhaps a shift away from the traditional public health perspective is required for administrators at post-secondary institutions: in addition to preventing prevailing health-risk behaviours, greater investment should be placed in the promotion of positive health behaviours. A secondary purpose of the current study was to examine institutional variability in health-risk behaviours. Significant between-group differences were found on the basis of campus size. Specifically, institutions with fewer students (i.e., < 20,000) exhibited higher rates of inactivity, binge drinking, and marijuana and illicit drug use compared to institutions with a larger student body (i.e., ≥ 20,000). Furthermore, there appears to be variability between individual institutions irrespective of campus size. Speculatively, these disparities in health-risk behaviour may reflect discrepancies in health-promoting efforts across campuses. For example, institutions with a smaller student population may have less funding available to build capacity for delivering effective campus-wide health-promotion initiatives than institutions with a greater student population. Similarly, there might be institutional differences in health promotion efforts, with each school employing different strategies for targeting student health. The development of audit tools would be helpful for assessing school-level variation in health promotion efforts to confirm such possibilities.
Having a surveillance system such as the NCHA survey that can regularly monitor health-risk behaviours across institutions might make it possible to determine over time which institutions are successful in changing health behaviours of interest. In turn, this information might pinpoint promising policies or strategies associated with such change, which could then be disseminated nationally. This is the first study to collectively examine NCHA data from multiple Canadian post-secondary institutions. As such, our study provides a baseline for the prevalence of health-risk behaviours among Canadian post-secondary students. In the absence of a Canadian surveillance mechanism, the US-based NCHA survey appears to be a promising monitoring tool for collecting health-related data on Canadian post-secondary students. However, more institutions should be encouraged to participate in future surveys. By reaching a greater number of institutions, the collected data would enable opportunities to examine priority health issues affecting the broader Canadian post-secondary population, explore differences between geographic regions, and provide a basis for comparing prevalence and progress against national and provincial norms.
There are some limitations to this study. First, as only eight Canadian post-secondary institutions participated in the 2009 NCHA survey, the reported prevalence may not be representative of the larger Canadian post-secondary student population. Second, only 10% of the student body from each participating institution was randomly invited to participate in the survey; thus there is potential for non-response bias. Third, there were seasonal differences in the administration of the NCHA across institutions: two institutions administered the survey during the spring of 2009, while the remaining six administered it during the fall of 2009. It is therefore possible that these seasonal differences influenced students' responses to some of the measured health-risk behaviours. Finally, while self-report instruments such as the NCHA are useful tools for gathering public health data [26,27], the nature of such data may be influenced by response bias. Additionally, some measures within the NCHA instrument are less than optimal. In particular, physical activity is not assessed in a way that allows ascertaining whether individuals are meeting current physical activity guidelines.
Overall, our study provides preliminary population-level data on the prevalence of common health-risk behaviours among the Canadian post-secondary population. Our findings point to the need for more concentrated health promotion campaigns targeting health-promoting behaviours, including obtaining more sleep, consuming more fruit and vegetables, and greater participation in physical activity. Future efforts are warranted in exploring how best to increase post-secondary institutional commitment to collecting surveillance data. Consideration should also be given to the development and validation of a Canadian Postsecondary Health Surveillance System with more rigorous sampling procedures to ensure representativeness.
|
v3-fos-license
|
2024-03-20T05:11:53.670Z
|
2024-02-24T00:00:00.000
|
268520371
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.7150/thno.89703",
"pdf_hash": "71e62e6ce2f1d95c9dee0c74794a29c8782df79f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2986",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"sha1": "71e62e6ce2f1d95c9dee0c74794a29c8782df79f",
"year": 2024
}
|
pes2o/s2orc
|
Dual regulation of NEMO by Nrf2 and miR-125a inhibits ferroptosis and protects liver from endoplasmic reticulum stress-induced injury
Rationale: The surge of severe liver damage underscores the necessity for identifying new targets and therapeutic agents. Endoplasmic reticulum (ER) stress induces ferroptosis with Gα12 overexpression. NF-κB essential modulator (NEMO) is a regulator of inflammation and necroptosis. Nonetheless, the regulatory basis of NEMO de novo synthesis and its impact on hepatocyte ferroptosis need to be established. This study investigated whether Nrf2 transcriptionally induces IKBKG (the NEMO gene) for ferroptosis inhibition and, if so, how NEMO induction protects hepatocytes against ER stress-induced ferroptosis. Methods: Experiments were conducted using human liver tissues, hepatocytes, and injury models, incorporating NEMO overexpression and Gα12 gene modulations. RNA sequencing, immunoblotting, immunohistochemistry, reporter assays, and mutation analyses were done. Results: NEMO downregulation connects closely to ER and oxidative stress, worsening liver damage via hepatocyte ferroptosis. NEMO overexpression protects hepatocytes from ferroptosis by promoting glutathione peroxidase 4 (GPX4) expression. This protective role extends to oxidative and ER stress. Similar shifts occur in nuclear factor erythroid-2-related factor-2 (Nrf2) expression alongside NEMO changes. Nrf2 is newly identified as an IKBKG (NEMO gene) transactivator. Gα12 changes, apart from Nrf2, impact NEMO expression, pointing to post-transcriptional control. Gα12 reduction lowers miR-125a, an inhibitor of NEMO, while overexpression has the opposite effect. NEMO also counters ER stress, which triggers Gα12 overexpression. Gα12's significance in NEMO-dependent hepatocyte survival is confirmed via ROCK1 inhibition, a Gα12 downstream kinase, and miR-125a. The verified alterations or associations within the targeted entities are validated in human liver specimens and datasets originating from livers subjected to exposure to other injurious agents. Conclusions: Hepatic injury prompted by ER stress leads to the suppression of NEMO, thereby facilitating ferroptosis through the inhibition of GPX4. IKBKG is transactivated by Nrf2 against Gα12 overexpression responsible for the increase of miR-125a, an unprecedented NEMO inhibitor, resulting in GPX4 induction. Accordingly, the induction of NEMO mitigates ferroptotic liver injury.
Introduction
The emergence of acute liver injury (ALI) presents a clinical challenge and represents a key focal point within clinical practice and public health apprehensions [1,2]. It encompasses hepatocellular impairment induced by pharmaceutical agents, herbal compounds, or other chemical entities. This spectrum extends from subtle, asymptomatic elevations in hepatic enzymes to profound hepatotoxicity, ultimately leading to acute hepatic failure or requiring transplantation [3]. Among the many drugs and chemicals implicated in ALI, acetaminophen (APAP) intoxication has been a prominent cause [4,5]. APAP-induced hepatotoxicity therefore draws substantial public attention, prompting research to ascertain its pathology and underlying mechanisms. Nonetheless, the molecular basis of APAP-induced ALI remains incompletely understood.
Ferroptosis is a form of regulated cell death characterized by lipid peroxide accumulation and oxidative damage, which are key features associated with APAP-induced liver injury. This liver injury also occurs when glutathione peroxidase 4 (GPX4) is suppressed in the cell, which leads to a decrease in reduced glutathione (GSH) content and impairs cellular defense against lipid peroxidation, exacerbating ferroptosis [6-9]. This type of cell death also relies on intracellular iron content [10]. Moreover, ferroptosis is closely linked with endoplasmic reticulum (ER) stress, one of the mechanisms implicated in liver pathophysiology [11,12]. Sustained ER stress indeed enhances ferroptosis by promoting iron overload through autophagy [13]. Despite the link between ferroptosis and ER stress, the molecular association between ferroptosis and ER stress regulation remains to be characterized in the context of ALI progression.
Nuclear factor erythroid-2-related factor 2 (Nrf2) protects cells from oxidative injury triggered by external stresses, including xenobiotic intoxication, through antioxidant enzyme induction and inflammatory stress inhibition [21-23]. Nrf2 also controls cell fate determination, induces cell survival molecules such as Bcl-2 and certain microRNAs (e.g., miR-125b), and inhibits pro-apoptotic molecules [24,25]. In addition to apoptosis, Nrf2 activation attenuates necrosis or pyroptosis [26,27]. Since the regulatory roles of Nrf2 in ER stress-induced ferroptosis and its associated targets remain elusive, the present study sought to explore antioxidant proteins that can be transcriptionally controlled by Nrf2 as part of an effort to identify the regulator(s). This study investigated whether Nrf2 transcriptionally induces IKBKG (the NEMO gene) to inhibit ferroptosis and, if so, how NEMO induction protects hepatocytes against ER stress-induced ferroptosis. These findings reveal Nrf2's previously unrecognized role as a transcriptional regulator of IKBKG, providing insights into the underlying mechanisms that govern ferroptosis via GPX4.
In our recent investigation, APAP-induced liver injury was characterized by ferroptosis via GPX4 from ER stress-mediated Gα12 overexpression [28].These findings evidence a potential ferroptosis involvement in APAP-induced liver injury pathogenesis and highlight the need for further investigation into the APAP toxicity and ferroptosis relationship.Another objective of the present study was to ascertain whether Gα12 signaling regulates the Nrf2-NEMO axis, potentially influencing ferroptosis under APAP toxicity.Thus, apart from Nrf2's induction of IKBKG, we wondered if NEMO expression is under the control of the Gα 12 signaling pathway.In this inquiry, this study intriguingly identified miR-125a as a newly determined NEMO inhibitor downstream of the Gα 12 -ROCK1 axis.Consequently, our research results support the dual regulatory pathways of NEMO expression by Nrf2 and miR-125a, which contributes to the inhibition of ferroptosis, protecting the liver from ER stress-induced ALI.The outcomes were validated through patient specimen examination, consolidating identified target relevance in human liver pathophysiology.Hence, this research may offer insights into discovering potential approaches for the treatment of ALI.
Human sample analysis
Human liver specimens were obtained from donors and recipients undergoing liver transplantation from 2011 to 2020 after histologic examination and ultrasonography at Asan Medical Center (Seoul, South Korea) for ALI analysis.During procurement, an intraoperative liver assessment was systematically performed to rule out fibrosis, cirrhosis, steatosis, and other abnormalities before transplantation.All patients in this study provided written informed consent.This study was approved by Asan Medical Center's Institutional Review Board (IRB no.2021-0839) and adhered to the 1975 Declaration of Helsinki ethical guidelines.
Animal models and experiments
The animal experiment protocol was approved by the Seoul National University's Institutional Review Board and conducted under the Institutional Animal Care and Use Committee (IACUC) at Seoul National University (No. SNU-171115-2, SNU-200723-2-2, SNU-201229-4, SNU-201229-5, SNU-201229-6) guidelines.The mice were housed in a 12 h light/dark cycle and relative humidity of 50% ± 5% under filtered, pathogen-free air, with food and water available ad libitum.Male mice at 8 to 12 weeks of age, unless otherwise indicated, were used.To mitigate environmental disparities, mice were acclimated for a minimum of one week preceding each experimental procedure.For the establishment of an acute liver injury model, male C57BL/6 mice underwent overnight fasting followed by a single intraperitoneal dose of APAP treatment (300 mg/kg BW, i.p.), and the liver and blood samples were collected 6 h afterward; In this experiment, we chose a 6 h time point because APAP treatment at the above dose increased serum ALT activities up to 24 h, which peaked 6 h post-treatment [29].Where indicated, mice were given an injection of ferrostatin-1 (1 mg/kg BW, i.p.) 1 h prior to APAP treatment.
Separately, carbon tetrachloride (CCl4, 0.5 mL/kg BW, 1:20 in corn oil, i.p.) was administered twice weekly over the specified duration (6 weeks) as a liver injury model. For the induction of ER stress, male C57BL/6 mice received a single intraperitoneal injection of 2 mg/kg tunicamycin (Tm, Sigma Aldrich) in 150 mM dextrose for 72 h, whereas only dextrose solution was injected into control mice. For BSO treatment, male C57BL/6 mice underwent overnight fasting before a single dose of BSO (1 g/kg BW, i.p.), and tissue and blood samples were obtained 6 h later. For the in vivo rescue experiment, male C57BL/6 mice were fasted overnight and treated with a single dose of APAP (300 mg/kg BW, i.p.). After 1 h, the mice were exposed to ripasudil (50 mg/kg BW, i.p.), and tissue samples were collected 5 h afterward.
Nrf2 knockout (KO) mice supplied by RIKEN BioResource Center (Tsukuba, Japan) were bred and maintained.Details of the generation of the Gna12 KO mice used in this study have been described previously [30].
Statistical analyses
Statistical significance was tested via two-tailed Student's t-tests, Mann-Whitney U test, one-way ANOVA coupled with Bonferroni's method, Tukey's honestly significant difference test, or the least significant difference multiple comparison procedure, when appropriate.Correlation coefficients (r) were determined via Pearson's or Spearman's correlation methods.Differences were considered significant at P < 0.05.Statistical analyses were performed using IBM SPSS Statistics 26 software or Prism version 8.0 (GraphPad Software).
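For orientation, the sketch below runs the kinds of tests named above (t-test, Mann-Whitney U, one-way ANOVA with a post hoc procedure, Pearson and Spearman correlation) on synthetic values in Python rather than SPSS/Prism; the group sizes, readouts, and numbers are all invented for illustration and are not the authors' data or analysis script.

```python
# A minimal, self-contained sketch of the statistical tests named above, on synthetic data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
vehicle = rng.normal(100, 10, 6)          # e.g., serum ALT, vehicle group (hypothetical)
apap    = rng.normal(170, 15, 6)          # e.g., serum ALT after APAP (hypothetical)
rescue  = rng.normal(120, 12, 6)          # e.g., APAP + rescue treatment (hypothetical)

# Two-group comparisons (parametric and non-parametric)
print(stats.ttest_ind(vehicle, apap))      # two-tailed Student's t-test
print(stats.mannwhitneyu(vehicle, apap))   # Mann-Whitney U test

# Three groups: one-way ANOVA followed by a post hoc multiple-comparison procedure
print(stats.f_oneway(vehicle, apap, rescue))
values = np.concatenate([vehicle, apap, rescue])
groups = ["vehicle"] * 6 + ["APAP"] * 6 + ["APAP+rescue"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # Tukey HSD at P < 0.05

# Correlation between two continuous readouts measured in the same samples
ikbkg_mrna = rng.normal(1.0, 0.2, 12)                  # hypothetical transcript levels
alt_units  = 200 - 80 * ikbkg_mrna + rng.normal(0, 10, 12)
print(stats.pearsonr(ikbkg_mrna, alt_units))
print(stats.spearmanr(ikbkg_mrna, alt_units))
```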
Additional details regarding materials and experimental protocols are provided in the Supplementary Materials and Methods.
Inhibition of IKBKG transcript levels in liver injury patients
As a first step toward identifying prominent liver injury regulators, pathways were analyzed using a public dataset (GSE99878); Tolvaptan, known for its capacity to cause ALI, induced a marked shift in the transcriptional heatmap profile (Figure 1A, left).Notably, NF-κB signaling process-associated gene sets were the most downregulated (first rank) in tolvaptan-treated human primary hepatocytes among hallmark pathways (Figure 1A, right).In addition, NF-κB-related gene sets were diminished among the top 15 Wikipathways in subsequent leading-edge analyses, which elucidated overlaps between NF-κB-related pathways and the associated genes (Figure 1B).
Next, we assessed gene ontology pathway changes in APAP-induced acute liver failure patients and healthy individuals (GSE74000); 1220 genes were significantly downregulated, whereas 348 genes were upregulated (Figure 1C, upper).Interestingly, the genes were tagged by several gene ontology (GO) terms analyzed in the Reactome pathway; 13 of each were clustered into five GO groups, belonging to the 'NF-κB signaling-related pathways' among downregulated genes in patients (Figure 1C, lower).GO and biological process analyses of the same dataset confirmed downregulated genes associated with the NF-κB pathway in the same patients (Figure S1A).Similarly, GO and biological process analyses indicated enhanced 'regulation of acute inflammatory response' gene expressions (Figure S1B).
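For readers unfamiliar with the filtering behind such volcano plots, the short sketch below applies the stated cut-offs (absolute fold-change > 1.5, adjusted P < 0.05) to a toy table; the gene names, values, and column names are assumptions for illustration only, not values from GSE74000.

```python
# Toy differential-expression filter using the cut-offs quoted above.
import numpy as np
import pandas as pd

deg_table = pd.DataFrame({
    "gene":   ["IKBKG", "CHUK", "IKBKB", "GPX4"],   # illustrative genes
    "log2fc": [-1.8, -0.9, 0.1, -1.2],              # hypothetical log2 fold-changes
    "padj":   [0.001, 0.20, 0.80, 0.01],            # hypothetical adjusted P-values
})

cutoff = np.log2(1.5)                               # |fold-change| > 1.5 on the log2 scale
deg_table["call"] = np.select(
    [
        (deg_table["log2fc"] >  cutoff) & (deg_table["padj"] < 0.05),
        (deg_table["log2fc"] < -cutoff) & (deg_table["padj"] < 0.05),
    ],
    ["up", "down"],
    default="not significant",
)
print(deg_table)
```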
To explore the new functional molecule(s), we focused on the IκB kinase (IKK), which participates in upstream NF-κB signaling and promotes NF-κB activation in response to pro-inflammatory stimuli [31].
In another public GEO database analysis regarding HBV-associated acute liver failure (HBV-ALF) patients (GSE38941), we assessed fold change values obtained from mRNA levels of enzyme complex IKK subunits (i.e., IKKα, IKKβ, and IKKγ).Relative levels in mRNA expression of IKBKG and CHUK were significantly downregulated in patients with hepatic necrosis, whereas IKBKB was unchanged (Figure 1D).To validate the IKBKG and CHUK association, we used our own samples of patients with ALI on drugs (i.e., liver intoxications from herbs, drug medications, or unknown origins), as described previously [28].Among transcripts, IKBKG was the most significantly diminished in patients with ALI compared to healthy subjects (i.e., -70.7%) (Figure 1E).These results substantiate the inverse IKBKG transcript level and hepatic injury association.
Nrf2 and NEMO associations during liver injury
We first analyzed the RNA sequencing (RNA-seq) dataset from APAP-subjected mice (GSE104302) to explore NEMO regulation.Principal component analysis (PCA) exhibited a gene expression segregation between APAP and the vehicle (Figure 2A, left).Differentially expressed genes (DEGs) accounted for 11.5% of the entire transcriptomes; Among the 2652 DEGs, 816 were downregulated, whereas 1836 were upregulated (Figure 2A, middle).In the Biocarta analysis for the downregulated gene group, several biological processes expressed high-fold enrichments in 'Mechanism of APAP activity and toxicity' and 'Extrinsic prothrombin activation' (foremost affected), and 'Oxidative stress-induced gene expression via Nrf2' (second-most affected) (Figure 2A, right).Furthermore, we analyzed GO gene sets and GSEA hallmarks using the transcriptome dataset from mouse liver (GSE104302, GSE173595); APAP treatment affected protein refolding genes (Figure S2A) and those linked to the 'unfolded protein response' (Figure S2B).
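As a rough illustration of the PCA-based sample separation described above, the sketch below runs a two-component PCA on a synthetic log-expression matrix; the matrix, group sizes, and the block of "responsive" genes are invented for illustration and do not reproduce the GSE104302 analysis.

```python
# Illustrative PCA on a synthetic samples-x-genes expression matrix (not the real dataset).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
vehicle = rng.normal(5.0, 1.0, size=(3, 500))    # 3 vehicle livers, 500 genes (synthetic)
apap    = rng.normal(5.0, 1.0, size=(3, 500))    # 3 APAP livers (synthetic)
apap[:, :100] += 3.0                             # a block of APAP-responsive genes (synthetic)

expr = np.vstack([vehicle, apap])                # rows = samples, columns = genes
scores = PCA(n_components=2).fit_transform(expr) # per-sample scores on the first two PCs
print(scores.round(2))                           # with this strong synthetic shift, the two
                                                 # groups tend to separate along PC1
```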
Considering the Nrf2 and hepatic injury association, we experimentally assessed the potential Nrf2 and NEMO correlation.APAP treatment in mice inhibited NEMO and Nrf2 in the liver with increased Grp78 levels (Figure 2B and C), as confirmed in primary hepatocytes (Figure S2C).Additionally, we examined other liver injury inducer effects (i.e., carbon tetrachloride [CCl4] and tunicamycin [Tm]), discovering that the toxicants treatment displayed similar effects (Figure 2D and E).In analyzing the diclofenac (i.e., an agent that causes liver injury) medication dataset, Ikbkg mRNA levels were inversely correlated with those of cellular stress markers (Hspa5 and Ddit3) (Figure 2F).These findings support Nrf2 and NEMO inhibition and their correlations with liver injury.
NEMO induction of GPX4 for the inhibition of ferroptosis
To understand the direct effect of chemical intoxication on NEMO's uncharacterized role in a different hepatocyte death pathway, we performed KEGG analysis using our dataset (GSE104302); APAP intoxication promoted the ferroptosis pathway (second rank) among the different types of cell death (Figure 3A and Figure S3A). Moreover, NEMO ablation in hepatocytes upregulated ferroptosis-associated gene sets in Biological process, Reactome pathway, and Wikipathway analyses (Figure 3B). In subsequent experiments, NEMO's modulation of ferroptosis biomarkers in the liver and hepatocytes was examined. NEMO overexpression in hepatocytes, achieved through hydrodynamic injection in mice [32], completely reversed the inhibitory effect of APAP on GPX4, along with decreases in 3-nitrotyrosine (3-NT) and 4-hydroxynonenal (4-HNE) levels (Figure 3C). Consistently, the reduced glutathione (GSH) content was recovered in the liver (Figure 3D). In AML12 cells, NEMO overexpression and siRNA knockdown affected GPX4 levels (Figure 3E), corroborating NEMO's direct antioxidant effect in hepatocytes. In addition, APAP treatment reduced Gpx4 transcript levels, which was not reversed by NEMO overexpression (Figure S3B), suggesting that NEMO is unlikely to transcriptionally activate GPX4. In the cycloheximide experiment, modulation of NEMO by overexpression and siRNA knockdown changed GPX4 levels, supporting an effect of NEMO on GPX4 stabilization (Figure S3C). Moreover, NEMO overexpression prevented APAP from increasing Fe2+ levels (Figure S3D). We further examined mitochondrial fusion and fission marker transcripts (i.e., Mfn1, Mfn2, and Opa1 for fusion; Fis1 and Drp1 for fission); APAP treatment decreased all of these marker transcripts, but this effect was not changed by NEMO overexpression (Figure S3E).
Subsequently, we confirmed that treatment with DL-buthionine-[S, R]-sulfoximine (BSO), an agent that induces ferroptosis and depletes cellular GSH through GPX4 inhibition [10], inhibited NEMO in the liver and AML12 cells.Erastin, another ferroptosis inducer, also inhibited NEMO and GPX4 (Figure 3F and G).Consistently, treatment with ferrostatin-1 (Fer-1), a specific antioxidative ferroptosis inhibitor [6,33], ameliorated liver injury and enhanced NEMO and GPX4 levels against APAP (Figure 3H and I).Similar outcomes were obtained for Nrf2, NEMO, and GPX4 in mouse primary hepatocytes (Figure S3F), supporting the antioxidant spin-trapping effect of Fer-1.In this event, an increase of Nrf2 by Fer-1 may contribute to GPX4 expression against APAP.All of these results provide evidence that NEMO inhibits hepatic ferroptosis by upregulating GPX4 and is controlled by ferroptosis inducers and inhibitors in APAP-induced ALI.
Protective NEMO effects against oxidative and ER stress-induced hepatic injury
In subsequent studies, we examined NEMO effects on oxidative stress and ER stress in hepatocytes.In the public RNA-seq dataset analysis (GSE61100), NEMO gene knockout resulted in DEGs accounting for 1.7% of total transcriptomes; Among the 423 DEGs, 40 were downregulated, while 383 were upregulated (Figure 4A, left).NEMO abrogation upregulated 383 genes, of which 29 were associated with reactive oxygen species (ROS) pathways in the Sankey diagram visualization (i.e., macrophage markers and inflammatory response pathways) (Figure 4A, right).Conversely, downregulated genes were related to lipid metabolic processes in the dataset from hepatocyte-specific NEMO knockout mice (Figure S4A).Hallmark pathway analysis of our RNA-seq dataset (GSE173595) from APAP-treated mice conveyed upregulated gene sets associated with ROS and ER stress (Figure S4B and C) [34].In our experiment, NEMO overexpression prevented APAP and H2O2 from increasing DCF fluorescent intensity in HepG2 cells (Figure 4B), confirming NEMO's ability to inhibit ROS.In addition, NEMO overexpression in hepatocytes through hydrodynamic gene deliveries attenuated p-PERK and CHOP intensities increased by APAP treatment (Figure 4C).IRE1α, ATF6, and other stress markers were largely unaffected.In addition, PERK overexpression enhanced APAP's inhibition on GPX4, whereas PERK siRNA exerted the opposite effect; However, PERK modulations did not change NEMO levels (Figure S5).These data support the notion that NEMO increases GPX4 by inhibiting PERK against ER stress.Furthermore, NEMO overexpression inhibited ALT and AST activities, ameliorating liver histopathology and TUNEL staining intensity (Figure 4D and E).Consistently, cell death markers including p-JNK, p-RIP1, p-RIP3, and p-MLKL were all inhibited (Figure 4F).Our results support the concept that NEMO inhibits oxidative stress and ER stress in association with liver protection.
Nrf2-mediated transactivation of IKBKG (NEMO)
We next explored whether Nrf2 transcriptionally activates the IKBKG gene. Sulforaphane (SFN, an Nrf2 inducer)-treated HepG2 cells exhibited increased IKBKG mRNA levels, but not those of CHUK or IKBKB (Figure 5D). Other Nrf2 activators (i.e., tert-butylhydroquinone [tBHQ] and oltipraz [Olt]) exerted the same effects (Figure 5E). We also confirmed the nuclear accumulation of Nrf2 induced by these activators (SFN, Olt, and tBHQ) in HepG2 cells (Figure S6B). Likewise, these compounds increased Ikbkg mRNA levels in AML12 cells (Figure S6C).
The role of the Gα 12 -ROCK1 axis in NEMO regulation
Following the observation of significantly inhibited NEMO by APAP-induced ER stress, we further probed into the mechanistic underpinnings of NEMO downregulation induced by APAP, which aimed to elucidate factors beyond NEMO's transcriptional activity.Having previously identified Gα12 overexpression by ER stress influences ROCK1-dependent ferroptosis [28], we wondered whether Gα 12 modulations affect NEMO expression.Notably, Gα 12 KO enhanced gene sets associated with NF-κB-related pathways in the GSEA analysis regarding our RNA-seq data from WT and Gα 12 KO mice (GSE173595) (Figure S7A).Also, Gα 12 deficiency enhanced basal Ikbkg mRNA levels in the liver, preventing APAP from inhibiting Ikbkg (Figure 6A and Figure S7B).In addition, immunoblotting assays using primary hepatocytes demonstrated that Gα 12 abrogation considerably reversed APAP's inhibition on NEMO (Figure 6B).Consistently, Gα 12 overexpression using a liver-specific lentiviral albumin-Gα 12 (Lv-Alb-Gα 12 ) fortified APAP's inhibition on Ikbkg mRNA, reversing Gα 12 KO plus APAP's promotion of the mRNA (Figure 6C).Immunohistochemistry and immunoblottings further confirmed that Gα 12 overexpression inhibited NEMO in the liver of WT and Gα 12 KO mice challenged by APAP (Figures 6D and E).
Considering that ROCK1 is a downstream Gα 12 signaling effector, we further examined whether ROCK1 modulations impacted NEMO levels.ROCK1 knockdown and chemical inhibition (ripasudil) prevented APAP from inhibiting NEMO in primary hepatocytes (Figure S7C).Immunohistochemistry and immunoblottings corroborated ripasudil's influence on NEMO in the liver (Figure 6F and G) while also ameliorating liver injury (Figure S7D).Nrf2 and p-MLC levels were accordingly changed by a downstream ROCK1/2 effector; Ripasudil's ability to heighten NEMO levels was seemingly greater than Nrf2, indicating that the Gα12 pathway may additionally regulate NEMO (Figure 6G).Together, these results support that the Gα 12 -ROCK1 axis negatively controls NEMO under liver injury conditions.
miR-125a inhibition of NEMO downstream from Gα 12
Next, we aimed to ascertain how microRNAs (miRNAs) are involved in post-transcriptional NEMO regulation downstream from the Gα12 pathway. Four miRNAs with conserved sites and the potential to bind the 3'-UTR of IKBKG mRNA were selected using the TargetScan database (Figure 7A). Of the four, miR-125a was the most significantly elevated in ALI patient livers (Figure 7B). Then, miR-125a's effect on the IKBKG 3'-UTR was examined. The IKBKG 3'-UTR and the miR-125a seed sequence exhibit nearly complete pairing (Figure 7C, upper). When assessing the effects of miR-125a modulation on NEMO protein, miR-125a antisense oligonucleotides (ASOs) increased NEMO levels in HepG2 cells, whereas the miR-125a mimic exerted the opposite effect (Figure 7C, lower). This effect was confirmed in primary hepatocytes (Figure 7D). Furthermore, GO analysis was performed to identify downstream pathways and the functional role of miR-125a-targeted genes. The target genes correlated with the MAPK and NF-κB pathways (Figure S8A, left), which were then implemented in a GO analysis (red color), establishing that they were correlated with the 'Apoptotic process' (the most significantly affected) (Figure S8A, right and S8B). Further, we verified miR-125a's functional role in ferroptosis elicited by APAP or Erastin treatment. As expected, APAP treatment induced morphological changes in HepG2 cells and primary hepatocytes, which were ameliorated by miR-125a ASO pretreatment (Figure 7E, left, S8C, and S8D). APAP-induced ferroptosis biomarkers (i.e., 4-HNE and 3-NT) were also reduced by miR-125a ASO transfection (Figure 7E, right), confirming miR-125a's effect on ferroptosis processes.
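To make the seed-pairing idea concrete, the sketch below checks whether the 7mer site complementary to a miRNA's seed (positions 2-8) occurs in a 3'-UTR fragment, in the spirit of TargetScan-style site prediction; both sequences are placeholders chosen for illustration, not the verified miR-125a or IKBKG sequences, and the real prediction pipeline uses additional criteria.

```python
# Illustrative 7mer seed-match check between a miRNA and a 3'-UTR fragment.

def revcomp_rna(seq: str) -> str:
    """Reverse complement of an RNA sequence."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def seed_site(mirna: str) -> str:
    """7mer site (complementary to miRNA positions 2-8) expected in the target mRNA."""
    return revcomp_rna(mirna[1:8])

mirna_5p = "UCCCUGAGACCCUUUAACCUGUGA"   # assumed mature miRNA sequence, for illustration only
utr_3p   = "AAGCUCAGGGAUUUCUCAGGGAAA"   # hypothetical 3'-UTR fragment

site = seed_site(mirna_5p)
hits = [i for i in range(len(utr_3p) - len(site) + 1) if utr_3p[i:i + len(site)] == site]
print(site, hits)                        # the seed site and where it occurs in the fragment
```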
Then, we observed APAP treatment increased miR-125a, miR-125b, and miR-4319 levels in mice livers.Gα 12 KO completely inhibited APAP's effect on miR-125a but not the others (Figure 7F); This finding is also evidenced by the absence of Gα 12 regulatory effect on miR-125b in the Gα 12 QL microarray dataset analysis (Figure S9), indicating the specific Gα 12 signaling effect on miR-125a.In addition, this inhibitory effect was entirely reversed through enforced Gα 12 expression.Using ripasudil to inhibit ROCK1 resulted in similar outcomes (Figure 7G).In the experiment using WT and Gα 12 KO primary hepatocytes, a deficiency of Gα 12 prevented APAP from inhibiting NEMO, which was completely abrogated by miR-125a mimic transfection, but increased by miR-125a ASO (Figure 7H).Intriguingly, Nrf2 levels were not changed.Overall, our results fortify the concept of the dual NEMO regulatory pathways; (1) Nrf2 transcriptionally induces NEMO, and (2) ER stress-mediated Gα12 overexpression inhibits NEMO de novo synthesis by enhancing miR-125a level, which can be countered by ROCK1 inhibition.
Dysregulation of NEMO, Nrf2, GPX4 and miR-125a in patients with ALI
To further assess the association of IKBKG transcript levels and liver injury in clinical situations, we analyzed samples of patients with ALI on drugs.Significant negative correlations existed between IKBKG and each ALI marker: alanine aminotransferase (ALT), aspartate aminotransferase (AST), prothrombin time international normalized ratio (PT INR), Child-Turcotte-Pugh score (CTP score), MELD score, and total bilirubin contents (Figure 8A).As expected, NFE2L2 and GPX4 transcript levels were decreased in ALI patients (Figure 8B).In patients with ALI or fulminant liver failure, positive correlations existed between IKBKG and NFE2L2; or IKBKG and GPX4 transcripts.Hepatic miR-125a negatively correlated with IKBKG (Figure 8C).We also confirmed the inhibition of NEMO and GPX4 protein levels in patients with ALI, and correlations between NEMO and GPX4 (Figure 8D).In summary, NEMO and GPX4 levels were decreased in the patients through the dual regulatory pathways, exacerbating hepatocyte ferroptosis and ALI (Figure 8E).
Discussion
The worldwide surge of severe liver injury highlights the necessity for identifying new drug targets and therapeutic compounds to treat this disease [11].Recent reports have indicated that lipid peroxide and oxidative stress accumulations, as observed in ferroptosis, contribute to hepatocyte injury and trigger inflammatory responses, exacerbating liver disease progression [8].Developing medications to regulate ferroptosis has garnered substantial attention [35]; however, studies on ferroptosis and its underlying mechanisms are currently insufficient.Hence, future investigations must identify novel regulators and elucidate the underlying mechanisms behind ferroptosis modulation relative to liver injury.
The NF-κB pathway is vital in regulating immune and inflammatory responses in the liver, thereby exerting considerable influence on liver homeostasis and disease processes.The canonical NF-κB activation pathway involves the IKK complex [36], whereas the non-canonical pathway incorporates NF-κB inducing kinase (NIK) upon receptor ligation [37].However, emerging evidence has highlighted that NEMO, a prominent NF-κB activation protein, can protect hepatocytes independent of the canonical NF-κB pathway [38].
Our study uncovered a critical finding that expands upon NEMO's inhibitory role in ferroptosis during APAP intoxication, supporting NF-κB signaling and its influence on cell fate determination. The changes in 3-NT and 4-HNE intensities and their reversal by NEMO overexpression support the ability of NEMO to protect hepatic cells from ferroptosis. Specifically, the outcomes of this study introduce a previously uncharacterized role of NEMO: it is transcriptionally activated by Nrf2, and its induction inhibits hepatic ferroptosis via GPX4. Our findings also indicate that NEMO levels are additionally affected by the Gα12 axis, which is responsible for an increase of miR-125a as a previously unknown NEMO inhibitor. These findings establish significant links not only between Nrf2 and NEMO but also between NEMO and miR-125a controlled by the Gα12-ROCK1 axis.
We found that NEMO depletion causes upregulation of various gene clusters, including those involved in redox state and arachidonic acid metabolism, showing that NEMO may have a more substantial role in regulating cellular homeostasis beyond its well-known function in NF-κB signaling [39]. The reactive intermediate NAPQI, produced from APAP biotransformation, depletes cellular GSH and binds to cellular macromolecules in hepatocytes [8], promoting ferroptosis, as confirmed in the present study. Our study also highlights the importance of GPX4 and its cofactor GSH in removing lipid peroxides and oxidation products of phospholipids and PUFAs [9], enabling hepatocytes to survive the GSH depletion caused by BSO and Erastin treatment [40]. Furthermore, this study provides strong evidence that NEMO participates in cell fate determination mediated by GPX4, which alters the reduced/oxidized glutathione (GSH/GSSG) ratio. This notion is corroborated by the changes in DCF-DA intensity upon APAP exposure following NEMO overexpression.
The PERK/CHOP-dependent JNK pathway has been proven to induce cell death in response to ER stress [41,42].Our findings demonstrate that NEMO overexpression inhibits p-PERK and CHOP upregulation mediated by ER stress, suggesting NEMO's potential in attenuating ER stress-induced hepatocyte injury.However, ALT activity and TUNEL staining intensity changes after NEMO overexpression were more considerable than those in p-PERK and CHOP, indicating that other ER stress mediators may also be involved.IRE1α, a known Gα12 overexpression [28] and eIF2α regulator through PERK [43], potentially contributes to p-PERK/CHOP regulation under APAP intoxication conditions, even when NEMO is present.Therefore, NEMO likely regulates ferroptosis by inhibiting the PERK/ CHOP-dependent JNK pathway and interacting with other mediators to regulate overall cellular responses to ER stress.
APAP-induced liver injury activates the apoptosis pathway and various other regulated necrosis types, namely necroptosis, and pyroptosis [44][45][46].Among diverse and complex cell death pathways, JNK is an integral stress kinase in APAP-induced liver injury and hepatocyte death as it activates RIP1 and engenders necrosome complex formation [47,48].Furthermore, necroptosis is a programmed necrosis triggered by RIP1, RIP3, and MLKL.Previous studies have demonstrated that NEMO overexpression attenuates p-JNK and RIPK1 activation, preventing the RIPK1/FADD/caspase-8 complex from forming to inhibit necrotic cell death [18], consistent with our findings that NEMO overexpression prevents APAP from increasing p-JNK, p-RIP1, p-RIP3, and p-MLKL.In the present study, other gene clusters affected by NEMO depletion include inflammatory response pathways, being consistent with our previous observation confirming JNK's role in NF-κB-mediated inflammatory responses [33].
Our study identified Nrf2 as an IKBKG gene transcription factor, revealing a previously unknown NF-κB pathway regulation.Additionally, the observation that Nrf2 activation upregulates Bcl-2 and other Nrf2-target genes confirmed that Nrf2 could protect cells from chemical stresses while maintaining cellular homeostasis [24].Nrf2 is not only imperative for cell survival as a nuclear receptor but also as a direct regulator of target genes, holding significant scientific importance concerning liver injuries.Moreover, Nrf2 can modulate key cytoprotective and antioxidant enzyme expressions pivotal in mitigating liver damage and promoting tissue repair [49].Understanding the direct Nrf2 regulation of target genes provides insights into molecular mechanisms underlying liver injury and offers potential targets for developing interventions to combat liver pathologies.
Notably, the previous study demonstrated that Gα12 overexpression was associated with ER stress in acute liver injury through ROCK1, mediated by ALOX12 and miR-15a dysregulation [28].This study also aimed to elucidate the newly acknowledged targets downstream from the Gα 12 axis.Our results isolate prominent mechanisms underlying the Nrf2-dependent NEMO-GPX4 pathway's protective effect against ER stress, aligning with the previous report on Nrf2 activation through Gα 12 deficiency [50].Interestingly, ripasudil treatment prompted a substantial NEMO elevation against APAP intoxication but only weakly increased Nrf2 levels.This observation indicates that the Gα 12 -ROCK1 axis plays a distinct NEMO regulatory role, identified by ROCK1-mediated translational control separate from Nrf2-dependent NEMO induction.
Additional efforts enabled us to identify miR-125a as a functional NEMO inhibitor downstream from Gα12.Moreover, our data demonstrating miR-125a inhibition of IKBKG mRNA translation into proteins correlates with ROCK1-mediated inhibitory protein regulation.The different NEMO and Nrf2 levels after ripasudil treatment corroborate miR-125a involvement in NEMO expression downstream from Gα 12 .Also, apoptosis with JNK activation initiates from miR-125a targeted gene clusters associated with the NF-κB pathway, with IKBKG as the core gene.When analyzing miR-125a target genes using GO, we extracted the Biocarta pathway of the 'MAP kinase signaling pathway' and 'NF-κB signaling pathway' (red color) and determined IKBKG as a core gene clustering with the 'JNK phosphorylation and activation mediated by activated human TAK1' pathway (Figure S8B).Thus, the miR-125a-associated gene network may link to the JNK pathway for NEMO regulation.
Gaining a comprehensive understanding of non-canonical NEMO regulation pathways would be crucial for designing therapeutic interventions. This study's RNA-seq analyses, human sample analyses, and in vivo and in vitro outcomes established that ALI downregulates NF-κB and its related pathways. In addition, our findings confirmed that NEMO expression was diminished in ALI patients and was negatively correlated with representative liver injury markers. Consistently, we observed that miR-125a levels were significantly increased in the patient samples, matching the miR-125a upregulation reported in the livers of mice with fibrosis [51]. Moreover, the close associations between NFE2L2 and IKBKG, IKBKG and GPX4, and miR-125a and IKBKG in our ALI sample analyses further support the prominence of these pathways in human liver health and disease.
Collectively, our research reveals that NEMO mitigates hepatocyte ferroptosis by inducing GPX4. This effect is achieved through the inhibition of both chemical-induced ROS production and ER stress. Furthermore, the outcomes illuminate the role of Nrf2 as a transcriptional regulator of NEMO. We have also uncovered miR-125a as a previously unrecognized suppressor of NEMO de novo synthesis, acting downstream from the Gα12-ROCK1 axis. Together, our data evidence dual antagonistic regulation of NEMO by Nrf2-dependent transcription and ROCK1-mediated microRNA control of protein synthesis (Figure 8E). Thus, the identified molecules likely act together in ER stress-mediated ferroptosis. These newly identified pathways and targets have implications for developing therapeutic strategies against ALI.
Figure 1 .
Figure 1.Inhibition of IKBKG transcript levels in ALI patients.(A) Heatmap and hierarchical correlation analysis of DEGs (absolute fold-change > 1.5 and P < 0.05) (left) and GSEA-enrichment plot of representative gene sets (NES = -2.584,FDR = 0.000) negatively correlated with tolvaptan treatment in primary human hepatocytes using hepatic transcriptome data (n = 3 each, GSE99878) (right).The top 20 genes comprising the enrichment score's leading edge are indicated in the corresponding heatmap (blue, low; red, high).(B) Leading-edge analysis (left) and bar graphs (right) of significantly enriched GSEA Wikipathways using the same data as in A. GSEA leading-edge analysis results are represented as a matrix where the green color intensity indicates the overlap degree between core genes in each gene set combination; the more intense the green color, the
Figure 2 .
Figure 2. The Nrf2 and NEMO relationship and association with ER stress responses.(A) RNA-seq dataset (GSE104302) obtained from the liver of APAP-or vehicle-treated mice.Principal component analysis (PCA) score (left) and volcano plots of RNA-seq data (middle) (mint color, downregulated; red color, upregulated; DEGs with P-value < 0.01 and absolute FC > 1.5).Enrichment bubble plot of the Biocarta pathway indicating that the 'Oxidative stress-induced gene expression via Nrf2' was significantly downregulated in response to APAP treatment (n = 3 each) (right).(B) NEMO and Nrf2 immunohistochemistry in the liver of the mice treated with a single APAP dose (300 mg/kg BW, i.p., 6 h) (n = 5 or 6 each).(C) NEMO, Nrf2, and Grp78 immunoblottings (upper left) in the livers of the same mice as in B. Band intensities represent values relative to each respective control (n = 5 or 6 each) (lower left).Nrf2 and NEMO correlations (upper right); Grp78 and NEMO correlations (n = 11) (lower right).(D, E) NEMO, Nrf2, and Grp78 immunoblottings (left) in the liver of CCl4-treated mice (0.5 mL/kg BW, i.p., 6 weeks) (D) or those treated with a single Tm dose (2 mg/kg BW, i.p., 72 h) (E).Band
Figure 3 .
Figure 3. Toxicant-induced ferroptosis inhibition through NEMO.(A) KEGG analysis using a cDNA microarray dataset obtained from WT mice livers treated with APAP or the vehicle.The ferroptosis pathway (red asterisk) exhibited high fold enrichment (second rank) (n = 3 each, DEGs of P-value < 0.01 and FC > 2, GSE104302).FDR is indicated in the bar graph.(B) Ferroptosis-related pathways based in Biological process, Reactome pathway, and Wikipathway obtained from NEMO f/f or NEMO Δhepa mice livers (n = 3 each, DEGs of P-value < 0.05 and FC > 2, GSE61100).(C) Immunoblottings for representative ferroptosis markers in WT mice livers treated with a single APAP dose (300 mg/kg BW, i.p., 6 h) 3 days post-hydrodynamic injection with NEMO or control plasmid DNA (25 μg each) via tail vein (left).Band intensities represent values relative to each
Figure 4 .
Figure 4. Inhibition of toxicant-induced liver injury through NEMO.(A) Volcano plot (left) (mint, downregulation; red, upregulation) and ROS-related pathways based on the Sankey diagram (plot) of DEGs using hepatic transcriptome data from NEMO f/f or NEMO Δhepa mice (right).The Sankey diagram represents genes within each pathway; dot plots with sizes indicate gene numbers and dot colors display P-values (n = 3 each, DEGs of P-value < 0.01 and FC ≥ 2, FC ≤ -2, GSE61100).(B) Representative fluorescence images of DCF-DA (2′,7′-dichlorofluorescein diacetate) staining for ROS detection in HepG2 cells treated with APAP (20 mM, 12 h) or H2O2 (100 μM, 10 min) as a positive control after NEMO transfection (or Mock) (1 µg, 24h) (left).Green and pseudocolor images were captured, and the relative color scale indicates DCF fluorescence levels.Scale bar, 100 μm.DCF-DA-enhanced fluorescence-positive cells were analyzed with Image J software (right).(C) Immunoblottings for representative ER stress markers in the same samples as in Fig. 3C (left).Band intensities represent values relative to each respective control (n = 3 or 5 each) (right).(D) Serum alanine transaminase (ALT) and aspartate transaminase (AST) activities in WT mice livers treated with a single APAP dose (300 mg/kg BW, i.p., 6 h) 3 days post-hydrodynamic injection with NEMO or control plasmid DNA (25 μg each) via tail vein (n = 6-10 each).(E) Liver histopathology (H&E) and terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) assays.H&E and TUNEL stainings were done on the same mice livers as in Fig. 3C.Scale bar, 200 µm.(F) Immunoblottings for representative necroptosis markers in the same samples as in Fig.
Figure 5 .
Figure 5. Nrf2-mediated IKBKG (NEMO) transactivation.(A) Gene Venn diagrams of downregulated and upregulated genes in liver samples from Nrf2 and Keap1 knockout mouse models (n = 3 each, GSE8969 and GSE11287; DEGs of P-value < 0.05 and absolute FC > 1.5, respectively) (left).GO term enrichment of overlapped genes (84 genes, 4.5%) between two groups for biological processes, such as the lipid metabolic process (first rank) and positive gene expression regulation (second rank) (middle).The gene-concept network (cnetplot) of functional GO enrichment results (right) from the leading genes highlighted in the circles' red areas in the first and second biological process ranks.The cnetplot depicts gene and biological concept (GO terms) linkages as a network.Circle size indicates genes represented in a given biological process.(B) Ikbkg transcript levels obtained using a public dataset concerning livers of WT and Nrf2 KO (GSE8969) or Keap1 KO mice (GSE11287).(C) Heatmap of significantly down-regulated IKBKG transcript and Nrf2 target genes in APAP-intoxicated patients (n = 2 or 3, GSE74000).(D, E) Real-time RT-PCR assays for IKBKG, CHUK, and IKBKB in HepG2 cells treated with 10 μM sulforaphane (SFN) for the indicated times (n = 3) (D); or IKBKG in HepG2 cells treated with tBHQ and Olt for the indicated times (n = 3) (E).(F) ChIP-seq analysis
Figure 8 .
Figure 8. Correlations between human NEMO and liver disease scores, and identified target levels in patients with ALI.(A) Correlations between IKBKG transcript and ALT, AST, PT INR, CTP, MELD scores, or total bilirubin contents in livers of healthy individuals or ALI patients (healthy: n = 5, ALI: n = 22).ALT, alanine aminotransferase; AST, aspartate aminotransferase; PT INR score, prothrombin time international normalized ratio.CTP, child-turcotte-pugh; MELD score, model for end-stage liver disease score.(B) NFE2L2 and GPX4 transcript levels in livers of healthy individuals (n = 10) or ALI patients (n = 22).(C) Correlations between NFE2L2 and IKBKG (left),
|
v3-fos-license
|
2018-10-19T00:08:50.216Z
|
2015-11-11T00:00:00.000
|
88160371
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=61415",
"pdf_hash": "6cd7b621d3ffe30a763c032b3d3afcbac9a0d0e9",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2990",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "6cd7b621d3ffe30a763c032b3d3afcbac9a0d0e9",
"year": 2015
}
|
pes2o/s2orc
|
Effects of Arbuscular Mycorrhizal Fungi on Metals Uptake, Physiological and Biochemical Response of Medicago Sativa L. with Increasing Zn and Cd Concentrations in Soil
The effects of mycorrhizal symbiosis on metal accumulation and plant tolerance are not commonly studied in medicinal plants under metal stress. The objective of this study was to assess the impact of mycorrhiza on alfalfa plants under increasing Zn and Cd toxicity. The experiment was conducted under controlled laboratory conditions. Zinc (Zn) and cadmium (Cd) uptake and selected biochemical and physiological parameters were studied in eight-week-old alfalfa plants inoculated or not with arbuscular mycorrhizal fungi (AMF) and grown at increasing Zn (0, 100, 300, 900 mg∙kg−1) and Cd (0, 100, 300, 600 mg∙kg−1) concentrations in soil. The results showed that mycorrhizal (M) plants exhibited tolerance to Zn and Cd up to 300 mg∙kg−1, in comparison to non-mycorrhizal (NM) plants, which exhibited a significant growth reduction at the same soil Zn and Cd levels. Mycorrhizal inoculation reduced Zn and Cd accumulation in shoots and led to higher Zn and Cd contents in roots, indicating a different Zn and Cd distribution between AMF-associated and non-associated plants. Mycorrhizal plants had higher phosphorus (P) contents at all Zn and Cd concentrations except the highest (600 and 900 mg∙kg−1), accompanied by significant alterations in leaf biochemical contents such as proline and antioxidant enzymes, and in nutrients (N, P, K, Cu, Ni, Fe, Mn). Zn and Cd toxicity increased the proline content in shoots of NM plants, whereas proline contents were lower in M plants. The results confirmed that AMF protected alfalfa plants against Zn and Cd toxicity. The AMF formed an efficient symbiosis with alfalfa plants in soils moderately contaminated with Zn and Cd (300 mg∙kg−1) and may play an important role in food quality and safety.
Introduction
Soil pollution by heavy metals resulting from anthropogenic activities is a major environmental problem because metals are noxious, persistent and non-biodegradable. They tend to accumulate readily in soils and organisms, mainly where the effects of human activities are severe. Cadmium (Cd), a non-essential element, is considered particularly toxic because it tends to accumulate readily in the environment, especially in biological organisms, even at low environmental concentrations, leading to harsh consequences [1]. There are many sources through which the levels of heavy metals are increasing in the environment, the main ones being mining, refining, and electroplating activities.
Cd is rarely present alone in soil and is mostly linked to other heavy metals such as high levels of zinc (Zn).In polluted soils, Cd and Zn uptake in plants and soils are associated [2].Zn as an essential element is a second major concern because of its toxicity.Zn toxicity may occur in soils polluted by human activities such as mining, smelting and application of sewage sludge in agricultural soils [3].Many studies revealed the aspects which influenced the bioavailability of metals in plants.The most important factors which affects metal uptake in plants are: pH, redox potential, texture, organic matter, mineral composition, temperature and water regime [4].
Among soil microorganisms, arbuscular mycorrhizal (AM) fungi are commonly studied because of their capacity to strengthen plants under toxic and otherwise unfavourable conditions [5]. By colonizing most terrestrial plants, these symbiotic fungi enhance the supply of essential nutrients such as poorly available P, which increases shoot biomass [6]. AM fungi increase nutrient acquisition by enlarging the soil volume explored [7]. This strategy can be useful for host plants growing in unfavourable soil conditions, such as nutrient-deficient soils or contaminated areas. AM fungi can also improve plant growth and nutrient contents by decreasing metal uptake in plants [8].
Non-essential elements such as Cd can also be transferred by AM fungi to plants and stored in roots [9]. However, AM fungi have differential effects on metal uptake and host plant growth [10], and in some cases they enhance nutrient uptake and the growth of host plants [11]. AM fungi isolated from polluted areas are thought to benefit plants more than isolates from non-polluted soils [12]. Metal accumulation by AM fungi themselves is difficult to assess because these fungi cannot be grown without the plant. It has been reported that Medicago sativa (alfalfa) accumulates heavy metal concentrations above permissible levels in different plant tissues, possibly because of specialized chemical functional groups that could be responsible for metal accumulation. Improvements in plant mineral nutrition are mainly related to elemental uptake by extra-radical hyphae from the non-rhizosphere soil region and its transport to the root [13]. AMF inoculation has been shown to be advantageous for the development of sustainable agriculture in nutrient-deficient tropical soils, showing AMF to be an untapped resource for sustainable management and soil conservation.
In the study, we used alfalfa as a test plant because it is one of the most popular species used for perennial grazing and is widely cultured on the global scale for medicinal purposes.Medicago sativa (alfalfa) is a flowering plant in the pea family Fabaceae.It is a perennial legume from three to twelve years, depending upon climate and variety [14].The objective of this study was to investigate the responses and development of Medicago sativa plants in association or not with AMF with increasing Cd and Zn concentrations in soil.The uptake of nutrients and the distribution of Cd and Zn in different plant organs were evaluated.Additionally, total protein and proline contents in leaves were analysed as biochemical indicators of metal stress in alfalfa plants and discussed in relation to their mycorrhizal status.The hypothesis of the present study was that AMF associated-Medicago sativa plants would perform better under metal stress conditions than non-associated plants, improving tolerance, nutrition and consequently, plant growth.
Inoculum of Glomus Species
The AMF used was the mixture of different Glomus sp with dry soil substrates obtained from the AMF collection maintained by the company (Agrauxine) in France.Spores and dried sand-soil mixture (growth medium) were used in mycorrhizal inoculated treatments.Each pot (10 cm diameter and 12 cm height) contained 2 kg growth medium plus 50 g of AM fungal inoculum to mycorrhizal treatments, while the same amounts of growth medium were added to non-mycorrhizal treatments.Each pot received approximately 2500 spores at the time of sowing.AMF inoculation was performed during the transplantation process and was not provided in non-mycorrhizal treatments.
Sterilization of Alfalfa Seeds
Seeds of alfalfa (Medicago sativa L.) were obtained from the Department of Crop Science, National Agriculture Research Centre, Islamabad. Seeds were surface sterilized (10 min, 3% Chlorox), gently washed several times with deionized water at room temperature, and then placed on sterile moist filter papers (Xin Hua No. 101, China) in Petri dishes at 28˚C for 48 hours for germination. Germinated seeds were selected for uniformity before sowing. Five pre-germinated seeds were sown per pot and the plants were allowed to grow for 8 weeks. Seedlings were grown in a growth chamber with 12 h light per day at 25˚C-35˚C. Water lost was replaced daily by top watering with deionized water to maintain the soil moisture at about 60% until the end of the experiment. Each pot was irrigated with Long Ashton nutrient solution (20 ml) every week. Six pots per treatment were used and seedlings were randomly harvested 60 days after sowing.
Evaluation of Mycorrhizal Colonization
Root mycorrhizal colonization was estimated after clearing and staining [15] using the grid-line intersect method [16].The stained roots were then mounted on glass slides (5 pieces of root per slide) for examination with an eyepiece cross-hair.Colonization percentage of mycorrhiza was estimated for each sample by examination of one hundred 1cm long pieces of roots.
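As a small worked example of the grid-line intersect scoring described above, the function below converts intersection counts into a colonization percentage; the counts used are hypothetical.

```python
# Colonization percentage from grid-line intersect counts (illustrative values).

def colonization_percent(colonized_intersections: int, total_intersections: int) -> float:
    """Percentage of scored root-gridline intersections showing AMF structures."""
    if total_intersections == 0:
        raise ValueError("no intersections scored")
    return 100.0 * colonized_intersections / total_intersections

# e.g., 70 of 100 scored intersections showed arbuscules or hyphae
print(colonization_percent(70, 100))   # -> 70.0
```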
Plant Growth
The growth performance including stem diameter, shoot and root height, breadth and area were recorded.Height and diameter were measured by precision straight edge (Sword fish, China) and vernier caliper (ECV150C, China).
Plant Biomass
At harvest, roots and shoots were separated.Subsamples of fresh roots were taken to assess mycorrhizal colonization.Fresh weights of total roots and sub-samples were measured.Leaves and remaining roots were rinsed with tap water and then with deionized water.Tissues were weighed after oven drying at 60˚C for 72 h and then ground to <0.25 mm in a stainless mill.The percentage of water content in remaining roots and total root fresh weight were used to estimate total root dry weight.
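The root dry-weight estimate described above reduces to a one-line calculation: the dry-matter fraction of the subsample is applied to the total fresh root mass. The sketch below uses hypothetical masses.

```python
# Estimating total root dry weight from a subsample's fresh/dry masses (hypothetical values).

def estimate_total_root_dry_weight(sub_fresh_g: float, sub_dry_g: float,
                                   total_fresh_g: float) -> float:
    """Apply the subsample's dry-matter fraction to the total fresh root mass."""
    dry_matter_fraction = sub_dry_g / sub_fresh_g   # fraction of mass remaining after drying
    return total_fresh_g * dry_matter_fraction

# e.g., a 2.0 g fresh subsample drying to 0.4 g, with 10.0 g total fresh roots
print(estimate_total_root_dry_weight(2.0, 0.4, 10.0))   # -> 2.0 g estimated dry weight
```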
Heavy Metals Quantification
After dry weight determination, the oven dried tissue samples (shoots and roots) were ground and digested in HNO 3 (70%) and H 2 O 2 using the microwave digestion system (CEM-MDS 2000).The digest was filtered using Whatman No. 42 filter paper and made up to 50 ml by using deionized water.The metal contents (Na, K, Ca, Mg, Co, Cr, Cu, Fe, Ni, Pb, Mn, Cd, Zn) in plant tissues (shoot and roots) were determined by using atomic absorption spectrophotometer (Varian FAAS-240).Total Phosphorus (P) in plant digest was determined by ammonium-vanadomolybdate method [17].Total N was determined by Kjeldahl method [18].
Biochemical Analysis
Chlorophyll content in the fresh leaves (50 mg) of the plant was measured in 10 cm 3 dimethylsulfoxide (DMSO) by using the method [19].Carotenoid and sugar content was determined by the method [20].Proline content of leaves was estimated by using the method [21].Protein content in the leaves (50 mg) of the plants was measured using Bovine Serum Albumen (BSA) as a standard [22].
Assay of Enzyme Activity
For enzyme analysis, fresh samples of leaves (300 mg each) were ground in a chilled mortar and extracted with 3 ml of 100 mM potassium phosphate buffer (pH 7.5).The homogenate was centrifuged at 12,000 rpm for 15 min.The supernatant was used for the estimation of antioxidant enzyme activities.Superoxide dismutase (SOD) activity was assessed spectrophotometrically at 560 nm based on the inhibition of the photochemical reduction of nitroblue tetrazolium (NBT) as described by method [23].One unit of SOD was defined as the quantity of enzyme required to inhibit the reduction of NBT by 50%.The activity of POD was measured by following the method [24].Catalase (CAT) activity was determined by the method [25].The activity of Ascorbate peroxidase (APX) was measured by estimating the rate of ascorbate oxidation.The change in absorbance was monitored at 290 nm [26].
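As a worked illustration of the SOD unit definition above (one unit = the amount of enzyme inhibiting NBT photoreduction by 50%), the sketch below converts absorbance readings at 560 nm into percent inhibition and units; the absorbance values are hypothetical and normalization per mg protein is omitted.

```python
# SOD activity from NBT photoreduction readings (illustrative values).

def percent_inhibition(a560_control: float, a560_sample: float) -> float:
    """Percent inhibition of NBT reduction relative to the enzyme-free control."""
    return 100.0 * (a560_control - a560_sample) / a560_control

def sod_units(a560_control: float, a560_sample: float) -> float:
    """SOD units in the assayed volume, with 50% inhibition defined as one unit."""
    return percent_inhibition(a560_control, a560_sample) / 50.0

print(sod_units(0.80, 0.40))   # 50% inhibition -> 1.0 unit
```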
Quality Control Analysis
The chemicals used were analytical grade and obtained from Sigma, Aldrich and Merck.All the analyses were performed in triplicates under standard optimizing conditions.Analytical data quality of metals in soil and plant samples was ensured through repeated analysis (n = 6) of roots and shoot samples.The blank reagent and standard reference soil (NIST, 2709 San Joaquin) and plant materials (NIST, 1547 Peach leave) of National Institute of science and Technology were included in each sample batch to verify the accuracy and precision of the digestion procedure.Recoveries of metals from the plant tissues were found to be 99%.The blanks were run after five samples.
Statistical Analysis
Physiological parameters, biochemical contents, antioxidant enzymes and root colonization were analyzed with two way analysis of variance (ANOVA) technique using statistix (version 8.1) software.For significant F value, Tukey test was used for mean comparison at 5% level.
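The analysis described above was run in Statistix; the sketch below shows an equivalent two-way ANOVA (inoculation × metal level) followed by a Tukey post hoc test in Python on synthetic data, purely to illustrate the design, not to reproduce the reported results.

```python
# Two-way ANOVA (inoculation x Zn level) with Tukey post hoc test on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
rows = []
for inoc in ["M", "NM"]:
    for zn in [0, 100, 300, 900]:
        base = 6.0 if inoc == "M" else 5.0        # mycorrhizal plants assumed slightly larger
        decline = 0.002 * zn                       # biomass assumed to drop with soil Zn
        for _ in range(6):                         # six pots per treatment, as in the design
            rows.append({"inoculation": inoc, "zn_level": zn,
                         "biomass": base - decline + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

model = smf.ols("biomass ~ C(inoculation) * C(zn_level)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))             # main effects and interaction
print(pairwise_tukeyhsd(df["biomass"], df["zn_level"].astype(str), alpha=0.05))  # post hoc on Zn levels
```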
Mycorrhizal Colonization of Roots
Figure 1 shows the percentage of AMF colonization with roots of alfalfa (Medicago sativa) plants with increasing Zn and Cd concentrations.The results showed AMF colonization was not found in non-inoculated plants, while all the inoculated plants showed high colonization rates.The colonization was detected in roots with formation of arbuscules and hyphal structures.Figure 1(A) shows the highest colonization of 70% was found at 100 mg•kg −1 Zn concentration.The trend was decreasing as the concentration of Zn increased at 300 mg•kg −1 and 900 mg•kg −1 .Figure 1(B) shows the root colonization of alfalfa plants with increasing Cd concentrations.The highest colonization was observed in control plants where no Cd concentration was applied.The trend observed was decreasing as the concentration of Cd increased from 100 to 300 mg•kg −1 .In general, the results showed that Zn and Cd addition negatively affects mycorrhizal root colonization and decreasing trend was observed with the increase of metal concentration in soil.
Plant Growth and Biomass
Figure 2(A) shows the effects of increasing zinc concentrations on growth and biomass of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants.The interaction of plants and AM fungi had very significant effects on growth and biomass of plants under increasing Zn stress.The reduced plant growth and biomass was observed in non-inoculated (NM) treatments under increasing Zn stress.However, the positive significant effects were observed on plants growth and biomass in mycorrhizal (M) inoculated treatments.The results showed that in M plants, shoot and root biomass was increased at all the Zn addition levels of 100, 300 and 900 mg•kg −1 , While reduction in biomass was observed in NM inoculated plants as the Zn concentration increased in soil.The highest trend was recorded at 100 mg•kg −1 Zn concentration in M and NM plants.While lowest biomass was recorded at 300 and 900 mg•kg −1 Zn concentration in both inoculated and non-inoculated treatments.The highest length, breadth and area of shoot and root tissues was observed at 100 mg•kg −1 Zn concentration in M and NM plants, while the reduction in trend was recorded at 300 and 900 mg•kg −1 of Zn concentration in both M and NM plants.However, M inoculated plants had significant positive effects on plants growth and biomass at all Zn treatments as compared to NM plants.
Figure 2(B) shows the effects of increasing cadmium concentrations on growth and biomass of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants.The interaction of plants and AM fungi had very significant effects on growth and biomass of plants under increasing Cd stress.The plant growth and biomass was reduced in non-inoculated (NM) treatments under increasing Cd stress.The highest trend was recorded in control plants in M and NM plants, while the lowest biomass was recorded at 100, 300 and 600 mg•kg −1 Cd concentration in both inoculated and non-inoculated treatments.The shoot and root growth was enhanced in M inoculated plants while decrease in growth was observed in NM inoculated plants.The highest length, breadth and area of shoot and root tissues was observed in control plants in both M and NM plants.While the reduction in trend was recorded at all cd addition levels (100, 300 and 600 mg•kg −1 ) in both M and NM plants.The statistical significance was obtained for K, P, Na, N, Ca, Mn in M and NM plants but not significant results obtained in Cu and Mg at 0, 100, 300 and 900 mg•kg −1 Zn.The detrimental effect of highest Zn concentration (900 mg•kg −1 ) was recorded on the concentration of the analyzed nutrients as there was a significant decrease in both inoculated (M) and non inoculated (NM) plants.
Plant Nutrient Contents
The results of the experiment indicated that mycorrhizal inoculation significantly affected the mineral nutrition of alfalfa plants. In M plants, increases in K, N, Ca, Mg, Na, Cu and Ni were recorded in the shoots, whereas Mn and Fe contents decreased. In M roots, Fe, Ni and Cu contents increased, while K, N, Ca and Mg contents decreased. In NM plants, increasing soil Zn concentrations caused reductions in K, P, N, Mn, Ni and Fe contents in the shoots, with the exception of Cu and Na, which increased. In the roots, N, Ca, Na, K and Ni contents increased with increasing soil Zn concentration.
Table 2(A) and Table 2(B) show macro- and micronutrient contents in the shoots and roots of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants grown in soils with increasing Cd concentrations. In general, all nutrient contents increased in the shoots of M plants except Mn, Ni, Cu and Zn, whereas nutrient contents decreased in the roots except for Zn and Cu. Statistical significance was obtained for K, P, Na, N, Ca, Mg and Fe in shoot tissue, but not for Mn and Ni, at 0, 100, 300 and 900 mg·kg−1 Zn. A detrimental effect of the highest Zn concentration (900 mg·kg−1) on the concentrations of the analyzed nutrients was recorded, with a significant decrease in both inoculated (M) and non-inoculated (NM) plants.
The results of the experiment indicated that mycorrhizal inoculation significantly affected the mineral nutrition of alfalfa plants. In NM plants, increasing soil Zn concentrations caused decreases in K, N, Ca, Na, Mg, Fe, Mn, Ni and Zn contents in the shoots, with the exception of Cu, which increased at all Zn concentrations (100, 300 and 900 mg·kg−1). Nutrient contents in the roots decreased with increasing soil Cd concentrations.
Plant Phosphorus (P) Uptake
Figure 3 shows the trend of plant phosphorus contents in M and NM plants under increasing Zn and Cd concentrations. P content increased in the roots and shoots of M plants under Zn and Cd stress, whereas a decreasing trend was recorded in the roots and shoots of NM plants. In general, plant P nutrition was improved by mycorrhizal inoculation. Figure 3(A) shows that M plants exhibited significantly (P < 0.001) higher shoot and root P at a Zn concentration of 100 mg·kg−1, while P content decreased at Zn concentrations of 300 and 900 mg·kg−1 in both M and NM plants. The shoots of mycorrhizal plants maintained higher P levels than the shoots of non-mycorrhizal plants at 0 and 100 mg·kg−1 Zn in soil, but not at 300 and 900 mg·kg−1; at these levels, plant growth was also very limited. P uptake decreased as the Zn concentration increased.
Figure 3(B) shows the trend of plant P contents in M and NM plants under increasing Cd concentrations. P content decreased as the Cd concentration increased to 100, 300 and 600 mg·kg−1 in both M and NM plants. Higher P contents were recorded in both the shoots and roots of control plants, whereas P contents declined with increasing Cd concentration in both M and NM plants.
Zinc and Cd Uptake in Alfalfa Plants
Figure 4 shows that the Zn and Cd concentrations in plant tissues were linearly correlated with the soil concentrations. Zn and Cd concentrations increased in the shoots and roots as the metal concentrations in soil increased. However, NM plants accumulated more Zn and Cd in the shoots and roots than M plants at all Zn (100, 300 and 900 mg·kg−1) and Cd (100, 300 and 600 mg·kg−1) treatments. In the control treatments, where no Zn or Cd was applied, shoot and root Zn and Cd uptake were similar; as the Zn and Cd application rate increased, shoot uptake increased much less than root uptake. Overall, Zn and Cd uptake by M roots and shoots was lower than in NM plants, and this difference became more pronounced as the Zn and Cd application increased.
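As a rough illustration of the linear soil-to-tissue relationship described above, the sketch below fits a simple linear regression of tissue metal concentration on soil addition level; all concentrations are invented placeholders, not measurements from this experiment.

```python
from scipy import stats

soil_zn = [0, 100, 300, 900]       # mg/kg Zn added to soil
root_zn = [40, 180, 520, 1450]     # hypothetical root Zn (mg/kg dry weight)
shoot_zn = [25, 70, 150, 310]      # hypothetical shoot Zn (mg/kg dry weight)

for tissue, y in (("root", root_zn), ("shoot", shoot_zn)):
    fit = stats.linregress(soil_zn, y)
    print(f"{tissue}: slope = {fit.slope:.2f}, r = {fit.rvalue:.3f}, P = {fit.pvalue:.4f}")

# A shallower shoot slope than root slope mirrors the observation that shoot
# uptake rises much less than root uptake as the soil metal addition increases.
```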
Figure 5(B) shows the effects of increasing Cd concentrations on the biochemical contents of M and NM plants. Chlorophyll a, chlorophyll b and carotene contents decreased as the Cd concentration in soil increased. The highest chlorophyll and carotene contents were recorded in control plants to which no Cd was applied, and the lowest contents at 300 and 600 mg·kg−1 Cd. M plants had higher chlorophyll and carotene contents than NM plants at all Cd concentrations (100, 300 and 600 mg·kg−1). Sugar content decreased linearly as the Cd concentration in soil increased, but was higher in M plants than in NM plants at all Cd concentrations (100, 300 and 600 mg·kg−1). Proline content increased as the Cd concentration in soil increased; the lowest proline content was recorded at 100 mg·kg−1 Cd in both M and NM plants and the highest at 600 mg·kg−1 Cd.
Effects of Metals on Plant Antioxidant Enzyme Activities
Figure 6 shows the antioxidant enzyme activity in leaves of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants in response to Zn and Cd addition to soil. Figure 6(A) shows the trend of SOD, CAT, APX and POD activities in M and NM plants with increasing Zn concentration (0, 100, 300 and 900 mg·kg−1). These activities were induced in both M and NM plants in response to increasing Zn concentrations in the soil. In NM plants, SOD activity increased with increasing Zn addition to soil, except at a Zn concentration of 900 mg·kg−1. In M plants, SOD activity increased in response to 100 and 300 mg·kg−1 Zn but was reduced at 900 mg·kg−1.
A slight but statistically significant increase in leaf POD activity was observed in alfalfa plants after Zn treatment. The highest POD activity was observed at 100 mg·kg−1 Zn in both M and NM plants, whereas POD activity decreased at 300 and 900 mg·kg−1 Zn. CAT activity was induced in a manner similar to SOD activity: it was enhanced as the Zn concentration in soil increased, except at the highest Zn concentration (900 mg·kg−1), in both M and NM plants. APX activity decreased as the Zn concentration in soil increased, except at 100 mg·kg−1, at which increased activity was recorded. In both M and NM plants, the trend of APX followed that of POD, with the highest APX activity observed at 100 mg·kg−1.
Figure 6(B) shows the trend of SOD, CAT, APX and POD activities in M and NM plants with increasing Cd concentration (0, 100, 300 and 600 mg·kg−1). These activities were induced in both M and NM plants in response to increasing Cd concentrations in the soil. In NM plants, SOD activity decreased with increasing Cd addition to soil, except at a Cd concentration of 100 mg·kg−1. In M plants, SOD activity was higher at all Cd concentrations (0, 100, 300 and 600 mg·kg−1) than in NM plants, with reduced activity recorded at 300 and 600 mg·kg−1 Cd.
Leaf POD activity in alfalfa plants increased with Cd addition to soil, peaking at 300 mg·kg−1 Cd in both M and NM plants, with the lowest activity recorded at 600 mg·kg−1 Cd. POD activity was higher in M plants than in NM plants at all Cd concentrations. CAT activity decreased as the Cd concentration in soil increased, except at the lowest Cd concentration (100 mg·kg−1), in both M and NM plants. APX activity decreased as the Cd concentration in soil increased; the Cd treatments (0, 100, 300 and 600 mg·kg−1) reduced APX activity in both M and NM plants.
Effect of AMF Inoculation on Alfalfa Growth, Biomass and Colonization under Zn and Cd Toxicity
The results of the present study indicated increased alfalfa growth and biomass in the presence of AM fungi under Zn and Cd toxicity. Previous studies have likewise shown that AMF are resistant to toxic metals found in the soil [27]. These results indicated that AM fungi were able to colonize plant roots under Zn- and Cd-polluted conditions, and that the beneficial effects of the plant-mycorrhizal interaction are primarily due to enhanced P uptake by the mycorrhizal fungus. The results also indicated that, compared with plants lacking the fungal interaction, Cd and Zn toxicity had no negative effect on root colonization of plants interacting with AM fungi. Similar results were reported by [28] [29]. The growth inhibition in plants grown under high levels of Zn and Cd was due to interference of these metals with P uptake by the plants.
The benefits of mycorrhizal symbiosis for plant growth and nutrition are well known and have been studied extensively in many plant species. Application of vesicular arbuscular mycorrhizal (VAM) fungi at contaminated sites increased plant biomass even at elevated levels of Zn and Cd in the soil [30]. It has also been reported that root colonization of plants inoculated with non-indigenous AMF isolates in HM-contaminated soils was not decreased by increasing HM additions to the soil [31].
Effect of AMF on Zn and Cd Uptake in Alfalfa Plants
In the present study, the presence of AMF contributed to greater retention of Cd and Zn in alfalfa roots and to soil stabilization. The protection against Zn and Cd toxicity in plants inoculated with AMF may occur indirectly, by enhancing plant nutrition and increasing plant growth, thereby diluting Cd and Zn in plant tissues [32]. Mycorrhizal plants might also actively diminish heavy metal uptake from soils by modifying metal solubility via soil pH changes, which may be a defense strategy adopted by mycorrhizae to avoid or escape the negative impacts of high soil metal concentrations. Furthermore, chelation or immobilization of metals by the extraradical mycelium, glomalin, or exudates can sequester metals [33]. Therefore, inoculation of HM-contaminated soils with AMF appears to be a strategy that can be recommended for enhancing plant growth in soil polluted with Zn and Cd. Sensitivity of the AM fungus to excessive concentrations of heavy metals in the soil may, however, negatively affect the functioning of the symbiosis: in addition to a decreased rate of root colonization, the functioning of symbiotic structures such as the external mycelium and arbuscules is also reduced. Some previous reports suggested that Cd, Pb and Zn were strongly retained within the roots of both mycorrhizal and non-mycorrhizal plants, indicating that plants have filtering mechanisms that reduce metal translocation to the shoots [27].
Effects of AMF on Nutrient Uptake and Biochemical Contents under Zn and Cd Toxicity
The present study found that inoculation with AMF improved the growth and the shoot P, N, Fe, Mn and Zn uptake of M plants grown in soils polluted with Zn and Cd, in comparison with metal-polluted soils without inoculation. The beneficial effects of the plant-AM fungus interaction on nutrient uptake may act as a protection mechanism that decreases Zn and Cd toxicity. The primary mechanism by which mycorrhizal fungi improve P uptake is more extensive soil exploration, rather than a unique capacity to mobilize sources of P unavailable to plants [34]. Mycorrhizal plants alleviate the severe effects of Zn and Cd by changing the translocation of the metals and sequestering them in their hyphae, so the toxic effects of Zn and Cd on photosynthesis and carbohydrate metabolism may be reduced. The reduced amount of phosphorus observed in non-AM plants may be due to interference of toxic concentrations of Zn and Cd with phosphorus uptake by alfalfa plants. The greater amount of phosphorus in M plants underlines the enhanced P uptake from the soil and its translocation to the plant by the extraradical mycelium of AM fungi [35]. These results indicate the beneficial effects of AM fungi in protecting plants and alleviating the toxic effects of heavy metals. Additional research is therefore needed to explore the behaviour of AM fungi in various plant species suitable for plant protection under heavy metal stress.
The results of the study indicated that AMF-associated alfalfa plants had better biochemical activity than non-AMF plants under high Zn and Cd concentrations. However, decreased chlorophyll and carotene contents were observed at toxic concentrations of Zn (900 mg·kg−1) and Cd (600 mg·kg−1). Similar results were reported in [36], where AM plants possessed a greater amount of chlorophyll than non-AM plants. Increased protein and sugar contents in mycorrhizal plants were also found in the present study. The underlying process involves increased protein synthesis as well as induction of antioxidant enzymes to avoid heavy metal-mediated oxidative stress. In non-mycorrhizal plants, the reduction in total protein content may be due to the toxic effects of Zn and Cd on cellular metabolism and protein synthesis.
Concluding Remarks
It is concluded from the results of the present study that the mycorrhizal association with alfalfa plants has beneficial effects on growth, biochemical contents and antioxidant enzyme activity. Mycorrhizal plants grew faster, exhibited improved mineral nutrition and had higher yields than non-mycorrhizal seedlings. AMF protect alfalfa plants against metal toxicity and also benefit nutrient uptake. AM fungi immobilize heavy metals such as Zn and Cd in moderately polluted soils. The decreased Zn and Cd uptake in mycorrhizal plants could be associated with the decline in Zn and Cd availability resulting from the increase in soil pH caused by the AM fungi.
Figure 1 .
Figure 1. Colonization percentage of root length (% RLC) in mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants grown in soils with increasing Zn and Cd concentrations. M and NM means with different letters are significantly different by the Tukey test (5%).
Figure 2 .
Figure 2. (A) Effects of increasing zinc concentrations on growth and biomass of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants: (I, II) shoot and root biomass, (III, IV) shoot and root length, (V, VI) shoot and root breadth, (VII, VIII) shoot and root area. M and NM means with different letters are significantly different by the Tukey test (5%). (B) Effects of increasing cadmium concentrations on growth and biomass of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants: (I, II) shoot and root biomass, (III, IV) shoot and root length, (V, VI) shoot and root breadth, (VII, VIII) shoot and root area. M and NM means with different letters are significantly different by the Tukey test (5%).
Figure 3 .
Figure 3. Phosphorus (P) contents in (A) shoots and (B) roots of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants in response to Zn and Cd addition to soil. Means (n = 3) with different letters are significantly different (P < 0.05) by the Tukey test (NM: black lines; M: light grey lines).
Figure 4 .
Figure 4. Cd and Zn concentrations in shoots and roots of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants growing in soil with increasing Cd or Zn concentrations, respectively. Means (n = 3) with different letters are significantly different (P < 0.05) by the Tukey test (NM: black lines; M: light grey lines).
Figure 5.
Figure 5(A) and Figure 5(B) show the biochemical indicators in M and NM plants with increasing Zn and Cd concentrations. In general, relative chlorophyll and carotene contents were significantly higher in M plants than in NM plants at each Zn and Cd concentration. Figure 5(A) shows the effects of increasing Zn concentrations on the biochemical contents of M and NM plants. The highest chlorophyll a and b contents were observed at a Zn concentration of 100 mg·kg−1 in both inoculated and non-inoculated plants, and the lowest contents at the highest Zn concentration of 900 mg·kg−1 in both treatments. Sugar contents decreased linearly as the Zn concentration in soil increased in both M and NM plants, and the sugar content of M plants was significantly higher than that of NM plants at each Zn concentration (100, 300 and 900 mg·kg−1). The proline content in alfalfa plants increased as the Zn concentration in soil increased from 100 to 900 mg·kg−1, with higher proline levels recorded in NM plants than in M plants. Total proline contents increased markedly with increasing Zn concentration in the soil.
Figure 6 .
Figure 6. (A) Antioxidant enzyme activities, (I) SOD, (II) POD, (III) CAT and (IV) APX, in leaves of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants in response to Zn addition to soil. Means (n = 3) with different letters are significantly different (P < 0.05) by the Tukey test. NM: black lines; M: light grey lines. Bars represent standard error. (B) Antioxidant enzyme activities, (I) SOD, (II) POD, (III) CAT and (IV) APX, in leaves of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants in response to Cd addition to soil. Means (n = 3) with different letters are significantly different (P < 0.05) by the Tukey test. NM: black lines; M: light grey lines. Bars represent standard error.
Table 1(A) and Table 1(B) show macro- and micronutrient contents in shoots and roots of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants grown in soils with increasing Zn concentrations. In general, the increased
Table 1 .
(A) Macronutrient contents measured in shoots and roots of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants grown in soils with increasing Zn concentrations. (B) Micronutrient contents measured in shoots and roots of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants grown in soils with increasing Zn concentrations.
Means (n = 3) with different letters are significantly different (P < 0.05) by the Tukey test.
Table 2 .
(A) Macronutrient contents measured in shoots and roots of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants grown in soils with increasing Cd concentrations. (B) Micronutrient contents measured in shoots and roots of mycorrhizal (M) and non-mycorrhizal (NM) alfalfa plants grown in soils with increasing Cd concentrations.
Means (n = 3) with different letters are significantly different (P < 0.05) by the Tukey test.
|
v3-fos-license
|
2018-04-03T00:48:17.934Z
|
2018-01-31T00:00:00.000
|
26585444
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "BRONZE",
"oa_url": "https://www.jmcp.org/doi/pdf/10.18553/jmcp.2018.24.2.160",
"pdf_hash": "f7af064441beb1bcf2ce530b43f6a307e7b41b83",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2992",
"s2fieldsofstudy": [
"Business",
"Economics",
"Medicine"
],
"sha1": "8d3ea91aa3ca9a25dd8b3db33f9ac5f499f2668c",
"year": 2018
}
|
pes2o/s2orc
|
Clinical and Economic Benefits of Pharmacist Involvement in a Community Hospital-Affiliated Patient-Centered Medical Home
BACKGROUND: The primary goals of an accountable care organization (ACO) are to reduce health care spending and increase quality of care. Within an ACO, pharmacists have a unique opportunity to help carry out these goals within patient-centered medical homes (PCMHs). Pharmacy presence is increasing in these integrated care models, but the pharmacist’s role and benefit is still being defined. OBJECTIVE: To exhibit the clinical and economic benefit of pharmacist involvement in ACOs and PCMHs as documented by clinical interventions (CIs) and drug cost reductions. METHODS: This is a retrospective quality improvement study. All interventions made by the pharmacist during the study period were documented using TAV Health. The interventions were then analyzed. Specific identified endpoints included the total number of documented interventions and number of CIs from each category, transition of care (TOC) medication reconciliations performed, discrepancies identified during TOC medication reconciliation, and cost savings generated from generic and therapeutic alternative use. CI categories were collaborative drug therapy management, medication therapy management (MTM), medication reconciliation, patient and provider education, and drug cost management. RESULTS: During the study period (October 2016-March 2017), a pharmacist was in clinic 8 hours per week. Sixty-three patients were included in the study. There were 283 CIs documented, with a majority of the interventions associated with MTM or cost management (94 and 88 CIs, respectively). There were 37 education CIs, 36 TOC medication reconciliations performed, and 28 collaborative drug therapy management CIs. From the 36 TOC medication reconciliations, 240 medication discrepancies were found, with a majority associated with medication omission. A cost savings of $118,409 was gained from generic and therapeutic alternative substitutions. CONCLUSIONS: Clinical benefit of pharmacy services was demonstrated through documented CIs. Pharmacists can have a dramatic and quantitative effect on reducing drug costs by recommending less expensive generic or therapeutic alternatives. Documenting CIs allows pharmacists to provide valuable evidence of avoided drug misadventures and identification of medication discrepancies. Such evidence supports an elevated quality of care.
An accountable care organization (ACO) is a group of health care providers that share financial benefits and risks of participating in the Medicare Shared Savings Program (MSSP) or similar contracts with other insurance providers. ACOs are compared against benchmarks intended to assess whether they generate savings or losses for the Medicare program, and they are either rewarded a portion of the savings or held liable for a portion of the losses. In order to generate savings, the ACO must reduce total health care expenditures per patient and increase quality of care. The provider network within an ACO consists of any practitioner or health system that provides care to Medicare beneficiaries. However, a 2015 study reported that only 26% of ACOs include a pharmacy, and only 19% have contracted pharmacy services. 1 This study revealed new opportunities for pharmacist involvement by combining the goals and the inherent multidisciplinary model of the ACO. Pharmacists can play a vital role in reducing drug spending and providing quality medication therapy management (MTM). 1
What is already known about this subject
• This study provides further confirmation of the beneficial role of pharmacy within patient-centered medical homes and ACOs.
• As more pharmacists become incorporated into integrated care models, this clinic's project may serve as an example for other systems and further define the function of pharmacists within the multidisciplinary team.
• This study establishes groundwork for conducting additional research to corroborate the pharmacist's role in increasing quality of care.
■■ Methods
Study Design
This is a single-center, retrospective, quality improvement study conducted in a 3-provider internal medicine clinic with PCMH certification. Within the pre-existing structure of the clinic, an ambulatory care pharmacist provided several services for the PCMH under partnership with the ACO that included but was not limited to collaborative drug therapy management, cost management, TOC medication reconciliation, MTM, and patient and provider education. For each service, the pharmacist recorded any CI using TAV Health, the documentation and communication software used by the PCMH. Patient demographics were also obtained from TAV Health. MTM patient candidates were obtained through 2 pathways. First, providers consulted the pharmacist to manage a patient's medication and provide recommendations. Second, each patient discharged from the hospital was scheduled for a TOC visit with the physician or nurse practitioner within 7-14 days after discharge, after which the pharmacist made contact with the patient for follow-up and MTM. Immediately following discharge from the hospital, a chart review of the patient's medications was conducted, and a TOC medication reconciliation CI was documented to record the event, along with any recommendations made to the provider. Collaborative drug therapy management CIs were typically recorded after identification of patient needs, and subsequent referrals to a different health care provider (such as a health educator) were on an as needed basis. Any cost management CI was defined as a direct cost savings from switching one medication to another. Assisting patients with medication assistance programs or insurance coverage was also considered a cost management CI, but a numeric cost savings was unable to be calculated for these interventions. If a recommendation led to a switch from one medication to a less expensive medication, the direct cost savings was determined by subtracting the annual cost of the less expensive medication from the more expensive medication based on average wholesale price. Patient or provider education CIs were documented any time the pharmacist presented drug information to a provider or counseled a patient on appropriate medication use.
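For the cost-management interventions, the annual direct savings is described above as the difference in annual cost between the original and the recommended medication based on average wholesale price (AWP). A minimal sketch of that arithmetic is shown below; the drug names, AWP values, and dosing are hypothetical placeholders, not figures from the study.

```python
DAYS_PER_YEAR = 365

def annual_cost(awp_per_unit: float, units_per_day: float) -> float:
    """Annual drug cost from average wholesale price (AWP) per dispensing unit."""
    return awp_per_unit * units_per_day * DAYS_PER_YEAR

# (original drug, alternative, AWP/unit original, AWP/unit alternative, units/day)
switches = [
    ("brand_drug_A", "generic_A", 28.50, 1.10, 1),
    ("brand_drug_B", "therapeutic_alt_B", 15.75, 0.85, 2),
]

total_savings = 0.0
for original, alternative, awp_orig, awp_alt, per_day in switches:
    savings = annual_cost(awp_orig, per_day) - annual_cost(awp_alt, per_day)
    total_savings += savings
    print(f"{original} -> {alternative}: ${savings:,.2f} saved per year")

print(f"Total extrapolated annual savings: ${total_savings:,.2f}")
```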
Study Population
This study included any adult patient for whom a pharmacist documented a clinical CI and who also had an eligible insurance plan (Medicare or employee Blue Cross Blue Shield). For the TOC services, patients were excluded if they were discharged to a skilled nursing, hospice, rehab, or other facility; were not evaluated by a pharmacist after discharge; or expired before the TOC visit.
A patient-centered medical home (PCMH) is a model of care that a practice may voluntarily adopt, which offers coordinated, comprehensive, and integrated care and offers patients more involvement in their own care. To become a PCMH, a practice must be certified by an organization such as the National Committee for Quality Assurance. Data have shown that certified PCMHs can provide improved outcomes, quality of care, and patient experience, as well as reduce costs. Payers also offer enhanced reimbursement and incentives for having a PCMH certification. 3 The role of a pharmacist in a PCMH is the same as that in an ACO: helping the organization to meet benchmarks set within the MSSP. To do so, pharmacists often provide collaborative care, MTM, and patient and provider education and manage drug costs and medication reconciliation.
Many different types of integrated care models exist in which pharmacists manage drug therapy. A literature search revealed a multitude of studies demonstrating improvements in outcomes, readmission rates, cost savings, and/or quality measures, as well as favorable return on investments with pharmacist-driven programs within integrated health care teams. [4][5][6][7][8][9][10][11][12][13] The lack of pharmacy presence in many ACOs may be a result of several challenges, including difficulty coordinating and communicating across the health care continuum, identifying a specific population group that would benefit from pharmacy services, and lack of provider status in many states. 1 However, demonstrating value by measuring the quality and financial effect of pharmacy-driven services may be the largest barrier because of the inability to bill for services and the expensive salary of today's pharmacist. Despite the barriers that other studies have noted, it would be strategic to include pharmacists within the integrated care model in order to optimize medication use as a key component of improving quality and avoiding excessive drug expenditures. 14 The ambulatory care pilot at CHI Memorial Hospital was established in 2016 as an initiative between the ACO and pharmacy department in hopes of increasing pharmacy presence in the provider network. Pharmacy services were first piloted in a hospital-affiliated PCMH. The PCMH is an internal medicine clinic with 2 physicians and a nurse practitioner. Services provided by pharmacists were aligned with the goals of the ACO and were focused on collaborative drug therapy management, cost management, transition of care (TOC) medication reconciliation, MTM, and patient and provider education. The pharmacist was employed through the ACO, so the scope of practice was limited to Medicare beneficiaries and CHI Memorial employees with contracted Blue Cross Blue Shield employee insurance.
The purpose of this study was to exhibit the clinical and economic value of pharmacist involvement within the PCMH by quantifying cost savings and documenting clinical interventions (CIs). The data collected from this study will be used to support and expand the role of the pharmacist within the ACO.
Data Collection Procedures
For this study, only the number and types of interventions were obtained from TAV Health. For any conversion to a less expensive drug therapy, the names of the drugs were recorded so that cost savings could be extrapolated. Study endpoints included total documented CIs and the number of documented CIs associated with collaborative drug therapy management, MTM, medication reconciliation, patient and provider education, and cost management. Cost savings resulting from a switch to a less expensive medication was totaled as a part of cost management services. For TOC, the total number of medication reconciliations and associated discrepancies were recorded. Data collection began following institutional review board approval in October 2016 and continued through March 2017.
Statistical Analyses
No statistical analysis was performed, since data were only used to quantify the number of interventions and associated cost savings. Measures of central tendency were used to aid in describing patient characteristics.
Data Security
This study is an observational retrospective analysis with minimal risk. Anticipated problems could include breach of security. HIPAA guidelines were strictly followed to protect patient confidentiality and maintain records. No personally identifiable information was collected during this study. The information collected only included quantity of interventions, types of interventions, and associated cost savings, and the analysis of this data was only used to support and expand pharmacist involvement within the ACO.
■■ Results
Sixty-three patients were included in this study. Table 1 lists patient demographics. Some patients, who were included for the TOC arm of the study, also had additional CIs documented, so there was some overlap between inclusion populations. Of the patient group, 79% were female; 21% were male; and the average age was 68 years (Table 1). There were a total of 283 CIs documented during the study period, which encompassed 192 hours of pharmacist clinic time (equivalent to 24 full-time days). The breakdown of CIs was as follows: 94 (33.2%) MTM interventions, 88 (31.1%) cost management interventions, 37 (13.1%) patient or provider education interventions, 36 (12.7%) TOC medication reconciliations, and 28 (9.9%) collaborative drug therapy management interventions (Figure 1). For those interventions associated with cost management, 16 recommendations were made to switch a medication to a less expensive generic or therapeutic alternative, resulting in an extrapolated cost savings of $118,409 per year.
For the TOC portion of the study, 36 medication reconciliations were performed following discharge from the hospital and before the TOC appointment at the PCMH. From these medication reconciliations, 240 medication discrepancies were uncovered, including medication duplications and dosing errors, but the vast majority of these discrepancies were medication omissions, that is, a medication was missing from either the inpatient list or the outpatient list. Home medications that were added, changed, or discontinued upon discharge were not considered a medication discrepancy.
■■ Discussion
MTM constituted the largest portion of interventions. MTM activities primarily included either face-to-face meetings or patient interviews via telephone. During these discussions, medication-related issues were often uncovered and resolved. For example, it was discovered that a patient had been taking acyclovir chronically and did not know why she was prescribed the medication. Upon discussion with the provider, it was determined that the medication was unnecessary and was subsequently discontinued.

Figure 1. Number of Documented Clinical Interventions
Cost management interventions provided the most convincing evidence of economic benefit. Just shy of $120,000 in yearly savings was realized from recommending a medication switch to either a generic or a less expensive therapeutic alternative. Brand name drugs that were switched to generics included Glumetza, Nuvigil, Vimovo, and Treximet. Examples of therapeutic interchanges included switching fluoxetine tablets to fluoxetine capsules, Silenor to zolpidem, armodafinil to modafinil, or rosuvastatin to atorvastatin. Such switches benefit the ACO, since they reduce the total health care expenditure per patient, which is one of the major benchmarks that must be met in order to receive shared savings. In addition, the pharmacist assisted many patients in acquiring financial aid via medication assistance programs or obtaining proper Medicare Part D insurance plans.
As the pharmacotherapy expert, the pharmacist served as a drug information resource to providers while in clinic. A newsletter was drafted to provide further education, such as commonly prescribed brand name drugs with less expensive generics available and other pharmacy-related health care updates. The newsletter will continue to be published periodically for practitioners and other clinic staff. The pharmacist was a well trusted and respected member of the integrated team and was frequently sought out for recommendations and drug information. Patient education was also a vital role of the pharmacist during MTM or medication reconciliation.
The concerning number of medication discrepancies identified during TOC led to the conclusion that the TOC medication reconciliation process was disjointed and needed improvement. The lack of an electronic health record and limited communication across the continuum represent barriers that will be difficult to overcome. Increased collaboration between settings is vital, and future efforts will focus on optimizing current resources and overcoming existing barriers.
Collaborative patient care plays a central role in the PCMH and ACO models, and we believe that the pharmacist was vital for collaborative drug therapy management. During patient encounters, the pharmacist was able to identify barriers or needs of patients and refer the patients to different resources such as health coaches or case managers. In return, the health coaches in the clinic were able to refer patients to pharmacy services.
Limitations
There were several limitations to this study. First, the study was conducted during the infancy of the ambulatory care pilot. The pilot underwent many changes and transformations early on, and the pharmacy department, PCMH staff, and ACO are continuously reassessing the needs of all parties and adjusting accordingly. As the program expands, a larger system with a more robust provider population would eliminate the possibility of confounders attributed to personality or other characteristics. Second, a pharmacist was only in clinic 8 hours per week, which limits patient and provider contact time. In addition to limited hours, pharmacists were restricted in patient outreach to Medicare beneficiaries and patients with contracted hospital insurance. Finally, the hospital system lacked a universal electronic health record, and as a result, the systems used by the hospital and the PCMH could not communicate with each other. The lack of a universal system makes a smooth TOC medication reconciliation process extremely difficult and disjointed.
■■ Conclusions
The presence of pharmacists in an integrated health care team setting such as a PCMH has positive benefits for patients, the clinic, and the ACO. Recommending generic and less expensive therapeutic alternatives provides a quantifiable and dramatic effect on cost savings as a direct result of pharmacist involvement. While the majority of the cost savings was a direct savings to insurance providers, this savings also assisted the ACO in meeting benchmarks by reducing total health care expenditures per patient. In addition, there is a clear clinical benefit of using pharmacists in this setting, as demonstrated by documented CIs, which will ultimately lead to higher quality patient care. CI documentation provides valuable evidence of avoided adverse drug events and medication errors and of medication discrepancies identified across the TOC process. Finally, closing the gap in the continuum of care is an ongoing process. In the absence of a universal electronic health record, current resources will need to be optimized in order to further advance the TOC process.
DISCLOSURES
No outside funding supported this study. The authors have nothing to disclose.
Study concept and design were contributed by Tate and Hopper, along with Bergeron. Tate collected and interpreted the data, as well as wrote the manuscript, which was revised by all the authors.
|
v3-fos-license
|
2023-02-10T14:16:51.687Z
|
2021-05-13T00:00:00.000
|
256708659
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41541-021-00329-0.pdf",
"pdf_hash": "ed3436ec753be30c543a0335040e3942a7af02e0",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2996",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ed3436ec753be30c543a0335040e3942a7af02e0",
"year": 2021
}
|
pes2o/s2orc
|
Fentanyl conjugate vaccine by injected or mucosal delivery with dmLT or LTA1 adjuvants implicates IgA in protection from drug challenge
Fentanyl is a major contributor to the devastating increase in overdose deaths from substance use disorders (SUD). A vaccine targeting fentanyl could be a powerful immunotherapeutic. Here, we evaluated adjuvant and delivery strategies for conjugate antigen vaccination with fentanyl-based haptens. We tested adjuvants derived from the heat-labile toxin of E. coli including dmLT and LTA1 by intramuscular, sublingual or intranasal delivery. Our results show anti-fentanyl serum antibodies and antibody secreting cells in the bone-marrow after vaccination with highest levels observed with an adjuvant (alum, dmLT, or LTA1). Vaccine adjuvanted with LTA1 or dmLT elicited the highest levels of anti-fentanyl antibodies, whereas alum achieved highest levels against the carrier protein. Vaccination with sublingual dmLT or intranasal LTA1 provided the most robust blockade of fentanyl-induced analgesia and CNS penetration correlating strongly to anti-FEN IgA. In conclusion, this study demonstrates dmLT or LTA1 adjuvant as well as mucosal delivery may be attractive strategies for improving the efficacy of vaccines against SUD.
INTRODUCTION
The United States is in the midst of a nationwide opioid crisis, with an estimated 2.1 million individuals suffering from an opioid use disorder (OUD) 1 . Particularly alarming are the increasing fatal overdoses resulting from illicit fentanyl (FEN). Higher incidence of drug overdose deaths was proportionally associated with the number of FEN seizures in a given community 2 . FEN is believed to specifically account for nearly 30,000 of the 50,000 deaths involving opioids annually, due to its high potency, ease of manufacturing, and/or addition to other illicit drugs unknowing to the user [3][4][5] . Since the 2020 COVID-19 pandemic, there is new evidence that FEN use and opioid overdoses may also be increasing 6 . FEN and FEN-analogs rapidly penetrate the central nervous system and are 50-100 times more potent than heroin and morphine respectively. Because of this potency as well as its half-life, higher or repeated doses of the mu opioid receptor antagonist naloxone (Narcan © ) are needed to be administered quickly to reverse lethal overdoses 7 .
One option to help combat this opioid epidemic is vaccination. A vaccine specifically targeting illicit opioid(s) could attenuate the reinforcing effects of FEN and prevent overdose deaths by preventing the drug from penetrating the CNS, particularly in individuals recovering from OUD during critical periods when relapse is especially common. Such a vaccination strategy would induce serum antibodies that diminish drug activity by assembling an antibody-drug complex that is too large to cross the blood-brain barrier, but not prevent other pain medications (e.g., tramadol, morphine) from working. Vaccines for substance use disorders (SUDs) have been under development for decades, including for cocaine, nicotine, heroin, amphetamines, and other synthetic opioids but without commercial success and FDA approval [8][9][10][11][12][13] . As observed with cocaine and nicotine vaccines in Phase III clinical trials, a lack of sustainable antibody levels and consistent levels of antibodies in vaccinated subjects has been a persistent problem [14][15][16][17][18] . Yet subjects who attained high levels of antibodies against nicotine or cocaine following vaccination showed significantly reduced drug use compared to those that only achieved low levels of antibodies and the placebo group 14,19 .
FEN is not naturally immunogenic, so the drug must be conjugated to a protein in order to create an immunogenic vaccine antigen. Recent animal studies with various opioid conjugate immunogens have demonstrated development of antibodies to FEN or heroin resulting in decreasing potency of heroin and FEN distribution to the brain after experimental exposure, reducing the drug-induced antinociception (or blocking of pain) [9][10][11][12][13][20][21][22][23][24] . However, a frequent problem of these conjugate vaccines is still short duration of immunity and/or insufficient antibody magnitudes even with an added adjuvant. As recently reviewed 25 , no single adjuvant has emerged as ideal for vaccines against SUDs, though alum, TLR-based agonist, or combinations thereof have been tested. Alum, or aluminum salts, are the most commonly used adjuvants in licensed vaccines; evaluations of alum in drug abuse or addiction vaccines thus far has had limited success in inducing an enduring antibody response 13,[25][26][27] . Few studies have shown efficacy for routes other than injected delivery (e.g., by intramuscular (IM), subcutaneous, and intraperitoneal routes), with a few exceptions in vaccines for cocaine 28 . Mucosal vaccination has an advantage of needle-free delivery, lack of injection-related infections, possibility of non-medical personnel administration or self-administration, and induction of robust systemic and mucosal immunity [29][30][31] . In addition, mucosal vaccination can alter the quality of antibody isotypes, particularly IgA, and may better protect against mucosal drug exposures, (e.g., snorting or smoking), than parenteral vaccination.
A newer class of adjuvants being developed are derived from heat-labile enterotoxins from E. coli (LT) or V. cholerae. These proteins have an enzymatic A-subunit critical for adjuvant effects [32][33][34][35] that ADP-ribosylates Gsα, leading to irreversible adenylate cyclase activation and accumulation of intracellular cAMP. dmLT or LT(R192G/L211A) is the most advanced and clinically relevant protein from this adjuvant family 36 . dmLT improves parenteral and mucosal immunity to bacterial and viral antigens following a variety of routes in animal models, including IM and sublingual (SL) delivery routes [37][38][39][40][41][42][43] and has been tested in a number of recent human clinical trials [44][45][46] . In addition, dmLT enhanced antibody responses to conjugate antigens using a polysaccharide-protein conjugate against V. cholerae 47 . We have also developed a B-subunit free adjuvant called LTA1 based on the A1 domain 39,48,49 . LTA1 was specifically developed for safe intranasal (IN) use to overcome the risk of Bell's palsy with AB 5 LT proteins 48,50,51 . Like dmLT, LTA1 also activates antigen presenting cells 49 and stimulates immunity to parenteral injection of vaccine antigen 52 . However, the ability of dmLT, LTA1, or any bacterial-enterotoxin based adjuvant for vaccines targeting substances associated with use disorders has not been explored (though the cholera toxin binding B-subunit has been used as a carrier for cocaine vaccines 16 ).
The objective of this study was to test whether LT-based adjuvants can enhance the efficacy of a candidate FEN vaccine. To test this, we evaluated two FEN conjugate immunogens in combination with dmLT, LTA1, or alum adjuvants and administered to mice by IM, SL, or IN routes.
RESULTS
IM vaccination with FEN-BSA conjugate admixed with dmLT results in robust FEN-specific serum IgG and antibody secreting cells (ASCs) in the spleen and bone marrow
We first vaccinated mice with 8 μg of a commercially available FEN-BSA conjugate antigen alone or admixed with 1 μg dmLT. These formulations were delivered by prime/boost IM immunization to mice (weeks 0 and 4), with sample analyses performed two weeks after the second immunization (week 6, Fig. 1a) and compared to naïve mice. We observed a statistically significant increase in serum antibodies to the carrier antigen, using ELISA plates coated with BSA, as well as antibodies specific to FEN, using ELISA plates coated with FEN-TT, in all vaccinated mice compared to naïve mice; however, these were highest when dmLT was included in the vaccine formulation (Fig. 1b-c; F-statistics and degrees of freedom reported in Supplemental Table 1). The majority of induced anti-FEN antibodies were of an IgG1 isotype, with some IgG2a, but not IgA or IgM (Fig. 1d-e). To determine whether the increased serum antibodies corresponded with an increased number of antigen-specific ASCs, bone-marrow and spleen tissues were also analyzed for anti-FEN ASCs by ELISPOT. We found a statistically significant increase in the number of anti-FEN IgG ASCs in FEN-BSA + dmLT vaccinated mice compared with FEN-BSA vaccinated mice in either tissue (Fig. 1f-g). This was maximal in bone-marrow, where memory B-cells and long-lived plasma cells typically reside.
Anti-FEN immunity following prime/boost IM vaccination with FEN-CRM197 conjugate admixed with dmLT or LTA1 is superior to alum or antigen alone
Based on the promising results using the FEN-BSA antigen above and published FEN hapten conjugation strategies 13, we prepared a FEN-CRM197 (FEN-CRM) antigen (Supplemental Fig. 1). The E. coli-expressed CRM197 protein, or EcoCRM, is a low-cost manufactured protein derived from detoxified diphtheria toxin 53 with fewer complications reported from pre-existing immunity compared to other common carrier proteins 54,55. It has successfully been used in a recent pre-clinical heroin vaccine study 26. To test the efficacy of the FEN-CRM antigen as well as adjuvant combinations in the generation of anti-FEN immunity, we next immunized groups of mice with 5 μg FEN-CRM alone or admixed with 0.1 μg dmLT, 5 μg LTA1, or adsorbed 1:1 with 2% alhydrogel (alum). Since responses to substance abuse vaccines have not been optimal with alum 13, we aimed to achieve higher responses with dmLT- or LTA1-adjuvanted vaccination than observed with the alum group. Formulations were delivered by prime/boost IM to mice (weeks 0 and 3), followed by immunologic analyses two weeks (week 5) or six weeks later (week 9, Fig. 2a). We observed a statistically significant increase in serum antibodies to the CRM197 carrier antigen in all adjuvanted groups compared to antigen-alone vaccination, which was highest in the alum adjuvanted group (Fig. 2a-b). In contrast, antibody responses specific to FEN, determined using ELISA plates coated with FEN-BSA or FEN-TT, were highest in the dmLT and LTA1 adjuvanted groups. These serum antibody responses were still evident 6 weeks after the boost (week 9). As before, we also evaluated vaccinated animals for the number of anti-FEN ASCs in the bone-marrow (tissue selected based on the Fig. 1 experiments). We observed ASCs in all groups that were vaccinated with FEN-CRM antigen, with significantly higher levels in the dmLT and LTA1 adjuvanted groups compared with antigen alone, but not in the alum adjuvanted group (Fig. 2d-e). Evaluation of memory B cells in bone-marrow or spleen, using 5 days of ex vivo expansion and differentiation into ASCs with mitogens using standard methods 56, revealed that only the LTA1 adjuvanted group exhibited significant memory B cells in bone-marrow tissue after vaccination compared to all other FEN-CRM groups (Fig. 2f). Similar findings were also observed with a shorter time between vaccination boosters and a smaller dose of antigen (Supplemental Fig. 2).
Anti-FEN immunity following three IM vaccinations with FEN-CRM197 conjugate admixed with dmLT or LTA1 is superior to alum or antigen alone
We observed higher levels of anti-FEN immunity with dmLT and LTA1 adjuvants and FEN-CRM after prime/boost immunization (Figs. 1-2). To test how an additional booster immunization would change the magnitude of immunity post vaccination, we repeated our immunizations from above (Fig. 2), except on a prime/boost/boost schedule on weeks 0, 3, and 6, with sample collection on week 8 (Fig. 3a). As before, FEN-CRM alone or adjuvanted with alum, dmLT or LTA1 resulted in serum antibodies and ASCs to CRM carrier or FEN (Fig. 3b-e). Immune responses to carrier were more similar between adjuvant groups (than with two immunizations in Fig. 2), but higher levels of anti-FEN serum IgG, IgG1, or IgG2a or anti-FEN IgG ASCs were observed in the dmLT and LTA1 groups. No clear differences between the dmLT and LTA1 groups were observed in serum antibody analyses; however, the dmLT group had the higher number of IgG ASCs when compared to FEN-CRM vaccination alone (Fig. 3e).
Anti-FEN immunity is promoted with mucosal booster immunizations, including dmLT SL or LTA1 IN vaccination
While our studies thus far utilized IM delivery, both dmLT and LTA1 are unique adjuvants because they are also highly effective by mucosal delivery. Past studies have shown that immunization by sublingual (SL) delivery of dmLT plus antigen directly to the mucosal tissue under the tongue (in animals and humans) 37,43,[57][58][59][60][61][62] and immunization by intranasal (IN) delivery of LTA1 plus antigen to the nasal epithelium (in animals) 39,48 are safe and promote vaccine immunity. No mucosal vaccinations have previously been reported with OUD vaccines, but they could presumably be advantageous for outpatient or self-delivery immunization schemes, or to counteract intranasal (snorting) or intrapulmonary (smoking) delivery of opioids. To test whether mucosal delivery alone or with adjuvant could be effective for booster immunizations, we performed additional animal vaccinations, selecting antigen/adjuvant doses based on previous studies 48,52. Mice were first primed by IM delivery as before (e.g., Figs. 2-3) with 5 μg FEN-CRM alone or admixed with 0.1 μg dmLT or 5 μg of LTA1. This was followed by booster vaccinations with 9-10 μg FEN-CRM alone or admixed with 5 μg dmLT by SL delivery or 5 μg LTA1 by IN delivery on weeks 3 and 6. Sample collection was performed on week 8 (Fig. 4a). FEN-CRM immunization with mucosal boosting with and without dmLT or LTA1 adjuvants resulted in serum antibodies and ASCs to CRM carrier and/or FEN (Fig. 4b-e). Higher levels of anti-FEN serum IgG, IgG1, and anti-FEN IgG ASCs were observed in the dmLT SL and LTA1 IN groups, with the highest levels in the latter. The LTA1 IN group also had significant levels of anti-FEN serum IgA, but less IgG2a than the IN (no adjuvant) group (Fig. 4c). Significant levels of IgA ASCs in the bone-marrow were not observed (Fig. 4e). Similar findings were also observed with SL booster vaccination with FEN-BSA with and without dmLT, with serum IgA also detected in this experiment (Supplemental Fig. 3).
Anti-FEN serum antibodies or ASCs are significantly correlated for IgG but not for IgA
Immunologic analyses of FEN conjugate vaccinations require a different FEN hapten conjugate to evaluate the immune responses. Thus far, we had observed slight differences in vaccination outcomes depending upon the specific assay or coating antigen, with higher levels of antibodies when FEN-TT coating antigen was used (Figs. 1-3). To better compare the results of these assays, data were compiled and compared from IM experiments (Fig. 5a) or from IM prime with mucosal boosts (Fig. 5b, c). We observed that all anti-FEN IgG comparisons (e.g., FEN-BSA ELISA vs FEN-TT ELISA vs FEN-TT ELISPOT) were significantly and positively correlated using Spearman's correlation test (r values between 0.52 and 0.76). However, the limit of detection, background (estimated from naïve groups), and overall magnitude of observed responses were assay specific. These data indicate that the selection of a FEN-hapten conjugate for ELISA analyses can impact quantification of anti-FEN serum IgG antibodies. For anti-FEN IgA comparisons (e.g., FEN-BSA ELISA vs FEN-TT ELISPOT), serum antibodies and bone-marrow ASCs were not significantly correlated (Fig. 5c), indicating that another tissue, likely related to mucosal vaccination, is serving as the niche for IgA ASCs. Importantly, these comparisons indicate that serum IgG antibodies are likely being produced by the ASCs found in the bone-marrow, cells critical for maintenance of antibodies in systemic circulation; however, circulating IgA antibodies are likely being produced by ASCs in mucosal tissue or draining secondary lymphoid organs yet to be identified.
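The assay comparisons above rely on Spearman's rank correlation. The following sketch shows how such a correlation between paired readouts (e.g., an ELISA titer and an ELISPOT count for each mouse) could be computed; the paired values are hypothetical and do not reproduce the reported r values.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired readouts for 10 mice: log10 ELISA endpoint titer vs
# ELISPOT antibody-secreting cell count (per 10^6 bone-marrow cells).
elisa_log_titer = np.array([2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.8, 2.2, 3.6, 2.9])
elispot_asc = np.array([5, 40, 18, 95, 30, 12, 70, 6, 55, 22])

rho, p = spearmanr(elisa_log_titer, elispot_asc)
print(f"Spearman r = {rho:.2f}, P = {p:.4f}")

# A rank-based correlation is used because titers and ASC counts sit on very
# different scales and are unlikely to be normally distributed.
```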
Protection from FEN-induced antinociception and brain tissue distribution after FEN-CRM immunizations is observed with parenteral and mucosal vaccination with dmLT and LTA1 adjuvants, but not with alum
Next, we evaluated the protective effects of adjuvanted vaccination against FEN challenge. To do this, we repeated select vaccination groups, including 5 μg of FEN-CRM alone, or with 1500 μg alum, 0.1 or 1 μg dmLT, by IM prime/boost/boost. This higher dose of alum was chosen to improve upon the anti-FEN antibody responses observed with the 150 μg alum used in Figs. 2-4. It required a formulation volume of 100 μl for the 1500 μg alum group (delivered by 50 μl injection in both hind limbs), whereas all other IM injections were delivered at 20 μl as used previously. We also included mucosal booster groups (as in Fig. 4), including dmLT IM+SL+SL and LTA1 IM+IN+IN. All mice were vaccinated on weeks 0, 3, and 6. Four weeks after the last boost, we conducted nociception tests using tail flick and hotplate assays in mice, using challenges with 30 and 100 μg/kg fentanyl during weeks 10-12 (Fig. 6a). There was a one-week washout period between FEN test doses. These tests were chosen to evaluate the efficacy of the vaccines in reducing opioid-induced brain and spinal antinociception, providing a measure of vaccine potency; both assays have been used extensively in the identification of lead opioid vaccines 12,13. All mice developed serum anti-FEN IgG antibodies at weeks 6, 8, and 10; the highest levels of anti-FEN IgG were observed in the alum IM, 1 μg dmLT, and LTA1 IN groups (Fig. 6b). Protection from FEN-induced antinociception was greatest in the LTA1 IN group, followed by the dmLT SL and 1 μg dmLT IM groups (Fig. 6c, d).
We observed similar protection from FEN brain penetration, with the majority of injected FEN remaining in the serum, unable to penetrate the CNS (Fig. 6e). While FEN brain distribution and serum IgG antibodies at week 8 were significantly correlated (Fig. 6f), protection from antinociception (tail flick assay during 100 μg/kg fentanyl challenge) was not. This may be attributable to the limited sample size, or to the fact that tail flick analgesia is mediated at the spinal cord whereas we measured brain, but not spinal cord, levels of FEN, which can differ. Regardless, these studies reveal that both dmLT and LTA1 adjuvants delivered by parenteral and mucosal routes provide protection against FEN challenge.
Protection from FEN challenge is strongly correlated with anti-FEN serum IgA, as well as anti-FEN serum IgG binding affinity and IgG2a
To determine if we could better understand the correlates of protection in our challenge model, we evaluated serum from our protection study (e.g., Fig. 7) for anti-FEN IgG antibody affinity and antibody isotypes (IgG1, IgG2a, IgA). Both antibody isotype and binding affinity can improve drug binding and sequestration of opioids from the brain 63. We observed that both adjuvant selection and route impacted the quality of the antibody responses to FEN (Fig. 7a-c). In particular, the alum group resulted in only anti-FEN IgG1, whereas dmLT and LTA1 adjuvant groups also resulted in IgG2a, with maximal levels in the 1 μg dmLT IM group. The group that received alum also had a significantly skewed serum IgG1/IgG2a ratio (Fig. 7c). As seen previously, the highest levels of anti-FEN IgA were observed in mucosal booster groups, including dmLT IM+SL+SL and LTA1 IM+IN+IN. Next, we correlated these anti-FEN antibody affinity and isotype responses from immunized groups to each FEN challenge outcome using Spearman's correlation test (Supplemental Fig. 4). Anti-FEN IgG2a showed a moderate correlation with serum FEN post-challenge (P = 0.07, r = 0.38), and IgG binding affinity demonstrated a modest inverse relation with %MPE hotplate antinociception (P = 0.06, r = −0.28). Improved anti-FEN IgG2a isotype class switching and IgG binding affinity appeared to be the major drivers of protective responses in higher vs lower dose dmLT IM immunizations, with the expected inverse relationship to %MPE (P = 0.02, r = −0.44 and P = 0.03, r = −0.41, respectively; Fig. 7d). In contrast, anti-FEN IgA was significantly related to all FEN challenge outcomes measured (P = 0.03 to <0.001), including brain fentanyl (r = −0.78) and tail flick antinociception (r = −0.42; Fig. 7e and Supplemental Fig. 4). These results confirm the importance of the IgG2a isotype, but also reveal a previously unknown role for anti-FEN IgA in blocking FEN analgesic effects, which was maximally induced by mucosal boosting with LTA1 IN or dmLT SL vaccination.

DISCUSSION

Anti-opioid vaccines are designed to protect by preventing the drug from penetrating the CNS. To achieve this, vaccine studies have strived to produce large quantities of FEN-specific antibodies with high binding affinity. In this study, we found that FEN-carrier vaccination with combinations of dmLT and LTA1 adjuvants delivered by IM, IN, or SL routes produced high antibody titers and had significant protective effects against FEN challenge, superior to alum adjuvant. This is the only report of any enterotoxin-derived adjuvant improving efficacy of an opioid vaccine or any substance abuse vaccine. This is also the only report identifying mucosal booster vaccination and generation of IgA antibody isotypes as having a significant role in the efficacy of an opioid vaccination strategy. Administration of the FEN-CRM vaccine with dmLT or LTA1 using IM delivery evoked a potent and long-lasting immune response, with sufficient antibody levels lasting six weeks post-immunization to produce significant blockade of FEN behavioral effects. In spite of being smaller than the typical BSA carrier and having a lower haptenization ratio, CRM 197 was an effective carrier protein, showing results similar to recent studies using CRM 197 for a morphine or oxycodone vaccine 20,26 and recently for FEN vaccination 64. Minor differences were observed between dmLT and LTA1 depending upon the number of booster immunizations and the specific analyses performed (Figs. 2-3); however, these adjuvants were consistently superior to the alum adjuvant comparison groups (Figs. 2, 3, 6, and 7).
Both of these adjuvants broadened the immune response to target the FEN hapten more rapidly (e.g., after one booster) than the more commonly used alum adjuvant 12,13,23,25, whereas alum generated more rapid and higher responses to the carrier protein antigen (BSA or CRM 197). We chose dmLT IM for challenge experiments, given dmLT's advanced status in clinical trials 36 and slightly higher levels of anti-FEN IgG ASCs after three doses (Fig. 3). We observed greater protection from FEN challenge with 1 μg dmLT than with 0.1 μg dmLT. This robust attenuation of FEN's analgesic effects was likely driven by the improved FEN antibody binding and induction of IgG2a isotypes in the higher dose group (Fig. 7). IgG2a has previously been identified as superior to IgG1 in protection against opioids, including for a morphine vaccine in mice depleted of or lacking the IL-4 cytokine, which reduced class switching to IgG1 with alum-adjuvanted vaccination 63. In conclusion, dmLT may be a superior adjuvant for opioid use disorder when multiple immunizations are provided by parenteral injection, owing to higher IgG2a isotype generation and FEN antibody binding.
Given the nature of the dmLT and LTA1 adjuvants, we also tested mucosal booster immunizations. dmLT has successfully been delivered by sublingual vaccination 43,52,58,61,62 and LTA1 by intranasal vaccination 39,48. In addition, within the intended recipient population of a FEN vaccine, many individuals take daily buccal buprenorphine (a pill dissolving between the gums and the cheek) to manage opioid dependence relapse and cravings 65; the addition of a self-delivered mucosal or intranasal booster vaccine within this treatment context could be advantageous. We observed high levels of anti-FEN immunity in the animals receiving mucosal boosters, including dmLT SL or LTA1 IN, as well as the best blockade of FEN analgesia and brain tissue distribution. The LTA1 IN group achieved the best overall protection; however, despite these responses, anti-FEN IgG, IgG1, IgG2a, and antibody binding affinity were similar to the adjuvanted IM booster groups (Figs. 6-7). One difference we were able to identify was induction of anti-FEN IgA in these mucosal booster groups. IgA levels also correlated with protection from parenteral FEN challenge better than any other antibody measure assayed, and this relationship was dose-dependent (Figs. 6f and 7e, Supplemental Fig. 4). At this point, it is unclear whether serum IgA or secretory IgA (unmeasured) plays a direct role in protection from challenge or whether serum IgA is simply a surrogate of another protective response. Regardless, this is the only such report of IgA related to opioid vaccination and drug protection. IgA antibodies have been reported after cocaine and nicotine intranasal vaccines that protected animals from corresponding drug challenges 28,66,67. IgA has also long been appreciated for high-affinity binding to toxins and microbes at mucosal tissue, where most IgA-secreting plasma cells reside. This is likely the case in our study, where serum IgA level did not correlate with IgA ASCs in the bone-marrow (Fig. 5c). It is interesting to note that IgA is also found in the cerebrospinal fluid of healthy humans in both monomeric and dimeric forms 68. Future evaluations of whether and how IgA could be more protective against parenteral FEN challenge (or smoked FEN inhalation) than IgG subclasses are warranted to better understand this phenomenon. In our ELISA analyses, we observed strong detection of anti-FEN antibodies with FEN-BSA or FEN-TT conjugate antigens, but with slight variability between assays (Fig. 5). Serum IgG levels against FEN were higher with FEN-TT than with FEN-BSA coating antigens, indicating that quantification of anti-FEN immunity must be interpreted in light of the differences in sensitivity observed in assays using carrier antigens as detection systems for antibodies. For example, SL dmLT groups had more appreciable levels of anti-FEN serum IgA when FEN-TT was used as coating antigen than with FEN-BSA coating antigen (Figs. 4 and 7, Supplemental Fig. 3). In addition, we reported that serum IgG (by ELISA) significantly correlated with the number of IgG-secreting ASCs found in the bone-marrow, both of which were maximally enhanced in dmLT or LTA1 adjuvant groups. Similarly, flow cytometry studies have shown that ASC B cells at 14 days post-immunization correlated with vaccine efficacy against opioids, with adjuvant-dependent induction of these ASCs and germinal center formation (where B-cell clonal expansion, differentiation, and maturation take place) 63,69.
In conclusion, we report that dmLT and LTA1 adjuvants enhance the immunogenicity of an anti-FEN conjugate vaccine and promote a robust blockade of FEN-induced analgesia. We observed strong evidence for a greater protective effect with vaccination that included mucosal booster immunizations and subsequent induction of IgA. Future investigation and development of a FEN vaccine with these adjuvants and combinatorial or mucosal delivery routes are warranted, including evaluation of the duration of vaccine-mediated blockade of FEN analgesia, respiratory depression, and other toxic or reinforcing effects, as well as comparisons or combinations with current FDA-approved pharmacotherapies (e.g., buprenorphine). Furthermore, in light of our findings, we anticipate that combining a dmLT- or LTA1-adjuvanted opioid vaccine with antigens for other substances associated with use disorders, including cocaine, amphetamines, and nicotine, could benefit the design of new immunotherapies against SUDs.
METHODS

Antigens and adjuvants
FEN-BSA was purchased from Cal BioReagents. FEN-CRM and FEN-TT were synthesized using a FEN derivative with a carboxylic acid linker coupled to lysine residues on CRM 197 (Fina Biosolutions) or TT (Statens Serum Institute), similar to previously published methods 13. Briefly, the FEN hapten was created in a series of four chemical reactions starting with pure FEN (depicted in Supplemental Fig. 5). The product of each step was characterized and validated by 1H and 13C NMR spectra (Supplemental Fig. 6). Purity of the FEN hapten was validated by HPLC (Supplemental Fig. 7). The final product was then conjugated to CRM 197 at a haptenization ratio of 2.3. All conjugates were dialyzed in PBS and quantified using a BCA kit (Pierce) prior to immunization. dmLT GLP was produced to current Good Manufacturing Practice (cGMP) specifications by IDT in sodium phosphate buffer supplemented with 5% lactose as a lyophilized product in vials containing 400 μg product in a 3 ml sterile, multidose, Wheaton serum vial, and was stored at 4°C. His-tagged LTA1 was prepared from solubilized inclusion bodies by HPLC with a nickel-affinity column as previously described 39. Proteins were stored lyophilized and freshly resuspended prior to use (dmLT) or kept frozen at −80°C until use (LTA1). Alum (Alhydrogel® adjuvant 2%) was purchased from Invitrogen.
Animals, immunizations, and sample collections
Female BALB/c mice, 6-8 weeks of age, were purchased from Jackson Laboratories or Charles River (challenge experiments) and housed in sterilized cages. Animal studies were approved by the Tulane University and University of Houston Institutional Animal Care and Use Committees. Immunization formulations were prepared immediately before administration by admixing antigen +/− adjuvant in sterile PBS in a 20 μl volume or adsorbing to alum in a 20-100 μl volume. Animals were injected with a 0.5 cc insulin syringe into the right or left caudal thigh muscle, alternating sides with each intramuscular immunization (or both legs for the 1500 μg alum dose, 50 μl per leg; note that this dose of Alhydrogel or Al(OH)3 has a ~3.3x greater weight than free Al3+ and thus is well below the recommended human dose of 1.14 mg Al3+ (US Code of Federal Regulations, 21CFR610.15)). For intranasal immunizations, mice were first anesthetized with intraperitoneal (IP) ketamine/xylazine and then kept horizontal while formulations were pipetted into one or both nostrils. For sublingual immunizations, mice were anesthetized with IP ketamine/xylazine and then held with the jaw horizontal while the tongue was lifted with forceps and formulations were pipetted under the tongue. This position was held for 1-2 min, after which the mouths of the mice were closed and held for another minute. Immunizations were performed 2 or 3 times at 3-week intervals prior to CO2 euthanasia for sample collection or FEN challenge. Blood was collected by tail or cardiac venipuncture and processed for serum. Spleens were homogenized in 3 ml of PBS containing 2% BSA (MilliporeSigma) and 1 mM EDTA using gentleMACS C tubes and a tissue dissociator (Miltenyi Biotec). Splenocytes were then RBC-lysed using ACK lysis buffer (Gibco), filtered, and counted. Both left and right tibias and fibulas were removed, stripped of muscle and connective tissue, and bone marrow was flushed using 10 ml of PBS containing 2% BSA and 1 mM EDTA with a 30-gauge needle attached to a 10 ml syringe. Cells were filtered and counted.
Antibody ELISAs
Anti-FEN, anti-CRM, and anti-BSA serum antibody IgG ELISAs were performed using methods similar to those described in ref. 57.
FEN challenge and nociception tests
Fentanyl citrate salt was purchased for these studies (Sigma-Aldrich). Nociception tests were conducted using tail flick and hotplate assays [70][71][72] during weeks 9-12. Tail flick tests were conducted by restraining the mouse using a device that enabled a consistent angle so the tail could be exposed to an infrared heat stimulus (25 IR) 3 cm from the tip using an automated device (Ugo Basile). The time from the onset of the heat to the withdrawal of the tail (latency) was automatically recorded by the machine. Baseline latencies were first recorded following saline administration and were determined three times (1-2 min inter-trial interval). Mice were then administered one of two doses of FEN (30 and 100 μg/kg, IP) in a counterbalanced order with at least one week intervening between tests. Latencies were determined at 10 min post injection. Three measures were acquired with an inter-trial interval of 1-2 min. A single hot plate test was conducted immediately after the tail flick tests using a hotplate apparatus (Columbus Instruments, Columbus, OH). To control for baseline differences following saline, data from the tail flick assay are presented as %MPE (percent maximum possible effect) = (test latency − control latency)/(cutoff criterion − control latency) × 100. Hot plate data are presented as mean (+s.e.m.) latency. A cut-off time of 10 s for tail flick and 90 s for hotplate was used to prevent tissue damage (cut-off latency). Brain and blood samples were collected as done previously 73. For the final blood and tissue collection, mice were administered 100 μg/kg fentanyl and anesthetized with isoflurane. Ten minutes later, mice were deeply anesthetized with isoflurane. A bilateral thoracotomy was performed and whole blood was removed via left ventricle puncture with a 22-gauge needle and then processed for serum. The heart was perfused with cold PBS, then the brain was removed, washed in PBS, and immediately placed on dry ice. Samples were stored at −80°C until FEN levels were determined. After the hot-plate tests, serum and brain were collected for FEN tissue analysis.
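As a worked illustration of the %MPE formula above, the sketch below computes %MPE from hypothetical tail flick latencies using the 10 s cutoff reported for that assay; the latency values are invented for illustration and this is not the authors' analysis code.

```python
# Minimal sketch of the %MPE calculation described above (hypothetical values).
def percent_mpe(test_latency_s: float, control_latency_s: float, cutoff_s: float = 10.0) -> float:
    """%MPE = (test - control) / (cutoff - control) * 100, capped at the assay cutoff."""
    test = min(test_latency_s, cutoff_s)  # latencies beyond the cutoff are recorded as the cutoff
    return (test - control_latency_s) / (cutoff_s - control_latency_s) * 100.0

# Hypothetical example: baseline 2.5 s, post-fentanyl 8.0 s, 10 s cutoff
print(round(percent_mpe(8.0, 2.5), 1))  # -> 73.3 (%MPE)
```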
FEN tissue level analysis
Samples were diluted in an acetonitrile solution (1:3) and centrifuged at 8609 × g for 10 min. The supernatant was transferred, evaporated, and diluted 1:1 in PBS. Samples were extracted using Bond Elut Plexa PCX 3 mL extraction cartridges (Agilent), evaporated, and reconstituted in a solution of H2O:0.1% ammonium formate:0.01% formic acid. Samples were injected onto a reversed-phase Agilent Zorbax Eclipse Plus C18 column (2.1 mm × 50 mm i.d., 1.8 μm). The LC-MS/MS system consisted of an Agilent G6470A TQ with an Infinity II 1290 G7116B Multicolumn Thermostat, G7120A High Speed Quad Pumps, and G7267B Multisampler. Data were analyzed using MassHunter software.
Western blot
Protein conjugates and controls were analyzed by Western blot. Samples were loaded into NuPAGE 12% or 4-12% Bis-Tris gel wells (ThermoFisher Sci.) for gel electrophoresis, then transferred to nitrocellulose membranes using iBlot Transfer Stacks and the iBlot Gel Transfer Device (ThermoFisher Sci.). Blots were initially stained with Ponceau stain, then blocked with 5% skim milk and probed with anti-fentanyl antibody (Cal BioReagents) and goat anti-mouse IgG1-HRP (Santa Cruz). After imaging, blots were stripped with Restore Plus Stripping Buffer (ThermoFisher Sci.) and then re-developed with mouse anti-CRM antibody (Antibody and Immunoassay Consultants) and goat anti-mouse HRP antibody (Santa Cruz). Blots were imaged with Pierce™ ECL Western Blotting Substrate (ThermoFisher Sci.) and an Amersham Imager 600.
Statistical analysis
Statistical analyses were performed using Prism (GraphPad Software v7). Parametric data were analyzed by one-way ANOVA with Dunnett's post-test for comparisons against a control group, or with Bonferroni correction for comparisons of selected pairs. F-statistics, degrees of freedom, and significance values are also recorded in Supplemental Table 1. Correlation data were tested to confirm lack of normality (D'Agostino & Pearson) and then analyzed by Spearman correlation.
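A minimal sketch of this kind of analysis pipeline in Python is shown below (the paper itself used GraphPad Prism); the group values are placeholders, scipy.stats.dunnett requires SciPy ≥ 1.11, and this is an illustrative re-implementation, not the authors' code.

```python
# Illustrative sketch: one-way ANOVA with Dunnett's post-test vs a control group,
# plus a normality check followed by Spearman correlation (placeholder data).
import numpy as np
from scipy import stats

control = np.array([0.2, 0.3, 0.25, 0.28, 0.22])   # e.g., unadjuvanted group
group_a = np.array([1.1, 0.9, 1.4, 1.2, 1.0])       # e.g., dmLT-adjuvanted
group_b = np.array([1.6, 1.8, 1.5, 2.0, 1.7])       # e.g., LTA1-adjuvanted

f_stat, p_anova = stats.f_oneway(control, group_a, group_b)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

dunnett_res = stats.dunnett(group_a, group_b, control=control)  # SciPy >= 1.11
print("Dunnett P-values vs control:", np.round(dunnett_res.pvalue, 4))

# Correlation: check normality, then use the rank-based Spearman test
antibody = np.array([120, 800, 560, 90, 1500, 300, 700, 50])
brain_fen = np.array([9.0, 3.5, 4.2, 10.5, 1.8, 6.0, 3.9, 11.2])
if len(antibody) >= 8:  # D'Agostino-Pearson test needs at least 8 observations
    print("Normality P:", stats.normaltest(antibody).pvalue)
rho, p_rho = stats.spearmanr(antibody, brain_fen)
print(f"Spearman: r = {rho:.2f}, P = {p_rho:.4f}")
```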
Smoking Behaviour and Beliefs About Smoking Cessation After Bariatric Surgery
Purpose Currently, bariatric surgery is the most effective intervention for treating morbid obesity and its complications. Smoking cessation is likely to improve smoking-related comorbidities and decrease postoperative complications. This study evaluated the smoking behaviour and thoughts about smoking cessation of patients more than 18 months after bariatric surgery. Materials and Methods A cross-sectional study was performed in patients who underwent bariatric surgery from July 2012 to December 2013. A questionnaire was used to evaluate smoking status, thoughts about the health benefits of cessation and characteristics of previous quit attempts in current and former smokers. Finally, actual bariatric surgery outcomes were evaluated in current, former and never smokers. Results Six hundred nine patients (response rate 52.0%) were included. Of them, 101 (16.6%) patients were current smokers, 239 (39.2%) former smokers and 269 (44.2%) patients were lifetime never smokers. Compared with former smokers, current smokers were less aware of the beneficial effects of smoking cessation on their general health; 66.4% of the former smokers thought smoking cessation would be much better for general health, compared with 20.6% of current smokers. Total weight loss was 2.8% higher in current smokers compared with former smokers. Actual long-term bariatric surgery outcomes were not significantly different between the groups. Conclusion Despite advice to quit smoking and temporary quitting before surgery, a considerable group of bariatric surgery patients continues smoking after surgery. These patients were less aware of the beneficial effects of smoking cessation. This study emphasizes the need for better strategies to increase the number of successful cessations.
Introduction
Currently, bariatric surgery is the most effective intervention for treating morbid obesity and obesity-related complications. It contributes to long-term weight loss, improvements in comorbidities and reduction in mortality [1][2][3]. Smoking is suggested to be a modifiable preoperative risk factor that significantly increases the incidence of postoperative morbidity and mortality in bariatric surgery [4][5][6][7]. Several studies have shown that smoking cessation is associated with improved smoking-related comorbidities and a decreased incidence of postoperative complications [4,5,[8][9][10][11]. In our centre, patients are therefore urged to quit smoking before undergoing bariatric surgery and to remain abstinent after the operation.
However, several factors could negatively affect successful smoking cessation and its duration before and after surgery, which in turn could impact short- and long-term outcomes of bariatric surgery. Examples are concern about weight gain after smoking cessation, inadequate beliefs about the health consequences of smoking, doubts about the benefits of cessation, previous unsuccessful attempts and a perceived lack of support [12][13][14][15][16][17][18][19].
A few studies have evaluated smoking cessation rates after bariatric surgery, which ranged from 0 to 20.7%. They demonstrated that smokers who had quit (long) before surgery started smoking again after surgery [20][21][22]. These disappointing percentages could partly explain why several studies showed no difference in smoking prevalence before and after bariatric surgery, or even higher prevalences of smoking after surgery [23][24][25].
In this study, we evaluated the smoking status of patients at least 18 months after bariatric surgery using a questionnaire. Secondly, we compared smoking history and thoughts about the effects on health outcomes between patients who stopped smoking and those who continued. Finally, we evaluated actual bariatric surgery outcomes in the subgroups defined by smoking status.
Design, Setting and Population
A cross-sectional study was performed from July 2015 until September 2016 in a high-volume bariatric surgery centre where bariatric surgery has been performed since 2007. The hospital has been accredited as a Bariatric and Metabolic Surgery Centre of Excellence by the European Accreditation Council for Bariatric Surgery in 2013 [26]. Patients who underwent primary or revisional bariatric surgery between the 1st of July 2012 and the 31st of December 2013, and who were at least 18 months after surgery, were eligible for the current study. Preferably, patients were approached when attending the clinic for a routine follow-up visit. If patients did not show up, they were contacted by mail, e-mail or telephone if details were available. All patients in this cohort were advised to stop smoking at least 2 weeks prior to surgery. They received regular follow-up visits after surgery, with at least one visit every year after the visits of the first year.
Ethical Approval
This study was performed in accordance with the ethical standards of the Helsinki Declaration. All patients provided written consent. The Institutional Review Board of the hospital confirmed that formal ethical review was not required.
Data Collection
After enrolment, patients were asked to complete a written questionnaire. The compiled questionnaire was composed of the modified Fagerström test and other questions derived from the questionnaire based on the ASE-model of STIVORO, the Michigan Alcohol Screening Test, Jellinek's self-test on addictions, the DSM V criteria and the Compulsive Behavior Questionnaire [27][28][29][30][31][32]. The questionnaire was tested with cognitive interviewing [33]. Subsequently, we made minor changes that simplified completion of the questionnaire. The compiled questionnaire is available upon request (English, Dutch). It consisted of 51 questions on general information, current and past smoking, and alcohol and addictive drug use. In this study, we do not report the results on alcohol, drugs and other addictions.
The questions on general information were about educational level, current medication use and any readmission or reoperation since bariatric surgery. Questions on smoking involved the following: current smoking behaviour; any history of smoking; total time of smoking during lifetime, during preoperative period and postoperative period (pack-years); time of preoperative smoking cessation; type and amount of tobacco (cigarette, rolling tobacco, pipe, cigar); amount and duration of longest quit attempt; number and type of used methods at attempt; thoughts about smoking cessation; consequences on health, weight and success of bariatric surgery; reasons to quit; the experienced support by others for cessation; and chance of continuation of cessation in stressful times. Questions on alcohol and addictive drug use involved any use during last week; comparison of current use with the use before bariatric surgery; signs of dependency; problems in relations because of use; and fights/hospital admission/detention because of alcohol or drugs.
In addition, preoperative and follow-up data were retrieved from medical records. Preoperative characteristics that were extracted from medical records included sex, preoperative weight, preoperative body mass index (weight in kg/(height in m)2), diabetes mellitus type 2 (DM 2), hypertension, previous abdominal surgery, type and date of bariatric surgery, haemoglobin A1c (HbA1c) and C-reactive protein (CRP). Postoperative characteristics that were collected at the time of the questionnaire included remission of hypertension and DM 2, symptomatic gallstones, (history of) reported physical or mental problems due to excess skin (including already performed plastic surgery), reported hypoglycaemia, readmissions related to bariatric surgery or due to abdominal complaints, any reoperation, change of HbA1c, change of CRP and % total weight loss (%TWL). Reoperations included laparoscopic cholecystectomy, diagnostic laparoscopy, closure of mesenteric defects (with or without active internal hernia), redo surgery, undo surgery, revisional surgery (for instance, because of bleeding, leakage, stenosis, abscess), surgical neurectomy and incisional hernia repair.
The questionnaires and other data were collected and managed by two researchers (PW and OA). The data were collected and recorded systematically, in accordance with instructions that were formulated in advance. Deviations were discussed until consensus was reached.
Definitions
Definitions of smoking status were based on behaviour of the last week. Current smoking was defined as smoking at least once during the last week. Before surgery, three non-smoking categories were distinguished; 'never smoker', 'former smoker' (abstinence longer than 6 weeks) and 'recent smoker' (abstinence for 1 to 6 weeks). After surgery, we distinguished two non-smoking groups: 'former smokers' (abstinence since 1 week or longer) and 'never smokers'. For the postoperative smoking status, we also specified whether patients smoked continuously after surgery. We reported the behaviour to be 'continuous' when the current behaviour was present in ≥ 90% of the postoperative time. Accordingly, we used the term 'not continuous' when the behaviour was present in less than 90% of the total time after surgery. Lastly, patients who started smoking after surgery for the first time were called 'new smokers'. One pack-year was defined as 20 cigarettes or rolling tobacco daily during 1 year. One cigar equalled three cigarettes in case of cigar smoking. One water pipe equalled one cigarette. 'Addictive drug' was defined as any addictive prescribed or addictive illicit drugs. Hypertension remission was defined as a blood pressure < 140/90 mmHg after surgery, without using any antihypertensive drug in patients with a history of hypertension before surgery. DM 2 remission was defined as HbA1c <48 mmol/mol after surgery, without any antidiabetic drug in patients with a history of DM 2 before surgery. Educational level was divided into three groups: 'low' (primary education, lower vocational education), 'middle' (general secondary education, secondary vocational education, higher general and preparatory scientific education) and 'high' (higher vocational education and scientific education, scientific education-doctor of philosophy).
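To make the pack-year arithmetic above concrete, the sketch below computes pack-years from daily tobacco counts using the equivalences stated in this section (20 cigarettes per day for 1 year = 1 pack-year; 1 cigar = 3 cigarettes; 1 water pipe = 1 cigarette); the function and example values are illustrative only and are not part of the study's data handling.

```python
# Illustrative pack-year calculation using the definitions given above.
def pack_years(cigarettes_per_day: float = 0.0,
               cigars_per_day: float = 0.0,
               water_pipes_per_day: float = 0.0,
               years_smoked: float = 0.0) -> float:
    """One pack-year = 20 cigarette-equivalents per day for one year."""
    cigarette_equivalents = (cigarettes_per_day
                             + 3 * cigars_per_day        # 1 cigar counted as 3 cigarettes
                             + 1 * water_pipes_per_day)  # 1 water pipe counted as 1 cigarette
    return cigarette_equivalents / 20.0 * years_smoked

# Hypothetical example: 10 cigarettes and 1 cigar per day for 15 years
print(round(pack_years(cigarettes_per_day=10, cigars_per_day=1, years_smoked=15), 1))  # -> 9.8
```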
Outcomes Measures
With this study, firstly, we aim to estimate the frequency of current smoking, former smoking and never smoking in patients who underwent bariatric surgery at least 18 months ago.
Secondly, in patients with a history of smoking, we aim to compare questionnaire findings on history and thoughts about smoking cessation between smoking behaviour categories.
We also aim to establish the association between long-term outcomes of bariatric surgery and smoking behaviour categories.
Statistical Methods
Characteristics of patients before surgery were summarized for all patients and by postoperative smoking behaviour group. Characteristics of patients before surgery, questionnaire findings on smoking history and thoughts about smoking and long-term outcomes of bariatric surgery were compared between the postoperative smoking behaviour groups.
Values are expressed as mean and standard deviation (SD) in case of continuous variables that are normally distributed. Median and interquartile range (IQR) are presented in case of non-normal distribution. Categorical variables are expressed as frequencies and percentages.
Categorical variables were compared using chi-square tests. The independent t test (two groups) or one-way ANOVA (more than two groups) was used for normally distributed continuous variables. In case of non-normally distributed data, the Mann-Whitney U test (two groups) or the Kruskal-Wallis test (more than two groups) was used. Finally, in case of a p value < 0.2, group differences were tested separately using the chi-square test, independent t test or Mann-Whitney U test as appropriate.
The associations between postoperative smoking behaviour categories and long-term outcomes of surgery were explored using univariable logistic or linear regression. Odds ratios (ORs) with 95% confidence interval (95 CI) or betas with standard errors (SE) were presented. Since this evaluation was not the primary aim of the study, we proceeded only to a multivariable model if there was an indication for a relation for the specific outcome (p < 0.2). In a multivariable model, variables with a p value < 0.4 at univariable analysis were then further explored using multivariable logistic or linear regression (full model). By means of stepwise backward selection, variables with p > 0.05 were eliminated from the model (final model).
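As an illustration of the stepwise backward selection described above, the sketch below fits a logistic regression with statsmodels and repeatedly drops the predictor with the largest p value until all remaining predictors have p ≤ 0.05; the variable names and data are invented for illustration, and this is not the study's SPSS analysis.

```python
# Illustrative backward-selection logistic regression (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "smoker": rng.integers(0, 2, n),
    "age": rng.normal(46, 10, n),
    "preop_bmi": rng.normal(42, 4, n),
})
# Hypothetical binary outcome (e.g., remission of hypertension)
logit_true = -2 + 0.8 * df["smoker"] + 0.02 * (df["age"] - 46)
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

predictors = ["smoker", "age", "preop_bmi"]
while predictors:
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df["outcome"], X).fit(disp=0)
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.05:      # all remaining predictors are significant: stop
        break
    predictors.remove(worst)      # drop the least significant predictor and refit

print("Final model predictors:", predictors)
print(np.exp(model.params.drop("const")))  # odds ratios from the last fitted model
```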
A p value ≤ 0.05 was considered statistically significant. Data analysis was performed using IBM SPSS Statistics software package for Windows version 25 (Chicago, IL).
Study Population
A total of 609 (52.0%) patients responded and were included in the study, whereas 557 (47.5%) patients did not respond and 6 (0.5%) patients had died before the inclusion period. Respondents were slightly older than non-respondents (mean (SD) age was 45.9 (9.7) versus 42.2 (10.4) years; p < 0.001, respectively), but the remaining characteristics were not significantly different (Table 1).
Overall, 522 (85.7%) of the patients were female, the mean (SD) age was 45.9 (9.7) years and the median (interquartile range [IQR]) body mass index (BMI) was 42.5 (40.1-46.2) kg/m2. Medical history revealed that 242 (39.7%) patients had hypertension, 119 (19.5%) had DM 2 and 328 (53.9%) patients had previous abdominal surgery, including 67 (11.0% of the total population) patients with previous bariatric surgery. The educational level was reported to be 'low' by 54 (9.0%) patients, 'middle' by 435 (72.5%) patients and 'high' by 111 (18.5%) of all patients. Table 2 describes and compares the preoperative characteristics of respondents for each smoking behaviour category as reported at the time of the questionnaire after surgery.
Smoking Prevalence and Smoking Behaviour During Preceding Years
Median (IQR) time after surgery was 3.0 (2.3-3.3) years, when patients completed the questionnaire. After surgery, 101 (16.6%) patients were current smokers, 239 (39.2%) were former smokers (226 (94.6%) of them were already former smoker before surgery), and 269 (44.2%) patients were lifetime never smokers. Figure 1 demonstrates the postoperative smoking status, with information on the smoking behaviour category on the day of surgery.
In Fig. 1 and Table 3, the smoking, smoking cessation, resuming and starting rates after surgery are demonstrated.
Smoking History in Postoperative Former Smokers
Thirteen (5.4%) of all former smokers had smoked more than 10% of the time after surgery (median (IQR) 73.7% (51.0-92.3)) but had already stopped when they filled in the questionnaire.
Postoperative former smokers had a median (IQR) history of 15.0 (5.0-30.0) pack-years; this was 4.6 (2.4-12.0) pack-years for those who had a temporary relapse after surgery and 15.0 (5.5-30.0) pack-years in those who continued not to smoke.
Beliefs About the Effects of Smoking Cessation and Description of Quit Attempts
Table 4 demonstrates the ideas about the health consequences of smoking cessation in postoperative current and former smoking patients (based on the combination of pre- and postoperative smoking behaviour). Former smokers were more aware of the beneficial effects of smoking cessation on general health. Also, the result of bariatric surgery after smoking cessation was thought to be 'much better' by 67% of former smokers versus 17% of current smokers. Characteristics of current and/or previous quit attempts of all current and former smokers at the time of the questionnaire after surgery (based on lifetime history of smoking) are described in Table 5.
Outcomes of Bariatric Surgery and Smoking Behaviour
Table 6 shows the long-term bariatric outcomes per smoking behaviour category after surgery (based on lifetime smoking history). Mean %TWL in current smokers was 33.6 (SD 8.9). %TWL was 3.4% (SE 1.0) higher in current smokers compared with former smokers and 2.1% (SE 1.0) higher compared with lifetime never smokers (overall p < 0.001). In a multivariable model with adjustments for sex, preoperative BMI, HbA1c before surgery and time after surgery, %TWL in current smokers was 2.8% (SE 0.9) higher compared with former smokers and 2.1% (SE 0.9) higher compared with never smokers (overall p = 0.011). Median % HbA1c change was similar in the three groups, with somewhat more decrease in former smokers compared with never and current smokers (p = 0.013); median (IQR) absolute HbA1c decrease was 3.0 (0.0-7.0), 2.0 (0.0-5.0) and 1.0 (0.0-5.0) mmol/mol, respectively. In the multivariable analysis, where we adjusted for preoperative HbA1c and revisional surgery, the difference was not statistically significant.
Remission of DM 2, remission of hypertension, reported hypoglycaemia, readmissions because of abdominal complaints, reoperations related to bariatric surgery, symptomatic gallstones and reported physical or mental hindrance of excess skin were not statistically different between the groups. Univariable analysis of smoking subgroups and CRP change showed an indication to proceed to multivariable analysis. However, in the multivariable model, the association was not statistically significant.
Discussion
This is the first study in bariatric surgery evaluating self-reported smoking behaviour combined with thoughts about the health consequences of smoking cessation and actual health outcomes in current, former and never smokers. In this study, the prevalence of smoking after surgery was 16.6% and 44.2% of the patients had never smoked. Nineteen percent of the patients who were ex-smokers (recent or former smokers) before surgery resumed smoking after surgery. Of the patients who currently smoked or recently smoked before surgery, 15% stopped after surgery. Compared with postoperative current smokers, former smokers were more aware of the beneficial effects of smoking cessation on general health and the result of the bariatric surgery. Some of them even expected an unrealistically positive effect of smoking cessation on weight loss. Actual mean weight loss was somewhat greater in smokers compared with former and never smokers. We could not confirm that other long-term bariatric outcomes were significantly different between current, former and never smokers after surgery.

Fig. 1 Prevalence of smoking after bariatric surgery, combined with smoking behaviour category on the day of surgery. At the time of the questionnaire, 101 (16.6%) of the 609 patients reported being a current smoker; on the day of surgery, 44 of them were current smokers, 23 were recent smokers, 31 were preoperative former smokers who resumed after surgery and 3 were preoperative never smokers who started after surgery. Two hundred thirty-nine (39.2%) patients were former smokers. The majority of them (n = 226 (94.6%)) were already former smokers before surgery, 12 patients were recent/current smokers before surgery and one never smoker started and stopped after surgery. Two hundred sixty-nine (44.2%) patients were lifetime never smokers. The groups were significantly different after surgery (chi-square test p < 0.001)
Previous studies describing smoking behaviour before and after bariatric surgery and its associations have several methodological limitations [34]. Timing of registration and definitions of smoking were dissimilar, rough and/or missing; studies had small sample sizes; percentages of loss-to-follow-up were high; and the included types of surgery were often unclear. This could explain why the reported ranges of smoking prevalence are divergent, both before and after bariatric surgery (1.2-62.0% and 8.1-43.3%, respectively) [23,25,31,35,36]. It could also (partially) explain why studies investigating the association of smoking with TWL (and other bariatric surgery outcomes) have conflicting results [7, 20-23, 25, 35-37].
Theoretically, the increased TWL in relation to smoking could be explained mainly by the effect of nicotine on (1) energy expenditure, (2) appetite/satiety and (3) eating behaviour (motivational and emotional influences). The absence of nicotine may cause reduced energy expenditure, more appetite and different eating behaviour. Furthermore, after smoking cessation, the rewarding value of food increases, which induces intake of greater amounts of sugar and fat to achieve pleasure similar to that of smoking [38]. One study found that weight gain was less in patients who were treated with cognitive-behavioural therapy focussing on concerns about weight gain after cessation, as opposed to patients who were treated with strategies to reduce the weight gain itself [39]. The weight gain after smoking cessation is on average 4.5 kg in the general population and occurs especially during the first 6 months; 13% of quitters gain more than 10 kg. Lastly, two studies showed relatively higher smoking prevalences more than 6.5 years after surgery (27.5-43.0%), possibly due to the typical relapse pattern of smoking addiction, or because patients use smoking for weight control when the normal slope of weight gain after surgery begins [13,22,35,37,38]. The time after surgery in our study was too short to confirm these findings. The analyses of associations of postoperative smoking with the other clinical outcomes showed no significant differences. Remarkably, compared with former and never smokers, the prevalences of DM 2 and hypertension in current smokers were already relatively low before surgery. The observed differences in TWL and the number of cases in the subgroups are probably too subtle to translate into detectable changes in metabolic regulation, symptomatic gallstones and other clinical outcomes after surgery. Although we were not able to statistically verify previously described relations between smoking and postoperative comorbidities, we are still convinced that smoking cessation should be promoted.
Beliefs About Smoking and Health Risks Among Smokers and Ex-Smokers
This is the first study exploring the attitudes of smokers and ex-smokers toward the benefits of smoking cessation and the risks of smoking in bariatric surgery patients. These attitudes and beliefs have been investigated in other populations [12][13][14][15][16][17][18][19]. We could imagine that the success of and willingness to quit are affected by several aspects, of which concern about weight gain after smoking cessation is one, both before and after bariatric surgery. Other factors that could affect the willingness to quit are of a social and cultural nature; in our study, we found differences between former and current smokers in the experienced support and the number of smokers in the social environment. Future (qualitative) research could deepen the knowledge about (former) smokers with respect to the contributing or interfering beliefs and factors associated with successful cessation in bariatric surgery populations. Psychosocial theories are used to investigate behavioural change, and smoking cessation in particular [14-19, 40, 41]. In general, smokers are more likely to deny or rationalize their risk of developing smoking-related diseases (self-exempting beliefs). When smokers experience enough cognitive dissonance, this could discomfit the self-exempting beliefs, thereby inducing a smoking cessation attempt [17,41]. This could explain the differences in beliefs between smokers and former smokers about the effects of smoking cessation on general health, bariatric surgery and weight loss. The proportion of 'do not know' responses can be explained in several ways: respondents truly did not know, or they were not able to figure it out due to cognitive dissonance. Some factors are known to be associated with poor self-reported health, including smoking and, among other things, BMI, gender, age and educational level [42,43]. It is possible that these factors also influence the attitudes toward the health effects of smoking cessation, but we did not adjust for them.
Limitations
Our study has several limitations. First of all, self-reported smoking can be subject to recall and reporting bias [12,44]. It is known that reporting bias is greater in situations where quitting expectations on the part of the healthcare team are higher [44]. In our experience, these biases are highest before surgery, because continuation of smoking could result in postponement of the surgery. In this study, we used written self-report without interference of the attending doctor. Thus, the (retrospective) design could have been a positive aspect, because patients did not have to fear rejection when they retrospectively reported that they were a current smoker on the day of surgery. Additionally, we previously demonstrated that the sensitivity and specificity of written self-reported smoking behaviour compared with serum cotinine in patients more than 18 months after bariatric surgery were 93.5% and 96.4% [34]. Thus, we expect that the contamination of the 'former smoker' category with smokers who reported their smoking status incorrectly was limited. However, associations with smoking could actually have been more distinct had we been able to prevent any misclassification. Second, the response rate was only 52%. Apparently, the bariatric surgery population is not easily willing to participate in studies concerning smoking behaviour, considering that most of the response rates in other questionnaire studies concerning bariatric surgery and smoking were even lower or unclear [21,24,31]. Additionally, the percentages of loss-to-follow-up after more than 2 years in bariatric surgery patients are high (26-95.7%), even in studies involving only medical record review, often above 40% [7,23,45,46]. We compared preoperative characteristics of responders and non-responders to check for selection bias, which seems limited. Still, the response rate could influence the representativeness of the data. Patients with specific characteristics may have systematically refused to participate, i.e., based on smoking, the most (un)satisfied, (un)healthy or optimistic patients could have ignored our request to participate. Therefore, we interpret the results with caution, because attrition bias could affect the magnitude and even the direction of the associations that we found. Because of the possibility of Berkson's bias, the current findings cannot be externalized to other populations and should first be examined in other bariatric surgery populations.
Third, the questionnaire was developed specifically for this study and was not validated. However, we carefully selected questions used in previous publications and in clinical practice. Also, we used cognitive interviewing to improve the formulation of the questions. Therefore, we are convinced that we can draw reliable and clinically relevant conclusions. Lastly, we included a heterogeneous group in terms of type of bariatric procedure and primary versus revisional surgery, which could have influenced our results. On the other hand, in terms of preoperative assessment and postoperative follow-up duration and method, this group was uniform.
In conclusion, 3 years after bariatric surgery, 16.6% of the patients were smoking, and 44.2% had never smoked in their entire life. Former smokers were more aware of the beneficial effects of smoking cessation on general health and the result of the bariatric surgery compared with current smokers. Actual mean weight loss was somewhat greater in smokers compared with former and never smokers. We could not confirm that other long-term bariatric outcomes were significantly different between current, former and never smokers after surgery. The present study emphasizes the need for larger cohort studies with long-term follow-up, investigating better pre- and postoperative strategies to convince smokers to quit and to prevent former smokers from relapsing. Future research should focus on these strategies, possibly by addressing concerns about weight gain and other reasons interfering with quitting.
MR defecography in ano-rectal dysfunction: a clinical-radiological correlation study
The aim of this work is to assess the diagnostic accuracy of MR defecography in ano-rectal dysfunction [ARD] for proper treatment planning. MR defecography was performed in 30 patients with ARD for detection of its functional and structural causes. Increased perineal descent was detected in 70% of cases, rectocele in 46.7%, rectal intussusception in 40%, cystocele in 26.7%, uterine prolapse in 27.7%, enterocele in 13.3%, and paradoxical puborectalis contraction in 30%. MR defecography is an essential diagnostic tool for optimal management of patients with ano-rectal dysfunction.
Background
Constipation is a major health problem as it is considered one of the most common digestive complaints [1]. Ano-rectal dysfunction [ARD] is an important sub-type of this disorder, involving almost 50% of constipated patients [2]. It is defined as the urge to defecate but impaired ability to expel the fecal bolus [3]; patients who suffer from this syndrome complain of excessive straining, impaired rectal evacuation, pain and/or bleeding after defecation, and sometimes the need for manually assisted defecation [4].
Evaluation and treatment of these patients have been difficult, as they require assessment of pelvic floor integrity and function in its three compartments: the anterior compartment containing the urinary bladder, urethra, and prostate; the middle compartment containing the vagina and uterus; and the posterior, anorectal compartment [5]. As a result, many imaging modalities have been used for further characterization of pelvic floor abnormalities, such as endoanal sonography and endoanal MRI [6], which can depict the anal sphincter complex and its pathological changes in refined anatomical detail, but both fail to assess pelvic floor function [7].
For many years, fluoroscopic defecography was used to evaluate different causes of ARD including paradoxical puborectalis contraction, increased perineal descent, rectocele, enterocele, and rectal intussusception, but this modality has its drawbacks: it cannot depict the pelvic floor soft tissues, it is projectional in nature, and it carries ionizing radiation hazards [8]. Recently, MR defecography emerged as an ideal modality considering its multiplanar capability, superior temporal resolution, and excellent soft tissue contrast in evaluating pelvic organs and supporting soft tissue structures, in addition to assessing the defecation process dynamically [9].
The aim of the study was to evaluate the benefits of MR defecography in patients with ano-rectal dysfunction and its added diagnostic value, and hence better management and outcome.
Methods
This prospective study included 30 patients with ano-rectal dysfunction, from May 2015 till October 2015 [12 males, 18 females, age range 20-77 years, mean age 48.5 ± 15.7], and a control group of 10 healthy adults with normal bowel habits [4 males and 6 females, age range 25-62 years; mean age 43.5 ± 13.7]. Inclusion criteria were adult patients with chronic constipation for at least 3 months fulfilling at least 2 of the following according to the "Rome Diagnostic Criteria III" for functional constipation [10]: fewer than 3 motions per week, straining, hard stools, sensation of incomplete evacuation, sensation of ano-rectal obstruction, and manual assistance needed for defecation. Exclusion criteria were patients with secondary constipation due to bad dietary habits; a structural cause such as colonic strictures, tumors or volvulus; systemic diseases including metabolic and endocrinal disorders such as diabetes mellitus, hypothyroidism, hyperparathyroidism, pregnancy, hypokalemia, or hypercalcemia; neurologic disorders such as stroke, head injury, spinal injury, multiple sclerosis, or Parkinson disease; and Hirschsprung disease and connective tissue disorders such as amyloidosis and scleroderma.
Clinical assessment
Clinical evaluation was performed for all patients by detailed history taking and physical assessment of the abdomen and anal examination. Colonoscopy was done in all patients to exclude any obstructing cause such as a stricture, tumor or polyp. Colonic transit time was also measured to exclude patients with colonic inertia. Medical treatment was given to patients with ano-rectal dysfunction for 3 months in the form of bulk-forming laxatives, and patients who responded to the medical treatment were excluded from the study.
MR defecography was performed in all patients; no patient preparation was needed, and the procedure was explained to the patients beforehand to ensure their cooperation throughout scanning. Consent was taken from all patients, along with human ethics committee approval from the institutional review board of the private hospital where the study took place.
Image acquisition
MR defecography was performed on a 1.5 Tesla closed MR scanner "MAGNETOM Avanto, Siemens, Germany," using a body-array surface coil. First, the patient was placed in the left lateral decubitus position for insertion of 300 ml of ultrasound gel through a rectal tube; the examination was then done with the patient in the supine position with the hips and knees flexed, simulating the physiological defecation position. Initially, static imaging was done to evaluate pelvic anatomical details with axial and coronal T2-weighted turbo spin-echo sequences with the following parameters: TR/TE 4000/100, matrix size 256 × 256, FOV 25 mm-35 mm with RFOV 100%, slice thickness 4 mm. Dynamic imaging was then performed using a T2-weighted multiphasic 2D steady-state free precession sequence [SSFP] in the mid-sagittal plane through the anal canal with the following parameters: TR/TE 5.3/2.4, matrix size 320 × 220, FOV 40 mm, slice thickness 10 mm. This sequence was run for 2 min while the patient was instructed to strain until defecation occurred, acquiring about 250 images [2 images/s], and was repeated for another 2 min if the patient failed to defecate.
Image analysis
Images were analyzed on a PACS workstation [SECTRA IDS7, Sweden] by two radiologists [with 4 years of experience in reading MRD], and the findings were recorded by consensus of both. First, static images were reviewed to detect any pelvic floor abnormality; then dynamic mid-sagittal images were evaluated in cine loop mode in the two phases: rest and defecation.
Normally at rest, all the pelvic organs, namely the urinary bladder base, vaginal vault, and peritoneal cavity contents [omental fat-small bowel-sigmoid colon], should be above the pubococcygeal line (PCL), which is the line joining the inferior border of the symphysis pubis and the last coccygeal joint. Also, at rest, the ano-rectal angle (ARA), which is formed between the posterior border of the rectum and the central axis of the anal canal, is between 65 and 100°, with no noticeable differences between males and females [11]. During straining, the ARA increases and the perineum descends; its landmark is the ano-rectal junction (ARJ), and descent is considered normal when caudal migration is less than 2 cm relative to the resting position (Fig. 1). During defecation, as a result of relaxation of the puborectalis muscle and anal sphincter, the rectum and anal canal come into alignment, causing further widening of the ARA [12]. An increase of the ARA of less than 15-20° is considered abnormal [13].
Presence of organ descent was measured as the perpendicular distance of the ARJ (rectal descent), bladder base (cystocele), vaginal vault (uterine prolapse), and omental fat-small bowel-sigmoid colon (enterocele) below the PCL [14]. It was graded as follows: mild 2-< 3 cm, moderate 3-< 6 cm, and severe ≥ 6 cm [15]. Associated findings, such as rectocele, rectal intussusception, and paradoxical puborectalis, were also documented. Rectocele is defined as anterior rectal wall protrusion beyond the expected rectal wall contour during defecation and is graded as follows: mild (< 2 cm), moderate (2-4 cm), and large (> 4 cm) [4]. Rectal intussusception is internal invagination of the rectal wall and is classified according to location into intra-rectal and intra-anal, and according to thickness into mucosal or full thickness [16]. Paradoxical puborectalis is detected when there is failure of the puborectalis muscle to relax during defecation, with an increase of the ARA of less than 15-20° or even a decrease [11].
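To make the grading rules above explicit, the sketch below applies the descent thresholds (mild 2–<3 cm, moderate 3–<6 cm, severe ≥6 cm below the PCL) and the paradoxical puborectalis criterion (ARA widening of less than 15–20°, or a decrease, during defecation); the function names and example measurements are illustrative only and are not part of the study's image analysis workflow.

```python
# Illustrative grading of MR defecography measurements using the thresholds above.
def grade_descent(descent_below_pcl_cm: float) -> str:
    """Grade organ descent measured perpendicular to the pubococcygeal line (PCL)."""
    if descent_below_pcl_cm < 2:
        return "normal"
    if descent_below_pcl_cm < 3:
        return "mild"
    if descent_below_pcl_cm < 6:
        return "moderate"
    return "severe"

def paradoxical_puborectalis(ara_rest_deg: float, ara_defecation_deg: float,
                             min_widening_deg: float = 15.0) -> bool:
    """Flag failure of puborectalis relaxation: ARA widens < 15-20 degrees or narrows."""
    return (ara_defecation_deg - ara_rest_deg) < min_widening_deg

# Hypothetical example measurements
print(grade_descent(4.2))                    # -> "moderate"
print(paradoxical_puborectalis(90.0, 95.0))  # -> True (only 5 degrees of widening)
```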
Results
Fifty-nine cases complaining of constipation were evaluated; 18 cases responded to medical treatment and hence were excluded from the study. The remaining 41 cases underwent colonoscopy, and 3 cases were excluded due to the presence of colonic mechanical obstruction (2 cases of benign strictures due to diverticulosis and 1 case of a malignant mass). The remaining 38 cases underwent colonic transit time measurement, which excluded 4 more cases of colonic inertia; the remaining 34 cases showed an outlet obstruction pattern, for which MR defecography was done, and 4 cases were excluded due to inability to perform defecation during the examination. Clinical assessment showed 10 cases of rectal descent and prolapse (33.3%) [6 females and 4 males], 12 cases of rectocele (40%) [10 females and 2 males], 4 cases of rectal intussusception (13.3%) [1 female and 3 males], and 5 cases of cystocele (16.6%), all in the female population. There were 2 cases of uterine prolapse (11.1% of the female population) and 3 cases of paradoxical puborectalis (10%), all in the male population, with no enterocele detected clinically.
By MR defecography, rectal descent was detected in 21 cases (70%): 7 cases (33.3%) were severe [6 females and 1 male], as seen in Figs. 2, 3, and 4, and 14 cases (66.6%) were moderate [7 females and 7 males]. Fourteen cases of rectocele were observed (46.7%): 3 cases were large (10%), all females (Fig. 3), and 11 cases were moderate (36.6%) [8 females and 3 males] (Fig. 2). Twelve cases of rectal intussusception were found (40%), all intra-rectal; 4 cases (13.3%) were full thickness [1 female and 3 males] (Figs. 3 and 4) and 8 cases (26.6%) were mucosal [2 females and 6 males] (Fig. 2). Cystocele was seen in eight cases (26.7%): five cases were moderate (16.7%) (Fig. 2) and three cases were mild (10%) (Fig. 3); all were females, a statistically significant difference compared with males (P value 0.002). Uterine prolapse was seen in five cases (27.7% of female cases): three cases were moderate and two cases were mild. Four cases (13.3%) showed enterocele: one was severe, in a female (Fig. 3), and three were moderate [two females and one male] (Fig. 4). Failure of the puborectalis to relax during defecation was detected in nine cases (30%) [two females and seven males]; in three of these cases, all males, no other abnormality was detected (Figs. 5 and 6). The remaining six cases were four males and two females, all showing moderate rectal descent along with paradoxical puborectalis. Paradoxical contraction of the muscle was seen in 6 cases (Figs. 5 and 6), while 3 cases showed an increase in the ARA of ≤ 10° (Fig. 7). MR defecography showed a statistically significant difference over clinical findings in detecting rectal descent, rectal intussusception, enterocele, and paradoxical puborectalis, with no significant difference in detecting rectocele, cystocele, and uterine prolapse (Table 1). Both clinical assessment and MRD findings show a predominance of structural causes of ano-rectal dysfunction in females, while the functional cause, paradoxical puborectalis, was predominant in males (Tables 2 and 3).
Discussion
Ano-rectal dysfunction (ARD) is of complex etiology, requiring a multidisciplinary approach to assess pelvic floor anatomy and function in its three compartments [17]. It is mostly treated medically, but nearly 20% of cases need surgery, and each operation should be tailored to the precise cause of ARD to obtain optimal results and to decrease both complications and recurrence [18].
Paradoxical puborectalis contraction is one of the commonest causes of functional constipation [19]: the puborectalis muscle fails to relax, or even contracts, during defecation, causing no appreciable increase, or even a decrease, in the ARA and resulting in ARD [20]. The etiology of this condition is unclear [21]. Its diagnosis depends on both physical examination and ano-rectal physiologic testing, such as ano-rectal manometry and anal/pelvic floor electromyography (EMG), but both have limitations: voluntary contraction of the puborectalis muscle can occur due to anxiety from the rectal examination itself, and EMG has false-positive results caused by pain from needle placement [22]. MR defecography, on the other hand, can detect this pathological action of the puborectalis muscle by measuring the ARA at rest and during defecation; if there is less than 15-20° of widening, or even a decrease, in the ARA during defecation, the diagnosis is made, directing these patients to non-surgical treatment like biofeedback therapy [9]. In the current study, there was a statistically significant difference between clinical assessment and MRD findings, reflecting the capability of the latter to measure changes in the ARA during defecation; this agrees with Nikjooy et al. [9], who showed that abnormal ARA changes had a sensitivity of 95%, specificity of 92%, PPV of 95%, and NPV of 92% in differentiating patients with this disorder from normal subjects. Structural causes of ARD include descending perineum syndrome, an abnormal pelvic floor descent during straining due to weakness of the pelvic floor muscles and ligaments from pudendal nerve injury as a result of obstetric trauma or chronic straining; it can occur in any of the three compartments of the pelvis, and most of the time more than one compartment is simultaneously involved [23]. In this study, single-compartment involvement was detected in 17 cases (56.6%), two compartments in 8 cases (26.6%), and three compartments in 2 cases (6.6%) (Table 4). The importance of this finding lies in directing the treatment plan to target more than one compartment to avoid recurrence of symptoms. Rectal descent was the most frequent finding in the current study, representing 70%, in agreement with Rentsch et al. [24] (60%) and El-Nashar et al. [25] (65%); it was more frequent in females than males, with no statistically significant difference, in accordance with the results of Savoye et al. [26].
The second most frequent finding was anterior rectocele, representing 46.7%; it was more frequent in females (78.5%), and the large ones (> 4 cm) were detected only in females. This result is similar to most studies evaluating rectocele, which report a significantly higher percentage among females [27]; some studies, such as Dietz and Clarke [28] and Carter and Gabel [29], included only female populations. Although rectocele can be detected clinically, and in the current study there was no statistically significant difference between the clinical and MRD findings, better evaluation of rectocele size, emptying, and associated findings can only be achieved by imaging [5].
Enterocele was found in 4 patients (13.3%), being more predominant in the female population; none could be detected clinically, showing the superiority of MRD in detecting it, which is important prior to surgical treatment to avoid bowel injury [4]. Rectal intussusception was detected in twelve cases (40%), all intra-rectal; four cases (13.3%) were full thickness [one female and three males] and eight cases (26.6%) were mucosal [two females and six males]. Cystocele, representing 26.7%, was detected only in females, as it results from weakness of the pubo-cervical fascia from obstetric trauma. Uterine prolapse was seen in 5/18 female cases (27.7%).
The current study showed that the additional data obtained from MRD over clinical evaluation play an important role in adjusting the treatment plan, regarding non-surgical management such as starting with biofeedback and physiotherapy in cases involving paradoxical puborectalis, while the surgical treatment decision was adjusted after
Conclusion
MR defecography is essential in the work-up of patients complaining of ano-rectal dysfunction, providing valuable information to the physician, aiming for proper management and decreasing both complications and recurrence.
|
v3-fos-license
|
2018-12-15T17:09:40.238Z
|
2015-02-27T00:00:00.000
|
154985501
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.ccsenet.org/journal/index.php/ijbm/article/download/43350/24890",
"pdf_hash": "fca97287f858c227ae193692f5c31497741c563b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3001",
"s2fieldsofstudy": [
"Business"
],
"sha1": "fca97287f858c227ae193692f5c31497741c563b",
"year": 2015
}
|
pes2o/s2orc
|
The Influence of Retailer Enterprise Marketing Information System on Bullwhip Effect
This study aims to investigate the influence of utilizing an enterprise-level marketing information system, in the form of adopting a combination of software tools, on the reduction of the bullwhip effect. In order to formulate our research hypotheses, we reviewed the literature extensively. The empirical study findings indicated a positive influence and relationship between the Enterprise Marketing Information System, in the form of ERP software, Social Media Monitoring tools, and Business Intelligence tools, and the reduction in the Bullwhip Effect.
Introduction
As the 21st century dawns, the battle for customers has never been so intense; deregulation, diversification, and globalization have stimulated a dramatic rise in competition (Algharabat & Zamil, 2013). Companies are forced to switch from a product-centric approach to a customer-centric approach.
Marketing initiatives have a considerable impact on supply chain management (Lummus, Leslie, & Vokurka, 2003). Marketing mediation involves matching supply to demand, so effective market mediation requires accurate and timely information about the dynamics of supply and demand and incorporates IT-enabled processes, including collaborative planning, forecasting and replenishment (CPFR), advanced supply chain planning, and logistics and distribution management. Information sharing and collaborative forecasting can mitigate the impact of demand variability on operations and reduce the upstream escalation (Lee et al., 1997).
Leading firms, after gradually adopting the product, sales, and marketing philosophies, are now facing the new challenge of being customer-centric. Interest in the vendor-customer relationship through relationship marketing has been growing. Through this relationship orientation, companies are trying to build trustworthy relationships with customers to achieve competitive advantage by accumulating and analyzing customer-focused information (Parvatiyar & Seth, 2001). Furthermore, Customer Relationship Management (CRM), in terms of sales force automation and marketing automation, is poised for revolutionary changes in its organizational context as well as in its relationship with customers (Achrol & Kotler, 1999). Companies need to assess the business situation and understand the real requirements for automation. In addition to supporting customer order fulfilment processes, the information captured and managed within CRM applications plays a vital role through the integration of CRM with Enterprise Resource Planning (ERP) and Supply Chain Management (SCM) to enhance customer service (Xu, Yen, & Lin, 2002).
Implementing ERP, CRM, and SCM based enterprise systems can have a significant impact on corporate performance (Hendricks, Singhal, & Stratman, 2007). For example, co-creation of value (joint creation of value by the company and the customer) fundamentally challenges the traditional distinction between supply and demand. It often creates an experience in which consumers can have an active dialogue and co-construct personalized experiences. The focus is not only on demand-side innovation for new products and services, but also on innovating new experiences for the co-creation of value. It is an innovative environment where demand and supply are emergent and contextual; supply will be associated with facilitating a unique consumer experience on demand (Prahlad & Ramaswamy, 2004).
Contemporary enterprise marketing information systems involve Web 2.0 based applications, where retailers can look for opportunities to unleash the power of Social Media Monitoring (SMM) as a new marketing channel to promote their products and services. In particular, the web has created opportunities for electronic word-of-mouth communication through online discussion forums, electronic bulletin board systems, newsgroups, blogs, review sites, and social networking sites (Cheung, Lee, & Rabjohn, 2008). Therefore, if companies want to unlock the potential of social media to reinvent their customer relationships, they need to think about CRM in a new light while building and adopting social CRM strategies (Barid & Parasnis, 2011).
Even though there is mixed evidence about the aggregate prevalence of the bullwhip effect and its consequences (Cacheon et al., 2007), we noticed that research investigating the impact of the marketing information system, as a part of demand management, in providing demand certainty from an enterprise perspective, especially to mitigate the bullwhip effect, is very limited in the Arab region (e.g., Al-Shaikh, 2010; Ismail, 2011). Thus, our research study aims to investigate empirically the influence of enterprise information systems on information sharing, in the form of utilizing state-of-the-art information technology based enterprise software tools such as ERP software, social media monitoring tools, and business intelligence tools, and the corresponding reduction in the bullwhip effect.
The Bullwhip Effect
The bullwhip effect (Forrester, 1961; Lee et al., 1997) has been studied extensively because of its ability to explain the behavior of inventory in supply chains. The bullwhip effect involves risk to manufacturers since it concerns the amplification in order-size variance at upstream nodes of a supply chain (Lee et al., 1997): demand variability increases from downstream to upstream (Sharma, Balan, Vrat, & Kumar, 2006). At the macro level, the bullwhip effect induces inefficiencies in production, scheduling (capacity utilization), sourcing, distribution, revenue generation, and its realization. At the operating level, it generates more inventory and keeps it in the most inappropriate place to meet a specified service level. At the performance level, it can reduce the velocity of cash and destroy potential revenue through price discounts, and it can potentially dilute a competitive strategy and position (Ravichandran, 2003). It indicates higher inventories at the upstream and downstream ends of the supply chain as a result of deficient information sharing, insufficient market data, and deficient forecasting capability (Svensson, 2003).
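To make the amplification concrete, the following minimal simulation sketch (illustrative only; the demand parameters and the simple moving-average order-up-to policy are assumptions, not a model taken from the cited studies) shows how order variance grows as one moves upstream in a toy two-tier chain:

```python
import random
import statistics

random.seed(1)


def simulate(periods=500, window=4, lead_time=2):
    """Toy two-echelon chain: a retailer forecasts consumer demand with a
    moving average and places orders to cover lead-time demand; the wholesaler
    does the same with the retailer's orders. Order variance typically
    amplifies upstream (the bullwhip effect)."""
    demand = [max(0.0, random.gauss(100, 10)) for _ in range(periods)]

    def echelon_orders(incoming):
        orders, history = [], []
        position = incoming[0] * (lead_time + 1)  # starting inventory position
        for d in incoming:
            history.append(d)
            forecast = statistics.mean(history[-window:])
            target = forecast * (lead_time + 1)       # order-up-to level
            order = max(0.0, target - position + d)   # replenish what was shipped
            position += order - d
            orders.append(order)
        return orders

    retailer_orders = echelon_orders(demand)
    wholesaler_orders = echelon_orders(retailer_orders)
    return (statistics.variance(demand),
            statistics.variance(retailer_orders),
            statistics.variance(wholesaler_orders))


v_demand, v_retail, v_wholesale = simulate()
print(f"variance: demand={v_demand:.0f}, retailer orders={v_retail:.0f}, "
      f"wholesaler orders={v_wholesale:.0f}")
```

Running the sketch shows order variance increasing at each upstream echelon, which is the pattern the strategies discussed below aim to dampen.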
Strategies for Mitigating Bullwhip Effect
The management of multiple relationships across the supply chain is often referred to as SCM. SCM addresses the supply chain from the point of origin to the point of consumption and requires cooperation and coordination between companies' activities and resources in a supply chain. While SCM helps in mitigating the bullwhip effect, implementing SCM strategies such as demand forecasting with vendor managed inventory and continuous replenishment programs, order batching, point-of-sale data, electronic data interchange, and reducing the price-fluctuation effect by offering everyday low prices (Moyux, Chaib-draa, & D'Amours, 2006) is essential for companies wishing to reduce the costs associated with the bullwhip effect. There are three possible remedies which companies often follow to mitigate the bullwhip effect: (i) information sharing, i.e., demand information at a downstream site is transmitted upstream in a timely fashion; (ii) channel alignment, i.e., the coordination of pricing, transportation, inventory planning, and ownership between the upstream and downstream sites in a supply chain; and (iii) operational efficiency, i.e., activities that improve performance, such as reduced costs and lead time (Lee, Padmanabhan, & Whang, 1997).
Demand information aggregation is considered one of the important strategies for reducing the bullwhip effect. It means gathering or generating information regarding all demands and orders for a specific supply chain at the required level of detail (Fransoo & Wouters, 1996) and sharing this information across the supply chain. Information aggregation has the following influences in supply chains and markets (Glazer, 1991): the greater the information intensity of a firm, (i) the shorter its product life cycles; (ii) the more the traditional product-market boundaries; (iii) the greater the tendency to move towards market or customer knowledge as opposed to product knowledge; (iv) the greater the buyer purchasing power; (v) the greater the reliance on flexible marketing systems, so that a greater degree of managerial discretion about the non-product elements of the marketing mix and about marketing communication activities such as distribution, advertising, and the sales force can become value addition;
(vi) a better competitive position and better market-share leadership with differentiation; (vii) longer-term market share and shorter-term profitability; (viii) greater possibility of maximizing the number of transactions with the same loyal customer; (ix) increased strategic alliances; (x) better cooperation strategies; (xi) promotion of agile manufacturing trends by allowing the customer to participate in product design; (xii) greater involvement of the marketing department with the activities of other functional areas; and (xiii) more reliance on decision making that considers both external and internal factors, as well as the tangible and intangible factors of businesses along the supply chain.
Information aggregation influences the supply chain in terms of cost, quality, delivery speed, and agility by assembling organizational resources in response to market opportunities, making the enterprise seamless and extendable. Each extended enterprise member contributes to the collective enterprise by sharing facilities, resources, technology, and know-how. Transactions within the extended enterprise represent a balance between markets and hierarchies. It brings changes in the organization of productive resources, in the relationships between traditional competitors, and in the governance of transactions, and has stimulated new thinking about inter-organizational information flows and the nature of competitive cooperation. The interdependent destinies of allied companies of the collective enterprise reflect the new environment in which supply chain competes against supply chain, and in which traditional competitors increasingly find themselves mutually beneficial collaborators (Greis & Kasarada, 1997). The impact of information sharing on the bullwhip effect has been discussed by many authors (Lee et al., 2000; Moinzadesh, 2002; Moyaux, Chaib-draa, & D'Amours, 2006; Agarwal et al., 2009; Hussain & Drake, 2011).
Research Model and Hypotheses
Figure 1 shows the research model and the hypothesized relationships among its variables, which are all developed based on the relevant literature. As shown in Figure 1, the model consists of four interrelated latent variables: core ERP software tools, SMM software tools, Business Intelligence software tools, and reduction in the bullwhip effect, where reduction of the bullwhip effect is the dependent variable, depending on three independent variables: use of ERP software, use of SMM software, and use of Business Intelligence software tools.
Influence of ERP on Mitigating Bullwhip Effect
ERP is a strategy whereby all levels of the supply chain have visibility of final demand (Torres & Maltz, 2010). A multiuser ERP system helps to reduce the bullwhip effect (Badilo-Pina, 2012), and the use of ERP as an IT tool often helps to mitigate the bullwhip effect resulting from information distortion to upstream firms of the supply chain (Lai, Xu, & Zhu, 2010, p. 59). A case study (Ravichandran, 2006) investigating the influence of ERP implementation shows that the use of ERP software has a positive influence in mitigating the bullwhip effect. Information delays can be removed with the use of ERP systems (Burns & Janamanchi, 2006). Advanced resource planning, as a module of extended ERP, improves supply chain effectiveness and helps to mitigate one of the primary sources of the bullwhip effect (Nieuwenhuyse et al., 2011). Elragal and Al-Serafi (2011) investigate the relationships between ERP and business performance and find that ERP reduces the amount of inventory. Further, Tsai (2011) asserts that ERP influences corporate performance such that inventory levels are expected to decrease. Mandal and Gunasekaran (2002) posit that using bar-coding techniques also improves inventory reduction. Information sharing on inventory reduction can help mitigate the bullwhip effect (Lee, Padmanabhan, & Whang, 1997; Lee, So, & Tang, 2000).
ERP software usage, combined with techniques such as vendor managed inventory, just-in-time purchasing, and SCM software, can considerably reduce time and information delays, which as a result mitigates the bullwhip effect (McCullen & Towill, 2002; Disney & Towill, 2003a; Disney & Towill, 2003b; Ravichandran, 2008). Furthermore, ERP helps in improving order management and cycle times (Kelle & Akbulut, 2005; Ravichandran, 2008; Tarantilis, Kiranoudis, & Theodorakopoulos, 2008) and can reduce procurement costs (Baudmann, Ahsen, & Diaz, 2004). Thus, H1: There is a significant positive contributory relationship between using ERP software and the reduction in the bullwhip effect in the supply chain.
Influence of SMM on Mitigating Bullwhip Effect
SMM tools are becoming essential to measure social media performance, business impact, and return on investment. SMM tools allow businesses to monitor information related to the business, product competition, key words, and phrases, with results categorized by the type of social medium, such as tweet, blog, post, forum, news item, video or image, eWord-of-Mouth, and real-time updates of information through e-mail alerts and Really Simple Syndication (AlSudairi, Vasista, Zamil, & Algharabat, 2012). Further, AlSudairi et al. (2012) show that eWoM, CRM tools, Web 2.0 tools, and Business Intelligence tools can contribute to information sharing and thereby help mitigate the bullwhip effect, each in its own way.
Though digital actions can positively influence e-business value creation in terms of increasing sales, consumers will migrate to being more active participants and creators of content as social media increasingly becomes the backbone of the digital experience. Social media actions can generate nearly 2.5 times more conversations per 100 consumers than digital actions (Bihr, 2010). The new trends enabled by state-of-the-art information and communication technology are transforming and streamlining supply chain information management. For example, SCIM 2.0, which stands for effective supply chain information management in supply networks using Enterprise 2.0, has the potential to include semantic representation techniques and apply them to the members of a whole supply chain network. The primary objective of Enterprise 2.0 is to support internal and external information and communication processes and to ensure that relevant information is available through the use of Web 2.0 technologies to stem the bullwhip effect (Auinger & Nedbal, 2009). Social capital is also identified with influence, control, and power, and strong social norms provide increased solidarity among industry groups.
IDG Research Services, in partnership with Kemp Goldberg, conducted an online survey in the USA on the influence of social CRM on the supply chain among business decision-makers with involvement in or responsibility for supply chain operations. The survey concluded the following: (i) current adoption of social media for use in the supply chain is relatively low and is more often used as a method of community participation as opposed to replacing or augmenting traditional CRM activities; (ii) while current adoption is low, there is interest among organizations not currently using social media in their supply chain relationships in doing so, and one key inhibitor to adoption is the willingness and ability of supply chain vendors to engage their customers via social channels; (iii) among both current and future users, opportunities to engage seem to be focused primarily on mainstream services as well as private communities, vendor wikis, and blogs; (iv) this suggests that vendors may start by creating a corporate presence on Facebook, Twitter, LinkedIn, and perhaps targeted user communities as a place to start the conversation with their customers; respondents want to stay on top of industry trends but also want to gather specific vendor-related intelligence, whether that is pricing, product specs, or shipment status, and they want social media to be a tool for managing vendor relations as well as a tool that can help inform purchase decisions and enable dialogue with vendors/experts; and (v) respondents that are utilizing social media in the supply chain provide evidence that they are deriving business value from doing so; however, in order for near-term adoption to really grow, vendors must educate the market on the benefits of social media in the supply chain, increase the visibility of their social presence, and address privacy, confidentiality, and security concerns.
A social media engagement strategy can be achieved by developing and implementing integrated and coordinated Web 2.0 tools. A high level of social norms among supply chain partners can positively influence information sharing (Chiu et al., undated), which in turn helps mitigate the bullwhip effect (Lee & Whang, 2000). To reduce the bullwhip effect, suppliers try to get real-time market information through brand visibility (Lin, Chiu, & Wu, 2013), and the use of Web 2.0 and other mash-up techniques helps in creating real-time market information (Cooke & Buckley, 2008). Thus, H2: There is a significant positive contributory relationship between using Social Media Monitoring software and the reduction in the bullwhip effect in the supply chain.
Influence of BI on Mitigating Bullwhip Effect
Multi-dimensional deep-dive web analytics can optimize marketing efforts with: (i) geo-profiling insights, (ii) search engine analytics, (iii) internal search analysis, (iv) user behaviour insights, (v) social media influencers, and (vi) customer segmentation (AlSudairi et al., 2012). Web intelligence, when combined with Business Intelligence (BI), can provide campaign optimization intelligence and site optimization intelligence. Rooted in rigorous scientific methodologies, BI provides on-demand conversion of real-time samples of attitudinal and likely-behaviour data into validated insights as part of predictive intelligence, so that managers can peer into the future with increased certainty; it uses advanced analytics and visualization tools for this purpose. Web-based predictive intelligence systems are able to reach out and collect accurate data from areas once obscured by their separation from the business. Applying the concepts of BI to data from SCM systems, supply chain analytics seeks to provide strategic information to decision makers in organizations. BI has the potential to provide essential, valuable, interpretation-based information regarding not only customer service delivery time and cost derived from the downstream to the upstream supply chain using analytical eCRM techniques, but also the solving of strategic business problems by bringing visibility to key supply chain processes with supply chain intelligence (AlSudairi et al., 2012).
The optimal supply chain is built around the needs of the end customer; the perfect supply chain includes only those activities that the end customer values and is willing to pay for. This requires developing an accurate supply chain intelligence capability. The effectiveness of supply and demand balance within a firm is probably best measured by the supply chain operations reference model (SCOR). The SCOR model shows that the capability to develop supply chain metrics such as demand intelligence, logistics intelligence, demand/supply alignment, product intelligence, and supply intelligence can be achieved by applying data warehousing and data mining techniques over social CRM and other external data such as SMM data (Haydock, 2003). AlSudairi et al. (2012) posit that the use of BI software in analyzing SMM capability can positively influence the bullwhip effect in a unique way. Thus, actionable customer insights can be developed using web analytics, social media monitoring, customer/business feedback through new web sites and online communities, and web content/text mining and analysis. Web 2.0, in the form of eCRM 2.0, aims at exploiting both the networking and the social/customer intelligence of Web 2.0 by integrating and engaging customers and communities along the firm's value chain operations (AlSudairi et al., 2012), and further helps in mitigating the bullwhip effect.
A world-class BI environment allows retailers to increase their visibility into inventory management without hampering daily operations. By extracting information from disparate source systems into a centralized repository such as an enterprise data warehouse, retailers can concurrently report on metrics related to their supply chain, sales, production, and internal operations to make better fact-based business decisions. By utilizing data warehousing that supports trending on both historical and future data metrics, such as weeks of supply, sell-through, inventory turnover, gross margin return on inventory, and shrinkage, retailers can improve information quality and accuracy, manage inventory levels to avoid lost sales or over-supply, provide external vendors with increased visibility into product performance, and allow managers and executives to make more timely decisions using a common set of data trusted across the entire user community (Algara & Charen, 2008). Supply chain control dashboards, when combined with electronic solutions and data calculation capability, can help improve operative decisions in the supply chain (Dreyer et al., 2009) and reduce the bullwhip effect (Kim & Lm, 2002).
H3: There is a significant positive contributory relationship between using the Business Intelligence software and reduction in Bullwhip Effect in Supply Chain.
Research Method
A field study was employed to test the hypotheses, based on a simple random sample for collecting the required data. The study population consists of IT departments at manufacturing and process industries, retail, transportation/logistics, and wholesaler/distributor firms in Saudi Arabia and Jordan. The data were collected through a pre-designed questionnaire directed to marketing people. The questionnaire was validated by academicians and experts, whose opinions and comments were considered while designing it. Following the IDG Research suggestions, and based on the self-administered questionnaire, a survey was conducted yielding 300 qualified responses. Participants received an email message inviting them to participate in the survey, and each participant received a unique URL link to access the survey. The research construct operationalization is explained in Table 1.
Table 1. Research construct operationalization (columns: Research Construct; Author(s)); the recoverable items are listed after Figure 1 below.
Exploratory Factor Analysis
The analysis started by examining the structure and dimensionality of the study constructs using exploratory factor analysis (EFA) and reliability analysis. After examining the pattern matrix of the EFA, all items had loadings greater than 0.4 and communalities greater than 0.5.
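The original analysis was run in SPSS; purely as an illustration (the file name, item data, factor count, and rotation below are assumptions, not the study's settings), a comparable screening against the loading and communality cut-offs could be scripted as:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Hypothetical responses: rows = respondents, columns = the 14 questionnaire items.
items = pd.read_csv("survey_items.csv")

# Four factors are assumed here to mirror the four constructs in the model.
fa = FactorAnalyzer(n_factors=4, rotation="promax")
fa.fit(items)

pattern = pd.DataFrame(fa.loadings_, index=items.columns)
communalities = pd.Series(fa.get_communalities(), index=items.columns)

# Screening rules quoted in the text: loadings > 0.4 and communalities > 0.5.
weak_loading = pattern.abs().max(axis=1) <= 0.4
low_communality = communalities <= 0.5
print("Items to reconsider:", items.columns[weak_loading | low_communality].tolist())
```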
Measurement Model
We evaluated the measurement and structural equation models using SPSS AMOS. The measurement model includes 14 indicators, and we provide its results in Table 2, including the standardized factor loadings, standard errors (SE), t-values, and average variance extracted (AVE). The standardized factor loadings (λ) are all greater than .6. The composite reliabilities for ERP (0.92), SMM (0.92), BIS (0.92), and BRE (0.87) are all within the acceptable range (Hair et al., 1998). Moreover, the AVE of each construct exceeds the minimum value (0.5 or above) recommended by Hair et al. (1998), indicating convergent validity. The square root of the average variance extracted by each construct exceeds the correlations between constructs (Table 3), demonstrating discriminant validity. Thus, our instrument had satisfactory construct validity (Anderson & Gerbing, 1988; Fornell & Larcker, 1981). Further, we used a battery of fit indices (χ² = 200.250, df = 70, χ²/df = 2.86; comparative fit index [CFI] = .924; goodness-of-fit index [GFI] = .924; Tucker-Lewis index [TLI] = .921; incremental fit index [IFI] = .907; root mean square error of approximation [RMSEA] = .0601) to assess the validity of the proposed model, all of which are acceptable (Byrne, 2001; Hair et al., 1998). In Table 3, the figures under the diagonal are the Pearson (r) correlations between the variables, and the diagonal elements are the square roots of the average variance extracted.
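For reference, the composite reliability and AVE figures quoted above follow directly from the standardized loadings; a minimal sketch (the loading values below are invented for illustration, not the study's estimates) is:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of each indicator is 1 - loading^2."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + errors)


def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)


# Hypothetical standardized loadings for a four-indicator construct.
erp_loadings = [0.88, 0.85, 0.90, 0.83]
print(round(composite_reliability(erp_loadings), 2))       # ~0.92
print(round(average_variance_extracted(erp_loadings), 2))  # ~0.75
```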
Discussions
This research investigates the impact of utilizing enterprise-level marketing information systems on the reduction of the bullwhip effect. We adopted a combination of software tools, ERP software, Social Media Monitoring tools, and Business Intelligence tools, to study this impact. We found that utilizing ERP software often helps organizations mitigate the bullwhip effect resulting from information distortion to upstream firms of the supply chain. This result is in accordance with previous research (Lai, Xu, & Zhu, 2010; Ravichandran, 2006; Burns & Janamanchi, 2006; Nieuwenhuyse et al., 2011). Utilizing ERP software often reduces inventory levels, improves business performance via a reduced number of employees, and reduces time and information delays.
All of the above has a positive impact on mitigating the bullwhip effect, which often enhances supply chain effectiveness.
Furthermore, utilizing SMM tools to interact with supply chain transactions and to monitor information related to the business, product competition, key words, and phrases, with results categorized by the type of social medium (such as tweet, blog, post, forum, news item, video, or image), ensures that relevant information is available through the use of Web 2.0 technologies to stem the bullwhip effect. A social media engagement strategy can be achieved by developing and implementing integrated and coordinated Web 2.0 tools. Utilizing SMM among supply chain partners can positively influence information sharing, which is expected to help mitigate the bullwhip effect.
Moreover, utilizing BI software provides on-demand conversion of real-time samples, which reduces customer return handling costs, marketing costs, and time to market for products or services. Further, BI has the potential to provide information that supports customer service delivery time and cost from the downstream to the upstream supply chain using analytical eCRM techniques, and also to solve strategic business problems by bringing visibility to key supply chain processes with supply chain intelligence (AlSudairi et al., 2012). Thus, the use of BI software positively influences the reduction of the bullwhip effect in a unique way, and it can improve information quality and accuracy.
Managerial Implication
In a modern society where internet usage is increasing, organizations must recognize the importance of employing an enterprise marketing information system, in the form of ERP software, SMM tools, and BI tools, which often reduces the bullwhip effect. Therefore, we encourage organizations to use the above tools in order to: (i) ensure that all levels of the supply chain have visibility of final demand, (ii) reduce time and information delays, (iii) improve order management and cycle times and reduce procurement costs, and (iv) improve information sharing and obtain real-time market information.
Limitations
We acknowledge the following limitations. First, we did not include particular SMM tools in our study, such as the impact of Facebook or Twitter on reducing the bullwhip effect; we therefore invite other researchers to investigate this issue. Second, while conducting this research we did not measure the effect of organization size; future research should investigate organization size as a moderator of the proposed relationships. Third, the generalizability of our study may be limited to the environment in which we conducted this research (a non-western context), and further research is welcome to apply this study in a western context.
Figure 1. Proposed research model.
Table 1 items (as recoverable from the text): ERP1: Reduction in inventory levels (Elragal & Al-Serafi, 2011; Tsai, 2011; Mandal & Gunasekaran, 2002; Lee, Padmanabhan, & Whang, 1997; Lee, So, & Tang, 2000); ERP2: Reduction in time delays (McCullen & Towill, 2002; Disney & Towill, 2003a); SMM1: We use social media sites to interact with our supply chain transactions/vendors/partners in the form of e-Word-of-Mouth (AlSudairi et al., 2012); SMM2: We are aware of our primary supply chain vendors'/partners' usage of social media sites or services for communicating relevant information (Auinger & Nedbal, 2009); SMM3: Our supply chain transactions/vendors/partners are likely to use social media sites to provide visibility of their real-time market information through their brand presence and product promotions (Lin, Chiu, & Wu, undated; Cooke & Nick, 2008); SMM4: Our supply chain transactions/vendors/partners find that business value exists by following social media software (IDG & Kemp Goldberg, 2011); III. Business Intelligence Software (BIS1-BIS3) contributes positively to reducing the bullwhip effect (Elbashir, Collier, & Davern, 2008); BIS1: Web-based predictive intelligence capability can reduce variability (AlSudairi et al., 2012); BIS2: BI software allows integrating and engaging customers and communities along the firm's value chain operations to reduce uncertainty (Algara & Charen, 2008); BIS3: BI software improves information quality and accuracy in a quantitative aspect to help in time-to-
Table 2. Measurement model summary.
|
v3-fos-license
|
2023-11-17T16:12:06.408Z
|
2023-11-14T00:00:00.000
|
265250213
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "http://igsspublication.com/index.php/ijppr/article/download/247/289",
"pdf_hash": "f68062b0618c89132b51880e51b74f68c764f5ec",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3003",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"sha1": "4bc1d38a8d59bf84109327a59e35c2614e209e58",
"year": 2023
}
|
pes2o/s2orc
|
Employing Strategic Planning in Making Financing Decisions: An Analytical Study on a Sample of Iraqi Commercial Banks
This study provides a comprehensive analysis of the impact of employing strategic planning on financial decision-making, using both exploratory and descriptive methods. The results demonstrated a consensus among participants regarding the importance of strategic planning and financial decisions in enhancing the competitiveness of organizations. The study also emphasizes the importance of continuous research and precise analysis of financial challenges, predicting future market changes. The researcher strongly recommends making data-driven decisions, maximizing the benefits of data analysis and financial modeling techniques in informing strategic financial decisions.
Introduction
The economic, administrative, and political transformations in the local and regional environment, in addition to the effects of globalization on the international scene, have led to increased interest in studying the tools that have an effective impact on the success of institutions and organizations, regardless of their type and activity. Among the most important of these tools to attract researchers' attention in recent years is strategic planning. Strategic planning is one of the main factors that has contributed to coping with internal and external changes, and it aims to advance organizations in various administrative, economic, and financial fields. Research in the fields of management and economics, as well as market studies carried out by organizations, has confirmed that strategic planning plays an effective role in developing methodologies to protect against changes and risks (Jarallah, 2017). Despite the difficulties and problems faced by institutions and organizations as a result of continuous changes at all administrative and economic levels, strategic planning continues to have a significant impact on achieving their strategic goals. It also keeps pace with modern trends in administration and economics, such as globalization, which depend on the realization that the environment is an integrated global unit (Theeb, 2020). It is a systematic effort aimed at making critical administrative and economic decisions, including financing decisions.
Preparing future plans is one of the main activities that helps organizations avoid multiple challenges and crises; it represents systematic efforts aimed at making decisive administrative and economic decisions, including financing decisions. Finance arouses the interest of individuals, companies, and governments because finance is linked to economic development. Individuals want to obtain finance in order to satisfy their needs and desires, with
Research Problem
The use of strategic plans in making financial decisions is an issue of great importance in the world of business and corporate management, as the correct financing decision is based on a strategic plan built on the main objectives of the organization and is the criterion for its success and continuity in the current competitive market (Amici, 2020). Using strategic plans in making financial decisions also contributes to directing companies towards achieving specific goals, taking advantage of available opportunities, and dealing with challenges effectively (Agger, 2018). In addition, by employing strategic plans in making financial decisions, priorities are set and planned to achieve sustainable growth and business expansion, in addition to controlling expected risks; at the same time, these plans contribute to the optimal exploitation of available resources. Most administrative studies that focused on implementing strategic plans in the administrative decision-making process found that institutions and organizations applying this approach improve and develop their performance to achieve their desired goals accurately and effectively (Alvesson, 2020).
Thanks to this planning process as part of an integrated management process, institutions are able to increase their competitiveness, achieve outstanding performance, and pay their financial obligations on the specified dates, in addition to achieving their goals. For this reason, strategic planning is a critical tool for ensuring the success of organizations. Based on the above, it is possible to clarify the problem of the study in the following main question:
How effective is strategic planning in making financing decisions?
The main question results in a number of sub-questions, which are: Q1: What is strategic planning? Q2: What is the concept of financing and financing decisions?
The research is based on a main hypothesis that there is an impact of strategic planning on financial decisions, from which the following sub-hypotheses emerge: (1) there is an impact of the information dimension on financial decisions; (2) there is an impact of the time dimension on financial decisions; (3) there is an impact of the spatial dimension on financial decisions; (4) there is an impact of the institutional dimension on financial decisions; (5) there is an impact of the human dimension on financial decisions; (6) there is an impact of the financial dimension on financial decisions; and (7) there is an impact of the influence dimension on financial decisions.
Study by Abdul-Lawi & Babana (2020)
Salwa Abdel-Lawi and Dalila Babana conducted a study entitled "The Role of Strategic Planning in the Decision-Making Process." The aim of this study was to establish the importance of strategic planning in the institutional context and its role in making the right decisions. The study relied on the descriptive approach; the authors also pointed out that strategic planning represents an attempt to anticipate the future of the organization and to set goals that contribute to defining the boundaries of the organization's directions and the processes necessary to achieve them. The study concluded that the forward-looking aspect, which is based on planning for the future, is neglected in organizations that suffer from severe centralization, due to bureaucratic practices that negatively affect the decision-making process.
Study Hilary (2023)
The study, "Financial Decision Making: An Overview" in the Encyclopedia of Financial Decision Making, 2023, represents a comprehensive research effort by researchers from Georgetown McDonough Business School.The study focuses on the topic of financial decision making, which has become increasingly complex over time, due to the increasing influence of new insights from academic research, new models used in society, as well as modern tools provided by private entities.Researchers in this groundbreaking study show the state of the art in their areas of expertise and provide predictions about future trends.The research covered a wide range of topics, from traditional ones such as regulation and the role of analysts, to more cutting-edge ones such as artificial intelligence, financial neuroscience and robo-advisory.This study highlights the increasing importance of understanding financial decisions in light of continuing advances in economic and technological fields.The value of this study is enhanced by the methodology used in it as a reliable reference for researchers and those interested in this topic.
Definition of Strategic Planning and its Importance
The strategic process represents the planning and organizational effort that aims to achieve the long-term goals of an organization by using available resources and adapting to the conditions of its environment, whether those conditions are known in advance or unknown. The importance of this process lies in enabling the organization to interact strongly with challenges and changes in its surrounding environment. This is especially important for governmental and non-profit institutions, as these institutions face changing, complex, and potentially hostile challenges (Brorström, 2020, 23(5), 611-630). Therefore, the strategic plan is considered a platform that guides the organization's management in making key decisions, whether regarding its audience or customers, and how to provide services or interact with them (Atef, 2016, p. 181).
The researcher defined it procedurally as follows: "Strategic planning is one of the basic tools that helps organizations achieve their future goals and adapt to changing challenges. It is a continuous and integrated process that contributes to the organization's ability to adapt to the requirements of the changing environment and improve its overall performance." The importance and benefits of strategic planning appear in clarifying values or goals distinct from the empirical analysis of alternative policies, which is usually a prerequisite; to show the most appropriate means for achieving the desired ends, every important relevant factor is taken into account (John M. Bryson, 2018, p. 319).
Finance
It is defined as "providing the necessary cash amounts to advance and develop projects, whether private or public, and it can also be defined as the process of obtaining funds from their various sources and using and exploiting them optimally."Including obtaining the necessary cash, sound strategies must be developed for the available methods for obtaining cash, and how to Its investment, how to repay the debt, its interest, or what is called (debt service), and measuring the risk of this financing(Krimades, 2022: p. 22)Based on the above, finance was defined as a group of administrative functions associated with managing cash security and ensuring that the institution is able to achieve its goals and meet its obligations on time."(Sheikha, previous source, 2022: p. 12)All types of investments involve risks, and it is necessary for strategic plans to be based on financial statements and analysis of the company's performance before initiating investment or requesting financing.(Herbert B. Mayo, 2023: p:96) Types of financing decisions and their impact on institutional strategy.
Financing decisions relate to managing and directing an organization's financial resources, and include choosing different sources of financing, investing in new projects, distributing profits, managing debt, and controlling costs. These decisions greatly impact an organization's strategy. Among the major types of financing decisions (Bryson, 20(3), 2018, pp. 317-339) are project financing and investment decisions; these include investing in new projects or expanding current activities, and such decisions are made based on an analysis of available opportunities and an expectation of potential returns and risks.
Capital financing decisions relate to raising the necessary capital for the institution, whether by issuing new shares or obtaining loans from banks or other sources of financing. Capital management decisions relate to how to use available capital effectively and achieve the best possible return on investments. Debt management decisions relate to determining the appropriate level of debt and dealing with it on an ongoing basis to avoid expected financial problems. Dividend distribution decisions relate to determining the percentage of gains that can be distributed to shareholders versus reinvesting profits in the growth of the organization.
The researcher concludes that these financing decisions play a major role in directing the organization's strategy, as they determine how to exploit financial resources to achieve its goals and ensure sustainable growth. When these decisions are made wisely and based on careful analysis, they can enhance an organization's ability to compete and contribute to the successful realization of its vision (Richard A. Breal, 2011).
Analyzing the financial environment and forecasting challenges and opportunities
Analyzing the financial environment and forecasting challenges and opportunities is a vital process for institutions and organizations seeking success and continuity in a rapidly changing financial market. They must understand the internal and external financial factors that affect the organization's performance and make sound strategic decisions to deal with them. Several tools can be used to analyze the financial environment and to evaluate future financial challenges and opportunities (Kitsios F. a., 2017, pp. 1-24). SWOT analysis is one of the most prominent tools used in analyzing the financial environment; it focuses on identifying the strengths and weaknesses of the organization (internal factors), as well as identifying opportunities and threats in the external environment. The analysis contributes to developing directive strategies based on strengths, facing challenges, and dealing with threats (Kitsios, 2019, 25(4), pp. 606-624). In addition, SWOT analysis has become a major tool used by companies in strategic planning processes, as it helps in examining the strengths, weaknesses, opportunities, and threats of organizations (Mostafa Ali Benzaghta, 2021, p. 64). PESTEL analysis focuses on evaluating the political, economic, social, technological, environmental, and legal factors that may affect an organization; it helps in examining the external environment and understanding how it affects financial operations (Ivana Marinovich Mitrovic, 2020, p. 96).
Competition analysis: focuses on studying competitors' strategies, strengths, and weaknesses in the market, determining the strength of their influence, and anticipating their potential strategies (John A. Czepiel, 2020, p. 16).
Hence, future financial challenges and opportunities must be evaluated. When data from an analysis of the financial environment are available, the evaluation can begin; this requires some important steps. Identify challenges: the organization must identify factors that may challenge it in the future, such as increased costs or changes in government policies.
Potential impact analysis: The organization must analyze how these potential challenges will impact the organization's finances.
Identify opportunities: the organization must look for potential financial opportunities that could arise from this changing financial environment; can these challenges be exploited to expand into new markets or improve internal operations? (John M. Bryson, 2019). Develop future strategies: based on the assessment of future financial challenges and possibilities, appropriate financial strategies must be developed to deal with them; these may include reducing risks, developing new sources of financing, and improving capital management.
It is important that analyzing the financial environment and assessing challenges and opportunities is an ongoing process. Organizations must keep track of ongoing changes in the financial environment and resort to best practices to make smart, informed financial decisions (Ahmed, 2017, pp. 151-173).
Compatibility of financial objectives with institutional strategy
An organization's success requires that financial goals be consistent with the corporate strategy. When an organization adopts a particular strategy, it must set financial objectives that support that strategy and ensure the achievement of its vision and strategic objectives in the short and long term. This consistency ensures that financial resources are directed effectively, and contributes to achieving sustainable financial performance and successful growth of the organization.
Determine short and long-term financial goals
Short-term financial goals: These goals focus on achieving financial results in the near future, usually within one to three years. They are often related to quarterly or annual financial returns, and to increasing sales or profitability in the short term.
Long-term financial goals: These goals aim to achieve financial results in the long term, usually exceeding three years. They include sustainable growth for the company, expanding market share, increasing financial value for shareholders, and achieving a distinguished position in the market (Molnár, 2017, pp. 37-44).
Aligning financial goals with the company's vision and strategic goals: this requires simulating the vision and goals, effective coordination between departments, monitoring performance, and making adjustments (Molnár V. a., 2017, 1, pp. 35-40).
The fourth section: identifying the influencing factors and challenges
Implementing financing plans may face many influencing factors and challenges that must be taken into consideration when developing the company's financial strategy. Among the influencing factors and challenges that companies may face in implementing financing plans are the following (Tooth, 2018). Monetary and economic policy: financing plans are strongly influenced by the monetary and economic policies adopted by the central bank and the government; for example, changes in interest rates and inflation can significantly affect the cost of debt and loans and the availability of financing.
Regulatory and legal legislation: companies must comply with financial legislation, taxes, and laws related to finance; regulatory and legal challenges may affect a company's flexibility in financing. Financial and liquidity risks: companies must determine the financial risks they face and how to manage them, as financial risks vary, including financial market fluctuations, currency price fluctuations, and uncertainty about the liquidity of assets. Effectively identifying and managing these risks plays a critical role in a company's financing strategy and its success in achieving its financial goals.
As for the obstacles and challenges that companies may face, they are evident in a lack of financing, high debt, technological challenges, and market competition. These obstacles represent major challenges that may affect a company's ability to implement its financial plans and achieve its goals in a sustainable manner. Companies need effective strategies to overcome these challenges, ensure their continuity in the market, and achieve sustainable success (Foster Provost, 2021).
Therefore, to overcome these influential factors and challenges, companies must develop harmonious financial strategies that ensure the effective achievement of their financial and strategic goals. Financing plans must be compatible with current economic conditions and support the institution's vision, mission, and goals. Balancing financial and strategic requirements is the way to ensure sustainable growth and success in a changing economic environment (Stephen Ross, 2022).
Research community
To achieve the main objectives, the study adopted the purposive random sampling method; the sample was chosen randomly and was limited to a community of workers in the financial sectors in particular.
It is clear from Table (1) that:
Gender: the percentage of males in the study sample reached 56.85%, while the percentage of females was 43.15%, so the majority of the working group is male.
Age group: the highest percentage among the age groups was in the group from 40 to less than 50 years old (44.8%), while the lowest was in the group less than 30 years old (3.2%).
Years of service: the highest percentage was among those with 11-15 years of service (31.5%), while the lowest was among those with 1-5 years (2.8%).
Descriptive analysis of the research variables
This section presents and analyzes the results of the field study using statistical tools such as measures of central tendency and dispersion, as well as inferential statistics to test the main hypothesis of the study. A questionnaire based on a five-point Likert scale was prepared and distributed in (248) copies, consisting of three sections: the first section (demographic data), the second section with items related to the variable (strategic planning), and the third section with items on the variable (the impact of financial decisions). The statistical outputs for the answers to all items of the main variable (strategic planning) show an overall arithmetic mean of (4.216), reflecting a high degree of agreement; a standard deviation, measuring the dispersion of answers from their arithmetic mean, of (0.277); and a coefficient of variation, reflecting the amount of homogeneity, of (6.56%). This demonstrates a high degree of homogeneity in the content of the items, reaching (93.44%), and indicates agreement on the importance of applying the strategic planning strategy across diversifying funding sources, regulatory compliance, the competitive landscape, and staff turnover and changes in staff, which is considered one of the most important factors that helped develop the internal and external variables aiming to advance organizations in all administrative, economic, and financial directions, and which supports the role of strategic planning in developing a methodology to protect against changes and risks.
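The per-item statistics reported in this section (arithmetic mean, standard deviation, coefficient of variation) are straightforward to reproduce; a minimal sketch with invented Likert responses (not the study's data) illustrates the computation:

```python
import statistics


def describe_item(responses):
    """Mean, standard deviation, and coefficient of variation (%) for one
    five-point Likert item."""
    mean = statistics.mean(responses)
    sd = statistics.stdev(responses)  # sample standard deviation
    cv = 100 * sd / mean              # coefficient of variation, in percent
    return round(mean, 3), round(sd, 3), round(cv, 2)


# Hypothetical answers from a handful of respondents to one questionnaire item.
item_responses = [5, 4, 4, 5, 4, 5, 4, 3, 5, 4]
print(describe_item(item_responses))  # e.g. (4.3, 0.675, 15.7)
```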
Descriptive statistics for the axis (strategic planning)
The fourth paragraph ranked first in terms of importance; it included (She believes incorporating strategic planning improves the efficiency of financing decisions over time) within the (time) dimension, with an arithmetic mean of (4.36), a standard deviation of (0.578), and a coefficient of variation of (13.26%), which shows the high homogeneity in the content of the paragraph and indicates the importance of incorporating strategic planning to improve the efficiency of financing decisions over time.
The seventeenth paragraph ranked last in terms of importance; it included (Aligning financial objectives with overall strategic objectives is a priority in your bank's decision-making process) within the (time) dimension, with an arithmetic mean of (4.18), a standard deviation of (0.783), and a coefficient of variation of (18.73%), which shows the degree of homogeneity in the content of the paragraph and indicates the importance of aligning financial objectives with overall strategic objectives in the bank's decision-making process.

(Excerpt from the item-level statistics: "The bank adapts to changes in regulatory policies and provides relief to its clients' accounts" — mean 3.963, SD 0.529, CV 13.36%; item 18, "The bank reviews annual reports to assess its compliance with regulatory guidelines" — mean 4.000, SD 0.741, CV 18.52%.)

The statistical outputs for the second variable (financial decisions) show an overall arithmetic mean of (4.129), which reflects a high degree of agreement, with a standard deviation of (0.221) and a coefficient of variation of (5.34%), demonstrating a high degree of homogeneity in the content of the paragraphs, reaching (94.66%). This indicates agreement on the importance of applying the financial decision approach: familiarity with the concepts of behavioral economics and their importance in making financial decisions, the active involvement of senior executives in the bank in shaping major financial decisions, and the bank's use of formal decision-making frameworks, such as cost-benefit analysis of financial options, all of which contribute to the effectiveness of decision-making and to the institution's capabilities that ensure the successful implementation of decisions and the achievement of the desired results.
The fifth paragraph ranked first in terms of importance; it included (having clear levels of delegation of authority for different types of financial decisions) within the (administrative) dimension, with an arithmetic mean of (4.47), a standard deviation of (0.559), and a coefficient of variation of (12.50%), which shows the high homogeneity in the content of the paragraph and indicates the importance of having clear levels of delegation of authority for different types of financial decisions.
The twelfth paragraph ranked last in terms of importance; it included (there is a clear delegation of authority within the executive team regarding different types of financial decisions) within the (executive) dimension, with an arithmetic mean of (4.01), a standard deviation of (0.772), and a coefficient of variation of (19.25%), which shows the degree of homogeneity in the content of the paragraph and indicates the importance of having a clear delegation of authority within the executive team regarding different types of financial decisions.
Testing the main study hypothesis according to the multiple linear regression model
Table (4) shows the statistical outputs of testing the main hypothesis using the multiple linear regression model between the dimensions of strategic planning and financial decisions (X1: informational dimension, X2: temporal dimension, X3: spatial dimension, X4: institutional dimension, X5: human dimension, X6: financial dimension, X7: influential dimension; Y: financial decisions). The model can be formulated as:

Y = α + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + β6X6 + β7X7

To identify the dimensions that have a significant impact on financial decisions, the first method (the full model) shows that the (strategic planning) model has a significant relationship with, and impact on, (financial decisions), based on the F-statistic of (35.411) at a significance level of (5%). The significance of the regression is interpreted at a confidence level of (95%) and indicates that the full model is valid for predicting the values of financial decisions.
The coefficient of determination (R2) for the estimated model according to the full-model method reached (0.729), meaning that the significant dimensions explain (72.9%) of the variance in financial decisions and highlighting the model's predictive accuracy. According to the full model, the significant dimensions with the greatest influence on financial decisions, ordered by the strength of their direct effect, were: the financial dimension, the informational dimension, the spatial dimension, the human dimension, the time dimension, and the influence dimension. The excluded dimension (the institutional dimension) had no significant influence on financial decisions, which leads to the following multiple linear regression equation.
Multiple linear regression equation according to the full model:

Y = α + β1X1 + β2X2 + β3X3 + β4X4 + β6X6 + β7X7
Y = 1.625 + 0.098X1 + 0.080X2 + 0.091X3 + 0.084X4 + 0.121X6 + 0.074X7

According to the (Stepwise) method, the second approach to measuring the effect of strategic planning on financial decisions through the dimensions with the highest impact shows that the model is significant, based on an F-value of (40.626) at a confidence level of (95%) and a significance level of (5%).
The coefficient of determination (R2) for the estimated model according to the (Stepwise) method reached (0.724), meaning that the significant dimensions explain (72.4%) of the variance in financial decisions and highlighting the model's predictive accuracy. According to the (Stepwise) method, the significant dimensions with the greatest influence on financial decisions, ordered by the strength of their direct effect, were: the financial dimension, the informational dimension, the spatial dimension, the human dimension, and the influence dimension. The excluded dimension (the institutional dimension) had no significant influence on financial decisions. The hypothesis-testing analysis thus provides a clear ordering of the dimensions that most influence financial decisions.
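The following sketch shows, in general terms, how a full multiple linear regression model and a simple forward stepwise variant can be fitted and compared. It is illustrative only: the data are randomly generated, the column names X1-X7 and Y merely mirror the dimension labels used above, and the p-value-based forward selection is one common variant of "stepwise" rather than the exact procedure used in the study.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per respondent, dimension scores X1..X7 and Y (financial decisions).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(4.0, 0.5, size=(248, 8)),
                  columns=["X1", "X2", "X3", "X4", "X5", "X6", "X7", "Y"])

# Full model: Y = a + b1*X1 + ... + b7*X7
full = sm.OLS(df["Y"], sm.add_constant(df[["X1", "X2", "X3", "X4", "X5", "X6", "X7"]])).fit()
print(full.rsquared, full.fvalue)          # R^2 and F-statistic of the full model

# Simple forward stepwise selection based on coefficient p-values.
def forward_stepwise(data, target, candidates, alpha=0.05):
    selected = []
    while True:
        pvals = {}
        for c in set(candidates) - set(selected):
            model = sm.OLS(data[target], sm.add_constant(data[selected + [c]])).fit()
            pvals[c] = model.pvalues[c]
        if not pvals:
            break
        best, p = min(pvals.items(), key=lambda kv: kv[1])
        if p < alpha:
            selected.append(best)
        else:
            break
    return selected

kept = forward_stepwise(df, "Y", ["X1", "X2", "X3", "X4", "X5", "X6", "X7"])
print("retained dimensions:", kept)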
The main hypothesis is therefore accepted.
The table above shows the direct-effect values of each dimension of the independent variable (strategic planning) on financial decisions, and underlines the importance of applying strategic planning as a necessary administrative tool that helps the organization improve its performance under changing circumstances.
The conclusions reached by the research were as follows:
(1) The statistical analysis of the variable strategic planning showed agreement on the content of its paragraphs and dimensions, which highlights the role and importance of strategic planning as one of the basic tools that help organizations achieve their future goals and adapt to changing challenges.
(2) The statistical analysis of the second variable (financial decisions) showed agreement across all of its paragraphs and dimensions, which reflects the importance of financial decisions: they strongly influence an organization's strategy by determining how financial resources are used to achieve its goals and sustainable growth, and when financing decisions are made intelligently and based on careful analysis, they can enhance the organization's competitiveness and contribute to realizing its vision.
(3) The statistical outputs showed a significant direct effect of the variable (strategic planning); the strategic planning variable explained (72.9%) of the total variance in financial decisions.
(4) The direct-effect results showed that the financial dimension had the strongest impact on financial decisions according to the full-model method, and the (Stepwise) method confirmed the financial dimension as the strongest influence on financial decisions.
(5) The statistical outputs showed a non-significant effect of the institutional dimension on financial decisions according to both the full-model and (Stepwise) methods.
In conclusion, the research addressed the critical intersection between strategic planning and financial decision-making within organizations. Based on a comprehensive analysis and synthesis of the existing literature, together with an exploration of real-world case studies, several key recommendations can be made: Integration of strategic planning and financial management: organizations should seamlessly integrate strategic planning and financial management processes to ensure that goals are achieved efficiently.
Risk assessment and mitigation: Strategic financial decisions inherently involve risks, so it is recommended to develop contingency plans and mitigation strategies to effectively address these risks.
Make data-driven decisions: Utilize data analytics and financial modeling techniques to inform strategic financial decisions. The use of advanced analytics can provide valuable insights, enabling organizations to make more informed and accurate financial forecasts.
Continuous monitoring and evaluation: Implement a robust system for continuous monitoring and evaluation of both strategic initiatives and financial performance. This iterative process allows organizations to adapt their financial strategies and decisions based on real-time feedback and changing market conditions.
Conclusion
Employing strategic planning in making financing decisions is essential for the success of institutions and the effective achievement of their goals. Strategic planning contributes to developing integrated financial plans that are in harmony with the organization's vision and strategic objectives, allowing financial resources to be allocated intelligently and effectively according to clear priorities. Strategic financial analysis provides the data necessary to make informed financial decisions, and can be used to develop multiple financial scenarios to deal with potential risks and develop alternative plans for the future. Strategic financial planning also provides a systematic mechanism for allocating financial resources according to clear priorities based on an in-depth analysis of the organization's internal and external environment. This ensures that financial spending is directed towards priority areas that achieve the greatest return on investment.
Strategic financial planning also helps to develop multiple financial scenarios that take into account potential risks, and thus develop alternative plans that ensure the institution's continuity and growth even in light of unexpected circumstances and challenges.
In this way, the combination of strategic planning and smart financing decision-making plays a key role in improving an organization's financial performance and increasing its chances of success and growth in the long term.
Figure 1. Arrangement of strategic planning questions.
Table (1) shows the ages of the sample.
Table 3. Ratios, frequencies, means, and standard deviations (financial decisions).
Table 4. Multiple linear regression indicators for strategic planning in financial decisions.
|
v3-fos-license
|
2019-02-09T15:02:36.922Z
|
2019-02-08T00:00:00.000
|
59617145
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://epigeneticsandchromatin.biomedcentral.com/track/pdf/10.1186/s13072-019-0256-y",
"pdf_hash": "ffd6f6e5944553657e48a28173f842858f123d6f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3004",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "2345079255cc6644d1275c1f2e326d0776075a71",
"year": 2019
}
|
pes2o/s2orc
|
Coordination between TGF-β cellular signaling and epigenetic regulation during epithelial to mesenchymal transition
Background Epithelial to mesenchymal transition (EMT) plays a crucial role in cancer propagation. It can be orchestrated by the activation of multiple signaling pathways, which have been found to be highly coordinated with many epigenetic regulators. Although the mechanism of EMT has been studied over decades, cross talk between signaling and epigenetic regulation is not fully understood. Results Here, we present a time-resolved multi-omics strategy, which featured the identification of the correlation between protein changes (proteome), signaling pathways (phosphoproteome) and chromatin modulation (histone modifications) dynamics during TGF-β-induced EMT. Our data revealed that Erk signaling was activated in 5-min stimulation and structural proteins involved in cytoskeleton rearrangement were regulated after 1-day treatment, constituting a detailed map of systematic changes. The comprehensive profiling of histone post-translational modifications identified H3K27me3 as the most significantly up-regulated mark. We thus speculated and confirmed that a combined inhibition of Erk signaling and Ezh2 (H3K27me3 methyltransferase) was more effective in blocking EMT progress than individual inhibitions. Conclusions In summary, our data provided a more detailed map of cross talk between signaling pathway and chromatin regulation comparing to previous EMT studies. Our findings point to a promising therapeutic strategy for EMT-related diseases by combining Erk inhibitor (singling pathway) and Ezh2 inhibitor (epigenetic regulation). Electronic supplementary material The online version of this article (10.1186/s13072-019-0256-y) contains supplementary material, which is available to authorized users.
Background
Epithelial to mesenchymal transition (EMT) is a reversible and plastic process in which epithelial cells lose their junctions and polarity to gain a motile mesenchymal phenotype [1]. It plays indispensable roles in physiological processes such as wound healing, but it is also involved in pathological phenotypes like cancer propagation. EMT can be orchestrated by the activation of a series of transcription factors [2], which have been found to work together with many epigenetic regulators, including SNAI, ZEB and TWIST [3]. As a major inducer of EMT in normal mammary epithelial cells, transforming growth factor beta (TGF-β) has been shown to mediate EMT in various epithelial cells in vitro [4,5]. Previous studies reveal that TGF-β stimulation activates multiple signaling pathways (such as the SMADs, PI3K/AKT or MAPK/ERK), and the ensuing phosphorylation of histone modifiers likely affects chromatin states through the regulation of histone post-translational modification (PTM) expression, followed by recruitment of other histone-modification-binding proteins [6,7].
However, a comprehensive profile of histone PTM expression changes during EMT has never been reported, leaving no global map of the cross talk between cellular signaling and epigenetically driven gene regulation. Histones are key players in epigenetics. Our laboratory has been developing novel mass spectrometry (MS)-based experimental and computational platforms to measure many facets of histone PTM biology [9][10][11]. Phosphorylation is the principal messenger of cellular signaling cascades. With the development of technology, MS greatly accelerates signaling research through high-throughput quantification [12,13]. To date, multi-omics studies offer new insights into understanding cellular processes at the molecular level [14,15]. Therefore, a systematic quantitative analysis of histone PTM changes, in addition to a time-resolved analysis of phosphoproteome dynamics, would help assess how gene expression patterns, in cooperation with signaling pathways, are modulated upon EMT reprogramming.
In this study, we present a time-resolved multi-proteomics strategy to monitor EMT development, including the quantitative analysis at proteome, phosphoproteome and histone PTMs level, respectively. By taking the advantages of SILAC (stable isotope labeling by amino acids in cell culture)-based quantification, proteins concentrated at areas of cell-cell adhesion and cell migration were grouped after 1-day TGF-β stimulation, suggesting overall that structural proteins rearrangements evoked in the later transition state. Significant regulations of phosphorylations were detected after only 5-min treatment, leading to the prediction that Erk signaling was activated upon immediate stimulation. Meanwhile, a monotonic increase in H3K27me3 (histone H3 lysine 27 trimethylation) was observed over the time-course using label-free quantification methodology. This PTM is catalyzed by Ezh2, which is itself regulated by phosphorylations. The combined analysis revealed a potential mechanism for Ezh2 activation and up-regulation of H3K27me3 during EMT. We thus hypothesized that a combined inhibition of signaling pathways Erk and epigenetic regulator Ezh2 would have a pronounced effect in preventing EMT reprogramming and validated our conjecture through the influence on cell phenotypic and histone PTMs expression changes derived from various inhibitors.
Experimental design to monitor EMT development
Characterizing EMT is an important research field, as it plays indispensable roles in cancer propagation. Although TGF-β signaling, which triggers EMT when the pathway is activated, is extensively characterized, the coordination between cellular signaling and epigenetic regulation requires further study. Here, we applied our in-house developed multi-omics strategy to monitor proteome, phosphoproteome and histone PTM expression changes in a cell model undergoing EMT (Fig. 1a). Briefly, we used NMuMG (normal murine mammary gland epithelial) cells exposed to TGF-β, which is an established model to investigate EMT in cell culture [16,17]. Cells were collected after different stimulation times, from 5 min to 2 days. Proteome, phosphoproteome and histone PTMs were further quantified by MS to investigate the correlation between protein changes, signaling pathways and chromatin modulation dynamics during EMT.
Quantitative analysis of global proteome and phosphoproteome at five time points during EMT programming was achieved by adopting metabolic labeling (Fig. 1b). Specifically, three populations of NMuMG cells were SILAC encoded with both arginine and lysine using three distinct isotope forms (K0R0, K4R6 and K8R10) and stimulated by TGF-β (5 ng/ml). Two time-course experiments were combined using the common time point 0 min (no stimulation), providing a five-time-point profile to check immediate early (5 min), early (60 min), late (1 day) and very late (2 days) (phospho)proteome response to TGF-β. Phosphopeptides were enriched using TiO 2 beads. Proteome and phosphoproteome were detected by Orbitrap Fusion and further analyzed through various bioinformatics tools.
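As a minimal sketch of how two SILAC time-course experiments that share the unstimulated (0 min) channel can be joined into a single five-time-point profile, consider the following; the ratios, site names and column labels are hypothetical and only illustrate the idea of using the common reference point, not the study's actual values.

import numpy as np
import pandas as pd

# Hypothetical SILAC ratios from two runs sharing the unstimulated (0 min) channel:
# Exp. 2 mixes 0 min / 5 min / 60 min, Exp. 1 mixes 0 min / 1 day / 2 days.
exp2 = pd.DataFrame({"r_5min_0min": [1.2, 0.8], "r_60min_0min": [1.5, 0.7]}, index=["siteA", "siteB"])
exp1 = pd.DataFrame({"r_1d_0min": [2.0, 0.5], "r_2d_0min": [2.4, 0.4]}, index=["siteA", "siteB"])

# Because both experiments are expressed relative to their own 0-min channel, the two
# time courses can be joined into a single five-point profile (the 0 min ratio is 1 by definition).
profile = pd.concat([pd.Series(1.0, index=exp2.index, name="r_0min"), exp2, exp1], axis=1)
log2_profile = np.log2(profile)   # log2 fold changes relative to the unstimulated state
print(log2_profile)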
TGF-β stimulation successfully induced EMT
First, we evaluated whether EMT occurred after TGF-β treatment. According to the literature, the phenotypic changes that occur during EMT involve the disassembly of cell-cell contacts, separation of cells and remodeling of the cytoskeleton, which generate fibroblast-like cells that express mesenchymal markers [18]. After 2-day treatment, dramatic morphological changes were observed (Fig. 2a); cells became larger than untreated ones (0 min), and the shape of cells changed from polygonal to fusiform, showing features of mesenchymal cells.
Phosphorylation of Smad2 at S465 and S467 is a direct target of TGF-β signaling [4], making it a suitable positive control to prove that the pathway was activated. Unfortunately, it was not possible to quantify the Smad2 pS465/pS467 changes by MS because the site is located at the C-terminal tail of Smad2 and there is no lysine or arginine that can be isotope-labeled (amino acid sequence from 452 to 467: VLTQMGSPSVRCSSMS). Thus, we checked the Smad2 phosphorylation changes by western blotting (WB), which clearly showed that Smad2 was phosphorylated right after adding TGF-β (5 min) and that the phosphorylation state remained during EMT programming (Fig. 2b), suggesting activation of Smad signaling and presumptive EMT reprogramming.
Besides the phenotype remodeling and phosphorylation of target proteins of the signaling pathway, major changes that occur during EMT are the down-regulation of epithelial marker expression (such as E-cadherin) and the acquisition of mesenchymal marker abundance (such as N-cadherin). To further confirm the mesenchymal transition, we quantified the expression changes of E-cadherin and N-cadherin by WB, as well as commonly used loading control proteins, GAPDH and beta-tubulin (Fig. 2c, left). E-cadherin was down-regulated and N-cadherin up-regulated after stimulation, as expected. GAPDH is commonly used for normalization in many studies; however, its expression may change with hypoxia [19], which makes it inappropriate as a control when working on hypoxic samples, like EMT studies. Because EMT is a process that involves structural reorganization in the cells, it would also not be appropriate to use structural proteins such as tubulin as a control. Thus, it is not surprising that GAPDH and tubulin were significantly changed in the late transition. Similar quantification results were obtained through MS (Fig. 2c, right) with an average linear correlation R2 > 0.94 with the WB data (Additional file 1: Figure S1), demonstrating the accuracy of our MS-based proteome-wide quantification.

Fig. 1 Experimental design for the study of cross talk between TGF-β cellular signaling and epigenetic regulation during EMT. a TGF-β was added to induce EMT in NMuMG cells, and expression at the proteome, phosphoproteome and histone PTM levels was quantified; the different curves imply distinct regulation patterns during EMT programming. b Workflow for quantitative and time-resolved analysis of (phospho)proteomes using triple SILAC labeling; two experiments with biological triplicates were performed with 0 min as a common time point.
Combining all the phenotype, marker protein and phosphorylation expression changes, it can be concluded that TGF-β successfully activated signaling pathways and subsequently induced EMT in NMuMG cells.

Fig. 2 b WB results indicate that Smad2 was phosphorylated within 5 min after adding TGF-β; Smad2 phosphorylation is a direct target of TGF-β signaling and serves as a positive control. c Expression changes of the given proteins quantified by WB and MS, respectively; data are means ± SD (standard deviation) of triplicates. As expected, the epithelial marker protein E-cadherin was down-regulated and the mesenchymal marker protein N-cadherin was up-regulated.
(Phospho)proteome expression changes during EMT
The distributions of raw protein and phosphosite intensities were firstly explored through boxplots, where similar distributions before and after data normalization were presented (Additional file 1: Figure S2), implying that the different conditions were properly mixed and injected into MS. The analysis of the proteome resulted in the total quantification of 2977 proteins, of which 2328 (78.2%) were quantified across all conditions (Additional file 1: Figure S3A and Additional file 2: Table S1). 3131 (50.3%) out of 6230 phosphosites were measured in all conditions (Additional file 1: Figure S3A and Additional file 3: Table S2A). Only 854 proteins were identified in both proteome and phosphoproteome datasets (Additional file 1: Figure S3B). This was expected, largely stemming from the low abundance and high complexity of phosphoproteome. However, it was also a limitation, because only the overlapped dataset of phosphorylations can be corrected by the protein abundance to assess real regulations of the phosphorylation events. Meanwhile, high phosphoproteome coverage was facilitated by high enrichment specificity (~ 90% of identified peptides were phosphorylated in each experiment) (Additional file 1: Figure S3C), indicating the good data quality and reproducibility.
We then performed Pearson correlation coefficient analysis for the whole proteome and phosphoproteome based on the normalized intensities to assess the similarity between different conditions (Fig. 3a). It is reasonable to find that conditions mixed and detected within the same MS run, namely Exp. 1: 0 min_L, 1 day and 2 days; Exp. 2: 0 min_S, 5 min and 60 min (Fig. 1b), have higher correlations due to technical and instrument variation. A high correlation among 0 min_S, 5 min and 60 min at the proteome level was observed, with R2 values higher than 0.98. Proteomes after 1-day or 2-day stimulation were less correlated with 0 min_L, indicating clear differences at the proteome level after longer TGF-β treatment. Compared to the proteome results, the phosphoproteome showed smaller R2 values, implying a large variety of phosphorylation regulation. As with the proteome, the phosphoproteome after 2-day stimulation was also found to be the most different compared to 0 min_L, suggesting that biological variation increased with TGF-β treatment time.
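A condition-by-condition Pearson correlation matrix of this kind can be computed directly from the normalized intensity table; the sketch below uses randomly generated values purely to show the shape of the calculation, not the study's data.

import numpy as np
import pandas as pd

# Hypothetical normalized log2 protein intensities (rows = proteins, columns = conditions).
rng = np.random.default_rng(2)
conditions = ["0min_S", "5min", "60min", "0min_L", "1d", "2d"]
intensities = pd.DataFrame(rng.normal(20, 2, size=(2328, 6)), columns=conditions)

# Pairwise Pearson correlation between conditions, computed column-wise.
corr = intensities.corr(method="pearson")
print(corr.round(3))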
The typical distribution of serine (S):threonine (T):tyrosine (Y) phosphorylation frequencies on cellular proteins is about 80:20:1 in most mammalian systems [13]. Our phosphoproteomics results yielded a ratio of 84:14:2 (Fig. 3b), in line with expectations. More than 67% of phosphosites were identified with a localization confidence higher than 0.75 (Additional file 1: Figure S3D), with an average localization probability of 0.95, indicating that most of our identified phosphorylations are accurately mapped at single-amino-acid (S, T, Y) resolution. Phosphosites were determined on a wide variety of proteins, and around 69% of proteins were defined with more than one phosphosite (Fig. 3c). For instance, Prelamin-A/C, a protein known to be involved in the well-known PI3K/AKT signaling pathway activated by TGF-β [20], was detected with 33 phosphosites, which may be regulated differently across the time course and have different corresponding functions.
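A short sketch of how the residue-type ratio and the localization-probability filter can be derived from a phosphosite table is shown below; the table contents are hypothetical, and the 0.75 cutoff simply mirrors the threshold mentioned in the text.

import pandas as pd

# Hypothetical phosphosite table with residue type and localization probability per site.
sites = pd.DataFrame({
    "residue": ["S", "S", "T", "Y", "S", "T", "S"],
    "localization_prob": [0.99, 0.60, 0.95, 0.88, 0.72, 0.91, 0.97],
})

# Distribution of phosphorylated residues (compare with the expected ~80:20:1 S:T:Y ratio).
print(sites["residue"].value_counts(normalize=True).round(2))

# Confidently localized sites: localization probability above 0.75, as used in the text.
confident = sites[sites["localization_prob"] > 0.75]
print(f"{len(confident)}/{len(sites)} sites confidently localized")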
Time-resolved phosphoproteome dynamics during EMT
TGF-β signaling begins with activation of TGF receptors and extends through numerous signaling pathways, such as the SMADs, PI3K/AKT, or MAPK/ERK pathways [4]. Subsequently, these activated pathways mediate the phosphorylation of a large number of substrate proteins and cross talk with each other at multiple levels, leading to gene expression albeit regulated by post-transcriptional and post-translational mechanisms. These events are separated in time, and a time-resolved analysis of phosphoproteome dynamics is critical to understand cellular signaling during EMT development.
Significantly regulated phosphosites or proteins quantified in at least four out of five time points with an ANOVA p value < 0.05 were classified through fuzzy c-means clustering analysis [21]. This allowed for the grouping of 888 phosphosites and 1180 proteins into six clusters (Fig. 4a and Additional file 1: S4B). As mentioned, 69% of proteins were modified with more than one phosphosite (Fig. 3c). We then investigated whether different phosphosites (only those used for clustering) within the same protein were regulated in the same manner. Interestingly, 78 out of 511 phosphoproteins (15.2% of phosphoproteins used for clustering) that had a regulated phosphorylation event also had at least one more event with a different trend. The circos plot in Fig. 4b shows the overlaps between phosphoproteins in different clusters, clearly reflecting that proteins can contain differently regulated phosphorylation events. For instance, Trim28 (transcription intermediary factor 1-beta) carried four serine phosphosites, appearing in cluster 2 (S26ph), cluster 3 (S23ph), cluster 4 (S21ph) and cluster 5 (S473ph), respectively. A previous study reveals that Trim28 contributes to EMT via regulation of E-cadherin and N-cadherin expression through histone modifications [22]. We also detected three serine phosphosites on Vim (Vimentin), classified in cluster 1 (S430ph), cluster 5 (S419ph) and cluster 6 (S214ph), respectively. Tjp1 (Tight junction protein ZO-1) was determined with two serine phosphosites (S166ph and S168ph) belonging to cluster 5, one serine phosphosite (S912ph) in cluster 4 and one serine phosphosite (S178ph) in cluster 3. Vim is usually found in various non-epithelial cells, especially mesenchymal cells. Tjp1 regulates cell adhesion and matrix remodeling and can be used as an epithelial marker [4]. These different phosphorylation expression patterns within the same protein imply that phosphorylation events on a given protein can be regulated very differently, complicating the understanding of their functions.
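For readers unfamiliar with fuzzy c-means, the sketch below implements a minimal version of the algorithm and applies it to hypothetical z-scored time profiles; it is not the exact implementation or parameterization used in the study (which followed ref. [21]), and the membership threshold shown is only an example.

import numpy as np

def fuzzy_cmeans(X, n_clusters=6, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: X is (n_profiles, n_timepoints); returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                    # memberships sum to 1 per profile
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]   # membership-weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))                   # standard fuzzy membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Hypothetical z-scored time profiles (rows = regulated phosphosites, columns = 5 time points).
rng = np.random.default_rng(1)
profiles = rng.normal(0, 1, size=(888, 5))
centers, memberships = fuzzy_cmeans(profiles, n_clusters=6)
cluster_id = memberships.argmax(axis=1)                  # hard assignment: most probable cluster
is_member = memberships.max(axis=1) > 0.5                # example cutoff for confident cluster membership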
In order to get a better understanding of protein-protein interactions as well as regulated protein functions at different stimulation time points, we then ordered the phosphoproteome events based on the time point where phosphorylation peaked by averaging the clusters numbers (Fig. 4c). Protein-protein interaction analysis generated through String [23] showed that all regulated phosphoproteins were highly interacted with each other. GO-BP (gene ontology biological processes) enrichment was further performed to determine the biological enrichment represented in each cluster (Additional file 1: Figure S4C and Additional file 3: Table S2D). Phosphorylation changes classified in cluster 1 and cluster 2 were significantly concentrated at the area of mRNA metabolic process, implying that signaling cascade was activated immediately (in 5 min) to mediate gene expression.
Long-term phosphorylation response (> 1 day) was grouped associated with actin cytoskeleton organization, overall suggesting that the stimulation remodeled structural proteins involved in cell adhesion in the later transition state.
Previous studies of EMT focused on investigations at later transition states, e.g., from 1 to 8 days [17,24], and thus could miss important rapid signaling responses, such as those occurring after 5-min stimulation. We therefore further analyzed phosphoproteins significantly regulated in the immediate early response. Independently of their phosphorylation status changes, these proteins were subjected to further enrichment analysis, which showed that the majority of phosphoproteins are located in the nucleus and are specifically related to RNA splicing (Additional file 1: Figure S5), an important element of transcriptional regulation [25]. It is noteworthy that some of the phosphoproteins in cluster 2 (up-regulated in 5 min) belong to the SWI/SNF family. This superfamily has been well characterized with respect to chromatin remodeling processes and plays critical roles in differentiation and proliferation [26].
Phosphosite-kinase specific analysis
In eukaryotes, hundreds of kinases specifically modify ~ 30% of all cellular proteins to orchestrate numerous biological processes such as signal transduction [13]. Identification of protein phosphosites together with their cognate kinases is a key step to delineate molecular dynamics and cellular processes. Therefore, the site-specific kinase-substrate relation prediction software iGPS [27] was utilized to construct a kinase-substrate phosphorylation network for the significantly regulated phosphosites (p value < 0.05 based on biological triplicates) at different time points (Fig. 5a and Additional file 3: Table S2B). The kinases listed in the scatter plot are those predicted with the highest iGPS scores for 5-min and 2-day regulation, respectively. Phosphosites significantly regulated after 5-min stimulation were linked to ERK, p38 or JNK, while long-term regulation (2 days) was governed by the CDK family. The ERK/AKT pathway is required for TGF-β-induced EMT in vitro through the regulation of EZH2 (histone-lysine N-methyltransferase EZH2) expression [7]. Activation of p38 or JNK also mediates EMT and is critical for down-regulation of E-cadherin [17]. It is exciting to find that these kinases were activated within 5 min of TGF-β stimulation, since previous studies have mainly investigated long-term events of EMT [17,24]. These observations emphasize the importance of the early ERK signaling response to TGF-β in EMT programming.
Most protein kinases are dephosphorylated and inactive in the basal state, and they are themselves controlled by phosphorylation, either by other kinases or by other copies of themselves (autophosphorylation). The phosphoproteome dataset was therefore mapped to the known kinome (Additional file 1: Figure S6), where 113 kinases were detected with 363 phosphosites. Kinase phosphorylation expression changes, calculated as log2 fold changes relative to the unstimulated condition, were plotted to evaluate the correlation between individual stimulation times (Fig. 5b). The early stimulation time points (5 min and 60 min) did not maintain a visible correlation, showing a random pattern of changes in abundance. In contrast, a sustained increase or decrease in phosphorylation expression on kinases was observed between day 1 and day 2, showing a linear correlation after day-scale stimulation. The correlation between kinase phosphorylation expression changes suggests that kinases involved in the early activation of EMT signaling and the late remodeling of the cytoskeleton had multiple phosphosites and different regulation patterns.
Given that we are mainly interested in the regulatory networks implicated in modeling cell phenotype, kinases that have been reported with histone modifiers and might be involved in epigenetic regulation are highlighted in Fig. 5b. Among these kinases, Mapk3 (also named Erk1), Akt1 and Cdk1/2 were the most interesting with respect to histone modifications (Fig. 5c). Dual phosphorylation of T203 and Y205 causes a dramatic conformational change of Erk1, which enables full enzyme activation [28] and interaction of Erk1 with its substrates, including Msk1/2. Recent evidence has shown that Msk1/2 can directly phosphorylate histone H3 at S10 and potentially at S28 [29,30]. On the other hand, Akt1 and Cdk1/2 can modulate the activity of Ezh2, a well-known H3K27me3 writer. To verify whether histone modifications, especially H3 PTMs, were regulated in accordance with the observed kinase phosphorylations, we performed a comprehensive histone PTM analysis using our in-house developed label-free quantification workflow.
Comprehensive profiling of histone modification changes
The global profiling of histone PTM changes was obtained by using a label-free quantification workflow established by our laboratory (Fig. 6a) [31]. Cells were treated with TGF-β for 0 day, 1 day and 2 days before harvesting. The reason why we did not include 5 min and 60 min is that from our previous studies on histone dynamics, histone modification changes have slower dynamic changes than phosphorylations, normally on a scale between hours to days [32]. Histones were purified via acidic extraction and then chemically derivatized with propionic anhydride to block free lysine residues. After trypsin digestion, a second round of propionylation was performed to derivatize peptides N-termini. Samples were analyzed by nLC-MS/MS through data-independent acquisition (DIA) method, and peptide quantification was extracted by our in-house developed software EpiProfile 2.0 [33].
Around 200 different histone peptides carrying various modification types were quantified (Fig. 6b and Additional file 4: Table S3A), including peptides from histones H3, H4, H2A, H2B and H1 and their variants. For instance, there are more than seven known sequence variants of histone H3 in mammalian cells [34], and these H3 variants have highly similar sequences. PCA (principal component analysis) showed that triplicates of the same condition clustered together and different conditions grouped separately (Fig. 6c), indicating histone PTM variance over the TGF-β treatment. A heatmap of the overall abundance of each single mark was generated to inspect the most abundant modifications of histones H3 and H4 (Additional file 1: Figure S7A and Additional file 4: Table S3B), where H3K9me2/me3, H3K27me2/me3 and H4K20me2 were found to be the dominant marks. For instance, the relative abundances of all modified forms of the peptide carrying a given modification (K27me3 + K27me3K36me1 + K27me3K36me2…) were summed to estimate the global relative abundance of H3K27me3. Then, PTM relative abundances were organized by hierarchical clustering to see the trends during EMT (Additional file 1: Figure S7B), giving an overall view of histone mark dynamics. Volcano plots presenting fold change and significance for H3 and H4 single marks (Fig. 6d) demonstrated that H3K27me3 was the most significantly up-regulated histone mark.

Fig. 5 Phosphosite-specific analysis. a iGPS prediction of kinases responsible for significantly regulated (p value < 0.05 based on triplicates) phosphopeptides at the indicated times after TGF-β stimulation; the top 9 kinases predicted for 5-min and 2-day regulation (compared to 0 min) are labeled in the scatter plot (right), and the histogram (left) shows the numbers of significantly regulated phosphosites at the given stimulation times. b Scatter plots of log2 fold change of 5 min/0 min versus 60 min/0 min (top) and 1 day/0 min versus 2 days/0 min (bottom) for the phosphorylation expression of kinases detected in our dataset; kinases highlighted in yellow have previously been described with histone modifiers. c A cartoon representing signaling pathways related to histone modifications; regulated phosphosites associated with enzymatic activity are shown with varying hues indicating the log2 fold change of relative abundance compared to 0 min. Mapk3 is also named Erk1; Rps6ka4 is also named Msk2.

Fig. 6 caption (fragments): quantified peptides refer to Additional file 4: Table S3A, including modified and unmodified peptides. d Volcano plot presenting fold change and significance of histone H3 and H4 single-PTM relative abundance; yellow bubbles are significantly regulated marks, i.e., with a t test p value < 0.05 (> 4.32 when −log2 transformed). e Relative abundance changes of selected single histone PTMs; error bars represent standard deviation (n = 3).
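As a minimal sketch of the relative-abundance and volcano-plot calculations described above (summing all peptidoforms carrying a mark, then computing a log2 fold change and a t-test p value), consider the following; the peptidoform intensities, replicate labels and time points are hypothetical and do not correspond to the study's measurements.

import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical intensities for forms of one histone H3 peptide (rows = peptidoforms,
# columns = replicate runs at 0 day and 2 days).
peptidoforms = pd.DataFrame(
    {"d0_r1": [1.0e6, 2.0e5, 5.0e4, 3.0e6],
     "d0_r2": [1.1e6, 1.8e5, 6.0e4, 2.9e6],
     "d0_r3": [0.9e6, 2.2e5, 5.5e4, 3.1e6],
     "d2_r1": [2.4e6, 4.0e5, 9.0e4, 1.8e6],
     "d2_r2": [2.6e6, 3.6e5, 8.5e4, 1.9e6],
     "d2_r3": [2.5e6, 4.2e5, 9.5e4, 1.7e6]},
    index=["K27me3", "K27me3K36me1", "K27me3K36me2", "unmodified"])

# Relative abundance of each form = intensity / total intensity of all forms of the same peptide.
rel = peptidoforms / peptidoforms.sum(axis=0)

# Global relative abundance of a single mark (here H3K27me3) = sum over all forms carrying it.
k27me3 = rel.loc[["K27me3", "K27me3K36me1", "K27me3K36me2"]].sum(axis=0)

# Volcano-plot coordinates for this mark: log2 fold change (2 days vs 0 day) and t-test p value.
d0, d2 = k27me3[["d0_r1", "d0_r2", "d0_r3"]], k27me3[["d2_r1", "d2_r2", "d2_r3"]]
log2_fc = np.log2(d2.mean() / d0.mean())
p_value = stats.ttest_ind(d2, d0).pvalue
print(f"H3K27me3: log2FC = {log2_fc:.2f}, p = {p_value:.3g}")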
A previous study suggests that H3K27me3 is required for EMT [35], in accordance with our histone PTM quantification profile, in which a monotonic increase in H3K27me3 was found during EMT (Fig. 6e). H3K27me3, catalyzed by Ezh2, is a well-characterized histone modification associated with gene silencing. The activity of Ezh2 is regulated by phosphorylation [36,37]. Unfortunately, we were not able to quantify Ezh2 phosphorylation changes due to its low abundance even after enrichment. However, we identified activity-related phosphosites within kinases that can regulate Ezh2 activity (Fig. 5c). According to previous studies, Akt1 and Cdk1/2 can mediate recruitment of Ezh2 to maintain H3K27me3 levels through different phosphosites with opposite effects. Akt1 phosphorylates Ezh2 at S21 and suppresses its methyltransferase activity [36], while Cdk1/2 phosphorylates Ezh2 at T350, which is important for recruitment of Ezh2 to H3K27 [37]. In our phosphoproteome dataset, S124ph and S129ph of Akt1, which are required to activate Akt1 [38], were found to decrease during EMT. Meanwhile, T14ph and Y15ph of Cdk1, which can inhibit Cdk1 enzymatic activity [39], were reduced along with the transition. The combined analysis revealed a potential mechanism for Ezh2 activation and up-regulation of H3K27me3 during EMT through the deactivation of Akt1 and the loss of suppression of Cdk1.
Erk1 was activated during EMT due to the expression of T203ph/Y205ph (Fig. 5c). Msk1/2, as Erk1 substrates, are known to directly phosphorylate histone H3 at S10 and S28 and may further induce a methylation-acetylation switch of the adjacent residues (K9 and K27) [29,30], yet we did not observe significant regulation of their targets H3S10ph or H3S28ph (Fig. 6e). This can potentially be explained by the possibility that protein complexes bound to the adjacent K9me3 or K27me3 residues suppress the recruitment of binding proteins for S10ph or S28ph, as we found a dramatic increase in the nearby H3K9me3 and K27me3 after stimulation.
Nevertheless, our data highlight that histone PTMs are dynamically affected during EMT, in particular the gene silencing mark H3K27me3. H3K27me3 expression might be regulated by Erk1, Cdk1/2 and Akt1. Considering the high complexity and cross talk of the EMT regulatory network, we speculated that a combinatorial inhibition targeting EMT transcription factor signaling and epigenetic mechanisms would be an attractive approach worth testing. We then performed a phenotypic assay to verify the ability of an epigenetic inhibitor and/or a kinase inhibitor to block EMT.
Combination of inhibitor treatment affects EMT process
Different inhibitors were tested either alone or in combination with each other to check their effect on cell phenotypic alterations (Additional file 1: Figure S8A). Specifically, LDN-214117 was introduced to block Alk2, which is a TGF-β receptor, acting as a positive control. DMSO was used as a negative control. Inhibitors for Cdk7 (BS-181HCl), the Cdk family (PHA-793887) and Erk (AZD6244) were selected based on the phosphosite-specific analysis results (Fig. 5a) as well as previous reports [16]. Ezh2 inhibitors UNC1999 and GSK126 were utilized to alter H3K27me3 expression. Cells were pre-treated with inhibitors for 1 day before providing TGF-β stimulation for two more days in the presence of the indicated inhibitors.
To summarize the cell phenotype switch results after 3-day individual treatments (Additional file 1: Figure S8A), Ezh2 inhibition alone did not block the transition process, showing phenotypic changes similar to the DMSO control, with mesenchymal features. The kinase inhibitor targeting Erk disrupted EMT progression more effectively than blocking the Cdk family or Cdk7, either alone or in combination with Ezh2 inhibitors. Although the Erk inhibitor achieved the most efficient EMT repression among the mono-inhibitions, cells manifested an abnormal shape with a fuzzy cell outline. This side effect was resolved by combination treatment with an Ezh2 inhibitor. Almost all cells under the combined inhibition of Erk and Ezh2 retained epithelial signatures along with a normal, sharp outline, suggesting a potentially effective strategy to prevent the mesenchymal transition.
To further explore the possible mechanism, a global profiling of epigenetic modifications was performed on samples under the specific treatments shown in Fig. 7a. More than 200 different histone modification types were captured in the dataset (Additional file 1: Figure S9A and Additional file 5: Table S4). Pearson correlation coefficient analysis (Additional file 1: Figure S9B) and PCA (Additional file 1: Figure S9C) pinpointed the variance and correlation between different samples, where cells under combination inhibitor treatments differed markedly from other samples. The distribution of histone H3 and H4 single mark abundance was presented as a heatmap (Additional file 1: Figure S9D), and the expression trends were clustered to group similar histone mark patterns (Fig. 7b).
As expected, H3K27me3 was reduced when the Ezh2 inhibitor was present, either alone or together with the Erk inhibitor (Fig. 7c). It is interesting to analyze histone modification pattern changes in combination with the cell morphology changes under the different conditions shown in Fig. 7a. Although H3K27me3 did not increase under Ezh2 inhibitor (UNC1999 or GSK126) treatment alone, cells still differentiated into a mesenchymal phenotype as in the DMSO control, while the expression of H3K27me3 was not affected when the mesenchymal phenotype changes were interrupted by the Erk inhibitor alone. As reported, H3K27me3 is essential for EMT [35], and our histone modification profile also demonstrated that H3K27me3 was significantly increased during EMT (Fig. 6e). Thus, we hypothesized that other histone modifications may be able to take over the function of H3K27me3 when K27me3 cannot be expressed and cooperate with transcription factors to trigger EMT. Studies have shown close correlations between histone marks associated with gene silencing, such as methylation of H3K9, H3K27 and H4K20 [40]. In fact, we found H3K9me3 to be up-regulated when H3K27me3 decreased under Ezh2 inhibitor treatment alone, while no alteration was found for H4K20me3 between the different treatments (Fig. 7c). Therefore, we speculated that the increasing abundance of H3K9me3 might compensate for the unchanged H3K27me3, leading to EMT when cells were treated with Ezh2 inhibitors.

Fig. 7 A combination treatment of inhibitors affected the process of EMT. a Cell morphology changes after 3-day treatment with different inhibitors. All inhibitors were used at a final concentration of 1 μM; DMSO was used as a control at the same volume. Cells were pre-treated with inhibitors for 1 day before adding TGF-β and were then treated with TGF-β and inhibitors for another 2 days. b Heatmap of z-scores of the relative abundance of histone H3 and H4 single marks. c Relative abundance of selected modified peptides; error bars represent standard deviation (n = 4).
A recent study revealed that H3K27me3 depletion caused by EZH2 inhibition can drive a reciprocal H3K27ac gain [14,41]. In agreement with our data, H3K27ac relative abundance was increased upon 1-day Ezh2 inhibitor treatment, either alone or together with the Erk inhibitor (Additional file 1: Figure S10). After adding TGF-β at day 1, H3K27ac was down-regulated in all conditions independent of the cell EMT state, implying a more complex cross talk mechanism between signaling pathways and epigenetic regulation leading to EMT phenotypic changes. Regarding other histone modification patterns, e.g., H3K4 and H4K20, no significant changes were observed between control and inhibitor treatments (Additional file 1: Figure S10 and Additional file 5: Table S4), which may suggest that these modifications do not play critical roles during EMT.
Discussion
EMT plays a pivotal role in multiple physiological and pathological conditions, especially regarding cancer progression [3]. Even though the mechanism of EMT signaling is extensively characterized [1], the correlation between signaling pathways and the epigenetic regulation leading to this phenotypic change remains partially obscure. Previous studies investigated only long-term EMT regulation, i.e., from day 1 to 8 [17,24]. Thus, in order to construct a detailed map of signaling information from the immediate early to the long-term response, we developed and applied a strategy combining triple SILAC for monitoring proteomic time-course changes, TiO2 for phosphopeptide enrichment, and high-accuracy MS characterization for identification of key phosphorylation events in the TGF-β-induced EMT model. The early activation of Erk (within 5 min) was identified based on phosphosite-kinase-specific analysis, indicating the necessity of Erk signaling during EMT. Proteins involved in cell-cell adhesion and cell migration were classified after 1-day TGF-β stimulation. Overall, systematic phosphoproteome and proteome analysis indicates that TGF-β activates first chromatin factors (< 5 min) and then structural proteins involved in cytoskeletal rearrangements (> 1 day).
The comprehensive profiling of histone PTMs identified H3K27me3 as the most significantly up-regulated mark during EMT, demonstrating its critical role in EMT epigenetic regulation, in accordance with previous reports. By combining this with the phosphoproteomic quantification results, we proposed a mechanism for the up-regulation of H3K27me3 expression during EMT through the deactivation of Akt1 and the loss of suppression of Cdk1. We then speculated that a combination treatment with Erk and Ezh2 inhibitors could more effectively block EMT reprogramming. Subsequently, the hypothesis was supported by visual assessment of cell phenotypic changes under the indicated inhibitions. The benefit of the drug combination can possibly be explained by the additive effects of suppressing recruitment of H3K27me3-binding proteins via the Ezh2 inhibitor and inhibiting the EMT transcription factor signaling pathway via the Erk inhibitor. Moreover, we conjecture that the other gene-silencing-related histone mark H3K9me3 may take over the role of H3K27me3 and cooperate with signaling pathways to trigger EMT when K27me3 cannot be expressed.
Conclusions
In summary, our data indicate that a proper EMT is achieved only when all necessary component pathways of the regulatory network are activated. Combining histone PTM analysis with time-resolved quantitative (phospho)proteomics provided a more detailed map of the cross talk between signaling pathways and chromatin regulation compared to previous EMT studies. Our findings point to a promising combination therapy using an Erk inhibitor (signaling pathway) and an Ezh2 inhibitor (epigenetic regulation) to more effectively reduce the progress of aberrant mesenchymal cell proliferation in EMT-related diseases.
Cell culture and stimulation treatments
NMuMG cells were cultured in DMEM medium for SILAC (Thermo Fisher Scientific) supplemented with 10% dialyzed fetal bovine serum (dFBS, Gemini Bio-Products) at 37 °C in a 5% CO2 atmosphere. Cells were labeled with natural lysine and arginine or with isotope-labeled amino acids (Silantes, light: lys 0 and arg 0; medium: lys 4 and arg 6; heavy: lys 8 and arg 10) at a concentration of 100 mg/L for six passages to ensure full labeling before adding any treatments. After 1 day of starvation in 1% dFBS, TGF-β (BioVision, Inc.) was added to induce EMT at a final concentration of 5 ng/ml. Cells were harvested at five time points after stimulation (0 min, 5 min, 60 min, 1 day and 2 days) in triplicate and stored at − 80 °C before further sample preparation for quantification of proteomes and phosphoproteomes.
Western blot
Cells were harvested at the indicated time points, washed with cold PBS and lysed in RIPA lysis buffer (Millipore-Sigma). Western blot (WB) analysis was carried out according to standard protocol. In brief, proteins were separated by NuPAGE 4-12% Bis-Tris Gel (Thermo Fisher Scientific) and electrotransferred to 0.45-μm nitrocellulose membranes (Bio-Rad). After blocking in TBST with 5% BSA (Sigma-Aldrich) at room temperature (RT) for 1 h, the membrane was incubated with primary antibody (1:1000) overnight at 4 °C. Membrane was incubated with HRP-conjugated second antibody (1:10,000) for 1 h at RT after rinsing thrice with TBST. Finally, immunoreactive bands were revealed using ECL Prime Western Blotting Detection Reagent (GE Healthcare Life Sciences) and detected by Amersham Imager 600 (GE Healthcare Life Sciences).
Proteomics and phosphoproteomics analysis using nLC-MS/MS
All chemicals used for preparation of nLC-MS/MS samples were of sequencing grade and purchased from Sigma-Aldrich unless otherwise mentioned. Cells were lysed in lysis buffer (6 M urea/2 M thiourea, 50 mM ammonium bicarbonate, pH 8.2, 1× protease and phosphatase inhibitor cocktail (Thermo Fisher Scientific)), and concentration was measured by Bradford protein assay (Thermo Fisher Scientific). Proteins were mixed as shown in Fig. 1b at a protein ratio of 1:1:1 (L:M:H). Samples were then reduced using 5 mM dithiothreitol (DTT) for 60 min at 51 °C and alkylated with 10 mM iodoacetamide (IAA) in the dark for 45 min at RT. Proteins were digested first with endopeptidase Lys-C (Wako, MS grade) at an enzyme-substrate ratio of 1:100 for 4 h, after which the solution was diluted four times with 50 mM ammonium bicarbonate. Proteins were then digested with trypsin (Promega) at an enzyme-substrate ratio of approximately 1:50 at room temperature overnight. After digestion, 20 µg of protein was set aside for input proteome analysis, and the remaining sample was concentrated by SpeedVac for phosphopeptide enrichment.
All samples were stage-tip desalted by C18 (3 M Empore) as previously described [43]. Dried samples were resuspended in buffer A (0.1% (v/v) FA in water) and loaded into a Nano-LC system (EASY-nLC 1000, Thermo Fisher Scientific) coupled online with an Orbitrap Fusion Tribrid mass spectrometer (Thermo Scientific). Peptides were separated on a home-packed capillary column (200 mm length, 75 µm inner diameter) containing reverse-phase ReproSil-Pur C18-AQ resin (3 µm particle size, Dr. Maisch Gmbh) at a flow rate of 300 nl/min. A gradient of 160 min was set from 0 to 38% buffer B (0.1% FA in ACN), then 38-98% buffer B in 10 min. A full scan mass range of m/z 350-1200 was analyzed in the Orbitrap at 120,000 FWHM (200 m/z) resolution and a 5.0e5 AGC target value. MS/MS was performed in the ion trap in the normal mode using the TopSpeed mode (3 s) with data-dependent acquisition. HCD collision energy was set to 27, AGC target to 1.0e4 and maximum injection time to 200 ms. Dynamic exclusion (60 s) and apex trigger were enabled. Every sample was injected once into the Orbitrap Fusion.
Quantification of histone PTMs
Histones were acid extracted and propionylated according to a standard protocol [31]. Briefly, histones were extracted from nuclei with 0.2 M H2SO4 and precipitated with 33% TCA (trichloroacetic acid), followed by washes with acetone containing 0.1% HCl and then pure acetone. The resulting pellets were dissolved in 50 mM HEPES, pH 8.0. The derivatization reagent was freshly prepared by mixing propionic anhydride with isopropanol at a ratio of 1:3 (v/v), and the mixture was combined with the histone sample at a ratio of 1:4 (v/v) for 15 min at 37 °C. This reaction was performed twice to ensure that all free protein N-termini and all mono-methylated and unmodified lysine residues were derivatized. Histones were then digested with trypsin (enzyme:sample ratio 1:20) overnight at 37 °C. After digestion, the derivatization reaction was performed twice to react with peptide N-termini.
Samples were desalted using C18 stage-tips before nLC-MS/MS analysis. Chromatography was configured with the same type of column and HPLC as for the proteomics analysis. NanoLC was coupled to a Q-Exactive mass spectrometer (Thermo Scientific). The HPLC gradient was set as follows: 2-28% buffer B in 45 min, 28-80% buffer B in 5 min, 80% buffer B for 10 min at a flow rate of 300 nl/min. A full scan with mass window m/z 300-1100 was acquired in the Orbitrap with a resolution of 120,000 (at 200 m/z), and an AGC target of 5.0e5. MS/MS was performed using a data-independent acquisition mode, and the entire mass range was fragmented at every cycle using windows of 50 m/z. AGC target for MS/MS was 3.0e4, and maximum injection time was 50 ms. HCD collision energy was set at 27. Every condition for no-inhibitor treatment had three biological replicates, and every sample was run thrice by Q-Exactive. Every condition for inhibitors treatment had four biological replicates, and every sample was run only once by Q-Exactive.
Data processing and analysis
Raw files for proteome and phosphoproteome have been deposited to Chorus database under the accession number 1453.
MS raw files for the proteome and phosphoproteome were analyzed with MaxQuant software [44] (version 1.6.0.16) against the UniProt mouse protein sequence database (release 201511); a database of common contaminants was also included in the search. Trypsin was specified as the digestion enzyme, with two missed cleavages allowed. The search for the total proteome included a fixed modification of carbamidomethyl cysteine and variable modifications of methionine oxidation and N-terminal acetylation. Phosphorylation on serine, threonine and tyrosine residues was additionally set as a variable modification for the phosphoproteome analysis. "Match between runs" was enabled with a matching time window of 1 min. All other parameters were left at their default values. Label-free quantification of histone PTMs was performed with the in-house software EpiProfile [9,33].
An ANOVA test was used in the time-course study of the proteome and phosphoproteome to identify proteins/PTMs that were significantly changed. A paired-sample t-test was used to identify proteins/PTMs that changed significantly between two time points. Proteome and phosphoproteome abundance dynamics were clustered using fuzzy c-means [21], including only entries quantified in at least four of the five time points and with an ANOVA p value below 0.05. Gene Ontology (GO) enrichment was carried out with the online bioinformatics resource Metascape [45] to characterize possible associations of the identified proteins. STRING (https://string-db.org/) [23] was used to define protein-protein interactions. Site-specific kinase-phosphosite relationships were predicted with iGPS 1.0 [27]. Perseus [46] was used for visualization by unsupervised hierarchical clustering and principal component analysis plots. The order of phosphorylation events was generated with Cytoscape (v3.3.0) [47]. Venn diagrams were created with Venn Diagram Plotter (v1.5, https://omics.pnl.gov/). Volcano plots and histograms were built in Excel. The kinome tree was visualized with KinMap [48].
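As a rough sketch of the filtering and clustering steps described above, the snippet below applies a one-way ANOVA across five time points (three replicates each) and clusters the profiles that pass the completeness and p-value cut-offs with fuzzy c-means. The column names, the number of clusters, and the fuzzifier m are assumptions for illustration only; the original analysis used the cited fuzzy c-means implementation [21], for which scikit-fuzzy is used here as a stand-in.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
import skfuzzy as fuzz  # stand-in implementation of fuzzy c-means

# Hypothetical input: one row per protein or phosphosite, columns "t{i}_rep{j}"
# holding log2 abundances for five time points with three replicates each.
timepoints = ["t0", "t1", "t2", "t3", "t4"]
reps = (1, 2, 3)
df = pd.read_csv("abundance_matrix.csv", index_col=0)  # assumed file layout

def anova_p(row):
    """One-way ANOVA across the five time points for a single feature."""
    groups = [row[[f"{t}_rep{j}" for j in reps]].dropna().values for t in timepoints]
    groups = [g for g in groups if len(g) > 1]
    return f_oneway(*groups).pvalue if len(groups) == len(timepoints) else np.nan

df["anova_p"] = df.apply(anova_p, axis=1)

# Mean profile per time point, completeness filter (>= 4 of 5 time points)
# and significance filter (ANOVA p < 0.05), as described in the text.
means = pd.DataFrame({t: df[[f"{t}_rep{j}" for j in reps]].mean(axis=1)
                      for t in timepoints})
keep = (means.notna().sum(axis=1) >= 4) & (df["anova_p"] < 0.05)

# For simplicity, cluster only fully complete profiles after z-scoring each row.
profiles = means[keep].dropna()
z = profiles.sub(profiles.mean(axis=1), axis=0).div(profiles.std(axis=1), axis=0).dropna()

# Fuzzy c-means over the time-course profiles (cluster count and fuzzifier m
# are illustrative choices, not the values used in the original study).
cntr, u, *_ = fuzz.cmeans(z.T.values, c=6, m=2.0, error=1e-4, maxiter=1000)
cluster = u.argmax(axis=0)   # hard cluster assignment per feature
membership = u.max(axis=0)   # membership score, usable as an "isClusterMember" cutoff
```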
Additional files
Additional file 1. Supplemental information including figures.
Additional file 2: Table S1. Time-course monitoring of protein abundance changes during EMT. (A) Total list of identified and quantified proteins across five time points. Protein IDs refer to the UniProt database. Peptide counts give the number of razor and unique peptides used for protein quantification. Protein intensity is the raw intensity, and protein abundance is the normalized log2-transformed value (average of biological replicates, n = 3). 0 min_S and 0 min_L refer to Fig. 1b. ANOVA p value refers to the ANOVA test on protein levels. Cluster number refers to Figure S4A (blank means the protein was not used for clustering); isClusterMember indicates whether the protein belongs significantly to the assigned cluster. (B) Annotation for cluster proteins. (C) GO-BP (gene ontology biological processes) enrichment results for cluster proteins.
Additional file 3: Table S2. Time-course monitoring of phosphorylation site abundance changes during EMT. (A) Total list of identified and quantified phosphosites across five time points. Protein IDs refer to the UniProt database. Mod site indicates the phosphorylated S/T or Y residue within the protein amino acid (aa) sequence. Localization prob is the confidence score for site localization of the phosphorylation (1 means fully unambiguous). Sequence window gives the 31-aa window centered on the phosphosite. Phosphosite intensity is the raw intensity and the normalized log2-transformed phosphosite abundance adjusted for protein abundance (average of biological replicates, n = 3). 0 min_S and 0 min_L refer to Fig. 1b. ANOVA p value refers to the ANOVA test on phosphorylation levels. Cluster number refers to Figure S4A (blank means the indicated phosphosite was not used for clustering); isClusterMember indicates whether the phosphosite belongs significantly to the assigned cluster; 'is Kinase' indicates whether the protein belongs to the kinome; 'is related with histone PTMs' indicates whether the protein has been described as a histone PTM modifier. (B) Kinases predicted by iGPS that are responsible for significantly regulated (p value < 0.05 based on triplicates) phosphopeptides at the indicated times after TGF-β stimulation. (C) Annotation for cluster phosphoproteins. (D) GO-BP (gene ontology biological processes) enrichment results for cluster phosphoproteins.
Additional file 4: Table S3. Quantification of histone modifications during EMT. (A) Relative abundance of histone peptides detected across three time points. (B) Deconvoluted single marks from table A. For simplicity, the relative abundance of single PTMs was extracted by summing all peptides carrying the given mark. Each day has three biological replicates, and each sample has three instrument replicates. For example, D0_1_2 means the second injection of first biological sample for Day 0.
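The deconvolution of single marks described for Table S3 (summing the relative abundances of all peptide forms carrying a given modification) can be reproduced with a few lines of pandas. The column names and input layout below are assumptions for illustration, since the actual table format is defined in the additional file.

```python
import pandas as pd

# Assumed layout: one row per (propionylated) histone peptide form, with a
# "modifications" column such as "K9me3K14ac" and one relative-abundance
# column per sample (e.g. "D0_1_2" = Day 0, biological replicate 1, injection 2).
peptides = pd.read_csv("histone_peptide_abundances.csv")

sample_cols = [c for c in peptides.columns if c.startswith("D")]

def single_mark_abundance(df, mark):
    """Sum the relative abundance of all peptide forms carrying `mark`."""
    carrying = df[df["modifications"].str.contains(mark, regex=False)]
    return carrying[sample_cols].sum()

# Illustrative set of marks; any mark present in the "modifications" column works.
marks = ["K9me1", "K9me2", "K9me3", "K9ac", "K14ac", "K27me3", "K36me2"]
single_marks = pd.DataFrame({m: single_mark_abundance(peptides, m) for m in marks}).T
print(single_marks.round(3))
```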
|
v3-fos-license
|
2021-07-03T06:17:01.282Z
|
2021-06-22T00:00:00.000
|
235714525
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-0817/10/7/788/pdf",
"pdf_hash": "757421a3f986698e6047d158fe9be52d6d2e91da",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3005",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d023913ab859e0f09f7de79c3602d74e1419823e",
"year": 2021
}
|
pes2o/s2orc
|
A Summary of the SARS-CoV-2 Vaccines and Technologies Available or under Development
Since the beginning of 2020, the world has been in a race to develop vaccines that can control the COVID-19 pandemic. More than 250 projects have been initiated for this purpose, but only 14 vaccines have been authorized for use, in most cases while still in phase 3 clinical trials. More than 40 other vaccines are in phase 1/2 clinical trials and show promising outcomes. To inform the appropriate choice of vaccine for each country or region, we reviewed the currently used vaccines in light of the different influencing parameters. These factors include the mode of action, dosage protocol, age group of the vaccinee, side effects, storage conditions, mounted immune response, and cost. Technically, seven types of vaccines have been developed against SARS-CoV-2: messenger RNA (mRNA), nonreplicating and replicating vector, inactivated virus, protein subunit, virus-like particle, DNA, and live attenuated vaccines. The mRNA type is being used for the first time in humans. Unfortunately, mutated variants of SARS-CoV-2 have started to appear worldwide, and researchers are investigating the effects of the currently used vaccines on them. There are many concerns regarding the long-term protection afforded by these vaccines and their side effects, and whether they will require future modifications to remain effective against the mutated variants. The development of new vaccines using more advanced technology is paramount for overcoming the difficulties in controlling the COVID-19 pandemic across the world.
Introduction
Amid the tsunami of the COVID-19 pandemic, with the first known cases reported on 31 December 2019, it was realized that vaccines could play an essential role in increasing the immunity of the population, preventing severe conditions caused by COVID-19 infection, reducing the burden on healthcare systems, and minimizing economic losses [1,2]. This crisis prompted an unprecedented race for the development of different vaccines using existing expertise in vaccinology [3]. Traditionally, vaccines require 10-15 years of research, development, and testing before their clinical usage can begin [4]. However, in early 2020, scientists embarked on attempts to produce safe and effective SARS-CoV-2 vaccines at record speed [5].
More than 250 vaccine projects were initiated worldwide in 2020, many of which involve active preclinical trials in animals [6]. According to a recent WHO report, 97 vaccines are in clinical trials from phases 1 to 3, and 182 are in their preclinical development stages (Figure 1) [7,8]. Different technologies have been applied in vaccine preparation, some conventional and some newly developed and applied for the first time in humans [8]. Thus far, at least 14 vaccines have reached clinical application and/or have been authorized for use against SARS-CoV-2 (Table 1). The vaccines against SARS-CoV-2 can be categorized into seven classes (Figure 2) [9]. The first comprises nucleic acid (RNA or DNA) vaccines; these consist of snippets of the virus' genetic material, which are injected directly into human body cells. The second class comprises knocked-out virus vaccines, which use inactivated or weakened viruses. The third class is viral vector vaccines, which use Trojan-horse nonreplicating vectors (or vectors that replicate much less frequently) to introduce a piece of transcribed DNA from SARS-CoV-2 into another, unrelated virus, such as a modified adenovirus. The injected vectors instruct human cells to make coronavirus proteins and trigger an immune response. The fourth class comprises recombinant protein subunit vaccines, which use no genetic material but instead whole or fragmented viral proteins packed into nanoparticles for better delivery and uptake by body cells. The fifth class is composed of coronavirus protein subunits that can be synthesized and assembled to construct virus-like particles (VLPs) similar to those of natural SARS-CoV-2. The sixth class is DNA vaccines prepared from viral RNA by reverse transcription. The final class is a group of attenuated and repurposed vaccines based on already-established vaccine technology (Figure 2). Researchers are also evaluating more than 40 vaccines in phase 1/2 clinical trials on humans in various countries; the 43 vaccines that have progressed furthest are listed in Table 2.
The Viral Spike (S) Protein
The S protein is frequently considered the major antigenic target for vaccines against the SARS-CoV-2 virus. The genome of SARS-CoV-2 encodes one large spike protein (S), which plays a pivotal role during viral attachment and entry into host cells. The spike protein, which assembles into a trimeric structure, can be cleaved into S1 and S2 subunits by host proteases [6]. The S1 domain is distally located and contains the receptor-binding domain (RBD) that binds with high affinity to the cellular receptor ACE2. S2 is the proximal domain, and its cleavage by proteases on the host cell surface alters its conformation and enables the viral envelope to fuse with the cell membrane of an infected cell (Figure 3). Neutralizing antibodies target both the S1 and S2 domains. Antibodies against the S1 RBD block the interaction with ACE2 receptors, while those against S2 can block membrane fusion (see Section 7). The S protein exists in a so-called metastable prefusion conformation, and proteolytic cleavage refolds the protein into a more stable, postfusion state.
Vaccines Approved for Use
Fourteen vaccines of different categories have been given authorization for use in combating COVID-19 (Table 1); they are discussed below within their respective categories.
Messenger-RNA (mRNA) Vaccines
mRNA vaccines are characterized by robust immunogenicity, intrinsic adjuvant properties, low preparation and production costs, favorable safety profiles, and quick production, but they require special storage and delivery systems [72]. This sophisticated vaccine technology is being used in humans for the first time. mRNA vaccine preparation has been investigated over the last 20 years for different viruses, such as rabies, influenza, and Zika [73]; however, the mRNA vaccines for COVID-19 are the first to be approved and used in humans. The main advantage of this technology is that it allows the body's cells to produce the S protein rather than having it injected, as in protein-based vaccines; this shortens the time required to build the vaccine compared with classical vaccines [74]. mRNA vaccines utilize manufactured nucleoside-modified, single-stranded messenger RNA (mRNA) to deliver genetic instructions to human cells for building the coronavirus spike protein (S). The mRNA enters human cells encapsulated in lipid nanoparticles (LNPs), which protect the fragile mRNA molecule from degradation and give it stability. The mRNA does not linger in the body's cells for more than 48 h: once it has passed its instructions to the protein-making machinery in the cytoplasm, enzymes called ribonucleases (RNases) degrade it [75]. The mRNA also cannot move into the nuclei of cells, as it lacks the signal that would allow it to enter this compartment; thus, the RNA cannot integrate with the DNA of vaccinated cells and poses no risk of inducing genetic changes. After the S protein is produced by the body's cells, the immune response is initiated with its two arms, i.e., humoral (antibody) immunity and T-cell (CD4+ and CD8+)-mediated immunity. Neutralizing antibodies can stop the spike protein or its floating fragments from attaching to cells, and killer T cells (CD8+) in vaccinated individuals recognize and destroy any coronavirus-infected cells that display spike protein fragments on their surfaces. Three important mRNA vaccines authorized for use in humans are described in the following subsections.
Pfizer-BioNTech Vaccine (PBV)
The PBV is produced by Pfizer, an American multinational pharmaceutical corporation based in New York, in association with the German company BioNTech. The generic name of this vaccine is tozinameran (Comirnaty) [10,11,76]. The PBV is given in two doses 3 weeks apart [77]. It is also recommended that vaccinated individuals receive a booster shot, or a third dose, within 12 months of being fully vaccinated and then annually thereafter [76-78]. It is supplied in vials containing enough liquid for five doses, although up to two extra doses can often be drawn per vial. According to the data released by the manufacturers, this vaccine has an efficacy of 95% [77]. Its efficacy in elderly people is almost the same (94%) as in those under 65 years of age [78]. It offers strong protection against COVID-19 within 10 to 14 days of the first dose regardless of the recipient's race, weight, or age [79]. It can produce strong antibody and T-cell immune responses. This vaccine does not cause any serious side effects but frequently causes short-lived symptoms such as pain at the injection site, mild fever, fatigue, and muscle pain [80]. The PBV requires an ultra-cold temperature of −70 °C (−94 °F) for storage and distribution, which imposes difficulties on its usage in certain countries. However, recent reports indicate that it can be stored at a much higher temperature (−20 °C) for 2 weeks [81]. Mr. William Shakespeare (81 years old) was the first man to be vaccinated with the Comirnaty vaccine, on 8 December 2020 [82]. The recommended age group for vaccination is >16 years, although other age groups are currently under investigation [83]. The FDA has recently lowered the age at which people can receive Pfizer's COVID-19 vaccine in the United States to include children aged 12-15 years; this move is expected to result in millions more shots being administered [83].
Moderna Vaccine (MV)
The MV is manufactured by a Massachusetts-based company, Moderna, in collaboration with the U.S. National Institutes of Health. It is an mRNA-based vaccine (mRNA-1273) encapsulated in LNPs [12]. This vaccine has an advantage over the PBV in that it can be stored at standard freezer temperatures (−20 °C), making it easier to ship to remote and rural areas [84]. It has an efficacy of 94.1% according to the data of the manufacturers and the U.S. FDA [85]. The MV is used for people over 18 years of age [85]. It requires two shots four weeks apart and does not raise any safety concerns. Clinical trials are underway to investigate the use of the MV in children, in people with cancer, and during pregnancy. Preliminary reports showed that children and adolescents (12-17 years) exhibited stronger immune responses to the vaccine and consequently more intense side effects, such as fever and aches [86,87]. However, after the administration of millions of doses globally, analyses of recent data show no association between the Moderna vaccine and cerebral venous sinus thrombosis (CVST) or other thrombotic events [86,87].
CVnCoV Vaccine of CureVac (CVV)
The CVV is produced by the Tübingen-based biotech firm CureVac in partnership with the pharmaceutical giant Bayer and is currently in a combined phase 2b/3 clinical trial. This vaccine is considered a rival to the leading mRNA vaccines of Pfizer-BioNTech and Moderna [13,88]. The CVV utilizes a natural, non-chemically modified, synthetic mRNA encoding the prefusion-stabilized full-length spike protein of SARS-CoV-2; CVnCoV therefore differs from the Pfizer-BioNTech and Moderna COVID-19 vaccines, which use nucleoside-modified RNA [13,88]. The CVV is administered intramuscularly in a two-dose regimen, with the doses given four weeks apart. This vaccine requires lower doses (12 micrograms) than the 30 micrograms for the PBV and the 100 micrograms for the MV [88,89]. The manufacturers claim an efficacy of 95% (comparable to that of the PBV and MV). Furthermore, it can be stored at 5 °C (41 °F) and is stable for three months at refrigerator temperatures of 2-8 °C (36-46 °F), which makes it suitable for use in poorer countries [90]. This is a further advantage over the two above-mentioned mRNA vaccines.
Human Adenovirus Nonreplicating and Replicating Vector-Based Vaccines
These types of vaccines utilize replication-deficient viral vectors (backbone) or attenuated replication-competent (bioengineered) viral backbones [44,[91][92][93]. The most common replication-incompetent or defective viral vectors currently in use are adenoviruses for carrying and delivering a selected plasmid-manufactured, double-stranded DNA segment of the RNA of SARS-CoV-2 that codes the S-protein antigen of the virus. The adenovirus vectors currently in use by different manufacturers are human Ad5 and Ad26 adenoviruses and a modified version of the chimpanzee adenovirus ChAdOx1 [92].
After injection, the vectors enter the body's cells but cannot replicate intracellularly. The delivered genetic material escapes from the vectors and travels to the nucleus, where the DNA is stored but does not integrate with the body's DNA [44,91-93]. It is then transcribed into mRNA that leaves the nucleus to be read and "translated" into spike proteins, which begin to be assembled on the surfaces of infected cells. Once the S proteins or their fragments are recognized by the immune system, it starts to send warning signals and to generate specific neutralizing antibodies and activated T cells (CD4+ and CD8+), as well as memory cells of the B- and T-cell types. The protection generated by these vaccines ranges between 62 and 90% (average ~70%) [19,94]. The vectors used in these vaccines have a tough protein coat that helps to protect the genetic material inside them. For this reason, the vaccine does not have to stay frozen and can be stored for at least 6 months at refrigerator temperatures (2-8 °C) [95]. Moreover, these vaccines can induce strong immune responses and do not require adjuvants, because they contain their own pathogen-associated molecular patterns (PAMPs), which serve as built-in adjuvants [44,91-93].
Furthermore, weakened but replicating viral vectors such as measles, influenza, and vesicular stomatitis virus (VSV-MERK) can also be used to carry genes for the spike protein of SARS-CoV-2 [19,44,[91][92][93][94]. In addition, modified vaccinia virus Ankara (MVA-MERS), which is an attenuated orthopoxvirus infecting avian cells but not mammalian ones, can also be used as a vector for the preparation of vaccines with a predilection for protecting mucosal surfaces, such as those of the respiratory tract [96,97].
Four adenoviral, nonreplicating, vector-based vaccines (Table 1) are currently authorized for use in humans; they are described in the following subsections.
Oxford-AstraZeneca Vaccine (OAV; AZD 1222; Vaxzevria)
The OAV is produced by Oxford University in cooperation with the British-Swedish company AstraZeneca, along with its Indian version, which is called Covishield [98]. The OAV is a vector vaccine that utilizes a vector based on the recombinant ChAdOx1 of chimpanzees. It is given to individuals >18 years of age in two doses of 5 × 10¹⁰ viral particles each (standard doses, "SD/SD"). Moreover, the developers tried a half dose as the first dose (low dose, "LD") and a standard dose as the second dose in a regimen described as "LD/SD" [99]. The AZD 1222 vaccine has an acceptable safety profile and is efficacious in combating symptomatic COVID-19. In addition, this vaccine is effective against the new and more contagious U.K. SARS-CoV-2 variant B.1.1.7, and partially (10% efficacy) against the South African B.1.351 variant (see below) [100].
The OAV was reported to have an efficacy ranging from 62 to 90%, corresponding to the SD/SD and LD/SD dosage protocols, respectively [14,99]. Furthermore, this vaccine can be kept at refrigerator temperatures (2-8 °C) for at least six months, which makes it easy to store, transport, and distribute globally [14,99]. The OAV triggers strong humoral and cellular immune responses. This vaccine produces only minor side effects, such as fatigue and headache [99].
It has recently been reported that individuals diagnosed with thrombocytopenia within 3 weeks of vaccination with Vaxzevria should be actively investigated for signs of thrombosis, and individuals who present with thrombosis within 3 weeks of vaccination should be evaluated for thrombocytopenia, since both may indicate thrombosis with thrombocytopenia syndrome (TTS) [101]. Some countries have issued a clear contraindication against vaccinating such individuals with Vaxzevria [100,101]. Further analysis also found a link with a condition called heparin-induced thrombocytopenia (HIT) in people taking the anticoagulant heparin. HIT is thought to be triggered when heparin binds to a protein called platelet factor 4; this stimulates an immune response, with the production of antibodies against platelet factor 4 resulting in platelet destruction and the release of clot-promoting material. Leg pain, seizures, and a change in mental status are considered possible signs and symptoms of TTS. Vaccinated individuals with severe or persistent headaches, blurred vision, skin bruising beyond the site of vaccination after a few days, shortness of breath, chest pain, leg swelling, or persistent abdominal pain are advised to consult experienced health professionals or specialists in hematology and/or coagulation to investigate, diagnose, and treat possible TTS, as this condition requires urgent management. Mild thrombocytopenia is commonly reported in fewer than 1 in 10 vaccinated persons. The risk of TTS is lower after the second dose, with an estimated rate of 1.7 cases per million doses. Finally, Guillain-Barre syndrome (GBS), a rare autoimmune disease, is under investigation to explore any link with the Vaxzevria vaccine's side effects [101].
Sputnik-V Vaccine (SVV)
The SVV was named in memory of the Soviet-era satellite program [102]. This Russian vector-based vaccine is produced by the state Research Centre of Virology and Biotechnology, the Gamaleya Institute. It utilizes a combination of two adenoviruses (Ad5 and Ad26) that are not recognized by the human immune system as foreign and, hence, are not destroyed [15,103]. The developers stated that it had an efficacy of 91.6% after two doses administered intramuscularly three weeks apart [103]. The Ad26 vector is used for the first dose and the Ad5 vector for the second. However, Russian scientists have tested a one-dose version (Sputnik-Light), which could provide temporary immunity for 3 to 4 months and has a claimed efficacy of 73-85%; this could help countries with high infection rates [104]. It can be stored at a standard freezer temperature of −20 °C [105]. Recently, it was announced that the AstraZeneca and Sputnik-V developers are testing a combination of both vaccines to see whether it could improve their efficacy [106]. Reports from Russia's Gamaleya Institute denied a link between the Sputnik-V vaccine and the formation of blood clots [15,103].
Johnson and Johnson Vaccine (J&J V; JNJ-78436735)
The JJV is manufactured by Janssen Pharmaceutical, which is owned by the Johnson and Johnson multinational corporation. The JJV utilizes the Ad26 adenoviral vector, which was used for the Ebola vaccine by the same company. The company applied for an Emergency Use Authorization (EUA) from the U.S. FDA in February 2021 [16,107]. It is a one-dose vaccine that produces a neutralizing antibody response in 90% of vaccinated people after four weeks and in all recipients after two months. In a phase 3 clinical trial (ENSEMBLE), the manufacturers are also investigating a two-dose regimen to see whether it can confer longer protection and increased antibody levels [108]. The JJV shows an efficacy of 66% globally and 72% in the United States [107]. It can be stored for up to 3 months at refrigerator temperatures (2-8 °C; 36-46 °F) and for two years at −20 °C (−4 °F) [109]. It showed 66% effectiveness in preventing infection after a single dose and was capable of preventing 85% of severe COVID-19 cases 28 days after vaccination [110]. It is also capable of protecting against the SARS-CoV-2 variant of the B.1.351 lineage observed in South Africa [107]. The vaccine is well tolerated and shows no serious side effects [111]. However, there are also reports linking clot formation with the JJV. The blood clots that have been tentatively linked to the AstraZeneca and J&J vaccines have particular characteristics: they occur in unusual parts of the body, such as the brain or abdomen, and are coupled with low levels of platelets, the cell fragments that aid blood coagulation [111].
AD5-nCoV (Convidecia) Vaccine
The AD5-nCoV vaccine is prepared by the Chinese company CanSino Biologics in cooperation with the Academy of Military Medical Sciences. The Convidecia vaccine is based on the Ad5 adenovirus vector, as reflected in its official name [17,42]. It is currently in phase 3 clinical trials, and the Chinese government has already approved it for military use for a period of one year. Furthermore, the vaccine has also been granted authorization for use in some countries. The efficacy of the vaccine after a one-shot dose is 65.7% (comparable to that of the Johnson and Johnson JJV; see above) [17,112,113]. It has the advantage of being suitable for storage at refrigerator temperatures (2-8 °C). No serious adverse reactions after vaccination have been reported.
Inactivated Coronavirus Vaccines
Researchers selected one of three viral variants that could multiply in monkey kidney cells and be grown in bioreactor tanks. Dousing large stocks of this variant with a chemical called beta-propiolactone disabled the viruses by binding to their genes and thus preventing their replication, while leaving their proteins, including the spike (S) protein, intact [114]. The preparation was then mixed with an aluminum-based adjuvant to boost the immune response against the inactivated vaccine [115].
Four inactivated vaccines have been given authorization for use (Table 1). Other inactivated vaccines are in phase I/II clinical trials or preclinical trials ( Table 2). The inactivated vaccines express a wide range of native viral antigens [18][19][20][21]. Such multiple antigens can induce a TH2 response and lung eosinophilia, which may be worse in aged hosts [116]. This broad-spectrum immune stimulation may result in a special condition in the postvaccination period called the vaccine-related enhancement of disease (VRED) [117]. This condition may also be triggered by the aluminum adjuvant used in the vaccine, which is also known to drive TH2-cell immune responses (see Section 7). Therefore, TH1-skewing modified alum or other types of adjuvants such as CpG are recommended as alternatives to avoid VRED [118].
It is possible for an inactivated virus vaccine to induce a broader immune system response than vaccines that only feature the spike protein. Since the existing viral variants of concern (see Section 5) have critical mutations in the spike, inactivated virus vaccines could theoretically offer an advantage for protection against these variants [18][19][20][21]118].
Sinopharm Vaccine (SV; BBIBP-CorV)
The SV vaccine is manufactured by Sinopharm Group, which is a state-owned Chinese company, and is marketed with the cooperation of the UAE. It is an inactivated vaccine and is administered in a two-dose regimen, with the doses given 3 weeks apart by intramuscular injection. It showed an efficacy of 79.34% in China and 86% in the UAE, besides being 100% effective in preventing moderate and severe COVID-19 cases [18,119]. The developers did not report any serious side effects during its phase III clinical trial or after its authorization for use [120].
The available data on the Sinopharm vaccine in pregnant women are insufficient to assess either the vaccine's efficacy or vaccine-associated risks during pregnancy. However, this is an inactivated vaccine with an adjuvant that is commonly used in many other vaccines with proven good safety profiles, including in pregnant women. The effectiveness of the Sinopharm vaccine in pregnant women is therefore expected to be comparable to that observed in nonpregnant women of similar ages [18,119,120].
Sinopharm-Wuhan Vaccine (SWV)
The SWV vaccine was prepared by the Chinese Wuhan Institute of Biological Products. It is Wuhan's version of the Sinopharm vaccine. It is effective in preventing COVID-19 in 72.5% of vaccinees [121]. It shows comparable side effects to the Sinopharm vaccine and is also in its phase III clinical trial [19].
The Wuhan vaccine utilizes the WIV-04 strain, which was isolated and cultivated in a Vero cell line for propagation. Then, the supernatant of the infected cells was inactivated as described above. Interim analysis of two randomized controlled trials showed a seroconversion rate of 100% in the phase 1 trial and 85.7% in the phase 2 trial [19,121]. This vaccine is given in two doses 3-4 weeks apart. A third dose is recommended for those individuals who show weak immune responses. A lower-dosage injection was associated with a higher geometric mean titer (GMT) of neutralizing antibody at Day 14 after the third injection, compared with the other dosage protocols [19,121].
CoronaVac Vaccine (CV; Formerly PiCoVacc)
The CoronaVac vaccine is manufactured by a private Beijing-based biopharmaceutical company, Sinovac Biotech, in collaboration with the Brazilian research center Butantan. The CV is given in two doses 2 weeks apart by intramuscular injection [20]. The Chinese manufacturing company reported 50.38% efficacy in the Brazilian trial when including "very light cases" in the data analysis [122]. Furthermore, the Brazilian trial showed efficacies of 78 and 100% in preventing mild and severe COVID-19 cases, respectively [122,123]. Interim analyses of clinical trials in other countries showed higher efficacies of 83.5 and 65.3% in Turkey [124] and Indonesia [125], respectively. Moreover, it was reported that this vaccine generates a moderate immune response, with lower antibody levels than in patients who have recovered from COVID-19 [126]. Therefore, this vaccine requires an adjuvant, such as alum, to boost the immune response, but this requirement in turn makes the vaccine unsuitable for respiratory administration [19,126]. Its safety and effectiveness in children 3-17 years of age are also being studied in clinical trials, with promising results [127]. CoronaVac showed no serious side effects. It can be stored at refrigerator temperatures (2-8 °C; 36-46 °F), making it suitable for worldwide distribution [123].
Covaxin Vaccine (COV; Bharat Biotech Vaccine, BBV152)
Covaxin vaccine was manufactured by the Indian Bharat Biotechnology Company in collaboration with the Indian Council of Medical Research and National Institute of Virology [21]. The COV has been granted permission in India for restricted use in emergency situations despite being in phase 3 of clinical trials [128,129]. The Indian company also signed a partnership with the Pennsylvania-based company Ocugen for marketing the vaccine in the United States [130]. This vaccine is used in a two-dose regimen with the doses given 4 weeks apart, and its efficacy is reported to be 81% [128,130], although 82.8 to 91.9% of the vaccinated people generated antibodies (seroconverted) with robust immune responses [131]. It can be stored for one week at room temperature, which makes it suitable for usage in tropical and subtropical countries [128].
Recombinant Protein Subunit Vaccines
These types of vaccines utilize no genetic materials but use whole or fragments of viral proteins packed in nanoparticles [49,132,133]. This type of vaccine is considered very safe and incapable of causing disease. Five vaccines of this type are in preclinical trials utilizing different protein (peptide) subunits [134]. Since these subunits are poorly immunogenic, they require adjuvants and repeated administrations [49,133]. They can primarily induce reasonable CD4+ T-cell activation and specific neutralizing-antibody responses, but they show poorer stimulation of CD8+ T cells [135]. Three types of recombinant protein subunit vaccines are described in the subsections below; they are in the late stages of phase 3 clinical trials or have received authorization in some countries.
Novavax (NVX-CoV2373) Vaccine
The NVX vaccine is manufactured by a Maryland-based company, Novavax, in collaboration with GSK and Sanofi, two companies in the United Kingdom and France, respectively, by attaching viral proteins onto a nanoparticle carrier (microscopic particle) to aid efficient delivery and uptake by body cells [22]. It is administered in two doses three weeks apart by intramuscular injection. It can produce a strong antibody response (better than in COVID-19-recovered patients), as well as T-cell activation [136,137]. It is stable at refrigerator temperatures and has an efficacy of 89.3%, reaching up to 96% in a U.K. clinical trial [135][136][137]. Furthermore, this vaccine is under further investigation to see whether it can be given together with the flu vaccine [138].
EpiVacCorona Vaccine (EVCV)
The EVCV vaccine is manufactured by the Vector Institute, a Russian biological research center. It is based on using fragments of synthetic viral peptides reflecting SARS-CoV-2 antigens [23,139]. It is given in two doses three weeks apart by intramuscular injection to people over 18 years of age as well as older people >60 years of age [139,140]. The developers claimed that it is stable during storage at refrigerator temperatures for up to two years. Its efficacy is officially unknown, and it is awaiting regulatory approval. However, all the volunteers who were administered the EVCV developed specific antibodies against its antigens [140].
ZF 2001 (RBD Dimer) Vaccine
The developers of this vaccine are the Chinese Anhui Zhifei Longcom and the Academy of Military Medical Sciences. The vaccine uses a section of the spike protein named the receptor-binding domain (RBD) combined with an adjuvant, so it is considered very safe [24,133,141]. The ZF 2001 vaccine is administered in a three-dose course with the doses given 4 weeks apart by intramuscular injection [142]. The efficacy of this vaccine is officially unknown, as it is in a phase 3 clinical trial, but it has been approved for emergency use in Uzbekistan and China [143,144].
Virus-Like Particle (VLP) Vaccines
VLPs are composed of several structural viral proteins (co-expressed or admixed) [145,146]. These VLPs are built from manufactured viral proteins (S, M, and E, with or without N) that are co-expressed in and bud from eukaryotic producer cells [147,148]. The particles resemble the virus but lack the viral genome. Of these proteins, S can still bind ACE2 and mediate entry via this receptor; additionally, S can crosslink B-cell surface receptors and stimulate antibody production. These vaccines require adjuvants and repeated administration. They are based on a noninfectious virus-like particle (VLP) that resembles the morphology and structure of SARS-CoV-2 particles but does not contain any of its infective genetic material, rendering it extremely safe to produce and handle [146-148]. These VLPs can be produced in any biosafety level 1 facility. None of the VLP vaccines have yet been approved for use, but there are three promising VLP vaccines under development. Firstly, the Canadian company Medicago has genetically engineered plants to produce a VLP vaccine, which is in phase 2/3 clinical trials and was recently granted Fast Track designation by the U.S. FDA [149]. Secondly, the ContiVir team at the Max Planck Institute for Dynamics of Complex Technical Systems (Magdeburg, Germany) has designed and produced a virus-like particle vaccine [145,150]. Thirdly, a Georgia-based biotechnology company, GeoVax Atlanta, has used MVA viral vectors to express VLPs [151]. The last two vaccines are in preclinical trials.
Repurposed and Live Attenuated Vaccines
The Bacillus Calmette-Guerin (BCG) vaccine was prepared as a bacterial live attenuated vaccine at the beginning of the twentieth century to prevent tuberculosis [24]. Recently, the Murdoch Children's Research Institute, Australia, in collaboration with the University of Melbourne, investigated whether BCG could partly protect against SARS-CoV-2 (repurposed) in a trial called BRACE. Evidence is not yet available, and recommendations from the WHO are awaited [152].
Furthermore, it is worth mentioning that three live attenuated vaccines for SARS-CoV-2 are currently in preclinical trials in India and Turkey [6]. The COVI-VAC vaccine, developed by the Serum Institute of India in collaboration with Codagenix, a private New York biotech company, is an example of a live attenuated vaccine under development. This technology is expected to lead to a vaccine that exposes the immune system to the whole virus and can be administered via the intranasal route, which would be a great advantage [125].
Vaccines in Phase 1/2 Clinical Trials
According to WHO press releases, 56 vaccines against SARS-CoV-2 are currently in phase 1/2 clinical trials (Figure 1) [135]. The current review has selected 43 leading candidates from the seven different classes of COVID-19 vaccines mentioned above (Figure 2, Table 2). The categories and the number of candidate vaccines in each are mRNA (5 vaccines), DNA (8), vectors (10), protein subunits/virus-like particles (17), inactivated viruses (5), and live attenuated (1) vaccines. These vaccines are promising according to preliminary results, and some utilize modern vaccinology technology for the production of vaccines that can be administered via oral [32,40], nasal [41,42,71], or dermatological [29,30,34-36] routes. Other candidates use vectors different from those of the vaccines currently authorized for emergency use, or various purified VLPs (Table 2). Many more vaccines are in preclinical trials or animal studies, and they are expected to be developed using advanced and sophisticated preparation technology (Figure 1).
Mutations and Types of Viral Variants
The world is currently facing mutated variants of SARS-CoV-2, which are further increasing the infection rate and threatening the effectiveness of the already prepared vaccines. The importance of these variants arises from their potential for increased transmissibility, increased virulence, or resistance to the available vaccines. Furthermore, they can show higher mortality and morbidity rates, reduced susceptibility to antiviral therapy, the capability to evade the usual diagnostic tests and natural immunity, the ability to infect vaccinated individuals, and a greater capability to infect immunocompromised patients.
There are thousands of different variants of SARS-CoV-2 circulating across the world. However, researchers around the world have identified seven notable variants of clinical significance, including B.1.1.7, B.1.427 and B.1.429, B.1.525, B.1.351, the B.1.617 lineage [157], and P.1 in Japan and Brazil [158]. B.1.1.7 is more transmissible (30-50%) globally, more lethal, and more virulent, but of unchanged antigenicity compared with the "sequence zero" original virus [153]. B.1.429 and B.1.427 have higher transmissibility and decreased sensitivity to neutralizing antibodies [154]. B.1.525 and B.1.251 both show a moderate reduction in their ability to be neutralized by antibodies [155,156]. The B.1.351 mutation makes the virus bind more effectively to the ACE2 receptor on human cells, which facilitates its transmission, reduces its antigenicity, and makes neutralization by antibodies significantly less effective. The three Indian variants (B.1.617: V1, V2, and V3), particularly V2, have significantly higher transmissibility (160%) than the original viral strain but only slightly reduced susceptibility to antibody neutralization [157]. Lastly, the P.1 mutation increases viral transmission and lethality and reduces its susceptibility to antibody neutralization [158].
Although the WHO is trying to introduce a uniform nomenclature for all of the viral variants, three main classification systems, GISAID (2021), Nextstrain (2017), and Pangolin (2020), are used to categorize these variants. Moreover, it has been found that the genetic codes of these variants are slightly different from each other and from the original virus. A mutation at N501Y is seen in the U.K., South African, and Brazilian variants. A mutation at E484K is seen in the South African, Brazilian, and some U.K. variants. Lastly, mutations at P681R and L452R might help the Indian variant to spread rapidly.
Scientists are now investigating the efficacy of the different vaccines against these variants and are taking the necessary measures to modify the current vaccines to make them effective against these variants. It is worth mentioning that SARS-CoV-2 has not mutated enough to render the current vaccines ineffective [157]. Future monitoring for the development of new variants of SARS-CoV-2 is therefore required. The current vaccines were designed for earlier versions of the coronavirus; there is evidence that they should work against the new versions but potentially less effectively. The latest research suggests that two doses of either the Pfizer or AstraZeneca vaccine are still protective against the Indian variant and that Pfizer is still protective against the new variants mentioned above. The AstraZeneca vaccine protects against the U.K. variant but is less effective against the South African strain, while the Moderna vaccine is effective against it but with weaker and a shorter-lived immunity. Lastly, the Moderna vaccine is still effective against the U.K. and N501Y variants [155][156][157][158].
Side Effects, Precautions, and Contraindications
No serious side effects have been reported from the use of the currently authorized vaccines [159]. However, mild to moderate postvaccination symptoms have been reported, such as pain, swelling, and erythema at the injection site; fever; chills; fatigue; myalgia; arthralgia; and axillary lymphadenopathy. At least one local symptom occurs in 80-90% of vaccinated individuals, and systemic symptoms occur in 55-85% [80,125,159]. These symptoms can be more pronounced after the second dose. All of these symptoms can be alleviated with acetaminophen, and most subside within 2 to 3 days.
Although very rare, cerebral venous sinus thrombosis has been reported with the Oxford-AstraZeneca vaccine [160], as well as an unexplained illness with the Johnson and Johnson vaccine [161], and gastrointestinal manifestations of nausea, vomiting, and diarrhea have been observed with the CanSinoBIO vaccine [41]. These clinical manifestations and a few others could be coincidental and not vaccine-related [41,160,161].
The side effects during pregnancy are similar to those that occur in nonpregnant people, and there is no need for women to avoid pregnancy if vaccinated. In addition, lactating women need not avoid vaccination [162]. It has been reported that patients with severe hypersensitivity, such as anaphylaxis, should avoid mRNA vaccines for the time being, but that avoidance is not indicated for local urticaria, mild allergic reactions, or latex allergy. Hypersensitivity was reported in 0.63 and 1.5% of people vaccinated with the Pfizer-BioNTech and Moderna vaccines, respectively [80,85,163]. Any history of anaphylaxis in response to other vaccines or injectable therapies should be assessed carefully by specialists to determine whether it represents an absolute contraindication. Anyone experiencing anaphylaxis after the first dose of a SARS-CoV-2 vaccine should not receive a second dose. It is worth mentioning that the vaccines' vial stoppers do not contain natural rubber latex, so latex allergy is not a contraindication. Allergic individuals (reactogenic) should wait for at least half an hour after vaccination at the injection facility to ensure stable conditions. Other individuals are only asked to wait for 15 min after vaccination [85,163].
An immunocompromised status is not an absolute contraindication, as the risk of COVID-19 in immunosuppressed patients outweighs any risk from vaccination, so such patients should be vaccinated. Moreover, patients with autoimmune diseases, such as Guillain-Barre syndrome, can be vaccinated if there is no contraindication [164]. Finally, although cases of Bell's palsy have been reported in the postvaccination period in trials, these were later considered coincidental, and individuals who have had Bell's palsy can take the vaccine if desired; they should be handled in a similar way to others.
For patients with COVID-19 infection or their contacts, vaccination can be deferred until the end of the quarantine or isolation period to avoid exposing healthcare workers or other persons to the risk of infection. It is preferable to delay the vaccination of these individuals until after 90 days, since reinfection is uncommon within this period [164]. Vaccination should be offered regardless of any prior history of asymptomatic or symptomatic SARS-CoV-2 infection. Serological tests for COVID-19 are not recommended for the purpose of vaccination decisions. A fully vaccinated individual achieves their maximum immune response two weeks after the last dose [165]. Persons who have received monoclonal or polyclonal antibodies (convalescent plasma) can have vaccination deferred for 90 days as a precautionary measure [164]. Finally, there is limited information on the effect of vaccination in reducing transmission and on how long protection lasts. Thus, vaccinated individuals should continue to follow all current guidance to protect themselves by wearing masks, maintaining 6-foot distancing, washing hands, abiding by travel regulations, and avoiding crowds [166,167].
The Immune Response in the Postvaccination Period
The main aim of vaccination is to generate protective adaptive immunity in the form of antibodies and specific T-cell responses against SARS-CoV-2, with the involvement of innate immunity (Figure 4) [168]. The spike (S) proteins are molecules emerging from the surface of the virus, and their cleavage into two parts is an essential step for infection (Figure 3). One part of the S protein (S1) contains the receptor-binding domain (RBD) that the virus uses to bind to the host receptor angiotensin-converting enzyme-2 (ACE2). The second subunit (S2) mediates fusion with the cell membrane by forming a six-helical bundle via its two heptad-repeat domains, enabling SARS-CoV-2 to infect the cell [6]. Antibodies specific to the S1 or S2 subunits (see Section 2) can neutralize the virus and block the attachment and fusion of SARS-CoV-2 to the host cell; therefore, neutralizing antibodies play an important role in viral clearance, and antibodies specific to S1 or S2 could help to protect against future COVID-19 infection. Antibodies against the SARS-CoV-2 viral spike protein have been shown to have neutralizing effects, and the current vaccines have been developed to elicit antibodies to the spike protein [6].
The production of antibodies against the S proteins is switched towards the IgM, IgG (mainly IgG1 and IgG3), and IgA classes. The levels of these antibodies peak 14 and 28 days after vaccination for IgM and IgG/IgA, respectively [168,169]. The serum concentrations of these antibodies can be measured in laboratories on a large scale to assess or follow up on vaccination and/or infections [169,170].
The CD4+ T-cell (Th1) response after vaccination can produce interferon-γ, tumor necrosis factor-alpha, and interleukin (IL)-2. CD8+ T cells are also activated by the vaccines. Unfortunately, the measurement of T-cell responses is only possible in a limited number of laboratories. T-cell stimulation can be measured by IFN-γ ELISpot, which directly measures Th1 activity [168].
If herd immunity among different nations is achieved at a desirable level in the postvaccination period, the COVID-19 pandemic could be stopped by the end of 2021 [171].
Conclusions
All the vaccines authorized for emergency use are safe and effective, with efficacy above 50% and up to 95%. No serious side effects from their administration have been reported, and they induce effective immune responses capable of protecting individuals against COVID-19 after the completion of the vaccination protocols. The appropriate choice of vaccine depends on different clinical, practical, and logistic parameters for countries and/or individuals. Important factors include the technology applied (modern or classical), the efficacy, vaccination protocol, the age group and health status of the vaccinee, storage requirements, side effects, and cost. More promising new vaccines are under development, and they are expected to add to the armory against SARS-CoV-2. Significant control of COVID-19 transmission by the end of 2021 is expected, through the achievement of widespread herd immunity in the postimmunization period.
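To make the trade-offs summarized in this conclusion concrete, the snippet below encodes a few of the authorized vaccines as records and filters them by a hypothetical deployment constraint, such as having only standard refrigeration available. The efficacy and storage figures are those quoted in the sections above; this is purely an illustrative sketch, not a recommendation algorithm or a complete comparison.

```python
# Illustrative records only; figures are those quoted in the review text above.
vaccines = [
    {"name": "Pfizer-BioNTech",    "platform": "mRNA",            "efficacy": 95.0, "storage_c": -70, "doses": 2},
    {"name": "Moderna",            "platform": "mRNA",            "efficacy": 94.1, "storage_c": -20, "doses": 2},
    {"name": "Oxford-AstraZeneca", "platform": "viral vector",    "efficacy": 70.0, "storage_c": 4,   "doses": 2},
    {"name": "Johnson & Johnson",  "platform": "viral vector",    "efficacy": 66.0, "storage_c": 4,   "doses": 1},
    {"name": "Sinopharm",          "platform": "inactivated",     "efficacy": 79.3, "storage_c": 4,   "doses": 2},
    {"name": "Novavax",            "platform": "protein subunit", "efficacy": 89.3, "storage_c": 4,   "doses": 2},
]

def deployable(vaccine, coldest_available_c=2.0, min_efficacy=50.0):
    """Keep vaccines whose storage requirement fits the available cold chain
    and whose reported efficacy exceeds the chosen threshold."""
    return (vaccine["storage_c"] >= coldest_available_c
            and vaccine["efficacy"] >= min_efficacy)

# Example: a setting where only standard 2-8 °C refrigeration is available.
for v in vaccines:
    if deployable(v, coldest_available_c=2.0):
        print(f"{v['name']}: {v['efficacy']}% efficacy, {v['doses']} dose(s), "
              f"storable at {v['storage_c']} °C")
```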
|
v3-fos-license
|
2018-04-03T05:09:14.632Z
|
2013-12-03T00:00:00.000
|
16577025
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10856-013-5108-x.pdf",
"pdf_hash": "aa4b0e8dfede7a2f70bf3220ea98fb98e0920c65",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3006",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Engineering"
],
"sha1": "aa4b0e8dfede7a2f70bf3220ea98fb98e0920c65",
"year": 2013
}
|
pes2o/s2orc
|
Towards ready-to-use 3-D scaffolds for regenerative medicine: adhesion-based cryopreservation of human mesenchymal stem cells attached and spread within alginate–gelatin cryogel scaffolds
Cultivation and proliferation of stem cells in three-dimensional (3-D) scaffolds is a promising strategy for regenerative medicine. Mesenchymal stem cells, with their potential to differentiate into various cell types, cryopreserved in the adherent state within fabricated scaffolds of biocompatible materials, can serve as ready-to-use transplantation units for tissue repair, where pores allow direct contact between graft cells and recipient tissue without further preparation. Successful cryopreservation of adherent cells depends on the attachment and spreading processes that start directly after cell seeding. Here, we analyzed different cultivation times (0.5, 2, 24 h) prior to adhesion-based cryopreservation of human mesenchymal stem cells within alginate–gelatin cryogel scaffolds and their influence on cell viability, recovery and functionality at recovery times (0, 24, 48 h) in comparison to non-frozen controls. Analysis with confocal laser scanning microscopy and scanning electron microscopy indicated that a 2-h cultivation time enhanced cryopreservation success: cell number, visible cell contacts, membrane integrity, motility and spreading were comparable to controls. In contrast, with the short cultivation time (0.5 h) the cell number decreased dramatically after thawing, and the extended cultivation time (24 h) reduced cell viability. Our results provide the information needed to enhance the production and storage of ready-to-use transplantation units for application in bone, cartilage or skin regenerative therapy.
Introduction
3-D substrates have a great impact on the development of scaffolds for tissue engineering in regenerative medicine [1-6], especially in combination with multipotent stem cells, like human mesenchymal stem cells (hMSCs), that are capable of differentiating into various cell types of the mesodermal germ layer [7]. These scaffolds provide the native environment required for cell proliferation and differentiation at an optimal growth-area-to-volume ratio. Among the numerous kinds of materials serving as scaffolds, alginate offers the following advantages for medical applications: the natural polysaccharide forms a hydrogel when crosslinked with multivalent cations and is cyto- and biocompatible as well as biodegradable [8-10]. Furthermore, pore sizes in alginate scaffolds can be adjusted, allowing immuno-isolation of encapsulated grafts along with nutrient supply and diffusion of metabolic factors [11-15], as well as cell motility and contact in scaffolds with large, open pores [16-19]. The latter are often fabricated by cryogenic methods, such as freeze-drying and cryostructuring, which produce macro-porous cryogel scaffolds through the formation of large ice crystals at low cooling rates and temperatures down to -20°C [16,18-24].
However, untreated alginate does not readily support cell adhesion; hMSCs in particular remain round-shaped with few attachment sites because integrin-based cell-substrate binding is missing [25]. To enable or enhance cell adhesion, alginate scaffolds are coupled with different extracellular matrix (ECM) proteins such as collagen/gelatin, fibronectin, and laminin [26]. Recently, we proposed chemically binding gelatin to the walls of alginate sponges, promoting hMSC adhesion, growth and proliferation without impairing adipogenic, chondrogenic and osteogenic differentiation [19]. The bio- and cytocompatibility of these alginate-gelatin scaffolds, as well as of other alginate-cryogel compositions, has already been shown: they exhibit good adhesion, growth, and proliferation characteristics for various cell types and have a high potential to serve as matrices for tissue-engineered transplantation units [23,27-31].
Application of cell-scaffold constructs in regenerative medicine implies a linear workflow from cell seeding in appropriate scaffolds through proliferation in vitro to transplantation in vivo, which does not allow pausing or storing the tissue constructs in biobanks for future supply. Cryopreservation of hMSC-scaffold constructs with maintained cell viability and functionality is a desirable approach [32-34] to overcome shortages in supply and would allow immediate application of the constructs owing to their ready-to-use character. Even differentiation into various tissue types in situ is possible [32,33,35,36]. In spite of several decades of research, it is still very difficult to cryopreserve adherent cells. Cells with cell-cell and cell-substrate contacts are much more sensitive to freeze-thaw injury than single cells in suspension; their extended plasma membrane and cytoskeleton are affected by mechanical rupture, followed by cell detachment and death [36-40]. These contacts, mediated by cytoskeletal proteins, are involved in the anchorage, spreading and motility of adherent cells [41]; however, an influence of cell spreading on cryopreservation success has not yet been shown. Attachment and spreading processes depend on the duration of cultivation and can already be detected after a few hours [42]. To enhance cryopreservation success, the strained cytoskeleton of adherent cells has to be protected against injury caused by the freezing and thawing procedures. Since it is well known that water molecules are the main cause of cryoinjury (solution effects, mechanical damage by ice crystals) [43], hydrogel scaffolds like alginate seem to offer beneficial effects for cryopreservation procedures.
Here, we show that alginate-gelatin scaffolds are suitable for 3-D cultivation and cryopreservation of hMSCs and can serve as ready-to-use scaffold constructs for regenerative medicine. We demonstrate for the first time that cell recovery after thawing depends on the cultivation time before cryopreservation: a short cultivation time had a beneficial effect. Viability and cell number were analyzed with confocal laser scanning microscopy (CLSM). Additionally, cell functionality (spreading and motility on the scaffolds) as well as cell injury caused by cryopreservation was analyzed by scanning electron microscopy (SEM), where surface changes and cell spreading on the nontransparent substrate can be monitored and quantified [44,45].
Preparation of scaffolds
The wide-pore alginate-based scaffold matrix (Fig. 1) was prepared essentially in accordance with the patented method described in [46]. In brief, sodium alginate of trade mark Manugel DMD (ISP, USA) was dissolved in distilled water at a concentration of 1.5 wt%. The solution was poured into plastic Petri dishes (40 mm in diameter; layer thickness 2 mm) and placed on a strictly horizontal platform located in the chamber of a precision programmable cryostat FP 45 MH (Julabo, Germany). The samples were frozen at -15°C for 3 h, then transferred into the freeze-drier Alpha 1-2 LD plus (Martin Christ, Germany) and lyophilized for 24 h. Dry wide-porous Na-alginate disks were obtained, which were then immersed in a saturated ethanolic solution of calcium chloride (Panreac Quimica SA, Spain) and kept there for 3 days with periodic, gentle shaking in order to exchange Na+ ions for the cross-linking Ca2+ ions. Further, the wide-porous Ca-alginate disks were rinsed with medical ethanol several times for salt removal. Subsequently, scaffold modification was carried out according to the procedure reported elsewhere [19]. Briefly, the disks were soaked in 0.1 M aqueous solution of calcium hydroxide (Panreac Quimica SA, Spain) for 30 min with simultaneous argon bubbling. Afterwards, divinyl sulfone (DVS) (Sigma-Aldrich Chemie GmbH, Germany) was added to a concentration of 0.75 wt% DVS in the liquid phase. The disks were then incubated in the DVS medium for 18 h at room temperature with periodic shaking to graft reactive vinyl-sulfone groups onto the polysaccharide core. Then the disks were rinsed with water to remove unreacted DVS residues, soaked in 1 wt% aqueous solution of gelatin (gelatin from porcine skin, type A (GA), Sigma, USA) and adjusted to pH 9 with calcium hydroxide. The disks were incubated in this medium for 12 h at 40°C with gentle stirring, followed by rinsing with warm (40°C) water until no protein was detected in the washing wastes. Finally, water was replaced by ethanol, in which the resulting scaffolds were stored (4-6°C) before use.
Cell cultivation
Umbilical cord hMSCs were obtained from Promocell (Heidelberg, Germany) and cultivated according to the manufacturer's protocol. Before seeding, the scaffolds were cut manually to a size of approximately 5 × 5 × 2 mm and placed into culture medium for 15 min. After removal of the culture medium, the scaffolds were transferred into non-adhesive Petri dishes. To each scaffold, 100 µL containing 10^5 cells in total was added manually on top. The following cell cultivation periods were chosen for non-frozen control and cryopreserved samples: 0.5, 2 and 24 h. After 0.5 h of cultivation, the sponges in the 0.5 h group were stained and analyzed, and culture medium was added to the other groups. For each time point three scaffolds were used and each experiment was repeated three times (n = 3). To estimate cell shrinkage during cell preparation for SEM, cells were also seeded on a 35 mm ibidi plastic µ-dish with grid (ibidi GmbH, Martinsried, Germany) so that exactly the same cells could be analyzed using different methods.
Cryopreservation and recovery
For cryopreservation, each scaffold was placed into one cryovial (Sigma-Aldrich, Taufkirchen, Germany) filled with cryo-medium containing 10% dimethyl sulfoxide (DMSO) in culture medium. After 5 min incubation at 4°C, the vials were transferred into Mr. Frosty boxes (Nalgene, Sigma-Aldrich, Taufkirchen, Germany) to guarantee a cooling rate of 1°C/min and placed into a -80°C freezer overnight. The next day the samples were transferred into the vapor phase of liquid nitrogen (below -160°C) and kept there for at least 24 h. The samples were thawed in a 37°C water bath, and the scaffolds were placed with forceps into non-adherent well plates containing 0.5 ml of Dulbecco's phosphate buffered saline (PBS) with Ca2+ and Mg2+ (Invitrogen, Karlsruhe, Germany). The PBS was changed once, then removed and substituted with culture medium. Immediately after thawing some samples were stained for viability; the others were placed into an incubator for recovery. Recovery time points were 0, 24 and 48 h after thawing. For each time point three scaffolds were used and each experiment was repeated three times (n = 3).
Cell membrane integrity and metabolic activity assay
Membrane integrity and metabolic activity were assessed using double fluorescent staining with the enzyme substrate fluorescein diacetate (FDA) and the DNA dye ethidium bromide (EB) as previously described [47]. FDA is non-fluorescent, but after crossing the plasma membrane of metabolically active cells by diffusion it is hydrolyzed to fluorescein, indicating viable cells. EB can only enter cells with damaged membranes, intercalates with their DNA and exhibits red fluorescence. FDA/EB staining was analyzed by CLSM (Eclipse EZ-C1, Nikon GmbH, Düsseldorf, Germany). Confocal images were obtained along the Z-axis with 20 µm intervals at excitation wavelengths of 488 nm (FDA) and 543 nm (EB).
SEM
To evaluate cell morphology and behavior within the scaffolds, SEM was performed before and after freezing and recovery. Adherent hMSCs on the alginate-gelatin scaffolds were prepared as previously described [48] with some modifications for alginate preparation. Briefly, the samples were fixed in 2% glutaraldehyde in sodium cacodylate buffer, treated with 2% osmium tetroxide and 1% tannic acid in 0.15 M sodium cacodylate buffer. After dehydration in increasing alcohol concentrations, the samples were dried in hexamethyldisilazane (HMDS, Sigma-Aldrich, Taufkirchen, Germany). Finally, the samples were coated with carbon, ion-sputtered with gold-palladium and studied in a field emission scanning electron microscope Philips FESEM XL30 (FEI, Eindhoven, Netherlands) at 5 and 10 kV accelerating voltage and 10 mm working distance.
Determination of cell viability
Analysis steps of CLSM fluorescence images for viability estimation are presented in Fig. 2a. Two programs were used: ImageJ software (NIH, Bethesda, MD, USA) and NIS Elements imaging software (Nikon GmbH, Düsseldorf, Germany). ImageJ was used to split the CLSM images into color channels and subtract them to obtain pure green and red signals. Afterwards, the mean single-cell size was obtained by measuring the area of at least 80 green-fluorescing cells in NIS Elements. The number of vital cells was estimated by dividing the total green area of one image, also obtained in NIS Elements, by the mean single-cell size. The number of dead cells per image was obtained from the object count in the red-signal image, since EB signals are only small dots. Finally, viability was calculated by dividing the number of living cells by the total number of cells. Additionally, we calculated the total number of attached living cells by standardization to the total cells in the control.
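For readers who want to reproduce this bookkeeping outside ImageJ/NIS Elements, the following is a minimal Python sketch of the same procedure; the file names, intensity thresholds and the mean single-cell area are placeholders, not values from this study.

import numpy as np
from skimage import io, measure

# Hypothetical files: the green (FDA) and red (EB) channels exported as grayscale images.
green = io.imread("fda_channel.tif")   # metabolically active (living) cells
red = io.imread("eb_channel.tif")      # membrane-damaged (dead) nuclei

# Segment both channels with a simple, assumed intensity threshold.
green_mask = green > 40
red_mask = red > 40

# Mean single-cell area from a calibration step (in the paper: >= 80 manually measured green cells).
mean_cell_area_px = 850.0

# Living cells: total green area divided by the mean single-cell area.
n_living = green_mask.sum() / mean_cell_area_px

# Dead cells: EB signals are small dots, so count connected components.
n_dead = measure.label(red_mask).max()

viability = n_living / (n_living + n_dead)
print(f"estimated viability: {viability:.2%}")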
Determination of cell spreading area and cell shape
For estimation of hMSC spreading we used a previously developed SEM method based on the determination of cell area in the backscattered electron (BSE) mode of SEM [45]. Because of overlapping cells and sometimes minor differences in brightness/contrast between scaffolds and cells, in this work we measured the cells not only automatically but also manually by freehand selection, and analyzed/measured the cell area in ImageJ (see Fig. 2b). More than 90 cells per approach were measured. The total number of cells at each time point was 300-400. From the same SEM images we estimated the number of round- or flower-shaped and star-shaped cells, since the work of Kilian et al. [49] shows that cell shape can reflect future differentiation possibilities.
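A rough illustration of such an automatic measurement is sketched below; the Otsu threshold, the noise cutoff and the solidity criterion for star-shaped cells are our assumptions and are not the classification rule used in this work.

import numpy as np
from skimage import io, filters, measure

# Hypothetical input: an inverted BSE image in which cells appear bright on a dark scaffold.
bse = io.imread("bse_inverted.tif")

mask = bse > filters.threshold_otsu(bse)        # automatic segmentation (assumed)
labels = measure.label(mask)

areas_px, star_shaped = [], 0
for region in measure.regionprops(labels):
    if region.area < 200:                        # assumed noise cutoff in pixels
        continue
    areas_px.append(region.area)
    # Assumed shape criterion: star-shaped cells have many protrusions,
    # hence a low solidity (area / convex-hull area); round cells are close to 1.
    if region.solidity < 0.8:
        star_shaped += 1

print("mean cell area [px]:", np.mean(areas_px))
print("fraction star-shaped:", star_shaped / len(areas_px))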
Determination of cell shrinkage by SEM preparation
After SEM preparation (see Sect. 2.5 above) and before dehydration, hMSCs cultivated for 0.5, 2 and 24 h on the ibidi plastic µ-dishes were photographed under a water layer. The exact same cells were found and measured after dehydration and HMDS drying in the SEM. The areas of spread cells were determined in light microscopy (LM) and SEM images with the method described above. A cell shrinkage coefficient of 1.35 ± 0.25 (n = 298) was calculated by dividing the cell area measured in LM images by the corresponding cell area measured in SEM images. This coefficient was taken into account in our cell area determination.
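The correction itself is a one-line calculation; the sketch below uses made-up paired areas only to show how the coefficient is obtained and applied.

import numpy as np

# Hypothetical paired measurements of the same cells (areas in µm^2):
lm_areas = np.array([1200.0, 950.0, 1410.0])    # light microscopy, before drying
sem_areas = np.array([880.0, 700.0, 1060.0])    # SEM, after HMDS drying

ratios = lm_areas / sem_areas
shrinkage = ratios.mean()                        # paper reports 1.35 +/- 0.25 (n = 298)
print(f"shrinkage coefficient: {shrinkage:.2f} +/- {ratios.std(ddof=1):.2f}")

# Correct any SEM-derived spreading area back to its pre-drying value:
corrected = sem_areas * shrinkage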
Statistical analysis
Results are presented as mean ± standard deviation and were evaluated with the paired Student t test; differences were considered significant at P < 0.05.

Effect of cultivation time on the cell viability and recovery after cryopreservation

Figure 3a shows representative CLSM fluorescence images of hMSCs cultivated for different time periods, before and after the freezing-thawing procedure. The living (green) and damaged (red) cells are well distinguishable. More red cells were seen in scaffolds cultivated for 24 h prior to cryopreservation than in scaffolds cultivated for 0.5 and 2 h, respectively: viability decreased immediately after thawing and remained significantly lower than in the other groups after 24 and 48 h of recovery (see Fig. 3b). In every approach, adherent and living cells were recovered after the freezing-thawing procedure (see Table 1), and cells in the 2 h cultivation group recovered most rapidly from freezing stress. The numbers of attached living cells relative to the total cell numbers, displayed in Table 1, also showed the best results in samples cultivated for 2 h prior to cryopreservation: hardly any changes immediately after thawing, a slight loss of cells after the following 24 h of recovery, but thereafter a complete recovery and even an increase in cell number after 48 h. In contrast, the cell number in the 0.5 h cultivation group decreased by about 50% after thawing. However, the total cell number of the non-frozen control after 2 h cultivation was significantly lower than after 0.5 and 24 h.
3.2 Cell surface and cell-substrate behavior before and after the cryopreservation procedure

SEM investigation of hMSCs in non-frozen controls and after cryopreservation indicated good attachment and spreading of hMSCs on the alginate-gelatin scaffolds (Figs. 4, 5, 6). The scaffolds were smooth and showed no ruptures or damage after cryopreservation. Spread cells in the controls were mostly covered with microvilli and some small vesicles (Figs. 4a, 5a, 6a); the round cells also had some small blebs/vesicles and ruffles (Fig. 4b). After 0.5 h of cultivation, nearly half of the cells in the control were round or flower-shaped (Fig. 4a, b). After 2 h the cells were more spread (Fig. 5a) and star-shaped, in some cases round and flower-shaped. After 24 h of cultivation, hMSCs were well spread, flat, and star-shaped with tight cell-cell and cell-substrate contacts (Fig. 6a). Thin long fibers (up to 200 nm in diameter) were seen on the scaffold almost everywhere in the controls, except in samples after 0.5 h of cultivation (Figs. 5b, 6b). Immediately after thawing (0 h recovery) of the scaffolds cultivated for 0.5 h before cryopreservation, round and flower-shaped cells with the same surface features as cells in the control were observed (compare Fig. 4b, d). The surface of some cells in samples after 0.5 and 24 h cultivation had changed: next to vital cells with surfaces comparable to control hMSCs, damaged spread cells with disruptions or cells with crinkled plasma membranes were seen (cells marked with red asterisks in Figs. 4d, 6c). In samples cultivated for 24 h prior to cryopreservation, traces of detached cells were sometimes visible (data not shown). Most hMSCs after 2 h cultivation had a surface morphology and cell-cell and cell-substrate contacts similar to the control (compare Fig. 5a, c). Only a small number of cells in this group had intact plasma membranes with smooth surfaces that were covered only with single short microvilli (data not shown). In samples cultivated for 2 and 24 h prior to cryopreservation, the same thin long fibers present in the non-frozen control scaffolds were seen (Figs. 5d, 6d). This effect was not detected in samples after 0.5 h cultivation (Fig. 4d).
In general, after 24 h of recovery, hMSCs were covered with microvilli and some blebs and vesicles, and appeared mostly star-shaped (see Figs. 4e, 5e, 6e). However, in scaffolds with 24 h pre-cultivation, cells with disrupted plasma membranes were still seen after the freezing-thawing procedure (not shown). The above-mentioned thin fibers were also present in all cultivation groups (Figs. 4f, 5f, 6f).
Finally, after 48 h of cell recovery, most cells in every cultivation group possessed elongated, star-shaped forms and were mostly covered with microvilli and some single vesicles (Figs. 4g, 5g, 6g). However, cells cultivated for 0.5 h prior to cryopreservation appeared slimmer than in the other groups, and cell-cell contacts were not dense (Fig. 4g). In contrast, discrimination of single cells was more difficult in the 2 h and 24 h cultivation groups due to tight cell-cell contacts (Figs. 5g, 6g). Remarkably, in all samples after 48 h of recovery, fine filaments were present on the substrate (Figs. 4h, 5h, 6h).
Effect of cryopreservation on the cell spreading
For investigation of cell spreading and calculation of cell area we used the BSE mode of the scanning electron microscope. The applied inverted BSE-imaging method and the corresponding preparation enhanced the contrast between cells and surfaces (Fig. 7a) and improved automatic and manual evaluation. Immediately after thawing we observed a significant decrease of the cell spreading area in all groups (Fig. 7a, b, 0 h recovery). Also in the BSE images, cells cultivated for 0.5 h prior to cryopreservation appeared slimmer than in all other groups. hMSCs cultivated for 2 h and 24 h prior to cryopreservation were mainly star-shaped with a minor number of round cells. The cell area of the 0.5 h cultivation group was significantly smaller than in the other groups at all recovery time points after cryopreservation (Fig. 7b). Cells cultivated for 2 h before cryopreservation reached a cell spreading area comparable to the non-frozen control scaffolds after 24 h of recovery. From the same SEM images we calculated the percentage of star-shaped cells (Fig. 7c). The number of star-shaped cells in the 0.5 h cultivation group was always significantly lower than in all other groups. Only cells cultivated for 2 h and 24 h prior to cryopreservation showed cell shapes comparable to the non-frozen control after recovery times of 24 and 48 h after thawing.
Discussion
For the first time, we showed successful cryopreservation of adherent hMSCs within alginate-gelatin scaffolds with the potential to serve as a ready-to-use tissue engineered product. We observed that cryopreservation success depends on the cultivation time before cryopreservation and identified an optimal time period in order to recover adherent cells with maintained functionality.
Short cultivation time is preferable for viability and recovery
In comparison with the work of Xu et al. [35], the viability of cryopreserved hMSCs adherent to scaffolds could be enhanced after short-time cultivation. The improved results of this work may be due to the following reasons. First, the alginate-gelatin scaffolds used here are soft. Mechanical strains between shrinking cells and stiff scaffolds during cooling and ice crystallization occur to a lesser extent, as both have similar thermal contractions, so that cells do not detach [35,50]. Second, alginate as a potent hydrogel absorbs water and thus can prevent harmful formation of large ice crystals. The scaffolds remain soft and elastic after cryopreservation, and the SEM images in Figs. 4, 5 and 6 do not show visible structural changes (ruptures or damage) between cryopreserved scaffolds and non-frozen controls.
Most research on cryopreservation of adherent cells uses cultivation times of 24 h up to 7 days prior to freezing [32,35,40,[51][52][53]. However, our results show that the longest cultivation time (24 h) yielded the lowest viability of hMSCs after thawing (Fig. 3b). Cell viability and the relative number of recovered cells were significantly higher with shorter cultivation times.
Major cell membrane/cytoskeleton disruptions after 24 h cultivation
After 24 h of cultivation, hMSCs were more spread than the cells in the other groups and formed close cell-cell and cell-substrate contacts mediated by integrins [30] and cytoskeletal structures such as actin stress fibers [53,54]. Former studies have already shown that the rigidity of the plasma membrane and cytoskeleton plays a key role in freezing and thawing of adherent cells [35,40]. Disruption of the cytoskeleton by ice crystal formation during cryopreservation can lead to cell detachment with subsequent cell death [35,39,40,50]. Our results for hMSCs cultivated for 24 h before cryopreservation confirm this (see cell numbers, Table 1). We did not evaluate cytoskeletal proteins as others did [35,50], but investigated the cell state after cryopreservation by high-resolution SEM and checked cell spreading, shape and membrane integrity (ruptured and wrinkled plasma membranes, Figs. 4c, d, 6c). This allows an indirect proof of cytoskeleton breakdown in cells with maintained cell-substrate focal contacts.

(Figure 7 caption: a Representative inverted BSE images used for calculation of hMSC spreading area; scale bar 20 µm. b Spreading area of hMSCs on alginate sponges before and after cryopreservation; the differences between 0.5 and 2 h cultivation (*) and between 0.5 and 24 h cultivation (**) were statistically significant at every recovery time, as were the differences between control and 0 h recovery (***); *, **, and *** indicate P < 0.05. c Percentage of cells with star shapes; the differences between 0.5 versus 2 h (*) and 0.5 versus 24 h cultivation (**) were statistically significant at every recovery time (P < 0.05).)
Major cell loss after 0.5 h cultivation
After a short cultivation (0.5 h) of hMSCs, the number of viable attached hMSCs immediately after thawing was higher than in scaffolds with 24 h cultivation time, but it decreased drastically with increasing recovery times (compare the number of living cells at each recovery time with the total cell number in the control, Table 1). Most likely, the round-shaped, non-spread cells that predominated in scaffolds with 0.5 h cultivation time had fewer focal contacts and therefore weak adhesion.
Shrinkage and cell-volume regulation
Another indirect measure of the cytoskeleton was obtained by calculating the spreading area of the cells. We observed a significant decrease of the spreading area in every cultivation group immediately after thawing, indicating a reduced cell volume after cryopreservation. During slow freezing of cells, osmotic shrinkage occurs [55], which can lead to cell damage and subsequent post-thaw death if the cell is dehydrated beyond its hypothesized minimum volume [56]. The importance of the cytoskeleton in cell volume regulation is well known [57] and is discussed in cryopreservation in terms of quick responses of the membrane-cytoskeleton complex [58,59]. If, despite the water loss during cryopreservation, the cell volume is reduced but membrane material is still available (i.e. the cell membrane is developed and stiff), the cell can recover or be repaired [59]. After 0.5 h and partially after 2 h of cultivation, we observed round and flower-shaped hMSCs on non-frozen samples, covered with many microvilli, ruffles and small vesicles (see Fig. 4b). It is possible that such developed membrane structures with enhanced mechanical properties [59] lead to a higher stability of these cells against extracellular ice damage and therefore to an increased number of viable cells immediately after thawing (see Fig. 3b). Another possible explanation for the enhanced viability of cells cultivated for 0.5 and 2 h prior to cryopreservation, compared to the 24 h cultivation group, is cytoskeleton stiffness: in cells cultivated for 24 h before freezing, the reduced cell volume combined with increased cytoskeleton stiffness leads to irreversible damage and cell death [59].
We measured a major reduction of cell area directly after thawing and removal of the hyperosmotic cryoprotectant DMSO (see Fig. 7b). DMSO removal takes approximately 10 min, but the cellular adaptation by regulatory processes after hyperosmotic stress takes 0-20 h (for actin, 10-30 min) [60]. However, in a recently published paper, the real-time osmotic response of adherent cells was measured to take only approximately 5 min [61]. Most likely, we measured the cells immediately at 0 h recovery, so that they had not yet finished their volume-regulation process.
Influence of cultivation time before cryopreservation on cell-shape/differentiation
As some publications have already shown, the shape of hMSCs reflects their future differentiation possibilities [49,54]. Accordingly, we can speculate that hMSCs with round morphology after 0.5 h cultivation before cryopreservation (i.e. a reduced number of star-shaped cells, see Fig. 7c) could be more prone to the adipogenic and chondrogenic pathways of differentiation. After 2 and 24 h of cultivation, more cells show star-shaped morphology, which might favor differentiation of hMSCs into osteoblasts.
Maintenance of functionality by migration of hMSCs on scaffolds after cryopreservation
Since the ability of hMSCs to migrate is of great importance in tissue repair and regeneration [62,63], it is crucial to maintain this potential for future stem cell transplantation [64]. Therefore we evaluated cell motility in SEM images by means of so-called cell traces, which are closely connected with cytoskeletal proteins (tubulin, actin, etc.) as well as focal contact proteins (i.e. integrins) [63,65,66]. Such traces were visible in our samples: we observed fine, long fibers throughout the scaffolds that, according to morphology and size, can be classified as cell traces. These traces were observed when hMSCs were cultivated for 2 or 24 h before cryopreservation (Figs. 5b-h, 6b-h). After a short cultivation (0.5 h) such traces could not be observed in non-frozen scaffolds or immediately after thawing. In general, the availability of cell traces may be a significant characteristic of cell migration, indicating the physiological and functional activity of cells within the scaffolds [62,63]. These results serve as a basis for future experiments and applications, especially for wound healing.
Standard deviation
The high standard deviations in our cell area measurements occur for the following reason. We observed different processes simultaneously: some cells were newly attached and therefore still spherical (0.5 h and, partially, 2 h), others had already begun to spread (0.5 and 2 h cultivation), were already spread (24 h), or had already divided (especially after 24 and 48 h of recovery in all groups), resulting in a decreased cell spreading area [67].
Cell number in control
Considering cell spreading area, morphology and cell traces of adherent hMSCs within alginate-gelatin scaffolds, cells cultivated for 2 h before cryopreservation showed the least deviation and the best viability relative to the non-frozen control scaffolds. However, the total cell number after 2 h cultivation in non-frozen controls was lower than after the other cultivation times. Possible reasons for this might be the following: as can be seen in Figs. 3a and 4, and in agreement with the earlier work of Anselme et al. [42,68], cells after 0.5 h of cultivation were initially adhesive but only minimally spread. The viability assay for those scaffolds was performed immediately and before the addition of culture medium. The addition of medium to the 2 and 24 h cultivation groups, to prevent drying, rinsed the minimally spread cells off the alginate-gelatin scaffold. Most likely, this cell detachment causes the low total cell number after 2 h of cultivation, whereas cells cultivated for 24 h had enough time to divide and double. To eliminate this artifact, we normalized the cell number to the initially seeded cells. This phenomenon needs further investigation, since an improved cultivation time between 2 and 24 h seems possible.
4.5 Significance of ready-to-use cryogel scaffolds for regenerative medicine

It has been reported that cryogels have characteristics well suited to cell-based biomedical applications [23,27,28,69]. First is the good adhesion potential of the sponges, resulting in cell growth, proliferation and migration [31]. With scanning electron microscopy we observed cell traces and showed here that the chemical coupling of gelatin to alginate [19] enables excellent adhesion of therapeutically interesting mesenchymal stem cells. Second, cryogels with pore sizes up to 100 µm allow cell-cell connections and cell-environment interactions such as proliferation, migration and signaling [28,31,69,70], which we could confirm by laser scanning microscopy. These properties may pave the way for applications in wound healing and tissue repair. The third important fact is their elasticity and high water-absorbing capacity [12,70,71], which can prevent cell and substrate damage caused by ice crystallization during cryopreservation. Altogether, we showed here that the elastic properties may benefit cryopreservation procedures, that the adhesion potential remains intact after cryopreservation, and that the large pore size allows good cell migration and diffusion of (cryo-)media. Finally, the demonstrated cell shape (round or elongated) can be modulated and screened by cultivation time and/or the adjustable stiffness of the cryogels, leading the way to ready-to-use scaffolds for regenerative medicine that are capable of long-term storage.
Conclusions
We compared the influence of different cultivation times (0.5, 2, 24 h) before cryopreservation of adherent hMSCs within alginate-gelatin scaffolds on viability, morphology and recovery at different time points (0, 24 and 48 h) after thawing. The criteria for cryopreservation success were: (a) cell viability; (b) number of adherent, viable cells; (c) cell morphology and shape; (d) cell spreading area; and (e) the ability of cells to migrate. For the first time, we showed that short cultivation times (0.5 and 2 h) are preferable for cryopreservation of adherent hMSCs compared with a long cultivation time (24 h): a short cultivation time yielded higher viability and recovery, which was confirmed by the cells' ability to spread and migrate. More precisely, hMSCs cultivated for 2 h before cryopreservation recovered most rapidly from freezing and showed a higher percentage of viable cells than cells cultivated for only 0.5 h. Our work proves that adherent hMSCs on alginate-gelatin scaffolds can be cryopreserved successfully without any further preparation steps and thus constitute a valuable tool for future regenerative medicine. This is a first step towards ready-to-use 3-D scaffolds for long-term storage at cryogenic temperatures in biobanks.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
1998-01-01T00:00:00.000
|
9574968
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://wwwnc.cdc.gov/eid/content/24/12/pdfs/v24-n12.pdf",
"pdf_hash": "615f5b5ce119108d119eb5033f39554f99520f89",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3007",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "615f5b5ce119108d119eb5033f39554f99520f89",
"year": 1998
}
|
pes2o/s2orc
|
Murine typhus in travelers returning from Indonesia.
We report the first three documented cases of murine typhus imported into Europe from Indonesia, discuss clues for the diagnosis of the disease, and urge that murine typhus be considered in the diagnosis of febrile disease in travelers.
Dispatches
One of the oldest recognized, most common, but least reported arthropod-transmitted zoonoses (1,2), murine typhus is caused by Rickettsia typhi, formerly named R. mooseri, a gram-negative obligate intracellular bacterium (1,2). The main vector of murine typhus is the rat flea Xenopsylla cheopis, which maintains R. typhi in rodents. Flea bites and contamination of excoriated skin or the respiratory tract with infected flea feces are the major sources of human infection throughout the world (2). The classic reservoir and vector are the rat and rat flea, respectively (2); however, a role of cat fleas and opossums in maintaining this microorganism has been suggested in some areas of the United States where rats and rat fleas are absent (2,3). Although murine typhus has a worldwide distribution, it is often unrecognized, and documented cases are rarely reported, particularly in tropical countries. Three recent cases of murine typhus within 2 months in patients returning to Europe from Indonesia indicate that murine typhus should be considered a possible cause of imported fever from Indonesia.
Case 1
In October 1997, a 29-year-old man living in France was hospitalized with a 2-day history of fever. On admission, his temperature was 40°C and his pulse, at 70/min, was dissociated. No signs of eruption or other abnormalities were observed. The patient had returned from a 3-week trip to Bali, Indonesia, 2 weeks before his hospitalization but reported no arthropod bites during his trip. Increases in alanine aminotransferase (110 U/L), aspartate aminotransferase (120 U/L), lactate dehydrogenase (1000 U/L) and C-reactive protein (130 mg/L), associated with a low white blood cell count, were noted. Repeated blood smears did not disclose any malaria parasites. The patient received a 21-day treatment with doxycycline plus rifampin and became afebrile on day 3. He was discharged from the hospital and remained well. An indirect immunofluorescence assay (IFA) for antibodies reactive with R. typhi antigens (4) showed raised levels of immunoglobulin (Ig) M (1:512) and IgG (1:256).
Case 2
In November 1997, a 28-year-old man living in France was hospitalized with a 6-day history of fever. On admission, he had a low-grade fever (38°C). Physical examination found no abnormalities except for a few skin ulcerations of the inguinal area. The patient had returned from Bali 3 weeks before his hospitalization but reported no arthropod bites during his trip. Biologic findings were unremarkable. Blood smears did not disclose malaria parasites. The blood cell count, hemoglobin level, and blood chemistry values were within normal limits. Serologic testing for antibodies reactive with R. typhi antigens showed raised levels of IgM (1:512) and IgG (1:256).
Case 3
The third patient, a 28-year-old man from Belgium, became acutely ill during a trip to Indonesia (Bali, Java, Lombok), with fever (40°C), shivers, and diffuse myalgia, but no rash. He was treated in Indonesia with amoxicillin, followed by cefuroxime axetil (500 mg twice a day for 10 days), without success. The patient recalled multiple insect bites during his trip. Clinical examination on arrival in Belgium showed severe disease with bilateral conjunctivitis and splenomegaly, but no adenopathies. Biologic findings were unremarkable except for raised levels of alanine aminotransferase (681 U/L), aspartate aminotransferase (659 U/L), and C-reactive protein (17 mg/L). Analysis of the cerebrospinal fluid showed 20 leukocytes/ml with 75% polymorphonuclear cells. Repeated blood smears did not disclose any parasites. The patient received empirical treatment with quinine, doxycycline, and a quinolone to treat suspected malaria or salmonellosis. The fever rapidly resolved, and the patient recovered. Serologic testing with IFA was positive for R. typhi (IgG, 512; IgM, 1024). Antibodies to R. typhi were also detected in the cerebrospinal fluid (IgM, 2). All patients exhibited serologic cross-reactions (low titers) with R. prowazekii, the agent of epidemic typhus, and R. conorii, the agent of Mediterranean spotted fever (Table; 1,4). Serologic findings were complemented by cross-absorption. When absorption was performed with R. typhi, homologous and heterologous antibodies disappeared, but when absorption was performed with R. prowazekii, only homologous antibodies disappeared; therefore, murine typhus was diagnosed as the cause of illness in the three patients (4).
Murine typhus is a mild disease with nonspecific signs (1). The incubation period is 7 to 14 days. The classic triad of fever, headache, and skin rash is observed in fewer than 15% of cases (5). While fever and headache are frequent, a rash occurs in only half of the patients and is often transient or unimpressive; this is illustrated in these three cases, in which no rash was noted. Arthralgia, myalgia, and respiratory and gastrointestinal symptoms are frequent (5). Leukocytosis or mild leukopenia, anemia, and thrombopenia occur frequently; hyponatremia, hypoalbuminemia, hepatic abnormalities, or renal dysfunction may occur (5). Fewer than half of the patients report exposure to fleas or flea hosts. Of the cases reported here, only one involved insect bites. Untreated patients may have fever and symptoms for 7 to 14 days and then rapidly convalesce. The treatment of choice is antibiotic therapy with tetracyclines. A single 200-mg dose of oral doxycycline usually leads to defervescence within the first 72 hours (6).
Like our patients, most murine typhus patients may have isolated fever and nonspecific biologic abnormalities. The diagnosis may be missed because the rash, the hallmark of rickettsial diseases, is absent (1). For example, in a recent review in the United States, 22 different diagnoses were proposed for 80 patients with murine typhus (5). Consequently, the diagnosis of the disease is based on serologic testing. The reference method is immunofluorescence; however, a latex test, a dot blot enzyme-linked immunosorbent assay, and an immunoperoxidase assay have also been described (4). The fact that travel-associated cases of murine typhus have been identified and may represent a source of imported fever indicates that serologic testing should be considered in febrile patients returning from disease-endemic areas (Figure). Tissot-Dupont et al. recently reported that the prevalence of antibodies against R. typhi in humans in Africa was higher in coastal areas where rats are prevalent (Figure; 7). Other seroepidemiologic studies demonstrated substantial seroprevalence in Asia (Thailand, Vietnam, Japan, Indonesia, China), Europe (Croatia, Greece), and Australia (6,8). Cases have recently been reported in Greece, Spain, Portugal, Israel, and Thailand (6). The disease is known to be endemic in the United States (particularly California, Hawaii, and Texas) and Mexico (6). Some authors have reported the disease in Romania, Yugoslavia, Albania, Italy, areas around the Caspian Sea, Kuwait, Latin America (Venezuela, Argentina, Brazil, Chile, Ecuador, Panama, Guatemala, Puerto Rico), and French Guiana (9). However, serologic cross-reactivity may occur between R. typhi and R. prowazekii, the agent of epidemic typhus (1,4). Epidemic typhus, transmitted by body lice, occurs more frequently in cool areas where clothes are infrequently changed and particularly during human conflicts (1). The disease is prevalent in high-altitude countries in Central America and Africa (10,14); interpretation of serologic results in these countries has at times been difficult (7,10). However, cross-absorption is a useful technique for identifying the infecting rickettsia to the species level, as reported in the present cases (4).
Cases of murine typhus have been reported in travelers returning from India, Nepal, Morocco, Canary Islands, Spain, and Africa (6,(11)(12)(13). Although a case of murine typhus was reported in a patient after a trip to Malaysia and Indonesia (11), the cases reported here are the first serologically documented cases clearly imported from Indonesia. In Malang, East Java, humans are highly exposed to R. typhi, and murine typhus appears to be endemic in the region (8).
Fever is a common problem in patients returning from travel abroad. Although malaria is often suspected, rickettsial diseases (including murine typhus) are generally not. Our cases indicate that murine typhus should be considered in the diagnosis of febrile illness in travelers returning from disease-endemic areas, particularly if fever is present and blood smears show no malaria parasites. Rickettsial diseases were recently cited as the third most frequent cause of imported fever in Switzerland (12). On the basis of this study and two previous reports from France (13) and the United States (11), murine typhus is a frequently imported rickettsial disease. Increased development of tourism and travel to tropical countries should lead to an increase in the documentation of cases.
|
v3-fos-license
|
2017-11-10T13:48:05.169Z
|
2017-07-26T00:00:00.000
|
20659021
|
{
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10915-017-0637-y.pdf",
"pdf_hash": "debd1c51f49339e3d33b46c8640f66a73731302e",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3010",
"s2fieldsofstudy": [
"Engineering",
"Mathematics",
"Physics"
],
"sha1": "e25d85e09bf839b9d4ffa755ec21adb4a26c334d",
"year": 2017
}
|
pes2o/s2orc
|
High-order numerical methods for 2D parabolic problems in single and composite domains
In this work, we discuss and compare three methods for the numerical approximation of constant- and variable-coefficient diffusion equations in both single and composite domains with possible discontinuity in the solution/flux at interfaces, considering (i) the Cut Finite Element Method; (ii) the Difference Potentials Method; and (iii) the summation-by-parts Finite Difference Method. First we give a brief introduction for each of the three methods. Next, we propose benchmark problems, and consider numerical tests, with respect to accuracy and convergence, for linear parabolic problems on a single domain, and continue with similar tests for linear parabolic problems on a composite domain (with the interface defined either explicitly or implicitly). Lastly, a comparative discussion of the methods and numerical results will be given.
Introduction
Designing methods for the high-order accurate numerical approximation of partial differential equations (PDE) posed on composite domains with interfaces, or on irregular and geometrically complex domains, is crucial in the modeling and analysis of problems from science and engineering. Such problems may arise, for example, in materials science (models for the evolution of grain boundaries in polycrystalline materials), fluid dynamics (the simulation of homogeneous or multi-phase fluids), engineering (wave propagation in an irregular medium or a composite medium with different material properties), biology (models of blood flow or the cardiac action potential), etc. The analytic solutions of the underlying PDE may have non-smooth or even discontinuous features, particularly at material interfaces or at interfaces within a composite medium. Standard numerical techniques involving finite-difference approximations, finite-element approximations, etc., may fail to produce an accurate approximation near the interface, leading one to consider and develop new techniques.
The aim of this work is to establish benchmark (test) problems for the numerical approximation of parabolic PDE defined in irregular or composite domains. The considered models (Section 2) arise in the study of mass or heat diffusion in single or composite materials, or as simplified models in other areas (e.g., biology, materials science, etc.). The formulated test problems (Section 4) are intended (a) to be suitable for comparison of high-order accurate numerical methods, and will be used as such in this study, and (b) to be useful in further research. Moreover, the proposed problems include a wide variety of possibilities relevant in applications, which any robust numerical method should resolve accurately, including constant diffusion; time-varying diffusion; high frequency oscillations in the analytical solution; large jumps in diffusion coefficients, solution, and/or flux; etc. For now, we will consider a simplified geometrical setting, with the intent of setting a "baseline" from which further research, or more involved comparisons, might be conducted. Therefore, in Section 2 we will introduce two circular geometries, which are defined either explicitly, or implicitly via a level set function.
In Section 3, we briefly introduce the numerical methods we will consider in this work, i.e., second-and fourth-order versions of (i) the Cut Finite Element Method (cut-FEM); (ii) the Difference Potentials Method (DPM), with Finite Difference approximation as the underlying discretization in the current work; and (iii) the summation-by-parts Finite Difference Method combined with the simultaneous approximation term technique (SBP-SAT-FD). These three methods are all modern numerical methods which may be designed for problems in irregular or composite domains, allowing for high-order accurate numerical approximation, even at points close to irregular interfaces or boundaries. We will apply each method to the formulated benchmark problems, and compare results. From the comparisons, we expect to learn what further developments of the methods at hand would be most important.
To resolve geometrical features of irregular domains, both cut-FEM and DPM use a Cartesian grid on top of the domain, which need not conform with boundaries or interfaces. These types of methods are often characterized as "immersed" or "embedded". In the finite difference framework, embedded methods for parabolic problems are developed in [1,23]. For comparison with cut-FEM and DPM, however, in this paper we use a finite difference method based on a conforming approach. The finite difference operators we use satisfy a summation-by-parts principle. Then, in combination with the SAT method to weakly impose boundary and interface conditions, an energy estimate of the semi-discretization can be derived to ensure stability. In addition, we use curvilinear grids and transfinite interpolation to resolve complex geometries.
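To make the SBP-SAT idea concrete, a generic one-dimensional sketch in our own notation follows; the operator and penalty below are the standard first-derivative textbook ones, whereas the scheme used later in this paper relies on second-derivative SBP operators and interface SATs. A first-derivative SBP operator has the form D = H^{-1} Q, where H = H^T > 0 defines a discrete inner product (u, v)_H = u^T H v and Q satisfies Q + Q^T = B = diag(-1, 0, ..., 0, 1), so that

(u, D v)_H + (D u, v)_H = u_N v_N - u_0 v_0

mimics integration by parts. The classical second-order accurate example on a grid with spacing h is

H = h diag(1/2, 1, ..., 1, 1/2),   (Q v)_j = (v_{j+1} - v_{j-1})/2 in the interior,   (Q v)_0 = (v_1 - v_0)/2,   (Q v)_N = (v_N - v_{N-1})/2.

For the advection equation u_t + a u_x = 0 (a > 0) with inflow data u(x_0, t) = g(t), a SAT imposes the boundary condition weakly,

du/dt = -a D u + τ H^{-1} e_0 (u_0 - g(t)),   e_0 = (1, 0, ..., 0)^T,

and the energy method in the H-norm shows that the choice τ = -a gives d/dt ||u||_H^2 ≤ a g(t)^2, i.e., stability. The same machinery, with second-derivative SBP operators and additional SATs at the interface, underlies the SBP-SAT-FD scheme used in this work.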
The paper is outlined as follows. In Section 2, we give a brief overview of the continuous formulation of the parabolic problems in a single domain and in a composite domain. In Section 3, we give introductions to the basics of the three proposed methods: cut-FEM, DPM, and SBP-SAT-FD. In Section 4, we formulate the numerical test problems. In Section 5, we present extensive numerical comparisons of errors and convergence rates between the second- and fourth-order versions of each method. The comparisons include single-domain problems with constant or time-dependent diffusivity, and interface problems with the interface defined explicitly, or implicitly by a level set function. In Section 6, we give a comparative discussion of the three methods and the numerical results, together with a discussion of future research directions. Lastly, in Section 7, we give our concluding remarks.

Figure 1. The (a) single domain Ω and (b) composite domain Ω = Ω_1 ∪ Ω_2. In (b), ∂Ω_1 has two connected components: the boundary ∂Ω and the interface Γ = ∂Ω_2.
Statement of problem
In this section, we describe two diffusion problems, which will be the setting for our proposed benchmark (test) problems in Section 4. (Recall from Section 1 that these models arise, for example, in the study of mass or heat diffusion.) For brevity, in the following discussion, we denote u := u(x, y, t) and u s := u s (x, y, t), with s = 1, 2.
Remark 1. We consider the circular geometries depicted in Figure 1 as the geometrical setting for our proposed benchmark problems in this work. In applications (Section 1), other geometries will likely be considered, some much more complicated than Figure 1. While our methods can handle more complicated geometry, this is (to the best of our knowledge) the first work looking to establish benchmarks, and compare numerical methods, for parabolic interface problems (3)-(9). As such, we think that the geometries in Figure 1 are a good "baseline", without all the added complexities that more complicated geometries might produce, from which further research, or more involved comparisons, might be done.
To be more specific, we aim to define a simple set of test problems that can be easily implemented and tested for any numerical scheme of interest. With circular domains, it suffices for us to compare and contrast the performance of the numerical methods on a simple geometry with a smooth boundary versus on a composite domain with a fixed interface (explicit or implicit). The approximation of the solution to such composite-domain problems is already challenging for any numerical method, since (i) the solution may fail to be smooth (or may be discontinuous) at the interface, and (ii) there may be discontinuous material coefficients (λ_1 ≠ λ_2).
Remark 2.
For both the single and composite domain problems, we could also consider other boundary conditions, e.g., a Neumann boundary condition as in [6,13], etc.
Cut-FEM
In this section, we give a brief presentation of the cut-FEM method. For a more detailed presentation of cut-FEM, see, for example, [13,14,53].
Let Ω_s be covered by a structured triangulation, T_s, so that each element T ∈ T_s has some part inside of Ω_s; see Figures 2a and 2b. Here, s = 1, 2 is an index for the composite domain problem (3-9), which will be omitted when referring to the single domain problem (1, 2). (For the latter, note that T covers Ω.) Typically T_1 and T_2 would be created from a larger mesh by removing some of the cells. Further, let T_Γ = {T ∈ T : T ∩ Γ ≠ ∅} be the set of intersected elements; see Figure 2c. In the following, we shall use Γ both for the immersed boundary of the single domain problem and for the immersed interface of the composite domain problem, in order to make the connection to the set T_Γ clearer.

Figure 2. The (a) subdomain Ω_1 immersed in a mesh T_1, (b) subdomain Ω_2 immersed in a mesh T_2 and domain Ω immersed in T, and (c) intersected elements T_Γ.
To construct the finite element spaces we use Lagrange elements with Gauss-Lobatto nodes of order p (Q_p-elements). Let V_h^s denote the continuous finite element space on Ω_s consisting of Q_p-elements on the mesh T_s, i.e., V_h^s = {v continuous on T_s : v|_T ∈ Q_p(T) for all T ∈ T_s}. For the single domain problem (1, 2) we solve for the solution u ∈ V_h, while for the composite domain problem (3-9) we solve for the pair (u_1, u_2) ∈ V_h^1 × V_h^2. For the latter problem, this means that the degrees of freedom are doubled over elements belonging to T_Γ.
We begin by stating the weak formulation for the single domain problem (1, 2). Let (·, ·)_X and ⟨·, ·⟩_Y be the L^2 scalar products taken over the two- and one-dimensional domains X ⊂ R^2 and Y ⊂ R^1, respectively. The present method is based on modifying the weak formulation by using Nitsche's method [60] to enforce the boundary condition (2). Multiplying (1) by a test function v ∈ V_h and integrating by parts gives the basic weak statement (11); the boundary condition (2) is consistent with additional boundary terms (12, 13), where γ_D is a constant and h_T is the side length of the quadrilaterals in the triangulation. Adding (12, 13) to (11) gives the weak form (14): find u ∈ V_h such that (14) holds for all v ∈ V_h. For T_Γ (the elements intersected by Γ), note that one must integrate only over the part of the element that lies inside Ω. A problem with this is that one cannot control how the intersections (cuts) between Ω and T are made. Depending on how Ω is located with respect to the triangulation, some elements can have an arbitrarily small intersection with the domain; see, for example, Figure 3a. If Ω is moved with respect to T to make the cut arbitrarily small, then the condition numbers of the mass and stiffness matrices can become arbitrarily large.
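For orientation, a standard symmetric Nitsche formulation of the single domain problem with Dirichlet data u = u_D on Γ (writing λ for the diffusion coefficient and u_D for the boundary data, which are our labels) reads as follows; this is a generic sketch, and the exact terms and constants in (11)-(14) may differ:

(∂_t u, v)_Ω + λ(∇u, ∇v)_Ω − λ⟨∂_n u, v⟩_Γ − λ⟨u − u_D, ∂_n v⟩_Γ + (γ_D λ / h_T)⟨u − u_D, v⟩_Γ = (f, v)_Ω    for all v ∈ V_h.

The two terms containing u − u_D vanish for the exact solution (so the formulation is consistent), and the last of them penalizes violations of the boundary condition; γ_D must be taken large enough for coercivity.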
To mitigate this small-cut issue, in this work we add a stabilizing term j, defined shortly in (19), to the mass and stiffness matrices, so that their condition numbers are bounded independently of how the domain Ω is located with respect to the triangulation T [14,53]. Adding stabilization to (14) results in the stabilized weak form: find u ∈ V_h such that the stabilized equation holds for all v ∈ V_h, where γ_M and γ_A are scalar constants weighting the stabilization of the mass and stiffness forms, respectively.
In order to state the definition of the stabilization (19), denote by F_s the set of faces shown in Figures 3b and 3c; that is, F_s is the set of all faces of the elements in T_Γ, excluding the boundary faces of T_s. The stabilization term (19) is a sum of jump penalties over these faces, where [u] = u|_{F+} − u|_{F−} is the jump over a face F, n refers to a normal of F, and ∂_n^k u denotes the k-th order normal derivative. The scaling with respect to k of the terms in (19) is based on how the stabilization was derived: the k!-factors come from the Taylor expansion and the factor 2k + 1 comes from integrating each term once.
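As a guide to the structure of (19), a ghost-penalty stabilization consistent with the stated k! and 2k + 1 scalings can be written as follows; the constants in this reconstruction are ours and need not match the paper's exact definition:

j(u, v) = Σ_{F ∈ F_s} Σ_{k=1}^{p} [ h_T^{2k+1} / ((k!)^2 (2k + 1)) ] ⟨[∂_n^k u], [∂_n^k v]⟩_F .

Schematically, the stabilized mass and stiffness forms then read M(u, v) = (u, v)_Ω + γ_M j(u, v) and A(u, v) = λ(∇u, ∇v)_Ω + (Nitsche boundary terms) + γ_A j(u, v); the jump penalties act on the faces of the cut elements and control the part of the polynomial living outside Ω, which is what bounds the condition numbers independently of the cuts.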
We now consider the composite domain problem (3)-(9). To derive the weak formulation, one follows essentially the same steps as for the single domain problem, namely:
1. For both (3) and (4), multiply the equation for u_s with a test function v_s ∈ V_h^s, and then integrate by parts;
2. Add terms consistent with the interface and boundary conditions; and
3. Add stabilization terms j_1 and j_2 over F_1 and F_2, respectively.
The resulting weak form involves bilinear forms M and A, corresponding to the stabilized mass and stiffness matrices; a functional L_Ω, corresponding to the forcing function; forms a_Γ and L_Γ, which consistently enforce the interface conditions (8, 9); and terms a_∂Ω and L_∂Ω, which enforce the boundary condition (7) along the outer boundary ∂Ω. In (24)-(27), n denotes the outward-pointing normal at either Γ or ∂Ω (depending on the domain of integration); κ_1 + κ_2 = 1, so that {v} = κ_1 v_1 + κ_2 v_2 is a convex combination; and γ_Γ, κ_1, κ_2 are chosen as in [13]. The remaining parameters (appearing in Equations 21, 22, 26-28) are chosen similarly; in particular, the scaling of γ_D with respect to p follows from an inverse inequality. When p = 1, these reduce to the same parameters as the ones used in [71], where γ_M was chosen based on numerical experiments on the condition number of the mass matrix. This also agrees with the choice of γ_A and γ_D in [14], where γ_A was investigated numerically. In order to use cut-FEM, one needs a way to perform integration over the intersected elements T_Γ. For example, with the interface problem, on each element K ∈ T_Γ we need a quadrature rule for K ∩ Ω_1, K ∩ Ω_2, and K ∩ Γ. For the numerical tests in this work (Section 4), we represent the geometry by a level set function and compute high-order accurate quadrature rules with the algorithm from [68].
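For intuition about what such a quadrature rule must provide, the snippet below integrates a function over K ∩ Ω_1 by brute-force midpoint subsampling of the cut cell; this crude, low-order substitute is not the high-order algorithm of [68], and all names and parameters are illustrative.

import numpy as np

# Low-order illustration: integrate f over the part K ∩ Ω1 of a cut cell K,
# with Ω1 = {phi < 0} given by a level set function phi.
def cut_cell_integral(f, phi, x0, y0, hx, hy, m=32):
    # midpoint subsampling of the cell [x0, x0+hx] x [y0, y0+hy]
    xs = x0 + (np.arange(m) + 0.5) * hx / m
    ys = y0 + (np.arange(m) + 0.5) * hy / m
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    inside = phi(X, Y) < 0.0
    return f(X, Y)[inside].sum() * (hx / m) * (hy / m)

# Example: area of the part of a cell lying inside the circle of radius 0.5.
phi = lambda x, y: x**2 + y**2 - 0.25
approx_area = cut_cell_integral(lambda x, y: np.ones_like(x), phi, 0.3, 0.3, 0.2, 0.2)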
Remark 3. Optimal (second-order) convergence was rigorously proven for cut-FEM applied to the Poisson problem in [14]. As far as we know, there is no rigorous proof of higher-order convergence for cut-FEM, though such a proof would likely be similar to the second-order case.
DPM
We continue in this section with a brief introduction to the Difference Potentials Method (DPM), which was originally proposed by V. S. Ryaben'kii (see [64,67,65], and see [29,33] for papers in his honor). Our aim is to consider the numerical approximation of PDEs on arbitrary, smooth geometries (defined either explicitly or implicitly) using the DPM together with standard, finite-difference discretizations of (1) or (3, 4) on uniform, Cartesian grids, which need not conform with boundaries or interfaces. To this end, we work with high-order methods for interface problems based on Difference Potentials, which were originally developed in [66] and [3,4,5,6,26,28]. We also introduce new developments here for handling implicitly-defined geometries. (The reader can consult [67] for the general theory of the Difference Potentials Method.) Broadly, the main idea of the DPM is to reduce uniquely solvable and well-posed boundary value problems in a domain Ω to pseudo-differential Boundary Equations with Projections (BEP) on the boundary of Ω. First, we introduce a computationally simple auxiliary domain as part of the method. The original domain is embedded into the auxiliary domain, which is then discretized using a uniform Cartesian grid. Next, we define a Difference Potentials operator via the solution of a simple Auxiliary Problem (defined on the auxiliary domain), and construct the discrete, pseudo-differential Boundary Equations with Projections (BEP) at grid points near the continuous boundary or interface Γ. (This set of grid points is called the discrete grid boundary.) Once constructed, the BEP are then solved together with the boundary/interface conditions to obtain the value of the solution at the discrete grid boundary. Lastly, using these reconstructed values of the solution at the discrete grid boundary, the approximation to the solution in the domain Ω is obtained through the discrete, generalized Green's formula.
Mathematically, the DPM is a discrete analog of the method of Calderón's potentials in the theory of partial differential equations. The DPM, however, does not require explicit knowledge of Green's functions. Although we use an Auxiliary Problem (AP) discretized by finite differences, the DPM is not limited to this choice of spatial discretization. Indeed, numerical methods based on the idea of Difference Potentials can be designed with whichever choice of spatial discretization is most natural for the problem at hand (e.g., see [25]).
Practically, the main computational complexity of the DPM reduces to the required solutions of the AP, which can be done very efficiently using fast, standard O(N log N) solvers. Moreover, in general the DPM can be applied to problems with general boundary or interface conditions, with no change to the discretization of the PDE.
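As an illustration of why the AP solves are cheap, the snippet below solves a model auxiliary problem of the form (I − λ Δt Δ_h) u = q on a rectangle with homogeneous Dirichlet data using the discrete sine transform; the operator, the boundary closure, and all parameters are assumptions for this sketch and not the exact AP used in this paper.

import numpy as np
from scipy.fft import dstn, idstn

# Model AP: (I - lam*dt*Laplacian_h) u = q with u = 0 on the boundary;
# the standard 5-point Laplacian diagonalizes in the sine basis.
n, lam, dt = 127, 1.0, 1e-3
h = 1.0 / (n + 1)
q = np.random.rand(n, n)                      # interior right-hand-side values

k = np.arange(1, n + 1)
eig1d = (2.0 / h**2) * (np.cos(np.pi * k * h) - 1.0)   # eigenvalues of the 1-D second difference
sym = 1.0 - lam * dt * (eig1d[:, None] + eig1d[None, :])

u_hat = dstn(q, type=1) / sym                 # divide by the symbol of the operator
u = idstn(u_hat, type=1)                      # grid values of the AP solution, O(N log N) overall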
Let us now briefly introduce the DPM for the numerical approximation of the parabolic interface models (3)-(9). First, we must introduce the point-sets that will be used throughout the DPM. (Note that the main construction of the method below applies to the single domain problem (1, 2), after omitting the index s and replacing interface conditions with boundary conditions; see [6].) Let Ω_s (s = 1, 2) be embedded in a rectangular auxiliary domain Ω_s^0. Introduce a uniform Cartesian grid, denoted M_s^0, on Ω_s^0, with grid spacing h_s. Let M_s^+ = M_s^0 ∩ Ω_s denote the grid points inside each subdomain Ω_s, and M_s^- = M_s^0 \ M_s^+ the grid points outside each subdomain Ω_s. Note that the auxiliary domains Ω_1^0, Ω_2^0 and auxiliary grids M_1^0, M_2^0 need not agree, and indeed may be selected completely independently, given considerations regarding accuracy, adaptivity, or efficiency.
Define a finite-difference stencil N_{j,k}^{s,α}, with α = 5 or 9, to be the stencil of the standard five-point or of a wide nine-point Laplacian centered at the grid point (x_j, y_k). Next, with α fixed, define the point-sets N_s^+ and N_s^- as the unions of all stencils centered at points of M_s^+ and M_s^-, respectively. Lastly, we define the important point-set γ_s = N_s^+ ∩ N_s^-, which we call the discrete grid boundary. In words, γ_s is the set of grid points that straddle the continuous interface Γ. (See Figure 4 for an example of these point-sets, given a single elliptical domain Ω.) Note that the point-sets M_s^+, N_s^+, and γ_s will be used throughout the Difference Potentials Method.
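The point-sets are straightforward to construct in practice; the sketch below builds M^+, N^+, and γ for a circular domain given by a level set on a uniform grid with the five-point stencil (the grid size and radius are arbitrary).

import numpy as np

# Illustrative construction of the DPM point-sets for the disk x^2 + y^2 < r^2.
n, r = 41, 0.7
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

M_plus = X**2 + Y**2 < r**2                # grid points inside the domain
M_minus = ~M_plus                          # grid points outside

def stencil_union(mask):
    """Union of 5-point stencils centered at the True points of `mask`."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]   # x-neighbors
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]   # y-neighbors
    return out

N_plus = stencil_union(M_plus)
N_minus = stencil_union(M_minus)
gamma = N_plus & N_minus                   # discrete grid boundary straddling Γ

print("points in M+:", M_plus.sum(), " points in gamma:", gamma.sum())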
Here, we define the fully-discrete finite-difference discretization (33) of (3, 4), and then define the Auxiliary Problem. The discretization we consider combines a time discretization with a spatial discretization on the Cartesian grid. (Here, we have simplified notation slightly by assuming that h := h_1 = h_2, although the grid spacings need not be the same in general.) For full details of the discretization, including the choice of BDF2 or BDF4 in the time discretization, we refer the reader to Appendix 8.1.
The choices of discretization (33) in each subdomain need not be the same. As in [3,6], one could choose a second- and a fourth-order discretization on M_1^+ and M_2^+, respectively, given considerations about accuracy, adaptivity, expected regularity of the analytical solution in each domain, etc.
Next, we define the discrete Auxiliary Problem, which plays a central role in the construction of the Difference Potentials operator, the resulting Boundary Equations with Projection at the discrete grid boundary, and in the numerical approximation of the solution via the discrete, generalized Green's formula.
Definition 1 (Discrete Auxiliary Problem (AP)). At time t^{i+1}, given the right-hand side grid function q^{i+1}_s : M^0_s → R, the following difference equations (34, 35) are defined as the discrete AP.
Remark 4. For a given right-hand side q i+1 s , the solution of the discrete AP (34, 35) defines a discrete Green's operator G s ∆t,h q i+1 s . The choice of boundary conditions (35) will affect the resulting grid function G s ∆t,h q i+1 s , and thus the Boundary Equations with Projection defined below. However, the choice of boundary conditions (35) in the AP will not affect the numerical approximation of (3-9), so long as the discrete AP is uniquely solvable and well-posed.
Let us denote by G^s_{∆t,h} F^{i+1}_s the particular solution on N^+_s of the fully-discrete problem (33), defined by solving the AP (34, 35) with the right-hand side (36) and restricting the solution from N^0_s to N^+_s. Let us also introduce a linear space V_{γs} of all grid functions denoted v^{i+1}_{γs}, which are defined on γ_s and extended by zero to the other points of N^0_s. These grid functions are referred to as discrete densities on γ_s.

Definition 2 (The Difference Potential of a density). The Difference Potential of a given density v^{i+1}_{γs} is the grid function P^{i+1}_{N^+_s γ_s} v^{i+1}_{γs}, obtained by solving the AP (34, 35) with the right-hand side (37) constructed from the density, and restricting the solution from N^0_s to the point-set N^+_s.
Theorem 1 (Boundary Equations with Projection (BEP)). A density u^{i+1}_{γs} ∈ V_{γs} is the trace on γ_s of some solution on N^+_s of the fully-discrete problem (33) if and only if

u^{i+1}_{γs} − P^{i+1}_{γs} u^{i+1}_{γs} = G^s_{∆t,h} F^{i+1}_s |_{γs},    (38)

where P^{i+1}_{γs} u^{i+1}_{γs} denotes the restriction of the Difference Potential to the discrete grid boundary γ_s.

Proof. See [67] for the general theory of DPM (including the proof for general elliptic PDE), or one of [3,5,6] for the proof in the case of parabolic interface problems.
Remark 5. A given density v i+1 γs is the trace of some solution of the fully-discrete finite-difference equations (33) if and only if it is a solution of the BEP.
However, since boundary or interface conditions have not yet been imposed, the BEP will have infinitely many solutions u^{i+1}_{γs}. As originally discussed in [3,4,5,6,26,28], in this work we consider the following approach in order to find a unique solution of the BEP.
At each time level t i+1 , one can approximate the solution of (3-9) at the discrete grid boundary γ s , using the Cauchy data of (3-9) on the continuous interface Γ, up to the desired second-or fourth-order accuracy. (By Cauchy data, we mean the trace of the solution of (3-9), together with the trace of its normal derivative, on Γ.) Below, we will define an Extension Operator which will extend the Cauchy data of (3-9) from Γ to γ s .
As we will see, the Extension Operator in this work depends only on the given parabolic interface model. Moreover, we will use a finite-dimensional, spectral representation for the Cauchy data of (3-9) on Γ. Then, we will use the Extension Operator, together with the BEP (38) and the interface conditions (8,9), to obtain a linear system of equations for the coefficients of the finite-dimensional, spectral representation. Hence, the derived BEP will be solved for the unknown coefficients of the Cauchy data. Using this obtained Cauchy data, we will construct the approximation of (3-9) using the Extension Operator, together with the discrete, generalized Green's formula.
Let us now briefly discuss the Extension Operator for the second-order numerical method, and refer the reader to Appendix 8.2 for details (including details for the fourth-order numerical method). For points in the vicinity of Γ, we define a coordinate system (d, ϑ), where ϑ is arclength from some reference point, and d is the signed distance in the normal direction from the point to Γ. Now, as a first step towards defining the Extension Operator, we define a new function

v^{i+1}_s(d, ϑ) = Σ_{l=0}^{p} (d^l / l!) ∂^l u^{i+1}_s / ∂n^l |_Γ,    (39)

where n is the unit outward normal vector at Γ. We choose p = 2 for the second-order method (which we will discuss now) and p = 4 for the fourth-order method (see Appendix 8.2). As a next step for the second-order method, the second-order normal derivative appearing in (39) must be expressed in terms of the boundary data u^{i+1}_s, ∂u^{i+1}_s/∂n, etc. As a last step, a straightforward sequence of calculations (see Appendix 8.2) shows that this can be done through the PDE, using the tangential derivatives of the Cauchy data along Γ and the curvature (40)-(41), where κ denotes the curvature of Γ. Therefore, with v^{i+1}_s(d, ϑ) defined by (39)-(41), the only unknown data at each time step t^{i+1} are the unknown Dirichlet data u^{i+1}_s and the unknown Neumann data ∂u^{i+1}_s/∂n. The Extension Operator will incorporate the interface conditions (8,9) when it is combined with the BEP (38), so that the only independent unknowns at each time step t^{i+1} will be u^{i+1}_1 and ∂u^{i+1}_1/∂n. (This is also true for the fourth-order numerical method - see Appendices 8.2 and 8.3.) Now we are ready to define the Extension Operator that extends the Cauchy data of (3-9) from Γ to γ_s.
For a given point (x j , y k ) ∈ γ s , note that d is the signed distance between (x j , y k ) and its orthogonal projection on Γ, while ϑ is the arclength along Γ between a reference point and the orthogonal projection of (x j , y k ).
Next, we briefly discuss the finite-dimensional, spectral representation of the Cauchy data u^{i+1}_{s,Γ}. Indeed, we wish to choose a basis φ_ν(ϑ) on Γ (ν = 1, 2, 3, . . .) in order to accurately approximate the two components of the Cauchy data u^{i+1}_{s,Γ}. To be specific, whichever basis we choose, we require that the error of the truncated expansions of the two components of u^{i+1}_{s,Γ} in the first N_0 and N_1 basis functions, respectively, tends to zero as N_0, N_1 → ∞, for some sequences of real numbers (c^{s,i+1}_{1,ν})_{ν=1}^{N_0} and (c^{s,i+1}_{2,ν})_{ν=1}^{N_1}. In other words, we require that the Cauchy data can be resolved to any desired accuracy by a finite expansion in the chosen basis.

Now let us discuss a choice of basis. In this work, recall that we consider interfaces Γ that are at least C^2(Γ) (due to the choice of smooth, circular geometries). Also, as we will see in Section 4.1, each function u considered in the test problems on a composite domain (TP-2A, TP-2B, TP-2C) is locally smooth, in the sense that u|_{Ω1} = u_1 and u|_{Ω2} = u_2 are smooth in Ω_1 and Ω_2, respectively. Moreover, each component of the Cauchy data u^{i+1}_{1,Γ} and u^{i+1}_{2,Γ} is a smooth, periodic function of arclength ϑ. (Note that u^{i+1}_{1,Γ} and u^{i+1}_{2,Γ} need not agree, and indeed do not - neither µ_1(x, y, t) nor µ_2(x, y, t) in (8,9) are identically equal to zero, for any of our test problems on a composite domain.) Therefore, in this work, we choose a standard trigonometric basis φ_ν(ϑ), with

φ_1(ϑ) = 1,   φ_{2k}(ϑ) = cos(2πkϑ/|Γ|),   φ_{2k+1}(ϑ) = sin(2πkϑ/|Γ|),

for k ≥ 1, where |Γ| denotes the total arclength of Γ. Moreover, at every time step t^{i+1}, we will discretize the Cauchy data u^{i+1}_{s,Γ} = (u^{i+1}_s, ∂u^{i+1}_s/∂n)|_Γ using this basis. Therefore, we let

ũ^{i+1}_{s,Γ} = Σ_{ν=1}^{N_0} c^{s,i+1}_{1,ν} Φ^0_ν + Σ_{ν=1}^{N_1} c^{s,i+1}_{2,ν} Φ^1_ν,    (46)

where Φ^0_ν = (φ_ν, 0) and Φ^1_ν = (0, φ_ν) are the set of basis functions used to represent the Cauchy data on the interface Γ.
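The practical effect of this choice of basis can be illustrated with a small, self-contained Python sketch (ours, not the paper's; the sample function, number of modes, and quadrature points are assumptions): a smooth periodic function of arclength is fit with a truncated trigonometric basis by least squares, and the maximum fit error decays rapidly with the number of modes.

import numpy as np

# Minimal sketch: represent a smooth, periodic "Cauchy-data" component on a
# circle of radius R by a truncated trigonometric basis in arclength theta.
R = 0.7
L = 2 * np.pi * R                                 # total arclength |Gamma|
theta = np.linspace(0.0, L, 400, endpoint=False)
target = np.exp(np.sin(2 * np.pi * theta / L))    # assumed sample data on Gamma

def design_matrix(theta, n_modes):
    """Columns: 1, cos(2*pi*k*theta/L), sin(2*pi*k*theta/L), k = 1..n_modes."""
    cols = [np.ones_like(theta)]
    for k in range(1, n_modes + 1):
        cols.append(np.cos(2 * np.pi * k * theta / L))
        cols.append(np.sin(2 * np.pi * k * theta / L))
    return np.column_stack(cols)

for n_modes in (2, 4, 8):
    A = design_matrix(theta, n_modes)
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    err = np.max(np.abs(A @ coeffs - target))
    print(f"modes = {n_modes:2d}, max fit error = {err:.2e}")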
Remark 6. It should also be possible to relax the regularity assumptions on the domain under consideration. For example, one can consider piecewise-smooth, locally-supported basis functions (defined on Γ) as part of the Extension Operator. In particular, [52] use this approach to design a high-order accurate numerical method for the Helmholtz equation, in a geometry with a reentrant corner. Furthermore, [80,81] combine the DPM together with the XFEM, and design a DPM for linear elasticity in a non-Lipschitz domain (with a cut).
Definition 5 (Discrete, generalized Green's formula). At each time step t^{i+1}, the numerical approximation u^{i+1}_s ≈ u_s(x_j, y_k, t^{i+1})|_{(x_j,y_k)∈N^+_s} of (3-9) is given by

u^{i+1}_s = G^s_{∆t,h} F^{i+1}_s + P^{i+1}_{N^+_s γ_s} u^{i+1}_{γs}.    (47)

Here, u^{i+1}_{γs} = Ex_s ũ^{i+1}_{s,Γ} is the density obtained by applying the Extension Operator to the reconstructed Cauchy data.

In this work, we also propose a novel feature of DPM, extending the method originally developed in [66] and [3,4,5,6,26,28] to the composite domain problem (3-9) with implicitly-defined geometry. The primary difference between Difference Potentials Methods on explicitly-defined versus implicitly-defined composite domains is in the approximation of the interface Γ, which must be done accurately and efficiently, in order to maintain the desired second- or fourth-order accuracy.
The main idea of DPM-based methods for implicitly-defined geometry is to seek an accurate and efficient explicit parameterization of the implicit boundary/interface. First, we represent the geometry implicitly via a level set function F(x, y) on M^0. Then we construct a local interpolant of F(x, y) on a subset of M^0 near the continuous interface Γ. Next, we parameterize Γ by arclength using numerical quadrature. With this parameterization, we (i) compute the Fourier series expansion from initial conditions for the Cauchy data u^{i+1}_{s,Γ} on the implicit interface Γ, and (ii) construct the extension operators (Definition 4) with p = 2 or p = 4.
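To give a flavor of this step, the following Python sketch (illustrative only; the level-set function, grid, and the assumption of a star-shaped interface are ours, not the paper's) locates the zero level set along grid edges using a local linear interpolant, orders the crossings by polar angle, and accumulates an approximate arclength parameterization.

import numpy as np

# Minimal sketch (assumptions: circular level set, star-shaped interface).
R, n = 0.7, 81
xs = np.linspace(-1.0, 1.0, n)
ys = xs
X, Y = np.meshgrid(xs, ys, indexing="ij")
F = X**2 + Y**2 - R**2                      # implicit description of Gamma

pts = []
for j in range(n - 1):                       # crossings along x-edges
    for k in range(n):
        f0, f1 = F[j, k], F[j + 1, k]
        if f0 * f1 < 0:
            t = f0 / (f0 - f1)               # root of the linear interpolant
            pts.append(((1 - t) * xs[j] + t * xs[j + 1], ys[k]))
for j in range(n):                           # crossings along y-edges
    for k in range(n - 1):
        f0, f1 = F[j, k], F[j, k + 1]
        if f0 * f1 < 0:
            t = f0 / (f0 - f1)
            pts.append((xs[j], (1 - t) * ys[k] + t * ys[k + 1]))

pts = np.array(pts)
order = np.argsort(np.arctan2(pts[:, 1], pts[:, 0]))    # valid if star-shaped
pts = pts[order]
seg = np.linalg.norm(np.diff(np.vstack([pts, pts[:1]]), axis=0), axis=1)
arclength = np.concatenate([[0.0], np.cumsum(seg)])[:-1]  # theta at each point
print("approx |Gamma| =", seg.sum(), " exact =", 2 * np.pi * R)

In practice a higher-order local interpolant and quadrature would be used to retain second- or fourth-order accuracy, but the overall workflow is the same.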
Conjecture 1 (High-order accuracy of the DPM with implicit geometry). Due to the second-or fourth-order accuracy (in both space and time) of the underlying discretization (33), the extension operator (42) with p = 2 or p = 4, and the established error estimates and convergence results for the DPM for general linear elliptic boundary value problems on smooth domains (presented in [62,63,67] and [33]), we expect second-and fourth-order accuracy in the maximum norm for the error in the computed solution (59 or 60) for both the single and composite domain parabolic problems.
Remark 7. Indeed, in the numerical results (Section 5) we see that the computed solution (47) at every time level t^{i+1} has accuracy O(h^2 + ∆t^2) for the second-order method, and O(h^4 + ∆t^4) for the fourth-order method, for both the single and composite domain problems, with explicit or implicit geometry. See [3,6,27,66] for more details and numerical tests involving explicit (circular and elliptical) geometries.
Main Steps of the algorithm: Let us summarize the main steps for the Difference Potentials Method.
• Step 1 : Introduce the auxiliary domains, the Cartesian grids, the point-sets M^+_s, N^+_s, γ_s, and the basis functions used to represent the Cauchy data on Γ.
• Step 2 : At each time step t^{i+1}, compute the Particular Solution u^{i+1}_s, (x_j, y_k) ∈ N^+_s, using the AP with the right-hand side (36).
• Step 3 : Solve the linear system (79), obtained by combining the BEP (38) with the Extension Operator (42) and the interface conditions (8, 9), for the unknown coefficients of the Cauchy data.
• Step 4 : Compute the approximation of the density u^{i+1}_{γs}, by applying the Extension Operator (42) to the solution of (79).
• Step 5 : Construct the Difference Potentials P^{i+1}_{N^+_s γ_s} u^{i+1}_{γs} of the density u^{i+1}_{γs}, using the AP with the right-hand side (37).
• Step 6 : Obtain the approximation of the solution on N^+_s from the discrete, generalized Green's formula (47).
SBP-SAT-FD
We continue in this section with a brief presentation of SBP-SAT-FD, for solving the parabolic problems presented in Section 2. For more detailed discussions of the SBP-SAT-FD method, we refer the reader to two review papers [21,73].
The SBP-SAT-FD method was originally used on Cartesian grids. To resolve complex geometries, we consider a grid mapping approach by transfinite interpolation [43]. A smooth mapping requires that the physical domain is a quadrilateral, possibly with smooth, curved sides. If the physical domain does not have the desired shape, we then partition the physical domain into subdomains, so that each subdomain can be mapped smoothly to the reference domain. As an example, the single domain of equation (1,2), shown in Figure 5a, is divided into five subdomains. The five subdomains consist of one square subdomain, and four identical quadrilateral subdomains (modulo rotation by π/2) with curved sides. Similarly, the composite domain of equation (3-9) is divided into nine subdomains, as shown in Figure 5b. Suitable interface conditions are imposed to patch the subdomains together.
Although the side-length of the centered square is arbitrary (as long as the square is strictly inside the circle), its size and position have a significant impact on the quality of the curvilinear grid. In a high-quality mesh, the elements should not be skewed too much, and the sizes of the elements should be nearly uniform. In practice, it is usually difficult to know a priori the optimal way of domain division.
A Cartesian grid in the reference domain is mapped to a curvilinear grid in each subdomain. The grids are aligned with boundaries and interfaces, thus avoiding small-cut difficulties sometimes associated with embedded methods. In this paper, we only consider conforming grid interfaces, i.e., the grid points from two adjacent blocks match on the interface. For numerical treatment of non-conforming grid interfaces in the SBP-SAT-FD framework, see [44,55].
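To illustrate the grid-mapping idea just described, here is a minimal Python sketch of transfinite interpolation (our illustration, not the authors' code; the boundary curves are assumed example geometry): the four boundary curves of a patch are blended so that the unit reference square maps onto a subdomain with one curved side.

import numpy as np

# Minimal transfinite-interpolation (TFI) sketch: map the unit reference
# square onto a patch whose top side is curved. The boundary curves are
# assumed example geometry, chosen so that the four corners match up.
def bottom(xi): return np.stack([xi, np.zeros_like(xi)], axis=-1)
def top(xi):    return np.stack([xi, 1.0 + 0.2 * np.sin(np.pi * xi)], axis=-1)
def left(eta):  return np.stack([np.zeros_like(eta), eta], axis=-1)
def right(eta): return np.stack([np.ones_like(eta), eta], axis=-1)

def tfi(xi, eta):
    """Transfinite interpolation of the four boundary curves."""
    corners = (np.outer(1 - xi, 1 - eta)[..., None] * bottom(np.array(0.0)) +
               np.outer(xi, 1 - eta)[..., None] * bottom(np.array(1.0)) +
               np.outer(1 - xi, eta)[..., None] * top(np.array(0.0)) +
               np.outer(xi, eta)[..., None] * top(np.array(1.0)))
    edges = ((1 - eta)[None, :, None] * bottom(xi)[:, None, :] +
             eta[None, :, None] * top(xi)[:, None, :] +
             (1 - xi)[:, None, None] * left(eta)[None, :, :] +
             xi[:, None, None] * right(eta)[None, :, :])
    return edges - corners

xi = np.linspace(0.0, 1.0, 11)
eta = np.linspace(0.0, 1.0, 11)
grid = tfi(xi, eta)       # shape (11, 11, 2): curvilinear grid points (x, y)
print(grid[0, 0], grid[-1, -1], grid[5, -1])   # corners and curved top side

The quality of the resulting grid (orthogonality, stretching) depends strongly on the chosen boundary curves, which is exactly the sensitivity discussed above.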
When a physical domain is mapped to a reference domain, the governing equation is transformed to the Cartesian coordinates of the reference domain. The transformed equation is usually in a more complicated form than the original equation. In general, a parabolic problem in a physical domain will be transformed to an equation of the form

J u_t = (α u_ξ + β u_η)_ξ + (β u_ξ + γ u_η)_η + J f,    (49)

where (ξ, η) is the Cartesian coordinate in the unit square, and J(ξ, η), α(ξ, η), β(ξ, η), γ(ξ, η) depend on the geometry of the physical domain and on the chosen mapping. In particular, we use transfinite interpolation for the grid mapping. In this case, the precise form of (49) and the derivation of the grid transformation are presented in Section 3.2 of [7]. Even though the original equation is in the simplest form with unit coefficients, the transformed equation has variable coefficients and mixed derivatives. Therefore, it is important to construct multi-block finite difference methods solving the transformed equation (49). Hence, we need two SBP operators: D_1 ≈ ∂/∂x, to approximate a first derivative, and D_2^{(b)} ≈ ∂/∂x(b(x)∂/∂x), to approximate a second derivative with variable coefficient, where b(x) > 0 is a known function. Below we discuss SBP properties, and start with the first derivative.
Consider two smooth functions u(x), v(x) on x ∈ [0, 1]. We discretize [0, 1] uniformly by N grid points, and denote the restriction of u(x), v(x) onto the grid by u, v, respectively. Integration by parts states:

∫_0^1 u v_x dx = [u v]_0^1 − ∫_0^1 u_x v dx.    (50)

The SBP operator D_1 mimics integration by parts:

u^T H D_1 v = u^T B v − (D_1 u)^T H v,    (51)

where H is symmetric positive definite - thus defining an inner product - and B = diag(−1, 0, · · · , 0, 1).
In fact, H is also a quadrature [20]. It is easy to verify that (51) is equivalent to

H D_1 + (H D_1)^T = B,    (52)

which is the SBP property for the first derivative operator. At the grid points in the interior of the domain, standard, central, finite-difference stencils can be used in D_1, and the weights of the standard, discrete L_2-norm are used in H. At a few points close to boundaries, special stencils and weights must be constructed in D_1 and H, respectively, to satisfy (52). The SBP operators D_1 were first constructed in [45] and later revisited in [72]. The SBP norm H can be diagonal or non-diagonal. While non-diagonal norm SBP operators have a better accuracy property than diagonal norm SBP operators, when terms with variable coefficients are present in the equation, a stability proof is only possible with diagonal norm SBP operators. Therefore, we use diagonal norm SBP operators in this paper.
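To make the SBP property concrete, here is a small self-contained Python check (our illustration, not taken from the cited references): it builds the classical second-order accurate first-derivative SBP operator D_1 with its diagonal norm H, and verifies numerically that H D_1 + (H D_1)^T equals B = diag(−1, 0, ..., 0, 1).

import numpy as np

# Classical second-order, diagonal-norm SBP first-derivative operator on [0, 1].
N = 21
h = 1.0 / (N - 1)

D1 = np.zeros((N, N))
for i in range(1, N - 1):                  # central differences in the interior
    D1[i, i - 1], D1[i, i + 1] = -0.5 / h, 0.5 / h
D1[0, 0], D1[0, 1] = -1.0 / h, 1.0 / h     # one-sided boundary closures
D1[-1, -2], D1[-1, -1] = -1.0 / h, 1.0 / h

H = h * np.eye(N)                          # diagonal norm (quadrature weights)
H[0, 0] = H[-1, -1] = 0.5 * h

B = np.zeros((N, N))
B[0, 0], B[-1, -1] = -1.0, 1.0

# SBP property (52): H*D1 + (H*D1)^T = B, verified to machine precision.
print("SBP residual:", np.max(np.abs(H @ D1 + (H @ D1).T - B)))

# Sanity check: differentiate a smooth function (boundary rows are first order).
x = np.linspace(0.0, 1.0, N)
print("max derivative error:", np.max(np.abs(D1 @ np.sin(x) - np.cos(x))))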
For a second derivative with variable coefficients, the SBP operators D_2^{(b)} were constructed in [54]. We remark that applying D_1 twice also approximates a second derivative, but is less accurate and more computationally expensive than D_2^{(b)}. Due to the choice of centered difference stencils at interior grid points, the order of accuracy of the SBP operators is even at these points, and is often denoted by 2p. To fulfill the SBP property, at a few grid points near boundaries, the order of accuracy is reduced to p for diagonal norm operators. This detail notwithstanding, such a scheme is often referred to as 2p-th-order accurate. In fact, for the second- and fourth-order SBP-SAT-FD schemes used in this paper to solve parabolic problems, we can expect a second- and fourth-order overall convergence rate, respectively [77].
An SBP operator only approximates a derivative. When imposing boundary and interface conditions, it is important that the SBP property is preserved and an energy estimate is obtained. For this reason, we consider the SAT method [16], where penalty terms are added to the semi-discretization, imposing the boundary and interface conditions weakly. This bears similarities with the Nitsche finite element method [60] and the discontinuous Galerkin method [40].
We note that in [75], SBP-SAT-FD methods were developed for the wave equation with Dirichlet boundary conditions, Neumann boundary conditions, and interface conditions. Comparing equation (53) with (49), the only difference is that the wave equation has a second derivative in time, while the heat equation has a first derivative in time. The spatial derivatives of (53) and (49) are the same. Assuming homogeneous boundary data for simplified notation, we write the SBP-SAT-FD discretization of (53) as

d²v/dt² = Q v,    (54)

where Q is the spatial discretization operator including the boundary implementation. For the scheme developed in [75], stability is proved by the energy method by multiplying (54) by (dv/dt)^T H_2 from the left, where H_2 is a diagonal, positive-definite operator, obtained through a tensor product from the corresponding SBP norm, H, in one spatial dimension. It is shown in [75] that H_2 Q is symmetric and negative semi-definite. Therefore, we can write (55) as

d/dt [ (dv/dt)^T H_2 (dv/dt) − v^T H_2 Q v ] = 0,

i.e., the discrete energy of (53) is conserved. If we use the same operator Q to discretize the heat equation (49) with the same boundary condition as the wave equation (53), i.e.,

dv/dt = Q v,    (56)

then the scheme is also stable. To see this, we multiply (56) by v^T H_2 from the left, and obtain

d/dt (v^T H_2 v) = 2 v^T H_2 Q v ≤ 0,

where v^T H_2 v is the discrete energy for (49). In this paper, we use the spatial discretization operators developed in [75] to solve both the single (1, 2) and composite domain problems (3-9).
In [10], SBP-SAT-FD methods are discussed for the one-dimensional heat equation with constant coefficients, both in a single domain and a composite domain. In theory, these schemes can also be generalized to solve equation (49), but are different from the ones used in this paper.
Test Problems
In this section, we first list the test problems that we will consider (in Section 4.1), and then briefly motivate and discuss these choices (in Section 4.2). The tests we propose are "manufactured solutions", in the sense that we state an exact solution u(x, y, t) or (u_1(x, y, t), u_2(x, y, t)) and a diffusion coefficient λ(t) or (λ_1, λ_2). From (1, 2) (for the single domain problem) or (3-9) (for the composite domain problem) we compute the (i) right-hand side, (ii) initial conditions, (iii) boundary condition, and (iv) functions (µ_1(x, y, t), µ_2(x, y, t)) for the interface/matching conditions. Then, (i-iv), together with the diffusion coefficient, serve as the inputs for our numerical methods.
Motivation of the chosen test problems
Test Problem 1A (TP-1A) involves a high-degree polynomial, with total degree of 17. This is a rather straightforward test problem, which allows us to establish a good "baseline" with which to compare each method. The choice of high degree ensures that there will be no cancellation of local truncation error, so that we should see -at most -second-or fourth-order convergence for the given methods, barring some type of superconvergence. Next, (TP-3A) adds on (incrementally) the complication of time-varying diffusion. Likewise, (TP-2A) offers a straightforward "baseline" with which to consider the interface problem: The test problem is piecewise-smooth, and the geometry is simplified (see Remark 1). However, there is a jump in both the analytical solution and its flux, which requires a well-designed numerical method to accurately approximate. Moreover, (TP-2A) was first proposed in [48] (see also [6]), and is a good comparison with the immersed interface method therein.
Then, (TP-2B) adds additional challenges onto (TP-2A) in the form of much higher-frequency oscillations; while (TP-2C) adds onto (TP-2A) in the form of both (i) large contrast in diffusion, and (ii) large jumps in the analytical solution and its flux.
Time discretization
The spatial discretization for each method is discussed in Section 3. For the time discretization, the backward differentiation formulas of second- and fourth-order (BDF2 and BDF4) are used for the second- and fourth-order methods, respectively. In each case, the time-step is given by

∆t = 0.5 h.    (58)

However, note that h in (58) bears different physical meanings for each method. Indeed, for cut-FEM, h is the average distance between the Gauss-Lobatto points; for DPM, h is the grid spacing in the uniform, Cartesian grid M^0 (see the text prior to (33)); and for SBP-SAT-FD, h is the minimum grid spacing in the reference domain.
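The accuracy of the BDF time-stepping can be checked on a scalar model problem; the following Python sketch (illustrative assumptions only: the test ODE, final time, and step sizes are ours) integrates y' = -y with BDF2 and BDF4, starting from exact values, and reports the observed convergence rates.

import numpy as np

# Minimal check of BDF2/BDF4 order on the scalar model problem y' = lam*y,
# y(0) = 1, integrated to T = 1 (all choices here are illustrative).
def bdf_error(order, dt, T=1.0, lam=-1.0):
    n = int(round(T / dt))
    t = np.arange(n + 1) * dt
    y = list(np.exp(lam * t[:order]))          # exact startup values
    for i in range(order, n + 1):
        if order == 2:   # (3/2) y_i - 2 y_{i-1} + (1/2) y_{i-2} = dt*lam*y_i
            rhs = 2.0 * y[i - 1] - 0.5 * y[i - 2]
            y.append(rhs / (1.5 - dt * lam))
        else:            # BDF4: (25/12) y_i - 4 y_{i-1} + 3 y_{i-2}
                         #        - (4/3) y_{i-3} + (1/4) y_{i-4} = dt*lam*y_i
            rhs = (4.0 * y[i - 1] - 3.0 * y[i - 2]
                   + (4.0 / 3.0) * y[i - 3] - 0.25 * y[i - 4])
            y.append(rhs / (25.0 / 12.0 - dt * lam))
    return abs(y[-1] - np.exp(lam * T))

for order in (2, 4):
    e1, e2 = bdf_error(order, 0.02), bdf_error(order, 0.01)
    print(f"BDF{order}: observed rate = {np.log2(e1 / e2):.2f}")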
Measure for comparison
Let u^i_{j,k} denote the computed numerical approximation of u(x, y, t) at the grid-point (x_j, y_k) ∈ Ω and time t_i = i∆t ∈ (0, T]. For the three methods, we will compare the size of the maximum error in u at the grid points, with respect to the number of degrees of freedom (DOF). For the single domain problem (1, 2), the maximum error is computed as:

E = max_{(x_j,y_k)∈Ω, t_i∈(0,T]} | u^i_{j,k} − u(x_j, y_k, t_i) |,    (59)

and for the composite domain problem (3-9) as:

E = max_{s=1,2} max_{(x_j,y_k)∈Ω_s, t_i∈(0,T]} | u^i_{s;j,k} − u_s(x_j, y_k, t_i) |.    (60)
Convergence results
In the following tables and figures, we state the number of degrees of freedom in the grid, maximum error (59, 60 for the single- and composite-domain problems, respectively), and an estimate of the rate of convergence. In Tables 1-5, the estimate of rate of convergence is computed as follows. Let (DOF_n, E_n) be given, with n = 1, 2, 3 referring to the first, second, and third grids (from coarsest to finest). Then, for n = 2, 3, compute the standard estimate

Rate = log(E_{n−1}/E_n) / log( √DOF_n / √DOF_{n−1} ),

which is the estimated rate of convergence, denoted in Tables 1-5 by "Rate". In Figures 6, 7, 10-12, the estimate of rate of convergence is computed differently. Computing a least-square linear regression for the data (log_10(√DOF_n), log_10(E_n)) gives a line with slope m, where m is the estimate of rate of convergence, reported in the legend on the right side of each figure. Overall, we see in Tables 1-5 that the error for second-order methods (denoted, for brevity, as CUT2, DPM2, SBP2) on the finest mesh is similar, or sometimes larger, than the error for fourth-order methods (denoted CUT4, DPM4, SBP4) on the coarsest mesh - this illustrates the effectiveness of higher-order methods, when high accuracy is important. Additionally, comparing the three methods together, the size of the errors for the single-domain problems (TP-1A, TP-3A) are similar, up to a constant factor; while for the composite-domain problems (TP-2A, TP-2B, TP-2C) we do see differences of one or two orders of magnitude, with the DPM having the smallest errors.

Figure 6 Log-log plot of absolute error (59) versus √DOF, and estimated rate of convergence, for the second- and fourth-order versions of each method, applied to Test Problem 1A (TP-1A). See Table 1 for more details.
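Both ways of estimating the rate can be reproduced with a few lines of Python; the snippet below (with made-up sample data, purely for illustration) computes the pairwise estimate used in the tables and the least-squares slope used in the figures.

import numpy as np

# Illustrative data: DOF and max errors for three grids of a fourth-order method.
dof = np.array([1.0e4, 4.0e4, 1.6e5])
err = np.array([2.0e-5, 1.3e-6, 8.0e-8])

# Pairwise estimate (as in the tables): compare consecutive grids.
for n in range(1, len(dof)):
    rate = np.log(err[n - 1] / err[n]) / np.log(np.sqrt(dof[n] / dof[n - 1]))
    print(f"grid {n} -> {n + 1}: rate = {rate:.2f}")

# Least-squares estimate (as in the figures): slope of log10(E) vs log10(sqrt(DOF)).
slope = np.polyfit(np.log10(np.sqrt(dof)), np.log10(err), 1)[0]
print(f"least-squares rate = {abs(slope):.2f}")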
In Table 1 and Figure 6, we observe that the measured rates of convergence for the numerical approximation of Test Problem 1A (TP-1A) are all ≈ 2 (for the second-order versions) or ≈ 4 (for the fourth-order versions), except for DPM4, which for this test problem is superconvergent, with fifth-order convergence. Such higher-than-expected convergence might occur due to several reasons - for example, (i) if the geometry is smooth; (ii) if the magnitude of the derivatives has fast decay (effectively reducing the local truncation error by a factor of h); or (iii) if there is cancellation of error due to symmetries in the geometry, or in the analytical solution.

Table 2 Convergence in the maximum norm (59), for the second- and fourth-order versions of each method, applied to Test Problem 3A (TP-3A), with diffusion coefficient λ(t) = 1.1 + sin(πt), and time-step ∆t = 0.5h.

Table 2 and Figure 7 show the numerical results for (TP-3A). This test problem has the same manufactured solution as (TP-1A), but with a time-varying diffusion coefficient. Despite this added complexity, the numerical results are the same order of accuracy, and in many cases the errors are the same up to seven digits, when compared with the results for (TP-1A). This similarity in the numerical results demonstrates that the three methods can robustly handle time-varying diffusion coefficients.
The plots of spatial error at the final time T = 1.0, shown in Figure 8, are representative of other tests (not included in this text) on a single circular domain. The error in the cut-FEM solution presents largely at the boundary; the error in the DPM solution typically has smooth error, even for grid points very near Γ; while the error in the SBP-SAT-FD solution is not smooth at interfaces introduced by the domain partitioning.
The plots of spatial error at the final time T = 1.0 for (TP-2A) are shown in Figure 9. These plots are fairly representative of the other composite domain tests reported herein, and also of other test problems not included in this work. As in Figure 8, the cut-FEM has its largest error at degrees of freedom on cut (intersected) elements; the DPM has piecewise smooth error, including even grid points at the boundary/interface; and the SBP-SAT-FD has its largest error at the interfaces between computational subdomains, with particularly pronounced error at the corners of Ω, where the grid is most stretched.
Regarding the max-norm error presented in Table 3 and Figure 10, we see that the DPM has a smaller max-norm error, by more than an order of magnitude. We also observe that the convergence rate of the fourth-order SBP-SAT-FD is only three. This suboptimal convergence is in line with the error plot in Figure 9c, which shows that the error at the corners of the domain is significantly larger than elsewhere. In addition, the error is non-smooth only along the interfaces on the two diagonal lines of the domain. We have also measured the L_2 error at the final time T = 1.0 (not reported in this work), and fourth-order convergence is obtained.
In Table 4 and Figure 11, we see the numerical results for (TP-2B). The analytical solution is similar to (TP-2A), though much more oscillatory -this additional challenge is manifested by an increase in error by several orders of magnitude.
In Table 5 and Figure 12, we see the numerical results for (TP-2C), which shows that our numerical methods are robust to large jumps in diffusion coefficients, the analytical solution, and/or the flux of the true solution. Also, observe that the errors from DPM2/DPM4 (explicit geometry) and DPM2-I/DPM4-I (implicit geometry) in Tables 3-5 are almost identical, which demonstrates the robustness and flexibility of the DPM.

Table 3 Convergence in the maximum norm (60), for the second- and fourth-order versions of each method, applied to Test Problem 2A (TP-2A), with diffusion coefficients (λ_1, λ_2) = (10, 1), and time-step ∆t = 0.5h. (DPM2-I/DPM4-I refers to the extension of the DPM method, to consider implicit geometry.)
Discussion
There are many possible methods (Section 1) for the numerical approximation of PDE posed on irregular domains, or on composite domains with interfaces. In this work, we consider three such methods, designed for the high-order accurate numerical approximation of parabolic PDEs (1, 2 or 3-9). Each implementation was written, tested, and optimized by the authors most experienced with the respective method. Although we consider only one type of boundary/interface (a circle), we hope that the benchmark problems considered will be a valuable resource, and the numerical results a valuable comparison, for researchers interested in numerical methods for such problems.

The primary differences between the cut-FEM and the standard finite element method are the stabilization terms for near-boundary degrees of freedom, and the quadrature over cut (intersected) elements. Tuning the free parameters in the stabilization terms could mitigate the errors observed in Figures 8, 9. (We have done some preliminary experiments suggesting that the errors decrease when tuning these parameters, but further investigations are required in order to guarantee robustness.) Given a level-set description of the geometry, there are robust algorithms for constructing the quadrature over cut elements. Together, these differences allow for an immersed (non-conforming) grid to be used. The theoretical base for cut-FEM is well established.
The DPM is based on the equivalence between the discrete system of equations (33) and the Boundary Equations with Projection (Thm. 1). The formulation outlined in Section 3.2 allows for an immersed (non-conforming) grid; fast O(N log N ) algorithms, even for problems with general, smooth geometry; and reduces the size of the system to be solved at each time-step. The convergence theory is well-established for general, linear, elliptic boundary value problems, and we conjecture in Section 3.2 that this extends to the current setting. In this work, we have extended DPM to work with implicitly-defined geometries for the first time. This is a first step for solving problems where the interface moves with time.
In the finite difference framework (the SBP-SAT-FD method, in this work), the SBP property makes it possible to prove stability and convergence for high-order methods by an energy method. Combined with the SAT method to impose boundary and interface conditions, the SBP-SAT-FD method can be efficient for solving time-dependent PDEs. Geometrical features are resolved by curvilinear mapping, which requires an explicit parameterization of boundaries and interfaces. High-quality grid generation is important - our experiments, though not reported in this work, have shown that the error in the solution is sensitive to both the orthogonality of the grid and the grid stretching.
Table 4 Convergence in the maximum norm (60), for the second- and fourth-order versions of each method, applied to Test Problem 2B (TP-2B), with diffusion coefficients (λ_1, λ_2) = (10, 1), and time-step ∆t = 0.5h. (DPM2-I/DPM4-I refers to the extension of the DPM method, to consider implicit geometry.)

Similarities between the cut-FEM and the DPM (beyond the use of an immersed grid) include the thin layer of cut cells along the boundaries/interfaces (cut-FEM) and the discrete grid boundary γ (DPM); and the use of higher-order normal derivatives in the stabilization term (cut-FEM) and extension operator (in the Boundary Equations with Projection; DPM). A similarity between the cut-FEM and SBP-SAT-FD is the weak imposition of boundary conditions, via Nitsche's method (cut-FEM) or the SAT method (SBP-SAT-FD). In this work, the DPM and the SBP-SAT-FD method both use an underlying finite-difference discretization, but the DPM is not restricted to this type of discretization. Although both the cut-FEM and the DPM use higher-order normal derivatives in their treatment of the boundary/interface, the precise usage differs. For cut-FEM, it is the normal of the element interfaces cut by Γ, while for DPM, it is the normal of the boundary/interface Γ. Moreover, in the cut-FEM, stabilization terms (19) involving higher-order normal derivatives at the boundaries of cut-elements are added to the weak form of the PDE, to control the condition number of the mass and stiffness matrices, with a priori estimation of parameters to guarantee positive-definiteness of these matrices; while in the DPM, the Boundary Equations with Projection is combined with the Extension Operator (Definition 4), which incorporates higher-order normal derivatives at the boundary/interface Γ.
Figure 11 Log-log plot of absolute error (60) versus √DOF, and estimated rate of convergence, for the second- and fourth-order versions of each method (cut-FEM, DPM with explicit geometry, SBP-SAT-FD), applied to Test Problem 2B (TP-2B). See Table 4 for more details.

Returning to Section 5.3, we see (in Tables 1-5) that, for the composite domain problems, the fourth-order SBP-SAT-FD method only has a convergence rate of three. From the error plot in Figure 9c, we observe that the large error is localized at the four corners of the domain Ω, where the curvilinear grid is non-orthogonal and is stretched the most (see Figure 5b).
As seen in the error plots (Figures 8, 9), the error for the cut-FEM and the SBP-SAT-FD has "spikes", while for the DPM the error is smooth. A surprising observation from Figure 9 is that conforming grids (on which the SBP-SAT-FD method is designed) do not necessarily produce more accurate solutions than immersed grids (on which the cut-FEM and the DPM are designed). Indeed, it is challenging to construct a high-quality curvilinear grid for the considered composite domain problem.
Future directions we hope to consider (in the context of new developments and also further comparisons) include: (i) parabolic problems with moving boundaries/interfaces, (ii) comparison of numerical methods for interface problems involving wave equations [12,70,71,75,78], (iii) extending our methods to consider PDEs in 3D, (iv) design of fast algorithms, and (v) design of adaptive versions of our methods.
Indeed, for (i), difficulties for the cut-FEM might be the costly construction of quadrature, while for DPM difficulties might be the accurate construction of extension operators. Regarding (iii), this has already been done for the cut-FEM and SBP-SAT-FD; while for the DPM, this is current work, with the main steps extending from 2D to 3D in a straightforward manner.
Conclusion
In this work, we propose a set of benchmark problems to test numerical methods for parabolic partial differential equations in irregular or composite domains, in the simplified geometric setting of Section 2, with the interface defined either explicitly or implicitly. Next, we compare and contrast three methods for the numerical approximation of such problems: the (i) cut-FEM; (ii) DPM; and (iii) SBP-SAT-FD. Brief introductions of the three numerical methods are given in Section 3. It is noteworthy that the DPM has, for the first time, been extended to problems with an implicitly-defined interface. For the three methods, the numerical results in Section 5.3 illustrate the high-order accuracy. Similar errors (different by a constant factor) are observed at grid points away from the boundary/interface, while the observed errors near the boundary/interface vary depending upon the given method. Although we consider only test problems with circular boundary/interface, the ideas underlying the three methods can readily be extended to more general geometries.
In general, all three methods require an accurate and efficient resolution of the explicitly- or implicitly-defined irregular geometry: cut-FEM relies on accurate quadrature rules for cut elements, and a good choice of stabilization parameters; DPM relies on an accurate and efficient representation of the Cauchy data using a good choice of basis functions; and SBP-SAT-FD relies on a smooth parametrization to generate a high-quality curvilinear grid.
Appendix (DPM)
Let us now expand some details presented in the brief introduction to the Difference Potentials Method (Section 3.2).
Equation-based extension
Let us now expand the discussion surrounding (39)-(41), leading up to Definition 4 of the Extension Operator (42).
An important step in this discussion is to recast the original PDE (3, 4) into a curvilinear form, for points (x, y) in the vicinity of Γ. Following the notation of [58], let us first introduce the coordinate system (d, ϑ) for points in the vicinity of Γ. Recall from Definition 4 that d is the distance in the normal direction from a given point to its orthogonal projection on Γ, while ϑ is the arclength along Γ from some reference point to the orthogonal projection. In this coordinate system, the PDE (3, 4) takes the curvilinear form (67), where H_ϑ = 1 − dκ is the Lamé coefficient, and κ is the signed curvature along the interface Γ. From (67), a straightforward calculation gives the second-order normal derivative ∂²u_s/∂n² (used in the calculation of (41)), which is

∂²u_s/∂n² = (1/λ_s) ( ∂u_s/∂t − f_s ) − ∂²u_s/∂ϑ² + κ ∂u_s/∂n.    (68)
For the fourth-order numerical method, which uses an Extension Operator with p = 4, we also need the third- and fourth-order normal derivatives. Differentiating (68) with respect to n gives expressions for ∂^3 u_s/∂n^3 and ∂^4 u_s/∂n^4 in terms of lower-order normal derivatives, tangential derivatives along Γ, and the data of the problem.

Next, let us follow up on comments made in the text following (41). There, it was pointed out that the unknown Dirichlet and Neumann data u_s, ∂u_s/∂n are the only data required for the Extension Operator (42) with p = 2. Moreover, it was pointed out that this is also true for the Extension Operator when p = 4. The reasoning is as follows: by (68) and its normal derivatives, every higher-order normal derivative appearing in the extension can be rewritten in terms of u_s, ∂u_s/∂n, their tangential derivatives along Γ, and known quantities (λ_s, f_s, and the curvature κ), so no additional unknowns are introduced.
The system of equations at each time step.
With the Cauchy data u^{i+1}_{s,Γ} and the Extension Operator Ex_s u^{i+1}_{s,Γ} from Γ to γ_s introduced in Definition 4, and the spectral representation introduced in (46), we now give a sketch of the linear system for the coefficients (c^{s,i+1}_{1,ν})_{ν=1}^{N_0} and (c^{s,i+1}_{2,ν})_{ν=1}^{N_1}, and moreover the approximation of the solution u_s(x, y, t^{i+1}) at (x_j, y_k) ∈ N^+_s. Indeed, substituting Ex_s ũ^{i+1}_{s,Γ} into the BEP (38) (so that c^{i+1}_s = [c^{i+1}_{s,1}, c^{i+1}_{s,2}] collects the unknown coefficients), we obtain a linear system whose matrix A_s = [A_{s,1}, A_{s,2}] has the column blocks

A_{s,1} = [ (I − P^{i+1}_{γs}) Ex_s Φ^0_1, (I − P^{i+1}_{γs}) Ex_s Φ^0_2, · · · , (I − P^{i+1}_{γs}) Ex_s Φ^0_{N_0} ],
A_{s,2} = [ (I − P^{i+1}_{γs}) Ex_s Φ^1_1, (I − P^{i+1}_{γs}) Ex_s Φ^1_2, · · · , (I − P^{i+1}_{γs}) Ex_s Φ^1_{N_1} ],

each evaluated at the points of γ_s; this gives the per-subdomain equations (76). Then, the full system of equations (79) is obtained by assembling (76) for s = 1, 2. However, note that c^{i+1}_1 and c^{i+1}_2 are related by the interface conditions (8,9), so that the number of unknowns in (79) is equal to the dimension of either c^{i+1}_1 or c^{i+1}_2, depending on which one is considered the independent unknown. Therefore, the dimension of A is (|γ_1| + |γ_2|) × (N_0 + N_1), where N_0 + N_1 is the dimension of c^{i+1}_1 or c^{i+1}_2 (whichever is the independent unknown). The number of basis functions (N_0 + N_1) is chosen so that the finite-dimensional, spectral representation (46) of the Cauchy data u^{i+1}_{s,Γ} accurately resolves the Cauchy data with a small number of basis functions, in the consideration of both accuracy and computational efficiency. For (TP-2A) and (TP-2B), we choose c^{i+1}_2 as the independent unknown, while for (TP-2C) we choose c^{i+1}_1. With these choices for the independent unknown, we have N_0 = N_1 = 1 for the three considered test problems.
Since each column involves the Difference Potentials operator P^{i+1}_{γs} applied to a vector Ex_s Φ^k_ν, each column is therefore constructed via one solution of the Auxiliary Problem (Definition 1). However, the Auxiliary Problems are posed on the computationally simple Auxiliary Domains, and their solutions can be computed using a fast FFT- or multigrid-based algorithm, which can significantly reduce the computational cost. Moreover, if λ_s(t) ≡ λ_s is constant, then A can be computed and inverted once (as a pre-processing step), thus significantly reducing the computational cost for long-time simulations.
|
v3-fos-license
|
2021-05-18T05:16:40.312Z
|
2021-05-01T00:00:00.000
|
234744200
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/22/9/4994/pdf",
"pdf_hash": "12bc65168ef090464b631a0f849c0794e9bbd81d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3012",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "12bc65168ef090464b631a0f849c0794e9bbd81d",
"year": 2021
}
|
pes2o/s2orc
|
Neurons and Glia Interplay in α-Synucleinopathies
Accumulation of the neuronal presynaptic protein alpha-synuclein within proteinaceous inclusions represents the key histopathological hallmark of a spectrum of neurodegenerative disorders, referred to by the umbrella term a-synucleinopathies. Even though alpha-synuclein is expressed predominantly in neurons, pathological aggregates of the protein are also found in the glial cells of the brain. In Parkinson’s disease and dementia with Lewy bodies, alpha-synuclein accumulates mainly in neurons forming the Lewy bodies and Lewy neurites, whereas in multiple system atrophy, the protein aggregates mostly in the glial cytoplasmic inclusions within oligodendrocytes. In addition, astrogliosis and microgliosis are found in the synucleinopathy brains, whereas both astrocytes and microglia internalize alpha-synuclein and contribute to the spread of pathology. The mechanisms underlying the pathological accumulation of alpha-synuclein in glial cells that under physiological conditions express low to non-detectable levels of the protein are an area of intense research. Undoubtedly, the presence of aggregated alpha-synuclein can disrupt glial function in general and can contribute to neurodegeneration through numerous pathways. Herein, we summarize the current knowledge on the role of alpha-synuclein in both neurons and glia, highlighting the contribution of the neuron-glia connectome in the disease initiation and progression, which may represent a potential therapeutic target for a-synucleinopathies.
Introduction
The presynaptic neuronal protein alpha-synuclein (aSyn) under physiological conditions regulates neurotransmitter release and SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) complex assembly and is considered a chameleon protein due to its remarkable conformational plasticity [1]. On the other hand, aggregated aSyn is the major component of the proteinaceous inclusions found in the degenerating neurons of Parkinson's disease (PD) and dementia with Lewy bodies (DLB) brains, known as Lewy bodies (LBs) and Lewy neurites (LNs) [2]. Alpha-synuclein also has a strong genetic link to PD pathogenesis, since missense point mutations of the SNCA gene encoding for aSyn, SNCA gene locus duplications and triplications or gene-enhanced expression are the main causes of familial PD [3][4][5][6][7][8][9]. In contrast, multiple system atrophy (MSA), a fatal debilitating neurodegenerative disorder, is characterized by the presence of aggregated aSyn within the glial cytoplasmic inclusions (GCIs) present in the cytoplasm of oligodendrocytes [10][11][12]. Glial aSyn accumulation is also evident in PD, with aSyn-positive deposits reported in astrocytes and oligodendrocytes [13][14][15]. Contrarily, aSyn-positive inclusions in astrocytes have also been found in MSA [16], but to a lesser extent [14] compared to neuronal and oligodendroglial inclusion pathology.
The clinical and neuropathological heterogeneity in a-synucleinopathies may arise from the unique properties of the different conformational aSyn strains found in neurons or glia, which might contribute to distinct clinical phenotypes [17][18][19][20]. Even though the physiological and pathological functions of aSyn in neurons, where the protein is physiologically expressed, are well characterized, the mechanisms underlying the pathological accumulation of the protein in glial cells are far less understood.

Figure 1. The role of aSyn at the presynaptic terminal. A schematic representation depicting the physiological and pathological effects of aSyn at the synapse: (a) aSyn reduces the activity of tyrosine hydroxylase (TH), the enzyme responsible for catalyzing the conversion of L-Tyrosine to L-DOPA, thus impairing dopamine biosynthesis, (b) Increased levels of aSyn inhibit VMAT2, which is responsible for the uptake of monoamines (such as dopamine) into synaptic vesicles, (c) aSyn associates with synaptic vesicle membranes and regulates the SNARE-dependent vesicle fusion and neurotransmitter release, (d) Soluble aSyn interacts with the dopaminergic transporter DAT and decreases its amount on the plasma membrane, thus regulating the dopamine re-uptake from the synapse. However, aSyn aggregates trigger DAT recruitment to the plasma membrane that leads to massive entry of dopamine, (e) aSyn aggregates interact with Na + /K + -ATPase (NKA) preventing the effective pump out of Na + ions, (f) aSyn is secreted from neuronal cells partly via associating with exosomes, (g) Extracellular aSyn interacts with neuronal receptors (i.e., LAG3) for its internalization in neurons or (h) it is up-taken via endocytosis, (i) PD-linked A30P and A53T mutant aSyn form large membrane pores through which most cations (i.e., Ca 2+ ) can pass non-selectively, (j) Extracellular aSyn activates the voltage-gated Ca 2+ channels (VGCCs), resulting in increased Ca 2+ influx, (k) Monomeric aSyn enters neuronal cells via passive diffusion or direct penetration of their plasma membrane.
On the contrary, natively unfolded monomeric aSyn at the pre-synaptic terminal is prone to form pathological conformations, thus exerting neurotoxic effects [44] (Figure 1). It has been additionally suggested that aSyn is preferentially bound to synapsin 1 and VAMP2 when the protein is present in its oligomeric form [45], highlighting the importance of the conformational state of aSyn for its proper function. There are also findings supporting the implication of aSyn in synaptic transmission, due to its association with the synaptic vesicle pool, modulating vesicle mobility, recycling pool homeostasis and endocytosis [46][47][48].
Alpha-synuclein can also function as a molecular chaperone via effective binding to other intracellular proteins. The first indication came with the discovery that aSyn displays structural and functional homology with other molecular chaperones, such as the 14-3-3 or small heat shock proteins [49,50]. Additional studies revealed that aSyn synergistically acts with the presynaptic cysteine-string protein-alpha (CSPalpha) promoting the assembly of the SNARE complex [38,51], further validating its chaperoning properties. Biochemical and structural analysis of aSyn strengthened the current indications for its chaperone-like function via its C-terminal region (residues 61-140) [52][53][54]. However, subsequent studies indicated that the chaperone-binding site of aSyn lies within the non-amyloidal component (NAC) region (residues 61-95), which is prone to aggregation and thus highly susceptible to form fibrils [55,56].
Association with Membranes and Lipid Trafficking
Intracellular aSyn can be found either natively unfolded in a soluble state or membranebound forming an alpha-helical or a beta-sheet secondary structure, depending on the solution conditions [57][58][59]. It has been proposed that there is a bidirectional link between aSyn species formation and membrane remodeling, meaning that not only aSyn structure is affected upon lipid interaction, but also that membrane integrity depends on the presence of different aSyn conformations [60][61][62]. However, there are controversial results regarding the association of aSyn with membrane lipids and its conformational state, with some studies reporting that membrane-bound aSyn gets protected from aggregation, thus leading to neurotoxicity attenuation [44,63,64], whereas others suggest that interaction of aSyn with membranes triggers its self-association and subsequent aggregation [65][66][67]. Importantly, it has been shown that the PD-related aSyn mutations reduce its interaction with membranes, thus further suggesting that aSyn binding on membranes may exert neuroprotective effects [68][69][70][71][72].
A plethora of studies argue that aSyn in its soluble state exists as a monomer [73-76], whereas others suggest that it occurs physiologically as a tetramer resisting aggregation [77][78][79]. In the presence of lipid membranes, aSyn adopts an alpha helical structure in the N-terminus region that stabilizes the formation of high-order aSyn multimers [42,73,80,81]. Interestingly, the membrane curvature seems to affect the structure of aSyn, which can adopt either an elongated or a broken alpha-helix conformation, when bound to a large diameter (∼100 nm) or a small, highly curved vesicle, respectively [82][83][84][85]. It has been also proposed that aSyn has a role in lipid metabolism, since it participates in fatty acids transportation between the cytosol and membranous compartments [86,87] and in lipid and membrane biogenesis organizing and stabilizing the lipid bilayer of membranes and vesicles [88,89]. On the other hand, disrupted aSyn expression pattern leads to lipid dysregulation, since both the absence and the overexpression of either wild-type (WT) or mutated aSyn gives rise to abnormal lipid metabolism [90][91][92][93]. Finally, several studies have demonstrated that aSyn regulates membrane homeostasis via inhibition of phospholipases activity, such as phospholipase D [94-97]; however, there are controversial results in the literature [98].
Aggregation and Post-Translational Modifications
alpha-Synuclein is composed of three distinct domains: the N-terminal lipid-binding domain, the NAC region and the C-terminal binding domain [84,99,100]. A central role in fibril formation and the subsequent aggregation of aSyn is thought to be played by the NAC region of the protein, which is composed of nonpolar side-chains and assembles into cross-β structures. Based on that, it has been shown that the deletion of specific residues (74-84) within the core region can abolish aSyn aggregation [101,102]. It has also been demonstrated that endogenous neuronal aSyn and the interaction of aSyn with lipids play a central role in aSyn recruitment and the subsequent seeding of pathology, as they could behave as a core for the formation of insoluble aggregates [35,75,103,104].
Several mutations in the SNCA gene have been linked to PD pathogenesis, such as the A53T, A30P, E46K, H50Q, G51D, A18P, pA29S and A53E mutations, all located in the N-terminus region [3,5,7,68][105][106][107]. Most of them are tightly linked to enhanced aSyn aggregation, pathology progression and clinical manifestations in PD. Specifically, A53T and A30P aSyn mutants are natively unfolded, similarly to the WT protein. However, at higher concentrations A53T has been shown to accelerate aSyn fibrillization, a critical event in PD pathogenesis [108][109][110]. On the other hand, A30P promotes aSyn oligomerization rather than fibrillization, thus reducing aggregate formation [109,111]. The E46K mutation leads to conformational changes of aSyn due to C-terminal to N-terminal contacts in the monomeric protein, resulting in enhanced aSyn accumulation [111][112][113]. Moreover, the PD-linked H50Q point mutation increases aSyn aggregation propensity and toxicity [114], whereas the G51D mutation has the opposite effects [115]. However, although G51D mutants seem to oligomerize at a slow rate, they form more toxic fibrils, thus suggesting distinct disease mechanisms for the various aSyn mutations [116,117]. Similarly, the A53E mutant seems to lead to neuronal toxicity in an aSyn aggregation-independent manner [118]. Strikingly, the G51D and A53E aSyn mutations have been proposed as potential links between PD and MSA [106,119]. However, to date, no heritable mutations in the coding region of the SNCA gene have been identified in MSA cases [120]. Apart from point mutations [117,121,122], various post-translational modifications are implicated in aSyn aggregation, the most important of which are phosphorylation, sumoylation, ubiquitination, nitration, N-acetylation, O-GlcNAcylation and truncation.
The phosphorylation of aSyn at both serine and tyrosine residues, and particularly at Ser129, is widely considered an indicator of pathology. However, the effect of Ser129 phosphorylation on aSyn toxicity is still under debate, with the majority of studies suggesting that it accelerates cell toxicity and neurodegeneration [123][124][125][126][127]. Contrarily, others have proposed a neuroprotective role of Ser129 phosphorylation since it was reported to drive the conversion of toxic oligomers into less harmful aggregates [128][129][130]. Other mechanisms of phosphorylated Ser129 aSyn-mediated neuroprotection include inhibition of its fibrillation [131], upregulation of tyrosine hydroxylase (TH) activity [132] or lowering of the protein's membrane-binding affinity [133]. Although approximately 90% of aSyn in LBs is found phosphorylated at Ser129, a significant amount of phosphorylated Ser129 aSyn is also detected in a soluble, rather than an aggregated, state in PD brains [134], whereas only a small percentage of aSyn is phosphorylated at Ser129 in the brains of healthy controls [135][136][137]. In addition, aSyn can be phosphorylated at the Ser87, Tyr125, Tyr133 and Tyr136 residues [138,139], and these modifications are also implicated in either neurotoxic or neuroprotective events [127,138,140,141]. Nonetheless, in most in vivo models where aSyn is overexpressed (virally, as a transgene, or via PFF inoculations), the detection of a pSer129-positive aSyn signal is invariably linked to neurotoxicity, indicating a neurotoxic rather than a neuroprotective role.
Nitrated aSyn is also tightly linked to neurodegeneration, through its implication in oxidative damage and disease development [146], as demonstrated by experiments in both cellular and animal models, as well as in patient-derived brains [142-145]. Four tyrosine residues in the aSyn sequence, Tyr39 (within the N-terminus) and Tyr125, Tyr133 and Tyr136 (within the C-terminus), can undergo nitration. Nitration at Tyr39 has been shown to result in low binding affinity of aSyn for lipid vesicles due to the loss of its alpha-helical conformation [147], whereas nitration at Tyr125 seems to play a crucial role in aSyn dimerization [148]. Moreover, cross-linking between two tyrosine residues is considered a potential mechanism for aSyn oligomer stabilization and its subsequent aggregation into proteinaceous inclusions [149]. In addition, the detection of nitrated aSyn in human blood serum could potentially serve as a clinical biomarker for PD diagnosis [150].
Another aSyn post-translational modification crucial for its aggregation propensity is ubiquitination, which regulates proteasome-dependent protein degradation [151] and the subcellular localization of the protein [152]. Ubiquitinated aSyn has been isolated from LBs and sarkosyl-insoluble fractions derived from synucleinopathy brains [153,154]. CHIP (C-terminal U-box domain of co-chaperone Hsp70-interacting protein), SIAH (seven in absentia homolog) and Nedd4 (neuronal precursor cell-expressed, developmentally down-regulated gene 4) have been identified among the E3 ubiquitin ligases implicated in aSyn ubiquitination [155][156][157][158][159][160]. Ubiquitin modification has been demonstrated to have differential effects on aSyn accumulation and subsequent aggregation, depending on the residue being modified. More precisely, ubiquitination at the Lys6, Lys12 and Lys21 residues has been shown to moderately inhibit aSyn fibrillation, whereas ubiquitination at the Lys10 and Lys23 residues has been reported to promote the formation of aSyn inclusions [161]. In addition, ubiquitination at Lys32, Lys34, Lys43 and Lys96 inhibits aSyn aggregation [161].
Sumoylation is a process similar to ubiquitination, in which aSyn is conjugated to SUMO (small ubiquitin-like modifier) at lysine residues. SUMO-1 was found in aSyn-positive inclusions of a-synucleinopathy brains or associated with lysosomes in PD animal models [162][163][164]. It has also been suggested that aSyn sumoylation facilitates its aggregation since it inhibits its degradation [165], whereas other studies proposed a neuroprotective role of aSyn sumoylation, which seems to promote aSyn solubility and thus inhibit its aggregation [166,167]. The discrepancy between these data may be attributed to the different lysine residues available for sumoylation being investigated in each study. Another aSyn modification that has been up for debate is its N-terminal acetylation. Although many studies have assigned a neurotoxic role to aSyn N-acetylation, as it has been shown to promote aSyn β-sheet formation and fibrillation [168][169][170], others suggest that N-acetylation either mediates the physiological binding of aSyn to synaptic vesicles [171] or acts in a protective manner against aSyn aggregation [172,173].
Channel Formation/Channel Interactions
As mentioned above, membrane-bound aSyn adopts an alpha-helical conformation, which facilitates its oligomerization and subsequent aggregation. It has been suggested that aSyn oligomers can form transmembrane channels and pore-like structures that have been linked to pathological events during PD development ( Figure 1) [195][196][197]. As a result, vesicles or low-molecular mass molecules may penetrate the cell membrane and in combination with altered cellular ionic homeostasis could potentially lead to cell toxicity and neuronal degeneration [198,199]. Another mechanism for the increased membrane permeability involves the incorporation of aSyn oligomers between the membrane phospholipids, thus leading to the bilayer thinning which thereafter allows the diffusion of small molecules [200].
A wide range of studies has demonstrated that the ion channels formed by oligomeric aSyn dysregulate cellular ion concentrations and may represent a critical event in the pathogenesis of a-synucleinopathies [198]. Some PD-linked aSyn mutations, such as E46K and A53T, have been shown to be implicated in the channel formation, whereas other aSyn mutants (i.e., A30P) have displayed low membrane affinity [197,201]. However, other groups have shown that A30P and A53T aSyn mutations are responsible for the formation of large membrane pores through which most cations can pass non-selectively [202]. It has been reported that the formation of such cation-permeable pores could lead either to ion conductivity or to increased Ca 2+ influx and subsequent cell death [202][203][204][205]. Upon aSyn cation channel opening, other channels, such as the ATP-dependent potassium channels K (ATP), have been reported to be activated in hippocampal neurons and this could probably diminish the aSyn-dependent neuronal excitability [205].
Binding of aSyn to the plasma membrane results in the formation of aggregates, and this aggregation leads to the redistribution of the α3 subunit of Na + /K + -ATPase. As a result, Na + /K + -ATPase is no longer able to effectively pump Na + out of neurons, thus leading to intracellular Na + accumulation [206]. Furthermore, extracellular aSyn was reported to activate the voltage-gated Ca 2+ channel Cav2.2 in rat neurons, due to disorganization of lipid rafts in the plasma membrane, resulting in enhanced dopamine release and increased Ca 2+ influx [207]. Both events may explain the synaptic dysfunction and neuronal vulnerability in PD. L-type Ca 2+ channels are also implicated in PD development, as administration of L-type Ca 2+ channel blockers (i.e., isradipine, nimodipine) in animal models and PD patients reduced death risk and ameliorated disease manifestations [208][209][210][211]. Finally, aSyn oligomers can inhibit α4β2 nicotinic acetylcholine receptors of dopaminergic neurons, thus leading to cholinergic signaling deficits [212]. In summary, aSyn seems to regulate neuronal toxicity and survival via the formation of channels or pores in the plasma membrane or via its interaction with other channels or receptors crucial for proper neuronal activity (Figure 1).
Dopamine Metabolism
Soluble aSyn has been proposed to interact with the dopamine transporter (DAT) and decrease its amount on the plasma membrane, thus regulating dopamine re-uptake from the synapse and protecting neuronal cells from excessive dopamine toxicity [213,214]. Conversely, aSyn aggregation triggers DAT recruitment to the plasma membrane, which results in massive entry of dopamine and production of reactive oxygen species (ROS) in neurons [215]. Thus, aSyn-mediated modulation of DAT activity is crucial for neuronal functioning via a balanced dopaminergic neurotransmission. Moreover, the regulation of dopamine storage is provided by an interaction of aSyn with the vesicular monoamine transporter 2 (VMAT2), which is responsible for the packaging of monoamine transmitters into synaptic vesicles [216]. It has been reported that increased levels of aSyn lead to VMAT2 inhibition and dopamine dysregulation that results in pathological events [217]. In addition, aSyn regulates dopamine biosynthesis by reducing the activity or the phosphorylation status of TH, the rate-limiting enzyme in catecholamine synthesis [218][219][220][221][222][223]. In agreement, enhanced expression or phosphorylation and subsequent aggregation of aSyn alter TH activity and evoke an imbalance in dopamine synthesis, thus leading to neurotoxicity [132, [224][225][226]. In vivo evidence further supports the role of aSyn in dopamine metabolism, since the absence of aSyn caused decreased reuptake of dopamine, low levels of TH and DAT in the mouse striatum and a reduced number of dopaminergic cells in the substantia nigra of aSyn KO mice [227][228][229].
Interaction with Mitochondria and ER
alpha-Synuclein displays a remarkable conformational flexibility upon macromolecular interactions and can associate with mitochondrial membranes, thus altering mitochondrial function [230][231][232] (Figure 2). There are reports suggesting that aSyn is a physiological regulator of mitochondrial activity [233][234][235], whereas others support the opposite [236][237][238]. Such discrepancies could be attributed to the different synuclein models utilized in each study, given that brain homeostasis is a complex process and in vivo studies better recapitulate the interplay between the various brain components than isolated in vitro cellular setups. A bidirectional interaction between aSyn aggregation and mitochondrial dysfunction has been implicated in PD pathogenesis. In particular, increased levels of aSyn can lead to mitochondrial dysfunction [239][240][241][242][243][244], whereas, conversely, impairment of mitochondrial activity may accelerate aSyn pathology [245][246][247][248]; however, the precise underlying mechanisms remain to be elucidated. Both WT and mutant aSyn have been shown to interact with mitochondrial elements, altering both mitochondrial morphology and function. Specifically, soluble pre-fibrillar aSyn oligomers seem to be responsible for complex I dysfunction, loss of membrane potential, disrupted Ca 2+ homeostasis, enhanced cytochrome c release and ROS production, thus leading to neuronal demise [240,[249][250][251][252].
Figure 2.
The proposed intracellular effects of various aSyn conformations in neurons. A schematic representation of the aberrant interactions between the various aSyn species with intracellular organelles: (a) In the cytoplasm of neurons, aSyn monomers form oligomers that can eventually become fibrils, (b) Both unfolded and aggregated aSyn impair the function of PMR1, a Ca 2+ -transporting ATPase pump that regulates Ca 2+ and Mn +2 levels in the Golgi apparatus (GA), resulting in elevated cytosolic Ca 2+ levels, (c) Both WT and mutant A53T aSyn disrupt the vesicular transport from Endoplasmic Reticulum (ER) to Golgi (GA), (d) WT aSyn inhibits the transportation of methyltransferases from the cytoplasm to the nucleus (N), thus altering DNA methylation of the SNCA gene, (e) Inside the nucleus (N), aSyn inhibits histone acetylation via its direct binding to histones or by preventing the action of histone acetyltransferase (HAT) enzymes, thus interfering in the process of gene transcription, (f) In the ER, aSyn aggregates activate the Ca 2+ -ATPase SERCA, resulting in dysregulated Ca 2+ homeostasis, (g) Both monomeric and oligomeric aSyn interact with Voltage-dependent anion channel 1 (VDAC1) and inositol triphosphate receptors (IP3Rs), the protein components involved in mitochondrial-associated ER membrane (MAM) and regulates the transmission of Ca 2+ signals from the ER to mitochondria (M), (h) aSyn binds to TOM20, a mitochondrial import receptor subunit and inhibits normal protein import, (i) Normally, monomeric or dimeric forms of aSyn are degraded in the lysosome (L) via Chaperone Mediated Autophagy (CMA), following their interaction with LAMP2A. However, under pathological conditions, impairment of CMA has been proposed to lead to aSyn accumulation and subsequent cell toxicity, (j) Oligomeric aSyn and various misfolded proteins are cleared via macroautophagy, following the fusion of autophagosomes with the lysosome. Pathological aSyn has been shown to inhibit autophagosome maturation or their fusion with lysosomes, thus impairing autophagic flux, (k) Monomeric and oligomeric aSyn are degraded via the proteasome; however, under pathological conditions, increased levels of aSyn or even soluble aSyn oligomers may inhibit proteasomal function, leading to aSyn accumulation and the formation of insoluble aggregates.
Experiments in various animal models of a-synucleinopathy have revealed mitochondrial abnormalities, DNA damage and neuronal degeneration in PD-affected brain regions [244,253,254]. Moreover, in vitro and in vivo experiments have shown that aSyn inhibits mitochondrial fusion and triggers mitochondrial fragmentation [231,255]. Di Maio and colleagues have proposed that certain post-translationally modified aSyn conformations (soluble oligomers, dopamine-modified and S129E phosphorylation mimic) lead to impaired mitochondrial function via binding to TOM20 (translocase of the outer membrane receptor) and inhibiting mitochondrial protein import [239].
Nonetheless, there is evidence suggesting an impairment of mitochondrial function upstream of aSyn pathology. Experiments using the pesticides rotenone and paraquat have shown that dysregulation of mitochondrial function leads to nigrostriatal dopaminergic loss and formation of LB-like inclusions, positively stained with anti-aSyn antibodies and thioflavine S, thus resembling PD features [246,247,[256][257][258]. Similarly, in WT aSyn-overexpressing COS-7 cells, the aSyn aggregates formed upon rotenone or oligomycin treatment disappeared once the mitochondrial inhibitors were removed [259]. A plethora of studies that utilize the mitochondrial neurotoxin MPTP to induce PD-like pathology in animals further suggest that mitochondrial impairment is a key player in disease development [245,248,[260][261][262][263][264]. Genetic studies further support the hypothesis of aSyn accumulation as a secondary event following mitochondrial malfunction. Specifically, mutations in ATP13A2 (ATPase cation transporting 13A2), encoding the lysosomal type 5 P-type ATPase, were shown to result in dysregulation of mitochondrial depolarization and ATP metabolism, leading to mitochondrial fragmentation and subsequent cell death [265,266].
Apart from its implication in mitochondrial failure, aSyn has also been reported to play a biological role in the association of mitochondria with endoplasmic reticulum (ER) Ca 2+ homeostasis. It has been demonstrated that aSyn favors Ca 2+ transfer from the ER to mitochondria as a result of the communication between the two organelles, probably because aSyn can act as a "bridge" via its C terminus [267]. Later studies further supported the physiological localization of aSyn in mitochondria-associated ER membranes (MAM), stabilizing their interaction, which was perturbed upon aSyn aggregation and its subsequent redistribution [268,269]. Interestingly, the familial PD-linked A53T and A30P aSyn point mutations resulted in a weakened interaction with MAM, which affected MAM function and mitochondrial integrity [269].
The association of aSyn with mitochondria was further corroborated by findings indicating interactions of both monomeric and oligomeric aSyn with the Ca 2+ -transporting voltage-dependent anion channel 1 (VDAC1) [270][271][272][273]. Importantly, VDAC1 has been detected on the MAM of the ER, mediating the communication between the two organelles and regulating Ca 2+ homeostasis [274][275][276]. Moreover, VDAC levels have been found decreased in nigral neurons of PD brains, where pathological aSyn inclusions had formed [277]. Additionally, VDAC has been proposed to be a component of the mitochondrial permeability transition pore, the opening of which has been shown to be affected by aSyn overexpression and oligomerization [230,278]. In vivo experiments on transgenic mice overexpressing human A53T aSyn further supported a role for modulation of permeability transition pore activity in the mitochondrial dysfunction that occurs during PD pathogenesis [279].
Unfolded Protein Response, Regulation of ER/Golgi Trafficking and Ca 2+ Homeostasis
The ER is a continuous membrane system mainly responsible for the production and processing of lipids and proteins, as well as Ca 2+ homeostasis. In case of impaired protein folding (ER stress), cells activate a group of signal transduction pathways known as the unfolded protein response (UPR). It has been previously shown that aSyn overexpression in PD patients leads to UPR activation and contributes to the molecular pathogenesis of the disease [280]. The ER chaperone glucose-regulated protein 78 (GRP78/BIP) has a crucial role in ER stress regulation due to its ability to control the activation of the transmembrane ER stress sensors (IRE1, PERK and ATF6) [281]. Dissociation of GRP78 from IRE1 and PERK results in stress signaling, finally leading to altered ER homeostasis [282]. aSyn associates with GRP78/BIP under physiological or pathological conditions, thus inducing the UPR and leading to dopaminergic cell death [45,283]. Strikingly, Ser129-phosphorylated and aggregated aSyn was found in ER microsomes of A53T transgenic mice and, more importantly, administration of the UPR inhibitor salubrinal effectively attenuated disease manifestations in this PD mouse model [284,285]. It is worth mentioning that GRP78/BiP levels were found elevated in DLB and PD brains in an aSyn burden-dependent manner [286]. In addition, the protein levels of various ER chaperones were found elevated in a-synucleinopathy models, colocalized with aSyn-positive inclusions, suggesting that aggregated aSyn could potentially be implicated in UPR regulation during disease progression [284,[287][288][289][290][291][292][293].
Proteins synthesized in the ER are packaged into vesicles and directed to the Golgi apparatus for subsequent modifications. One of the first pathological roles attributed to aSyn is the blockade of vesicular transport from the ER to the Golgi by antagonizing ER/Golgi SNAREs [294][295][296]. Along the same lines, aSyn can also disrupt intra-Golgi and post-Golgi secretory trafficking via an abnormal interaction with several Rab-family proteins of the intracellular endocytic pathway [294,[296][297][298][299]. Additionally, aSyn can impair ionic transport and membrane trafficking, resulting in Golgi fragmentation and subsequent cytotoxicity [300][301][302].
Another significant role of aSyn in ER and Golgi function is the regulation of Ca 2+ homeostasis via its binding to specific channels or pumps localized in these organelles (Figure 2). Specifically, proximity ligation assay experiments demonstrated that soluble and insoluble aSyn aggregates, but not monomers, interact with the ER Ca 2+ -ATPase SERCA, resulting in decreased cytosolic Ca 2+ that disrupts physiological cell function and leads to neuronal cell death [303]. Moreover, administration of the SERCA inhibitor cyclopiazonic acid restored cytosolic Ca 2+ levels and protected neurons against the aggregated aSyn-dependent cell death [303]. In support of these results, aggregated aSyn bound to the SERCA pump was detected in LBs and GCIs of PD and MSA brains, respectively [303]. Furthermore, PMR1, a Ca 2+ -transporting ATPase 1 pump regulating the levels of Ca 2+ and Mn 2+ ions in the Golgi [304], has been proposed to be a mediator of aSyn-dependent cytotoxicity. Specifically, in various PD models (yeast, flies and nematodes), the PMR1 pump has been linked to aSyn pathology via a Ca 2+ -dependent mechanism, in which aSyn accumulation elevated cytosolic Ca 2+ levels and increased cell death. Interestingly, upon PMR1 deletion, the disease-associated characteristics were abolished, further suggesting the relevance of this pump to aSyn pathology [305,306].
Alpha-Synuclein in the Nucleus
The name aSyn was given to the protein due to its localization in the nucleus and presynaptic nerve terminals [37]. Nuclear aSyn was detected in neurons of various brain regions of rodents and was reported to interact with histones, an interaction relevant to PD pathology [307][308][309], even though a single study argued that the nuclear staining of aSyn is attributable to the non-specific signal of some antibodies that probably recognize unknown antigens in neuronal nuclei [310]. It has been proposed that aSyn is responsible for epigenetic dysregulation via inhibition of histone acetylation or reduced DNA methylation, thus favoring neuronal degeneration, whereas others suggest that nuclear aSyn regulates the cell cycle rate, exhibiting cell toxicity [311][312][313]. Importantly, histone deacetylase (HDAC) inhibitors attenuated aSyn toxicity and provided neuroprotection in both cell culture and transgenic Drosophila models [311,314].
Experiments in SH-SY5Y cells revealed that nuclear translocation of aSyn is regulated by calreticulin and Ca 2+ , following treatment with retinoic acid, and modulates the expression of PD-linked genes such as ATP13A2 and PINK1 (PTEN-induced kinase 1) [315]. Interestingly, aSyn phosphorylated at Ser129 was found to accumulate in the nucleus of aSyn-overexpressing HEK293E cells and in various brain regions of transgenic (Thy1)-[A30P] aSyn mice [316]. Further experiments in H4 cells expressing various aSyn proteins verified that nuclear localization of aSyn depends on its phosphorylation at Ser129 [317]. The same group proposed that aSyn binds DNA and regulates gene expression, providing insight into the role of modified aSyn in the nucleus [317]. Furthermore, other post-translational modifications of aSyn, such as sumoylation, seem to be responsible for the translocation of aSyn from the cytoplasm to the nucleus [318]. Although the majority of studies support a neurotoxic role for aSyn nuclear localization, some groups proposed that aSyn in the nucleus displays a protective role against DNA damage, replication stress or impaired nucleo-cytoplasmic transport [319][320][321]. However, the numerous in vitro and in vivo studies demonstrating a neurotoxic role of nuclear aSyn, as opposed to the limited number of studies supporting a protective role (derived mostly from cell lines or yeast), favor the pathological potential of nuclear aSyn.
Alpha-Synuclein and Protein Degradation Pathways: An Intricate Interplay
A great wealth of data focuses on the complicated relationship between aSyn clearance and protein degradation pathways (Figure 2). Both the ubiquitin-proteasome system (UPS) and the autophagy lysosome pathway (ALP) are responsible for aSyn degradation in a manner that depends on cell type, tissue and aSyn conformational state [322][323][324]. Specifically, there are studies demonstrating that aSyn can be degraded by the 26S/20S proteasome in a ubiquitin-dependent [325,326] or ubiquitin-independent [327,328] manner. Studies in PC12, HEK293 and primary mesencephalic cells suggested that pharmacological inhibition of the proteasome does not lead to aSyn accumulation [324,329,330]; however, others have shown that soluble aSyn oligomers, but not monomers, are partially cleared via the 26S proteasome [331]. Importantly, it has been proposed that the UPS is responsible for aSyn removal under normal conditions, while in pathological cases the ALP is recruited to clear the increased aSyn burden [332].
Chaperone-mediated autophagy (CMA) is also responsible for the degradation of monomeric or dimeric forms of the protein via the lysosome-associated membrane protein type 2A (LAMP2A), whereas oligomeric aSyn is cleared mainly via macroautophagy [324,333,334]. Lee and colleagues also suggested that the lysosome is responsible for the removal of oligomeric but not fibrillar aSyn and that lysosomal failure results in aSyn accumulation and aggregation and subsequent cell death [335]. Moreover, initial in vivo evidence suggested that increased aSyn protein levels evoked by paraquat treatment were preferentially degraded via CMA in dopaminergic neurons, where the levels of LAMP2A and the lysosomal heat shock cognate protein of 70 kDa (HSC70), both essential CMA components, were found elevated [336]. We have also shown that boosting CMA function via LAMP2A overexpression in cell lines, primary neuronal cultures and the rat dopaminergic system mitigated aSyn protein levels and the related toxicity [337]. Similar neuroprotective effects were obtained upon LAMP2A overexpression in the Drosophila brain [338]. On the contrary, we have also shown that LAMP2A silencing led to endogenous aSyn accumulation in vitro [324] and in vivo [339] and to extensive neurodegeneration of the rat nigrostriatal axis [339]. Decreased levels of LAMP2A and HSC70 were reported in the human substantia nigra and amygdala of PD brains [340], whereas, in a subsequent study, LAMP2A was found to be selectively reduced in association with increased aSyn levels, even in the early stages of PD, thus suggesting a potential dysregulation of CMA-mediated protein degradation prior to substantial aSyn aggregation in PD [341].
However, a bidirectional link between aSyn accumulation and the protein degradation machineries exists, and extensive studies have been conducted to elucidate not only the manner of aberrant aSyn degradation in a-synucleinopathies, but also the impact of various aSyn conformations on UPS and ALP function. It has been proposed that overexpression of the A30P and A53T mutants, in contrast to WT aSyn, leads to cell death due to proteasomal inhibition [342]. Indeed, overexpression of mutant A53T aSyn resulted in UPS failure by inhibiting the activity of the 20S/26S proteasome, finally leading to pathological aSyn accumulation [343]. Other groups have failed to detect alterations in the proteasomal function of PC12 cells or transgenic mice following overexpression of WT or mutant (A30P, A53T) aSyn [344]. Moreover, later studies demonstrated that transient overexpression of WT or mutant aSyn, followed by addition of recombinant aSyn oligomers and fibrils in an osteosarcoma cell line, did not result in any disturbance of proteasomal function [345]. Importantly, studies in human post-mortem PD brains also suggested impaired proteasomal function in the substantia nigra [346][347][348], further supporting a role of UPS malfunction in PD pathogenesis. In addition, total rates of protein degradation decline with aging, thus contributing to the pathogenesis of age-related diseases [349]. Even though human post-mortem studies provide valuable information with regard to etiology and/or disease pathogenesis, the data obtained should be treated with caution, taking into account the overall decline in the function of multiple systems with aging. For a-synucleinopathies, we believe that the use of tissue from affected and non-affected (with regard to aSyn pathology and neuronal death) brain areas may provide useful information regarding early or late events leading to neurodegeneration.
Increased aSyn protein burden is reported to impair macroautophagy function as well, via its interaction with Rab1a, an event that subsequently results in mislocalization of the autophagosome-formation-related protein Atg9 [350]. Similar results were obtained from cells expressing the PD-linked mutation of the retromer protein VPS35, which is involved in autophagy and is implicated in PD pathogenesis [351]. The three most well-studied PD-linked aSyn mutations, E46K, A30P and A53T, have been shown to promote ALP dysfunction, via either impairing autophagosome formation or inhibiting the selective removal of damaged mitochondria through mitophagy [352][353][354]. It has been previously reported that dopamine-modified aSyn inhibits CMA, which could shed light on the selective vulnerability of dopaminergic neurons in PD [355]. Further experiments in human iPSC-derived midbrain dopaminergic neurons revealed that disrupted hydrolase trafficking, due to aSyn overexpression, reduces lysosomal function [356]. Similarly, multiple studies suggest that there is a strong relationship between decreased β-glucocerebrosidase (GCase) activity and aSyn accumulation. In particular, heterozygous mutations in the GBA1 gene, encoding β-glucocerebrosidase, represent a major risk factor for the development of PD with a-synucleinopathy [357][358][359][360][361][362].
Alpha-Synuclein in the Extracellular Space
The first indication that aSyn can be secreted arose from the detection of the protein in human CSF and plasma of PD patients, indicating that aSyn can be released into the extracellular space [363,364] and can exert various deleterious effects on neighboring cells. Further studies supported that aSyn can be secreted from neuronal cells, either via vesicles or exosomes [365][366][367]. Extracellular aSyn has been the subject of intensive research in recent years, mainly due to its propensity to spread from neuron to neuron or other glial cells, as discussed in the following sections.
The major hypothesis regarding the onset and spread of aSyn pathology in a-synucleinopathies relies on the protein's nucleation propensity, which leads to the formation of aberrant aSyn species that then spread to neighboring cells and tissues via various mechanisms. Furthermore, aSyn has been proposed to act as a "prion-like" protein, since it was demonstrated that pathogenic aSyn could transfer from the diseased neurons of a PD patient to healthy transplanted ones, fourteen years after the surgical intervention [368]. Similar results were obtained by other groups in both humans and rats [369][370][371][372]. Experiments in which PD and DLB patient-derived brain extracts were delivered into the brains of mice and non-human primates further validated the transfer of pathological aSyn and the formation of aSyn aggregates within the recipient neurons [373,374]. Moreover, when Pre-Formed Fibrils (PFFs) were used as seeds in both in vitro and in vivo experiments, the endogenous neuronal aSyn was recruited into the formation of highly insoluble aggregates [104, [375][376][377][378][379].
Neuron-to-neuron aSyn transmission occurs following both anterograde and retrograde axonal transport or trans-synaptic pathways [402][403][404]. Several groups have proposed that dysregulation of axonal transport is implicated in aSyn accumulation at the cell body; however, it is not clear whether PD-linked aSyn mutations play a key role in the process per se [403,[405][406][407]. Notably, aSyn in its oligomeric form has been shown to interfere with microtubules and kinesin motors, thus disrupting anterograde transport; similar results were obtained in an aSyn-overexpressing mouse model of PD, as well as in patients diagnosed with the disease [408][409][410]. Additionally, it has been suggested that the variety in a-synucleinopathy phenotypes is attributed to the formation of different aSyn "strains" that display "aggressive" characteristics [17,18,411]. As a consequence of their disparate structures, these "strains" elicit discrete biochemical responses across different brain regions and cell types, thus explaining the various disease manifestations of a-synucleinopathies [19,20,125,[412][413][414].
Role in Microglia Function and Dysfunction
Microglial cells are the resident phagocytes of the brain, guarding CNS homeostasis and performing essential roles in health and disease. Specifically, apart from exhibiting immunoreactivity as a response to any change or inflammatory stimulus, they are responsible for the monitoring and pruning of neuronal synapses [415][416][417][418][419]. Disturbance or loss of brain homeostasis "activates" microglial cells, a term used to describe the changes in their shape, gene expression profile and function during their response [420][421][422][423]. Well-regulated immune surveillance of the brain is essential for proper CNS functioning; however, an excessive and continuous inflammatory response could potentially lead to cellular and tissue damage, tightly linked to the development of various neurodegenerative diseases [424][425][426][427][428]. Enhanced production of pro-inflammatory cytokines, reactive oxygen species (ROS, NO, superoxide) and glutamate has been shown to lead to dopaminergic cell death in PD [429][430][431].
Indeed, in the diseased brain, microglial cells get activated in two states, M1 and M2, depending on the cytokine signaling pathway involved. The classical pro-inflammatory TNF/IFNγ-mediated activation leads to M1 state, whereas the M2 state is subdivided into the M2a "alternative activation" and the M2c "acquired deactivation" states, acquired following the involvement of anti-inflammatory cytokines IL-4 and IL-13 (for M2a) or IL-10 and TGF-β (for M2c) [432]. It has been proposed that in PD a shift from M2 to M1 phenotype is responsible for disease progression; therefore, the scientific interest has been focused on immunomodulatory therapies promoting the neuroprotective M2 type [433].
Physiological Role of aSyn in Microglia Function
Although aSyn has been primarily characterized as a pre-synaptic neuronal protein, several studies have proposed a physiological role of aSyn in microglial function. Microglial cells from mice lacking aSyn (aSyn -/-) displayed reduced phagocytic activity and enhanced secretion of pro-inflammatory cytokines, thus suggesting that aSyn modulates the activation phenotype of the brain immune cells and contributes to the clearance of debris present in the local brain microenvironment [434,435]. On the other hand, transient overexpression of WT, A53T or A30P aSyn in BV2 microglial cells led to a distinct pro-inflammatory cytokine profile in combination with impaired phagocytic activity [436]. Additionally, microglia isolated from BAC transgenic mice overexpressing aSyn, exhibited dysregulation of cytokine release and phagocytosis [437]. Data obtained from iPSC-derived macrophages from PD patients harboring the A53T aSyn mutation and aSyn triplication mutations further support the implication of aSyn in the phagocytic capability of these cells [438]. Expression of aSyn in microglia has been also proposed to promote cell migration via the enhanced expression of the cell-surface glycoprotein CD44 and the matrix metalloproteinase membrane-type 1 (MMP-MT1) [439]. However, the presence of various aSyn species in the environment of microglial cells alters their physiology and behavior leading to neuroinflammation and neurodegeneration.
Microgliosis in a-Synucleinopathies
Microgliosis is the reaction of CNS microglial cells to pathogenic insults and their shift from a resting to an active state [440]. Since the first study demonstrating microgliosis in PD brains [25], a plethora of reports has highlighted an important role of activated microglia in disease pathogenesis in both humans and animal models (reviewed in [441][442][443][444][445]). Microglial activation has been observed in PD brains by in vivo positron emission tomography (PET) imaging studies [446][447][448], suggesting that microgliosis is an early event that persists during disease progression. Additional studies have further supported the hypothesis of an early activation of microglia tightly associated with aSyn pathology in various PD models [192,[449][450][451][452][453][454]. However, other reports suggested that microglia respond differently in the various disease stages, in a manner that depends on the affected brain region and the protein burden of aSyn, indicating the existence of immunological diversity among microglia in the diseased brain [449,450,455].
It has been proposed that the neuron-microglia interaction may contribute to the neuroinflammation that characterizes PD, where neurons expressing aSyn activate microglia, which in turn secrete inflammatory factors surrounding the diseased neurons, thus forming a vicious cycle [456] (Figure 3). Likewise, microgliosis has also been reported in MSA, where aSyn is found aggregated mainly within oligodendrocytes [426,[457][458][459]. This hypothesis of neuron-microglia communication in synucleinopathies is further supported by findings demonstrating an altered expression profile of various cytokines in the brains of PD patients [460]. Moreover, microglial activation has been shown to induce aSyn-mediated neuronal cell death in both in vitro and in vivo PD models [461][462][463]. In addition, microglial cells exposed either to cytokines or to PD-derived CSF displayed alterations in intracellular aSyn protein levels, suggesting a crucial role of the brain microenvironment for aSyn accumulation in microglia [464,465]. Activated microglia have also been detected in various transgenic animal models overexpressing wild type or mutated aSyn specifically in neurons [192,[452][453][454]466,467]. Experimental PD animal models, including the MPTP mouse model and nigral injections of recombinant aSyn fibrils or AAVs overexpressing aSyn, are characterized by neuroinflammation followed by a significant degeneration of dopaminergic neurons [449,450,[468][469][470]. Similarly, further results obtained from the rAAV-driven overexpression of aSyn in the mouse substantia nigra show extensive aSyn-mediated microgliosis primarily in the nigrostriatal axis, accompanied by an increase in the production of pro-inflammatory cytokines [410,471,472].
Figure 3.
Neuronally-derived aSyn enters oligodendrocytes via clathrin-mediated endocytosis, exosomal transportation, or via binding to Heparan Sulfate ProteoGlycans (HSPGs), (c) Inside oligodendrocytes, neuronal aSyn (red) initiates the seeding of the endogenous oligodendroglial aSyn (black) and together with the oligodendroglial-specific TPPP/p25α protein, they lead to the formation of GCIs, (d) aSyn aggregates impair the proteolytic machineries of oligodendrocytes [proteasome and lysosome (L)], (e) Misfolded aSyn leads to mitochondrial (M) dysfunction and subsequent cell toxicity in MSA, (f) Oligodendroglial-derived exosomes containing aSyn have been shown to transfer to microglial cells (orange), (g) Extracellular aSyn stimulates TLRs and other receptors (i.e., FcγR, P2X7 etc.) to activate microglial transcription factors, such as NF-kB, for the production of various pro-inflammatory cytokines (IL1β, TNFa) and chemokines that induce astrocyte (green) reactivity and oligodendroglial damage, (h) Neuronally-secreted aSyn is taken up by microglial cells via clathrin-mediated endocytosis and is then driven to the lysosome (L) for degradation. In pathological conditions, though, it accumulates into aSyn insoluble aggregates, (i) Free or exosome-associated aSyn released by neurons is transmitted to astrocytes via endocytosis and enters the lysosome (L) for its clearance, (j) In disease, aSyn aggregates are formed that lead to lysosomal (L) impairment and mitochondrial (M) dysfunction, (k) In astrocytes, aSyn triggers the opening of channels (i.e., Cx43 and Panx1), leading to dysregulation in Ca 2+ homeostasis and altered mitochondrial morphology, (l) Microglia-secreted cytokines and chemokines activate astrocytes to further produce pro-inflammatory signaling molecules and enhance neurotoxicity.
Activation of Microglia and Clearance of Toxic aSyn Species
Numerous in vitro and in vivo studies have demonstrated that either conditioned medium from aSyn-overexpressing cells or aSyn per se (i.e., recombinant monomeric, oligomeric or fibrillar aSyn) robustly activates microglia. In particular, treatment of microglial cells with non-aggregated aSyn was shown to increase phagocytosis and enhance pro-inflammatory cytokine release, NF-kB nuclear translocation and microglial migration [439,468,[473][474][475], whereas addition of fibrillar aSyn in BV2 cells was reported to reduce their phagocytic activity [474]. In agreement, incubation of human microglial cell lines or primary microglial cells with monomeric aSyn triggered the release of various pro-inflammatory factors [476][477][478][479]; however, it has recently been suggested that monomeric, in contrast to oligomeric, aSyn promotes an anti-inflammatory phenotype of microglia [480]. Other groups have found that aggregated aSyn leads to increased TNFa and ROS production, both related to cell toxicity [456,[481][482][483].
Microglial activation may depend on the aggregation state of aSyn, and microglial cells readily take up fibrillar aSyn and produce pro-inflammatory cytokines [484]. Furthermore, incubation of microglial cells with conditioned media from neuronal cells or with CSF from PD patients resulted in significant secretion of TNFa, IL1β and ROS [465,473,478]. Importantly, it has been shown that PD-related aSyn mutants are more prone to activate microglia when compared to the WT protein [463,479,485]. Moreover, elevated levels of the CXCL12 chemokine in both post-mortem PD brain tissue and in nigral microglia of transgenic A53T mice further support aSyn-mediated neuroinflammation [486]. Conversely, aSyn-evoked microgliosis in some instances leads to the enhanced expression and release of neuroprotective factors, such as BDNF, probably in an attempt at neuronal repair and survival [481]. Notably, studies in microglia of mice lacking aSyn expression have verified the critical role of aSyn in modulating microglial activation [434,435].
The internalization of aSyn by microglia triggers various immune response-related cascades, including NF-kB, Nrf2, MHCII and the inflammasome. Numerous studies have reported activation of the NF-kB pathway upon addition of various aSyn conformations to both rodent and human microglial cell lines [468,469,476,478,485,503]. The nuclear translocation of NF-kB is a result of aSyn interaction with TLRs, which leads to the MyD88-mediated activation of IkB kinases [504]. Moreover, oligomeric aSyn has been shown to trigger TLR2 signaling in microglial cells via NF-kB and p38 MAPK activation, which has been previously linked to aSyn-related toxicity [469,505]. Another key player in both aSyn pathogenesis and the neuroimmune system is LRRK2 (leucine-rich repeat kinase 2), constitutively expressed in neurons and glial cells, mutations of which have been characterized as common risk factors for PD. Significantly, manipulation of LRRK2 expression levels in mouse microglial cells has revealed its role in regulating aSyn degradation [506,507].
Furthermore, various studies have proposed that aggregated aSyn results in IL-1β production by reactive microglia, a process that involves NOD-like receptor protein 3 (NLRP3) inflammasome activation [508][509][510]. Interestingly, the inflammasome-related caspase-1 activation is responsible for aSyn truncation and its subsequent pathological accumulation [188]. Apart from the activation of microglial pro-inflammatory transcription factors, such as NF-kB, pathological aSyn also affects the antioxidant transcription factor Nrf2 [485,511]. Modifications in the expression levels of Nrf2 have confirmed its role in aSyn clearance and neuronal survival [512][513][514]. Moreover, since microglial cells act as antigen-presenting cells in the brain, it has been proposed that upon aSyn internalization by microglia, the protein is presented to T-cells via MHCII; these cells then infiltrate into the CNS and finally lead to the neuronal degeneration present in PD and MSA [22,[515][516][517][518][519][520].
Apart from modulating immune responses in the CNS, microglial cells are the brain's phagocytic cells, clearing cellular debris and toxic insults. Among the various threats, extracellular aSyn has been shown to be effectively processed by activated microglia, in some cases via DJ-1-mediated autophagy [474,[521][522][523][524]. Consistent with these results, experiments utilizing primary cells have demonstrated that aSyn, upon its internalization by microglia, is targeted to autophagosomes, thus leading to its degradation [493]. TLR4 also seems to play a crucial role in microglial phagocytosis, since TLR4(-/-) murine microglial cells exhibit impaired aSyn clearance and enhanced neurotoxicity [494,496]. Furthermore, addition of aSyn fibrils to BV2 and primary microglial cells induces autophagy as a rescue mechanism to restore lysosomal damage [525].
Importantly, aging is another key player in efficient aSyn clearance, since it has been proposed that microglia and monocytes display reduced phagocytic activity with age [526,527]. Regarding the uptake of aSyn by microglia, it has been reported that ganglioside GM1 and lipid rafts, but not clathrin, caveolae and dynamin, mediate monomeric aSyn internalization, whereas aggregated aSyn enters microglia via clathrin- and calnexin-dependent mechanisms [487,528]. Moreover, microglial uptake of neuronally-derived exosome-associated aSyn via macropinocytosis could potentially account for pathological aSyn spreading [529][530][531]. Strikingly, apart from the immunomodulating role of microglia in the CNS, it has also been suggested that resting microglia regulate the cell-to-cell transfer of aSyn in vivo [532]. Therefore, further considering the aforementioned involvement of exosomes in the transmission of aSyn pathology, targeting exosome release from various cell types of the brain could be a potential therapeutic strategy against disease progression.
Astrocytes in a-Synucleinopathies
Astrocytes, the star-shaped cells of the brain, are the most abundant glial cells of the CNS, accounting for at least one third of the brain mass. They have a supportive role to neurons, by maintaining osmotic, energetic and structural tissue homeostasis. In particular, it is suggested that astrocytes regulate neurotransmitter removal from the extracellular space, synaptic transmission, myelination, brain energy metabolism and pH homeostasis, ion balance, cholesterol synthesis, blood-brain barrier (BBB) permeability, cerebral blood flow and glymphatic system function [533][534][535][536][537][538][539][540][541][542][543]. It has also been suggested that astrocytes modulate neuronal synaptic activity via a Ca 2+ -dependent release of transmitters that have a feedback action on neurons, a process known as "gliotransmission" [544][545][546]. There are two main types of astrocytes in the brain: protoplasmic and fibrous [547]. Protoplasmic astrocytes are highly branched and are mainly located in the gray matter, tightly associated with neuronal cell bodies and synapses [548], whereas fibrous astrocytes have straight and long processes and they are widely distributed in the white matter, contacting nodes of Ranvier [549].
Apart from their crucial role in brain homeostasis and health, astrocytes have also been implicated in the cascade of events underlying neurodegenerative diseases. Specifically, astrocytes in PD brains have been reported to migrate and become reactive and have been classified into two categories depending on their neuroprotective or neurotoxic effects: harmful A1 astrocytes and protective A2 astrocytes [24]. Activated microglial cells induce A1-astrocyte reactivity, leading to neuronal and oligodendroglial cell death and subsequent synaptic impairment [24]. Astrocytes have also been shown to secrete inflammatory cytokines, partially as a response to the increased aSyn protein load, thus contributing to PD progression and neurodegeneration [550], although there are studies reporting a neuroprotective role of astrocytes in a-synucleinopathies, modulating the levels of accumulated aSyn and protecting neurons against oxidative stress [551,552].
Astrocytes in PD: Friend or Foe?
To date, there is little evidence regarding the expression levels [553] and the physiological role of aSyn in astrocytes, such as the implication of aSyn in astrocytic fatty acid metabolism [90]. It has also been suggested that cultured human astrocytes express low levels of aSyn and that various inflammatory cytokines or cell stress enhance aSyn production [554,555]. Yet, numerous studies have focused on the role of astrocytes in the modulation of aSyn levels and the regulation of immune responses in neurodegeneration. Interestingly, aSyn-positive inclusions have been detected in astrocytes in various regions of PD and DLB brains [13][14][15][556][557][558][559][560][561]. According to the prevailing hypothesis, the accumulated aSyn detected in astrocytes originates from neighboring diseased neurons; upon its release, it is internalized by astrocytes, probably as a mechanism of clearance and neuroprotection [562] (Figure 3). However, the responses of astrocytes in aSyn-related pathogenesis have been characterized as a "double-edged sword", due to the controversial findings regarding their role in neurotoxicity or neuroprotection [563].
The neuron-to-astrocyte aSyn transmission has been extensively studied and is considered the prime suspect for the detection of aSyn immunoreactivity within astrocytes in a-synucleinopathies. A seminal study utilizing primary astroglial cultures treated with conditioned media from differentiated SH-SY5Y neuronal cells showed that internalization of neuronally-derived aSyn by astrocytes occurs via endocytosis and results in the formation of proteinase K-resistant aSyn inclusions within astrocytes [550]. Similar results were obtained in transgenic mice overexpressing human aSyn under the neuronal promoter PDGFβ, where aSyn-positive inclusion bodies were observed in both neurons and astrocytes [550]. Furthermore, co-culture of primary astrocytes with SH-SY5Y cells verified the formation of LB-like inclusions positive for aSyn staining [564]. The detection of aSyn within the astrocytic endosomal/lysosomal compartment led to the hypothesis that astrocytes normally internalize neuronally-derived aSyn as a neuroprotective mechanism; however, prolonged exposure to pathological aSyn species may lead to impaired glial lysosomal function and, thus, astrocytic aSyn accumulation [550,565]. In agreement, overexpression of PD-linked aSyn mutants (A30P and A53T) in astrocyte cell lines resulted in impaired autophagic function, mitochondrial failure and cellular apoptosis, thus preventing astrocyte-mediated neuroprotection [566]. In support of a protective role of astrocytes in PD, it has been reported that neurotrophin-immunoreactive (BDNF and NT-3) astrocytes surrounding degenerating nigral neurons in the brains of idiopathic PD patients may indicate a beneficial role of glial cells against neuronal failure [567]. In addition, elevated levels of glutathione peroxidase (GPx) in astrocytes of the substantia nigra of PD patients further demonstrate their role against neuropathology [568,569]. Strikingly, overexpression of the Nrf2 transcription factor selectively in astrocytes in the haSyn A53T transgenic mouse model provided protection against aSyn-related toxicity by enhancing its degradation via the ALP [513]. In a recently published study, it was suggested that astrocytes internalize aSyn aggregates and effectively degrade them via proteasomal and autophagic pathways, thus protecting dopaminergic neurons against the aberrant effects of toxic aSyn species [570].
In vitro and ex vivo studies proposed that aSyn can efficiently transmit not only from neurons to astrocytes, but also between astrocytes, and translocate to the lysosomes of the recipient cells [571]. However, unlike neurons, astrocytes are able to efficiently degrade fibrillar aSyn, suggesting an active role for these cells in clearing aSyn deposits [571]. It is worth mentioning that the transfer of aSyn from astrocytes to neurons was reported to a lesser extent, even though another study suggested that aSyn deriving from SNCA-flag-tagged PD astrocytes was effectively transferred to the co-cultured surrounding control neurons, which then displayed signs of degeneration [572]. Strikingly, iPSC-derived astrocytes from PD patients displayed impaired CMA and macroautophagy, which could potentially account for the observed aSyn accumulation, highlighting the implication of astroglial-mediated proteolysis in the spread of aSyn pathology [572]. On the contrary, when astrocytes derived from healthy controls were co-cultured with PD neurons, the former cells absorbed the neuronal aSyn from the medium, thus indicating a potential neuroprotective effect of astrocytes [572].
In another aSyn-overexpressing neuron-astrocyte co-culture, addition of aSyn fibrils led to the formation of aSyn aggregates in both cell types, probably recapitulating the events occurring during PD pathogenesis [573]. It has also been suggested that aSyn enters astrocytes more efficiently than neurons via endocytosis and that aSyn transmission from astroglial to neuronal cells leads to neurotoxicity and cell death [574]. Indeed, the uptake of aSyn by astrocytes has been reported to be a rapid event, since aSyn was detected in the cytoplasm of the cells within 30 min of its application [574]. The spread of aSyn from neurons to astroglial cells has also been demonstrated in rodent transgenic animal models overexpressing human aSyn in neuronal cells, by the detection of aSyn-positive inclusions within the cytoplasm of these glial cells [575,576]. Furthermore, hippocampal delivery of aSyn fibrils in the brain of M83 A53T Tg mice resulted in the formation of pSer129-positive aSyn inclusions within both astrocytes and microglia four months post-injection [397]. Various mechanisms have been proposed to underlie aSyn transfer between neurons and astrocytes. For example, it is known that astrocytes, upon oxidative stress, form tunneling nanotubes (TNTs) in order to connect with other non-stressed cells [577]. It has been proposed that various stressors, such as aberrant aSyn, lysosomal dysfunction or mitochondrial failure, could trigger TNT formation in astrocytes and enhance aSyn spreading [387,578]. Another mechanism for intercellular communication is via exosomes. Although extensive work has been done regarding the role of exosomes in neuronal aSyn transmission and disease pathology [366,[579][580][581][582], few studies have proposed a vesicle-mediated transfer of neuroprotective molecules from astrocytes to neurons [583,584] or an exosome-related aSyn spread from neurons to astrocytes [585,586].
Upon aSyn transmission to astrocytes, the latter produce multiple pro-inflammatory cytokines (IL-1α, IL-1β, IL-6, IL-18) and chemokines (CC-, CXC- and CXCL-type) as a response [550,587]. It has been suggested that the pro-inflammatory response of astrocytes to aSyn depends on TLR4 [496,588]. Apart from cytokine release, aSyn leads to Ca 2+ flux and oxidative stress upon its entry into astroglial cells, presumably leading to neurotoxicity [589][590][591]. Moreover, in vitro and in vivo experiments have shown that pathological aSyn triggers microglial activation, followed by the induction of reactive A1 astrocytes, finally leading to neurodegeneration [592]. It has also been reported that astrocytes overexpressing hA53T aSyn displayed impaired functions, including glutamate uptake and BBB regulation, resulting in paralysis in transgenic mice [593]. Additionally, aSyn-treated astrocytes have been proposed to produce reduced levels of cholesterol, while in parallel displaying enhanced GFAP expression, indicative of astrocyte reactivity [594]. Significantly, treatment of astrocytes with various aSyn species (monomeric, oligomeric, fibrillar) induced astrocyte activation and secretion of TNF-α and IL-1β, the expression of which seemed to depend on the aSyn species, leading to subsequent non-cell-autonomous neuronal degeneration [595]. Other studies revealed that elevated expression levels of myeloperoxidase and enhanced IFN-γ signaling could mediate the astrocytic activation and inflammation observed in PD brains [21,[596][597][598].
alpha-Synuclein has been reported to trigger the opening of connexin 43 (Cx43) hemichannels and pannexin-1 (Panx1) channels in mouse cortical astrocytes, leading to alterations in [Ca 2+ ] i levels, production of nitric oxide (NO), enhanced purinergic and glutamatergic signaling, altered mitochondrial morphology and reduced astrocyte survival [599]. Another recently published study suggests that vesicle-associated aSyn, deriving from erythrocytes, effectively crosses the BBB and accumulates within astrocytes, impairing glutamate uptake, probably due to interactions of oligomeric aSyn with excitatory amino acid transporter 2 (EAAT2) [600]. Furthermore, astrocytes overexpressing mutant A53T and A30P aSyn triggered ER stress and damaged the Golgi apparatus, finally leading to apoptotic cell death [601]. Remarkably, co-culture of primary astrocytes overexpressing mutant aSyn with neuronal cells inhibited neurite outgrowth, probably due to reduced GDNF secretion [601]. Finally, Cy3-labeled aSyn oligomers were internalized by glial cells, primarily astrocytes, which then started to degrade the oligomers via the ALP [602].
In addition to the well-established role of microglia in the activation of astrocytes, astrocytes themselves have also been reported to control microglial activation and microglia-induced neuroinflammation [603,604], thus unraveling an intimate astrocyte-microglia crosstalk (Figure 4). For example, in response to a pathological aSyn insult, astrocytes can acquire a pro-inflammatory phenotype that can lead to neuronal death, independent of microglia. Given that astrocytes produce pro-inflammatory cytokines and chemokines as a response to various stimuli, it has been proposed that such astrocytes may mediate the microglial activation detected in aSyn-related brain diseases [605,606]. Various WT or mutant aSyn conformations have been shown to trigger the up-regulation of pro-inflammatory modulators in astrocytes, such as ICAM-1, IL-6 and TNF-α, leading to microglial activation, neuroinflammation and neurotoxic events during PD progression [496,587,607]. Specifically, transgenic mice inducibly overexpressing the PD-related A53T mutant aSyn selectively in astrocytes exhibited reactive astrogliosis accompanied by increased inflammatory responses and microglial activation in brain regions with significant neuronal loss [593]. Moreover, the detection of ICAM-1-positive reactive astrocytes surrounding brain areas with severe neuronal loss in PD brains or in the MPTP mouse PD model indicates a sustained inflammatory process, mainly triggered by astroglial cells, which is responsible for the consequent degeneration of dopaminergic neurons [608]. Finally, similar to the aSyn prion-like mode of action, neurotoxic reactive astrocyte polarization has recently been proposed to occur during CNS prion disease, where microglial cells seem to retain a neuroprotective phenotype against the inflammatory astrocytic responses [609]. All the above observations strongly support the contribution of astroglial reactivity to the pathogenesis of PD and related neuronal a-synucleinopathies.
Figure 4. Crosstalk between astrocytes and microglia in a-synucleinopathies. Monomeric or oligomeric aSyn released by neurons is taken up by astrocytes (purple) and microglia (red), which are then activated and secrete various cytokines or chemokines. (a) According to the prevailing hypothesis, the pro-inflammatory molecules released by microglia trigger the activation of astrocytes (red arrow), leading to excessive inflammation and neurotoxicity. (b) Conversely, aSyn can directly activate astrocytes to secrete pro-inflammatory cytokines or chemokines that recruit and activate microglia (purple arrow), resulting in excessive neuroinflammation.
Implication of astrocytes in MSA pathology
Apart from PD and DLB, astroglial activation is also present in MSA, where it appears to play a role in both disease initiation and progression. Extensive astrogliosis has been reported in various brain regions of MSA patients, in some cases accompanied by aSyn-positive inclusions within astrocytes [16,610-613]. In agreement, accumulation of abnormally phosphorylated and aggregated aSyn has been observed within astrocytes of MSA patients [614]. Interestingly, aSyn-positive doughnut-shaped inclusions were detected in radial processes of Bergmann glia (unipolar protoplasmic astrocytes of the cerebellar cortex) in various a-synucleinopathy brains, including MSA [558]. Although astroglial aSyn accumulation has been demonstrated in the brains of various MSA transgenic mouse models, ultimately leading to astrogliosis that accompanies neurodegeneration [615-618], Song et al. reported that subcortical astrocytes in MSA did not display aSyn accumulation, in contrast to PD [14]. However, to date, data on the precise role of astrocytes in MSA are scarce, reinforcing the need for further studies to elucidate the contribution of astroglial activation to MSA pathogenesis.
Alpha-Synuclein in Oligodendrocytes: The Pathologic Hallmark of MSA, A Unique Oligodendrogliopathy
Oligodendrocytes are responsible for the production of myelin [619], which surrounds neuronal processes, mediates the transmission of electric signals between neurons and provides neurotrophic support [620-623]. There are two main types of oligodendrocytes, myelinating and non-myelinating, which concentrate in white and grey matter, respectively [624]. Many neurodegenerative diseases arise from either oligodendroglial death or damage to the myelin sheaths they produce, leading to subsequent neuronal demise [622,625,626]. Oligodendrocytes have been proposed to participate in late PD and DLB progression, rather than in disease initiation [442,627]. This secondary involvement of oligodendrocytes in neuronal a-synucleinopathies is further supported by the detection of aSyn-positive inclusions within non-myelinating oligodendrocytes of PD and DLB brains [15,628], as well as by the presence of complement-activated oligodendrocytes in the diseased brains [629,630]. Moreover, axonal myelination deficits in neurodegenerative diseases also highlight the involvement of oligodendrocytes in neuropathogenic events [631,632]. Additionally, oligodendroglial cell loss was reported in the striatum of the MPTP-intoxicated mouse PD model shortly after MPTP administration [633].
On the other hand, the oligodendroglial aSyn inclusions detected in the brains of MSA patients (GCIs) are the main hallmark of the disease and are considered to play a crucial role in the primary events leading to MSA [626]. The involvement of oligodendrocytes in MSA initiation and progression is indisputable, given the wide distribution of GCIs throughout the affected brain areas of MSA patients [611,634-637]. Apart from the GCIs present within oligodendrocytes, other inclusions, such as neuronal cytoplasmic inclusions (NCIs), have been detected in neuronal somata, axons and nuclei in various brain regions and are mainly composed of aSyn [638-640]. GCIs were first described three decades ago as multi-shaped oligodendroglial inclusions composed of a central core of 10-nm fibrils, positively stained with antibodies against aSyn and surrounded by other aggregated proteins such as αβ-crystallin, ubiquitin, cytoskeletal proteins, chaperones and the microtubule-related proteins TPPP/p25α and tau [11,634,641-643]. Comparative analysis of the protein composition of GCIs and LBs revealed that GCIs consist of 11.7% aSyn, 1.9% αβ-crystallin and 2.3% 14-3-3 proteins, whereas LBs are composed of 8.5% aSyn, 2% αβ-crystallin and 1.5% 14-3-3 proteins [644]. It has also been reported that aSyn is a key player in the accumulation of tau and αβ-crystallin within GCIs [645], further highlighting the contribution of aSyn to disease pathogenesis.
Another early hallmark of oligodendroglial pathology in MSA and a major component of GCIs is the oligodendroglial-specific phosphoprotein TPPP/p25α, which under physiological conditions has been proposed to mediate the myelination process and to colocalize with myelin basic protein (MBP) in normal human brains [646-648]. Under pathological conditions, TPPP/p25α is thought to relocate from the myelin sheaths to the abnormally expanded oligodendroglial somata and to trigger aSyn aggregation in vitro [12,649]. In vitro experiments utilizing ectopic TPPP/p25α overexpression in PC12 cells revealed that TPPP/p25α prevents the fusion of autophagosomes with lysosomes and impairs aSyn degradation, enhancing its secretion via exophagy [650]. Moreover, concurrent overexpression of TPPP/p25α and aSyn in OLN-93 rat oligodendroglial cells led to pSer129 aSyn-dependent microtubule retraction from the processes to the perinuclear space, as well as to cytotoxicity and subsequent cell death via activation of the FAS receptor and caspase-8 [651,652]. Recently published work from our lab revealed a crucial role for TPPP/p25α in the recruitment and seeding of oligodendroglial aSyn and in the formation of aberrant aSyn species within oligodendrocytes [653]. Additionally, the levels of glial cell-derived neurotrophic factor (GDNF) were found to be significantly decreased in the brains of MBP-haSyn transgenic mice, a mouse model of MSA in which human aSyn is specifically overexpressed in oligodendrocytes [654]. Similar results were obtained from brain samples of MSA patients, further supporting that oligodendroglial aSyn accumulation is implicated in the dysregulation of neurotrophic support, oxidative stress and neuroinflammation, thus contributing to MSA pathogenesis [654].
Alpha-Synuclein Accumulation in Oligodendrocytes, Propagation and Spread of Pathology
The origin of the aSyn detected in oligodendroglial GCIs remains enigmatic, and there are conflicting studies in the literature suggesting either the internalization of neuronally-secreted aSyn by oligodendrocytes or enhanced expression and decreased degradation of oligodendroglial aSyn [655-661]. The release of aSyn by neuronal cells, partially bound to vesicles or exosomes, is well documented [365,366,380,662,663], and some studies propose that oligodendrocytes can take up neuronally-derived aSyn via dynamin GTPase-, clathrin- and dynasore-dependent mechanisms [658,664-666]. Neuron-oligodendrocyte communication can also be mediated via exosomes [667], which have been characterized as "Trojan horses" of neurodegeneration [668] and could serve as transporters of pathological disease-related proteins, such as aSyn (Figure 3). Moreover, ectopic expression of the endocytosis regulatory proteins Rab5 and Rabaptin-5 in oligodendrocytes of MSA brains may account for the elevated levels of aSyn within oligodendrocytes, probably via enhanced endocytotic activity [669].
In vitro and in vivo experiments have revealed the ability of oligodendrocytes to take up exogenously added recombinant or neuronally-derived aSyn and incorporate it into intracellular GCI-like aggregates [658,660,664-666]. In a recently published study, mature human oligodendrocytes generated from neural stem cells were able to internalize neuronally-derived aSyn and form proteinaceous inclusions, further supporting the existing theory for the origin of MSA-related aSyn. Significantly, it has been shown that once neuronal aSyn is taken up by oligodendrocytes, it accumulates and acquires GCI-like characteristics that are largely determined by the oligodendroglial milieu [17].
Over the last years, the prion hypothesis has gained considerable attention regarding the spread of pathological aSyn in the context of both PD and MSA. Specifically, it has been reported that inoculation of transgenic mice overexpressing human A53T aSyn with MSA brain homogenates resulted in CNS dysfunction, whereas, strikingly, PD brain-derived material did not evoke similar effects [670-672]. Similarly, intrastriatal injections of MSA homogenates into the brains of Tg(SNCA)1Nbm/J mice (knocked out for mouse aSyn and overexpressing the human protein) resulted in the detection of hyper-phosphorylated aSyn-positive inclusions in various brain regions [673]. Finally, treatment of HEK293T cells stably expressing fluorescently-tagged aSyn with healthy, PD or MSA brain-derived extracts showed that only the MSA-derived material was capable of inducing aSyn aggregation [674].
Other scenarios have been proposed to explain aSyn seeding and propagation in MSA brains, based on oligodendrocyte-to-oligodendrocyte communication. Specifically, it is possible that aSyn is taken up by oligodendroglial progenitor cells prior to their maturation, probably impairing the myelination process [675]. Another scenario suggests that, in MSA pathology, oligodendrocytes adopt as-yet-unknown cellular uptake mechanisms for aSyn internalization and subsequent propagation [675]; however, the precise mechanisms underlying aSyn transfer to oligodendrocytes remain unknown. The gap junction protein connexin-32 (Cx32) has also been implicated in the uptake of oligomeric aSyn by both neurons and oligodendrocytes, and Cx32 protein levels were found to be elevated in animal PD and MSA models, suggesting an interaction between Cx32 expression and aSyn cellular uptake [676]. Conversely, we and others have suggested that endogenous aSyn, expressed even at minute amounts, plays a pivotal role in the accumulation of pathological aSyn within oligodendrocytes and the subsequent GCI-like formation [653,677].
It is interesting to note that oligodendroglial and neuronal aSyn accumulation have been reported to occur at different time points; in particular, aSyn aggregation requires several months to progress within oligodendrocytes upon delivery of synthetic haSyn-PFFs into the brains of WT mice [678]. Moreover, in vitro aSyn overexpression in oligodendrocytes resulted in delayed maturation of oligodendrocyte progenitor cells, impaired myelin-gene expression and myelination deficits [679-681], whereas in another study aSyn-positive inclusions were mainly detected in BCAS1 (breast carcinoma amplified sequence 1)-expressing immature oligodendrocytes of MSA brains [682]. These observations suggest that oligodendroglial maturation and aSyn-aggregate formation are closely linked and may provide information regarding pathogenic events in MSA.
Regarding the hypothesis of impaired aSyn degradation in the context of MSA, both the UPS and the ALP have been proposed to contribute to the accumulation and aggregation of aSyn within oligodendrocytes. The detection of LC3-positive signal or other autophagy-related proteins, such as ubiquitin and p62, in GCIs points to a role of the ALP in MSA pathogenesis [659,683-687]. It has also been suggested that AMBRA1, an upstream protein regulator of autophagy, and UCH-L1, a deubiquitylating enzyme, are implicated in neurodegenerative diseases with oligodendroglial pathology [688,689]. The role of autophagic dysregulation, along with mitochondrial impairment, in aSyn aggregation has also been studied in primary oligodendroglial cultures and in the OLN-t40 oligodendroglial cell line [690]. Moreover, neurosin (kallikrein 6) has proven to be an effective serine protease for clearing aSyn from oligodendrocytes both in vitro and in vivo [691-693]. In addition, treatment of Tg haSyn-PLP mice, a well-established MSA mouse model, with proteasome inhibitor I for 12 weeks resulted in enhanced accumulation of both human and endogenous mouse aSyn within the cytoplasm of oligodendrocytes, highlighting the role of the UPS in aSyn degradation [694]. Finally, several in vitro studies have proposed that aSyn aggregation is stimulated by heparin and heparan sulfate [391,695,696], linear polysaccharides (glycosaminoglycans) found on the cell membrane and in the extracellular matrix [697,698]. Heparan sulfate has been suggested to mediate aSyn fibril uptake by oligodendrocytes via binding to the plasma membrane [391,699], whereas others have proposed that heparin and heparan sulfate proteoglycans (HSPGs) are responsible for aSyn fibrillation [695,696,700-702].
Apart from aSyn toxicity per se, the overexpression of aSyn in oligodendrocytes can lead to oligodendroglial cell death and subsequent neuronal loss via a plethora of mechanisms. Specifically, aSyn-overexpressing oligodendrocytes are more susceptible to oxidative stress and various cytokine actions [591,703], or display impaired adhesion properties [704]. Furthermore, it has been reported that animal MSA models exhibit myelin loss and impaired mitochondrial function, accompanied by severe neurodegeneration in various brain regions [615,617,618,705,706].
Conclusions
It has been almost 25 years since the discovery that the Lewy pathology in PD and DLB neurons is immunoreactive for aSyn and that, at the same time, neuronal aSyn accumulates in glial inclusions within MSA oligodendrocytes. Even though a plethora of studies has focused on the role of aSyn in neuronal physiology and pathology, an increasing amount of data reinforces the contribution of non-cell-autonomous neuron-glia interactions to the initiation and progression of a-synucleinopathies. Microglia and astrocytes form the brain's defense system against neurotoxic insults, becoming activated and releasing pro-inflammatory factors. However, uncontrolled activation results in chronic microgliosis and astrogliosis, which may be detrimental and lead to neurodegeneration. Moreover, the deposition of aSyn in oligodendrocytes impairs their myelinating activity and reduces neuronal trophic support, events that eventually result in neuronal demise. Remarkably, neurons, microglia, astrocytes and oligodendrocytes are all able to take up and clear extracellular aSyn; however, glial cells appear to be the most potent scavengers. The endocytosis of various aSyn species might be conformation-sensitive and cell- and receptor-type specific, adding further complexity to disease management.
Undoubtedly, a better understanding of the mechanisms mediating the interaction between neurons and glial cells in a-synucleinopathies may provide insights into neuronal dysfunction and death and may uncover novel disease-modifying therapies.
|
v3-fos-license
|
2022-07-22T15:14:16.848Z
|
2022-07-01T00:00:00.000
|
250939476
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2227-9032/10/7/1346/pdf?version=1658304291",
"pdf_hash": "98fb5e7edbf2e37f599f4609916f2eddc7b53152",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3015",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ee481d06e986a36aba3d077e6f75980c9cb2ab39",
"year": 2022
}
|
pes2o/s2orc
|
Clinical and Biological Adaptations in Obese Older Adults Following 12-Weeks of High-Intensity Interval Training or Moderate-Intensity Continuous Training
Sarcopenia and obesity are considered a double health burden. Therefore, the implementation of effective strategies is needed to improve the quality of life of older obese individuals. The aim of this study was to compare the impact of high-intensity interval training (HIIT) and moderate-intensity continuous training (MICT) on functional capacities, muscle function, body composition and blood biomarkers in obese older adults. Adipose tissue gene expression and markers of muscle mitochondrial content and quality control involved in exercise adaptations were also investigated. Sixty-eight participants performed either HIIT (n = 34) on an elliptical trainer or MICT (n = 34) on a treadmill, three times per week for 12 weeks. HIIT produced significantly higher benefits on some physical parameters (six-minute walking test (HIIT: +12.4% vs. MICT: +5.2%); step test (HIIT: +17.02% vs. MICT: +5.9%); ten-repetition chair test (HIIT: −17.04% vs. MICT: −4.7%)). Although both HIIT and MICT led to an improvement in lower limb power (HIIT: +25.2% vs. MICT: +20.4%), only MICT led to higher improvement in lower limb muscle strength (HIIT: +4.3% vs. MICT: +23.2%). HIIT was more beneficial for increasing total lean body mass (HIIT: +1.58% vs. MICT: −0.81%), while MICT was more effective for decreasing relative gynoid fat mass (HIIT: −1.09% vs. MICT: −4.20%). Regarding adipose tissue gene expression, a significant change was observed for cell death-inducing DFFA (DNA fragmentation factor-alpha)-like effector A (CIDEA) in the HIIT group (A.U; HIIT at T0: 32.10 ± 39.37 vs. HIIT at T12: 48.2 ± 59.2). Mitochondrial transcription factor A (TFAM) content, a marker of mitochondrial biogenesis, increased significantly following HIIT (+36.2%) and MICT (+57.2%). A significant increase was observed in the HIIT group for Translocase of Outer Membrane 20 (TOM20; +54.1%; marker of mitochondrial content), Mitofusin-2 (MFN2; +71.6%; marker of mitochondrial fusion) and Parkin RBR E3 Ubiquitin Protein Ligase (PARKIN; +42.3%; marker of mitophagy). Overall, our results indicate that even though MICT (walking on treadmill) and HIIT (on an elliptical) are effective intervention strategies in obese older adults, HIIT appears to have slightly more beneficial effects. More specifically, HIIT led to higher improvements than MICT on functional capacities, lean mass and skeletal muscle markers of mitochondrial content, fusion, and mitophagy. Thus, MICT but also HIIT (time-efficient training) could be recommended as exercise modalities for obese older adults to maintain or improve mobility, health and quality of life.
Introduction
Sarcopenia is the progressive loss of muscle function, mass and strength that naturally occurs with age [1]. Aging-related alterations in skeletal muscle biology increase the risk for older adults of becoming physically frail [2]. Another major public health issue is obesity, the prevalence of which increases with aging. Sarcopenic obesity is considered a double health burden as it is associated with major adverse health outcomes, including the development of frailty and disability, due to low muscle function and mass combined with excess adiposity [3]. There is therefore an urgent need to identify and implement effective strategies to improve the health status and quality of life of obese older adults.
Exercise training is recognized to provide major health benefits [4], and is considered an effective non-pharmacological strategy for older adults [5]. It has been observed that, after age 60, aerobic training as well as resistance training can be considered to maintain or improve muscle quality (strength/unit of muscle mass) [6]. Among exercise training modalities, aerobic moderate-intensity continuous training (MICT) is considered a successful intervention that reduces fat mass and metabolic disorders in obese adults [7]. In addition, high-intensity interval training (HIIT), a subtype of endurance training, is recognized as another time-efficient intervention for older adults, especially because lack of time is one of the main causes of inactivity in this population [8].
To make specific recommendations, studies have compared the clinical impact of HIIT and MICT in different populations. First, a systematic review comparing the effects of aerobic HIIT and MICT (on average three times per week for 12-16 weeks) in adults (age ≥ 18 years) with impaired vascular function reported that HIIT was more effective than MICT in improving brachial artery vascular function [9]. In obese individuals, a systematic review revealed that HIIT and MICT (interventions involving running and cycling for six to 16 weeks, with a frequency of two to five times per week) led to similar reductions in whole-body fat mass [10,11]. Additionally, similar adherence and enjoyment were reported [12], as well as a similar improvement in insulin sensitivity [13]. When considering patients with coronary artery disease, a meta-analysis revealed that both exercise interventions (HIIT and MICT involving cycling and running) similarly improved glucose levels [14]. Finally, in non-obese older adults, it has been demonstrated that both training modalities (ergometer, four sessions per week for eight weeks) induced similar improvements in aerobic fitness (VO2 max) [15], although HIIT led to greater upper limb strength gains as well as superior body mass index (BMI) adaptations [16].
Thereafter, studies have compared molecular adaptations (muscle and adipose tissue (AT) metabolisms) following HIIT and MICT in different populations. For example, adaptations in AT metabolism are expected since aerobic exercise is already well known for its ability to reduce body fat. Indeed, it has been observed that endurance training is associated with an increase in catecholamine-induced lipolysis in obese humans [17] and AT lipoprotein lipase (LPL) activity in non-obese adults [18]. Finally, it has been observed that the circulating adipokine concentration as well as AT gene expression (uncoupling protein-1 (UCP1), cell death-inducing DFFA (DNA fragmentation factor-alpha)-like effector A (CIDEA) etc.) can be increased following exercise interventions [19]. In addition, the investigation of mitochondrial health following HIIT has recently been an area of interest. For example, exercise training appears to be an effective non-pharmaceutical means of maintaining muscle health by enhancing the skeletal mitochondrial content [20] and preserving mitochondrial health in older adults [21]. In young healthy adults, it was previously demonstrated that nuclear abundance of peroxisome proliferator-activated receptor-gamma coactivator (PGC1-α; a stimulator of mitochondrial biogenesis) in human skeletal muscles increased following an intense HIIT [22]. Moreover, mitochondrial content and oxidative phosphorylation (OXPHOS) capacity in skeletal muscles increased following HIIT (cycling three times per week for six weeks) in young overweight participants [23] and overweight older adults [24]. Additionally, 12 weeks of HIIT was also shown to increase multiple markers of mitochondrial content and quality control processes in obese older adults [25]. When comparing the effect of HIIT and MICT on the muscles' mitochondrial content in overweight and obese young adults, it was reported that 10 weeks of cycling HIIT (twice per week) was superior to MICT for increasing mitochondrial content [26]. Furthermore, in young obese adults, mitochondrial respiration similarly improved after 12 weeks (three times per week) of HIIT and MICT on a treadmill [27].
Interestingly, to our knowledge, no study comparing HIIT to MICT has explored clinical, biological and molecular adaptations together in the same study, nor has any done so in obese older adults. Therefore, the aim of this study was to compare the impact of HIIT and MICT on functional capacities, muscle function, body composition and blood biomarkers in obese older adults. A subset of participants underwent AT and muscle biopsies to investigate AT gene expression and markers of mitochondrial content and quality control involved in exercise adaptations.
Study Design
This study is an a posteriori analysis. The ethics committee of the Université du Québec à Montréal (UQAM) approved all procedures (#2014_e_1018_475). The participants provided their informed written consent after being informed about the study's purpose, aims, procedures and associated risks. The participants included in the HIIT group were part of a previous study [25] and were matched by age and sex to participants from another study which performed MICT.
Participants
Participants were recruited from the community via social communication (flyers and meetings in community centers) in Greater Montreal. To be included in this study, participants had to meet the following criteria: (1) age 60 and over; (2) obese (BMI between 30 and 40 kg·m−2, fat mass (%; DXA) equal to or greater than 27% in men and 40% in women, or waist circumference greater than 102 cm for men and 88 cm for women); (3) inactive (less than two hours of structured physical activity per week); (4) no involvement in a vigorous exercise program for at least 12 months; (5) able to follow the exercise training; (6) stable weight (±5 kg) for 6 months; (7) non-smokers and moderate drinkers (max: 15 g/day of alcohol); (8) able to understand French or English; and (9) postmenopausal for women (i.e., 12 consecutive months without menses). Exclusion criteria were the following: (1) presence of a metal implant (pacemaker); (2) asthma requiring oral steroid treatment; (3) use of medication that could affect metabolism or cardiovascular function; (4) use of anticoagulants (only for participants undergoing biopsies). Participants with diagnosed but untreated neurological, cardiovascular or lung diseases, or cognitive disorders were also excluded.
Sixty-eight participants completed the intervention and were matched according to age (±2 years) and sex so that the two training modalities could be compared (HIIT: n = 34 vs. MICT: n = 34). Among these participants, 30 underwent subcutaneous abdominal AT biopsies (HIIT: n = 19 vs. MICT: n = 11) and 25 underwent skeletal muscle biopsies (HIIT: n = 11 vs. MICT: n = 14) pre- and post-intervention. To be considered as having completed the intervention, participants had to complete at least 80% of the training sessions (minimum: 29/36 sessions) and perform the evaluations pre- and post-intervention.
Exercise Intervention
All the participants performed three supervised training sessions per week during 12 consecutive weeks.
High-Intensity Interval Training (HIIT)
Participants performed their HIIT on an elliptical device (TechnoGym Synchro Exc 700, Technogym, NJ, USA) to avoid impact and injuries and were supervised by trained kinesiologists (i.e., certified exercise instructors). Each exercise session lasted 30 min and was divided as follows: (1) a five-minute warm-up at low intensity (50-60% of maximal heart rate (MHR) and/or 8-12 on Borg's scale); (2) twenty minutes of HIIT consisting of repeated 30-s high-intensity sprints (80-85% MHR or >17 on Borg's scale) alternated with 90 s at moderate intensity (65% MHR or 13-16 on Borg's scale); and (3) a five-minute cool-down period (50-60% MHR or 8-12 on Borg's scale). MHR percentage and/or perceived exertion (Borg scale; relying exclusively on perceived exertion for participants using antiarrhythmic and inotropic agents) were used to determine the intensity of each cycle. The following heart-rate-reserve equation was used to determine the target heart rate: Target HR = ((220 − age) − HRrest) × %HRtarget + HRrest. The speed and resistance of the elliptical device were continuously adjusted during the intervention to ensure that heart rate remained above 80% of MHR during the high-intensity intervals.
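As a quick illustration of this prescription, the minimal Python sketch below applies the heart-rate-reserve equation quoted above and counts the sprint/recovery cycles contained in the 20-minute interval block. The helper name and the participant values (age, resting heart rate) are illustrative assumptions, not data from the study.

```python
def target_hr(age: int, resting_hr: float, fraction: float) -> float:
    """Target heart rate from the equation in the text:
    (((220 - age) - HR_rest) * %HR_target) + HR_rest."""
    return ((220 - age) - resting_hr) * fraction + resting_hr

# Hypothetical participant: 68 years old, resting HR of 70 bpm.
age, hr_rest = 68, 70.0
print("High-intensity target (80-85% MHR): "
      f"{target_hr(age, hr_rest, 0.80):.0f}-{target_hr(age, hr_rest, 0.85):.0f} bpm")
print(f"Moderate recovery target (65% MHR): {target_hr(age, hr_rest, 0.65):.0f} bpm")

# Session structure: 5-min warm-up, 20 min of 30-s sprints alternated with
# 90-s moderate bouts, 5-min cool-down -> number of sprint/recovery cycles.
cycles = (20 * 60) // (30 + 90)
print(f"Sprint/recovery cycles per session: {cycles}")  # 10
```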
Moderate-Intensity Continuous Training (MICT)
Participants followed a MICT, where they walked on a treadmill (Precor C936i, Precor, WA, USA) and were supervised by trained kinesiologists (i.e., certified exercise instructors). The MICT was performed at a moderate intensity (60-70% MHR or 13-14 Borg's scale) for one hour per session. Speed and resistance of the treadmill were continuously adjusted during the intervention to ensure that the MHR was always between 60-70% MHR or 13-14 on Borg's scale.
Physical Performance
Validated tests were used to evaluate physical performance and were previously described in Buckinx et al. [28]. The six tests used are briefly detailed below: Six-minute walk test (6MWT): aerobic endurance was determined using the validated 6MWT [29,30] following the American Thoracic Society guidelines. Participants were instructed to walk at their own pace on an enclosed, flat 30-m-long track for six minutes. They were allowed to stop and rest as needed. The total distance covered at each minute and at the end of the test was recorded.
Walking speed: This validated test [31,32] was conducted on an eight-meter straight-line track. The two meters at the beginning and the end of the track, which included the acceleration and deceleration phases, were not taken into account when calculating speed. The times taken (s) to complete the central four meters at usual and fast walking paces were recorded.
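For clarity, walking speed over the timed segment is simply the covered distance divided by the recorded time. The snippet below illustrates this with hypothetical split times (not study data).

```python
def gait_speed(distance_m: float, time_s: float) -> float:
    """Walking speed (m/s) over the central, timed portion of the track."""
    return distance_m / time_s

# Hypothetical 4-m split times: 3.6 s at usual pace, 2.5 s at fast pace.
print(f"Usual pace: {gait_speed(4.0, 3.6):.2f} m/s")  # ~1.11 m/s
print(f"Fast pace:  {gait_speed(4.0, 2.5):.2f} m/s")  # 1.60 m/s
```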
Unipodal balance: This test has extensive clinical support with a good test-retest reliability [33]. To assess static balance, participants were asked to stand on one leg with their eyes open and arms by their sides. The time was recorded in seconds from the moment one foot was lifted from the ground until it touched the ground again, the stance leg moved, or until 60 s had elapsed.
Timed up & go (TUG): This validated test [34] consists in standing from a sitting position on a chair, walking a three-meter distance and sitting down again. The test aims to estimate gait speed at a comfortable self-paced (TUG) speed as well as a fast-paced walking speed (TUGf). This test is recognized to predict fall risk.
Chair stand test (ten-repetitions): This test measures functional lower-body strength. This test is reproducible and correlates with lower extremity muscle strength [35]. Subjects were asked to stand up from a sitting position and to sit down ten times as fast as possible, with their arms folded across their chest. The time (in seconds) to complete this task was recorded.
The step test: This reliable and valid test [36,37] evaluates dynamic balance during an activity where the participant is required to be in movement and shift their body weight while standing on one leg. The participants were asked to perform the task as fast as possible for 20 s. The number of times the participant touched the top of the step with their foot was recorded.
Muscle Function Assessments
Loss of muscle strength and power are considered the main predictors of functional capacity decline as well as loss of mobility and autonomy. The three validated tests assessed to estimate muscle function were previously described in Buckinx et al. [28].
Grip strength: A hand dynamometer with an adjustable grip (Lafayette Instrument Company, Lafayette, IN, USA) was used to measure the maximum voluntary handgrip strength. To measure hand grip strength, participants were standing upright with the arm along the side of the body with the elbow extended and the palm of the hand facing the thigh. Participants were asked to squeeze the hand dynamometer as hard as possible for up to 4 s. This test was repeated three times for each hand, alternating between right and left, and the best result was recorded.
Lower limb muscle power: The Nottingham Leg Extensor Power Rig was used to assess lower limb muscle power while participants were seated. Participants were asked to use their dominant leg to push a pedal as fast and as hard as they could, which accelerated a flywheel.
Lower limb muscle strength: Participants were seated with the knee and hip joint angles set at 135° and 90°, respectively. To measure strength, the tested leg was fixed to the lever arm at the level of the lateral malleoli on an analog strain gauge. The strongest of three maximum voluntary contractions was recorded.
These three muscle function measures were expressed in absolute (kg or W or N, respectively) and normalized to body weight and limb lean mass.
Body Composition
Fat mass (total, android, gynoid, arm and leg; %) and lean mass (total, arm and leg; kg) were quantified by dual-energy X-ray absorptiometry (DXA; GE Medical Systems, Madison, WI, USA) in the fasted state.
Thigh Composition
A peripheral quantitative computed tomography (pQCT; Stratec XCT3000 system; STRATEC Medizintechnik GmbH) scan, taken at one third of the length of the right femur (distance from the lateral epicondyle to the greater trochanter), assessed thigh muscle composition (muscle area, subcutaneous and intramuscular fat contents; cm2). The total length of the femur, voxel size (0.5 mm) and speed (10 mm·s−1) were the scanning variables entered into the software. All scans were performed by operators trained in pQCT data acquisition according to Bone Diagnostics© guidelines (Fort Atkinson, WI, USA). Results were provided automatically by the ImageJ analysis software (version 1.3.11; Bethesda, Rockville, MD, USA). For muscle and fat area, precision errors were reported to range between 2.1 and 3.7% and between 2.4 and 6.4%, respectively [38].
Blood Parameters
After an overnight fast of 12 h, 15 mL of blood was collected from each participant to assess fasting serum levels of biochemical and hormonal markers. Participants were venipunctured and blood was collected in gold vacutainer tubes (Becton-Dickinson, Franklin Lakes, NJ, USA). More specifically, the lipid profile (total, HDL- and LDL-cholesterol, and TG levels), AT metabolites and adipokines (free fatty acids, adiponectin and leptin levels, adiponectin/leptin ratio), growth hormones (IGF1, IGFBP3 and IGFBP3/IGF1 molar ratio) and glucose-insulin homeostasis (glucose and insulin levels as well as the HOMA and QUICKI indices) were assessed (see [25] for more details).
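The HOMA and QUICKI indices mentioned above are derived from fasting glucose and insulin. The sketch below uses the conventional formulas (HOMA-IR with glucose in mmol/L and insulin in µU/mL; QUICKI with glucose in mg/dL); the exact assays and units used in the study are described in [25], and the values here are illustrative only.

```python
import math

def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = fasting insulin (uU/mL) x fasting glucose (mmol/L) / 22.5."""
    return insulin_uU_ml * glucose_mmol_l / 22.5

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """QUICKI = 1 / (log10(fasting insulin, uU/mL) + log10(fasting glucose, mg/dL))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Illustrative fasting values: glucose 5.5 mmol/L (~99 mg/dL), insulin 10 uU/mL.
print(f"HOMA-IR: {homa_ir(5.5, 10.0):.2f}")   # ~2.44
print(f"QUICKI:  {quicki(99.0, 10.0):.3f}")   # ~0.334
```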
Adipose Tissue Biopsies and Quantification of Gene Expression
As previously described in Marcangeli et al. [25], biopsy samples were collected from an area in the lower quadrant (10-12 cm from the umbilicus) using a 12-gauge Yale needle. Abdominal subcutaneous AT samples (≈1 g) were obtained from the peri-umbilical region. The collected samples were immediately frozen in liquid nitrogen and kept at −80 °C until further analysis of key gene expression in AT lipid metabolism. The selected genes represent important processes expected to be involved in reduced body weight and fat mass, as well as the conversion of white to brown AT in response to physical exercise.
Skeletal Muscle Biopsies and Immunoblotting
Skeletal muscle samples were obtained from the vastus lateralis muscle using a Bergstrom biopsy needle. Muscle samples were frozen in liquid nitrogen and kept at −80 °C until further analysis. Immunoblots were performed to assess the content of multiple markers of mitochondrial biogenesis, content, fusion, fission and mitophagy, as extensively described in [25].
Energy Balance
Dietary intake: As previously described and validated in older adults, dietary intake was assessed before and after the intervention using the three-day food record method (two weekdays and one weekend day) [39]. Participants were asked to maintain their regular dietary habits during the intervention period. Analyses of total energy intake were performed using the Nutrific® web application (Université Laval, Quebec City, QC, Canada).
Physical activity level: The number of steps was used to estimate the level of physical activity of participants using a validated tri-axial accelerometer SenseWear ® Mini Armband (BodyMedia, Pittsburgh, PA, USA) as previously described by Colbert et al. [39]. Participants had to wear the device on their left arm at all times during three consecutive days, except when showering or swimming. Each participant had to wear the device at least 85% of the time to be included in the study.
Sociodemographic and Cognitive Assessment
The validated Montreal Cognitive Assessment (MoCA) was used to assess cognitive status [40]. In cases in which a subject had ≤12 years of education, an extra point was added to the total score [40].
Statistical Analyses
Quantitative results are expressed as means ± SD. The homogeneity of variances was assessed using Levene's test. Delta changes (%) were calculated as (post − pre)/pre × 100. The time effect (intervention), group effect and their interaction (time*group effect) on the clinical and biological parameters were tested using a linear mixed-models approach (nlme package) with two-factor repeated-measures ANOVA. Simultaneous tests for general linear hypotheses (emmeans package) were used for post-hoc analyses with a Bonferroni correction. All statistical analyses were performed using R software (version 4.2; R Foundation for Statistical Computing, Vienna, Austria), and results were considered statistically significant at p-value < 0.05.
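To make the analysis pipeline concrete, the sketch below reproduces the delta-change computation and an analogous two-factor (time*group) repeated-measures mixed model in Python with statsmodels. The study itself used R (nlme and emmeans), so this is only an illustrative equivalent; the file name and column names (id, group, time, value) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests
from scipy import stats

def delta_change(pre: float, post: float) -> float:
    """Delta change (%) = (post - pre) / pre * 100, as defined in the text."""
    return (post - pre) / pre * 100

# Long-format outcome data: one row per participant x time point (hypothetical file/columns).
df = pd.read_csv("outcomes_long.csv")  # columns: id, group (HIIT/MICT), time (pre/post), value

# Two-factor repeated-measures model: fixed effects for time, group and their
# interaction, with a random intercept per participant.
model = smf.mixedlm("value ~ time * group", df, groups=df["id"]).fit()
print(model.summary())

# Post-hoc pre-vs-post contrasts within each group, Bonferroni-corrected.
pvals = []
for _, sub in df.groupby("group"):
    wide = sub.pivot(index="id", columns="time", values="value")
    pvals.append(stats.ttest_rel(wide["pre"], wide["post"]).pvalue)
print(multipletests(pvals, method="bonferroni")[1])
```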
Adherence and Baseline Characteristics
At the end of our intervention, the adherence level of the participants in both groups was considered high. More specifically, among the 36 sessions, participants completed an average of 34.9 sessions for the HIIT group (97%) and 34.6 sessions for the MICT group (96%). The baseline characteristics were similar for both groups (Table 1).
The Impact of HIIT and MICT on Functional Capacities
The impacts of HIIT and MICT on functional capacities are detailed in Table 2, and the delta changes are detailed in Table S1. A time effect was observed for all the functional capacity parameters evaluated. A time*group interaction was observed for the six-minute walking test (p = 0.004), the step test (p < 0.0001), and the ten-repetition chair test (p = 0.006; Figure 1). These results indicate that these two training modalities are effective for improving all functional capacities in obese older adults. However, we observed a significantly higher benefit for HIIT than MICT on some physical parameters (six-minute walking test (HIIT: +12.4% vs. MICT: +5.2%); step test (HIIT: +17.0% vs. MICT: +5.9%); ten-repetition chair test (HIIT: −17.0% vs. MICT: −4.7%)). Additionally, a clinical impact was observed for the HIIT group for the six-minute walking test, where the distance increased by more than 50 m, the minimal increase in distance required for a change to be considered clinically significant [41]. Walking speed significantly increased in both groups, but a clinical impact was only observed for the HIIT group where walking speed increased by more than 0.1 m/s, the minimal increase in walking speed required for a change to be considered clinically significant [42].
The Impact of HIIT and MICT on Skeletal Muscle Function
The impacts of HIIT and MICT on skeletal muscle function are detailed in Table 2, and the delta changes are detailed in Table S1. Absolute and relative upper (handgrip strength) and lower (maximal quadriceps strength) limb muscle strength, as well as lower limb power, were assessed. Absolute (kg) and relative (kg/body weight) upper limb muscle strength did not change significantly following either exercise modality. However, a time*group interaction was observed for absolute and relative lower limb muscle strength. These parameters increased significantly more in the MICT group than in the HIIT group (quadriceps strength (HIIT: +4.3% vs. MICT: +23.2%); quadriceps/body weight (HIIT: +4.1% vs. MICT: +23.6%); quadriceps/lean leg mass (HIIT: +2.3% vs. MICT: +25.3%); Figure 2). Finally, both groups significantly improved lower limb muscle power (HIIT: +25.2% vs. MICT: +20.4%; Figure 2). These results indicate that MICT seems more effective than HIIT for improving lower limb muscle strength in obese older adults.
The Impact of HIIT and MICT on Body Composition
The impacts of HIIT and MICT on body composition are detailed in Table 3, and the delta changes are detailed in Table S2. Anthropometry, fat and lean mass, as well as thigh muscle quality were measured to assess the impact of HIIT and MICT on body composition. Participants improved only total (HIIT: +1.6% vs. MICT: −0.8%) and leg (HIIT: +2.1% vs. MICT: −0.8%) lean muscle mass, and this improvement was greater in the HIIT than in the MICT group (Figure 3). Participants improved only leg (HIIT: −1.6% vs. MICT: −3.7%) and gynoid (HIIT: −1.1% vs. MICT: −4.2%) relative fat mass, and this improvement was greater in the MICT than in the HIIT group (Table 3 and Figure 4). No other change was observed. Considering that lean mass increased in the HIIT group and fat mass decreased in the MICT group, both training modalities are effective for improving body composition. Regarding muscle composition measured using pQCT, total muscle area, total fat area and subcutaneous fat area decreased significantly following MICT only (Table 3 and Figure 4).
Table 3 note: Data are presented as mean ± SD. HIIT = high-intensity interval training; MICT = moderate-intensity continuous training; Pre = before the 12-week intervention; Post = after the 12-week intervention; DXA = dual-energy X-ray absorptiometry; pQCT = peripheral quantitative computed tomography; BMI = body mass index. Time, group and time*group effects were analyzed using two-way repeated-measures ANOVA; * p < 0.05, ** p < 0.01, *** p < 0.001 for the HIIT and MICT effects (post-hoc tests).
The Impact of HIIT and MICT on Blood Parameters
The impacts of HIIT and MICT on blood parameters related to muscle metabolism or metabolic syndrome are detailed in Table 4, and the delta changes are detailed in Table S3.
The only significant time effect observed was for triglycerides, which significantly decreased (−8.8%) in the HIIT group. These results indicate that HIIT is effective to improve triglyceride levels in obese older adults.
The Impact of HIIT and MICT on Skeletal Muscle Mitochondrial Content and Quality Control
The impacts of HIIT and MICT on skeletal muscle mitochondrial content are detailed in Table 5, and the delta changes are detailed in Table S4. Several mitochondrial proteins were measured to assess the impact of HIIT and MICT on markers of mitochondrial content (translocase of outer membrane 20; TOM20), biogenesis (mitochondrial transcription factor A; TFAM), fusion (mitofusin-2; MFN2), fission (dynamin-related protein 1; DRP1) and mitophagy (Parkin RBR E3 ubiquitin protein ligase; PARKIN). A significant time effect was reported for TFAM, TOM20 and PARKIN. More specifically, TFAM increased significantly following HIIT (+36.2%) and MICT (+57.2%) (Figure 6). A significant increase was observed for TOM20 (+54.1%), MFN2 (+71.6%) and PARKIN (+42.3%) in the HIIT group only (Figure 6). Neither HIIT nor MICT significantly altered DRP1 content. These results indicate that HIIT is effective for improving markers of muscle mitochondrial content, fusion and mitophagy, whereas both exercise interventions are effective for improving mitochondrial biogenesis.
Discussion
Due to the aging population worldwide and the progressive increase in the prevalence of obesity, finding effective strategies to reduce body fat and increase muscle function is of particular importance for improving the quality of life of afflicted individuals and to reduce healthcare costs. In this regard, the potential beneficial impacts of two aerobic training modalities (HIIT and MICT) performed over 12 weeks were investigated in obese older adults. The adherence level for our interventions was comparable between both groups (97% for HIIT and 96% for MICT). This finding is in line with a previous study that has reported that adherence and enjoyment level were comparable for HIIT and MICT in older adults [43]. In addition, our study showed that changes in several parameters were specific to the type of exercise training. Indeed, MICT was more beneficial for decreasing relative gynoid fat mass and increasing lower limb muscle strength while HIIT resulted in greater improvement in functional capacities and greater increase in total lean body mass.
Functional capacities reflect the actions used in daily life and decrease with aging [44]. Interestingly, we report here that while both HIIT and MICT are effective in improving several functional capacities, HIIT led to greater improvement in performance on several functional capacity tests (six-minute walking test, step test and ten-repetition chair test) in obese older adults. These results are in line with a previous study conducted in older women showing that functional capacities improved following both HIIT and MICT [45]. Additionally, our results showed that lower limb power increased in both groups. Total and leg lean mass increased following HIIT, and gynoid fat mass decreased following MICT. These differential findings could be explained by the use of the elliptical trainer, which produces more whole-body movement compared with the treadmill. In addition, HIIT and MICT led to comparable improvements in lower limb power, whereas MICT led to greater improvement in lower limb muscle strength. A possible explanation could be that a higher speed was used for HIIT (elliptical movements), and a higher force was used for MICT (impact when walking). Another possible explanation could be the lower body-weight strain during elliptical training compared with the higher body-weight strain on the treadmill. Contrary to our findings, a study conducted in older adults showed that HIIT led to greater improvement in upper limb strength than MICT [16].
Regarding blood parameters, our results are supported by a meta-analysis revealing that triglyceride levels generally decrease after HIIT [46]. Additionally, in patients with coronary artery disease, glucose levels similarly improved following both HIIT and MICT [14], although no significant difference was observed following our intervention. A possible reason could be the different age range and obesity status of our participants that could differentially affect biological parameters.
Concerning body composition, interesting differences between the HIIT and MICT modalities were observed for fat mass and lean mass adaptations after our intervention. HIIT was more beneficial for increasing total lean body mass (HIIT: +1.58% vs. MICT: −0.81%), while MICT was more beneficial for decreasing relative gynoid fat mass (HIIT: −1.09% vs. MICT: −4.20%). These results are consistent with a previous study showing that total and visceral fat mass decreased after 12 weeks of MICT in obese older adults [47]. In young adults, it was previously found that HIIT had a greater effect on whole-body adiposity, while lower-intensity training had a greater effect on subcutaneous abdominal and visceral fat mass [48]. Interestingly, in young obese adults, no significant difference in body adiposity was observed [10]. This discrepancy with our findings could be due to the age difference, as older obese adults could have different mechanisms regulating fat metabolism than younger obese adults. In line with our findings, a greater increase in lean mass after HIIT compared to MICT was observed in older patients undergoing cardiac rehabilitation [49]. Although HIIT and MICT differentially affected body composition, our results indicate that both training modalities can be considered effective in improving body composition in obese older adults.
Regarding adipose tissue gene expression, a significant change was observed only for CIDEA, which is associated with lipid droplets and insulin sensitivity in humans and is considered an important regulator of fat metabolism [50]. The fact that, among all the genes examined, only CIDEA changed significantly following our intervention might be explained by changes in fat mass that were insufficient to induce molecular adaptations, since these changes were not clinically significant (delta change: <5%) [51]. Taken altogether, these data indicate that 12 weeks of HIIT or MICT had a limited impact on AT gene expression within the set of genes studied. These results suggest that the positive impact of MICT on the decrease in gynoid fat mass might not be related to transcriptional reprogramming in adipocytes, based on the limited set of genes evaluated. Multi-omics approaches will be required to confirm and extend our results on metabolic adaptations in human adipocytes following aerobic exercise interventions.
Accumulation of mitochondrial dysfunction is believed to play a key role in the muscle aging process [52]. Furthermore, intramuscular lipid accumulation in obese individuals has been linked to altered skeletal muscle mitochondrial content and function [53]. As we previously reported [25], HIIT effectively increased markers of mitochondrial biogenesis (TFAM), mitochondrial fusion (MFN2), mitochondrial content (TOM20) and mitophagy (PARKIN) in obese older adults. Additionally, an increase in TFAM was also observed for MICT. Interestingly, neither HIIT nor MICT altered DRP1 content, a marker of mitochondrial fission. Previous studies performed in obese [26] and older [24] adults demonstrated that mitochondrial content improved more after HIIT than after MICT. Interestingly, our study, which focused on older and obese individuals, reports similar findings, although the differences between HIIT and MICT did not reach statistical significance.
To our knowledge, our study is the first to compare the impact of HIIT and MICT in obese older adults using a deep phenotyping approach. This approach involves an extensive clinical assessment using gold-standard methods (functional capacities, muscle function, body composition, blood biomarkers) as well as an exploration of potential mechanistic explanations (AT gene expression and markers of mitochondrial content and quality control). However, our study has some limitations. First, we did not evaluate sex-specific adaptations induced by the HIIT and/or MICT interventions, since the sample sizes for the molecular and cellular assessments were too limited to ensure adequate statistical power. We performed a matched-group per-protocol analysis to compare our interventions; the associated risk is therefore an overestimation of the effects of our interventions. To confirm our encouraging results, intention-to-treat analyses should also be performed in future studies. Regarding participants, extrapolation of the results requires caution, since our participants were moderately obese on average and the findings may not directly translate to populations that are older or more severely obese. A selection bias is also possible, as only volunteer subjects were included in the exercise interventions. Moreover, the two exercise modalities were not performed on the same device (elliptical trainer for HIIT and treadmill for MICT). The reason for this was to prevent joint injuries of the lower limbs, due to the high surface impacts that might be caused by HIIT, as some of our participants may have been suffering from osteoporosis. However, for future studies, it would be interesting to perform an RCT to confirm our promising results by adding one group performing MICT on the elliptical trainer and an inactive control group. It would also be of great interest to compare our results with those of a younger population to assess whether there is an age effect, and to investigate in older people whether greater adaptations are observed with a longer intervention. Additionally, following the numerous analyses done in this study, it would be interesting to conduct serum metabolomic analyses before and after both exercise modalities and evaluate the metabolomic signatures behind the physiological changes. Thus, new putative biomarkers specific to HIIT and MICT in obese older adults could be used for therapeutic strategies to treat obesity and age-related decline in muscle mass and function.
Conclusions
Overall, our results showed that even though MICT (walking on treadmill) and HIIT (on an elliptical) are effective intervention strategies to improve the health status of obese older adults, HIIT appeared to have slightly more beneficial effects. More specifically, MICT was more beneficial for decreasing relative gynoid fat mass and increasing lower limb muscle strength whereas HIIT led to better improvements than MICT on functional capacities, lean mass and skeletal muscle markers of mitochondrial biogenesis, content, fusion, and mitophagy. Thus, MICT and HIIT (which is more time-efficient since it requires half the time of MICT), could be recommended for obese older adults in order to maintain or improve mobility, health and quality of life.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/healthcare10071346/s1, Table S1: Delta changes of functional capacities and skeletal muscle functions in obese older adults following 12 weeks of High-Intensity Interval Training (HIIT) and Moderate-Intensity Continuous Training (MICT); Table S2: Delta changes of body composition parameters in obese older adults following 12 weeks of High-Intensity Interval Training (HIIT) and Moderate-Intensity Continuous Training (MICT); Table S3: Delta changes of blood parameters in obese older adults following 12 weeks of High-Intensity Interval Training (HIIT) and Moderate-Intensity Continuous Training (MICT); Table S4: Delta changes of skeletal muscle mitochondrial content in obese older adults following 12 weeks of High-Intensity Interval Training (HIIT) and Moderate-Intensity Continuous Training (MICT).
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Université du Québec à Montréal (the protocol code #2014_e_1018_475 was approved on 9 December 2014).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The datasets used during the current study are available from the corresponding author on reasonable request.
|
v3-fos-license
|
2020-08-13T10:09:49.329Z
|
2020-01-01T00:00:00.000
|
221284307
|
{
"extfieldsofstudy": [
"Computer Science",
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09163379.pdf",
"pdf_hash": "dddc21f9b9a61fb4e81dbe29e315f26993b6c2ec",
"pdf_src": "IEEE",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3020",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "c47e884296cb7752ea40a77a8a184b1fdc567bf3",
"year": 2020
}
|
pes2o/s2orc
|
Many-Tier Vertical GAAFET (V-FET) for Ultra-Miniaturized Standard Cell Designs Beyond 5 nm
The GAAFET (gate-all-around FET) is expected to replace FinFETs in future nodes due to its excellent channel controllability, and it can be implemented with either horizontal or vertical transistor structures. Vertical GAAFETs (V-FETs) are expected to be a more promising device than horizontal GAAFETs (H-FETs) because their structure allows area reduction and significant parasitic reduction. Moreover, V-FETs can be stacked on top of each other, allowing even greater size reductions. Therefore, this paper studies the overall potential of many-tier V-FETs by investigating the essential design factors from the layout perspective. First, we study the factors that should be considered when designing many-tier V-FETs. Second, we propose an interconnect structure that maximizes the advantages of many-tier V-FETs. Third, we compare 2-tier V-FET standard cells to one-tier V-FET cells and visualize the advantages that many-tier V-FET cells provide. Our study shows that 2-tier V-FET standard cells provide a 35.6% area reduction at the cost of a 16.5% increase in wirelength and a 13.2% increase in parasitic capacitance compared to 1-tier V-FET cells. Compared to H-FETs and FinFETs, our cells show a 50.1% area reduction, a 0.3% wirelength reduction and an 18.9% parasitic capacitance reduction. We emphasize that the design freedom to place transistors on top of each other, together with proper interconnect structures, leads to ultra-miniaturized standard cell designs. We note that the increases in wirelength and capacitance are due to the vertical size increases and detours that must exist in the designs. Thus, careful circuit design is required to obtain the maximum advantages from V-FETs.
I. INTRODUCTION
The so-called "Moore's Law" [1] is one of the prominent keywords explaining the growth of today's semiconductor industry, and its essence is miniaturization. For more than 50 years, miniaturization of transistors and interconnects has been a core factor for increased business profits and is a crucial factor in the evolution of the industry itself. In fact, advances in semiconductor devices have also been introduced in addition to this prominent miniaturization trend. For example, Intel introduced device technologies such as high-k metal gate and strained silicon in their 45 nm CMOS technology to control increased leakage currents and enhance device performance [2]. Currently, FinFETs have completely replaced planar MOSFETs in the latest nodes of major foundries [3], [4]. These changes in devices that have begun in the last decade indicate that devices will continue to face challenges from this everlasting scaling trend and that proper breakthroughs must be introduced for transistor evolution.
In 2011, Intel introduced FinFET as its dominant transistor structure in its 22 nm technology node [5]. FinFET is a transistor type in which the gate covers three sides of a channel, whereas a gate in a planar MOSFET covers only one side. Therefore, FinFETs show better channel controllability and performance compared to conventional planar MOSFETs. However, it is expected that the era of FinFETs will not last too long due to scaling issues [6]-[8]. Major foundries have started migrating from FinFETs to advanced transistor structures in their future process nodes. For example, Samsung plans a GAAFET (gate-all-around FET) solution in its 3 nm technology node [9] (TSMC, on the other hand, continues with FinFETs on its 3 nm node [10]). GAAFETs, which are advanced structures compared to FinFETs, cover all four sides of a channel [11]. Thanks to their unique channel structure, GAAFETs will have better channel controllability and scalability over FinFETs in future technology nodes [12]-[14]. Thus, we anticipate that GAAFETs will be the dominant transistor structure in the near future [15], [16].
Regarding how a GAAFET gate can surround all four sides of a channel, two dominant models represent the majority of these devices: nanowire FETs (NWFETs) and nanosheet FETs (NSFETs). NWFETs originate from FinFETs, having evolved from double-gate FinFETs, Omega FinFETs, and nanowire FinFETs to NWFETs [17]. NWFETs provide a slim form with the best channel controllability. NSFETs are FET structures that use nanosheets as channels. A nanosheet is similar to a nanowire, except that the channel is spread out like a sheet rather than being a narrow wire. Studies have reported that nanosheet structures can provide higher current due to broader channel widths [18]. NSFETs are referred to by different names depending on the manufacturers and foundries; for example, MBCFET (multi-bridge-channel FET) is how NSFETs are referred to by a particular foundry [19].
In addition to the two dominant channel shapes, various alternate channel structures have also been proposed. C. Dupré et al. proposed a NWFET structure with spacers between the channels ( FET, [20]), and P. Feng et al. proposed and compared variously-shaped NSFET structures for maximum transistor performance [21]. In this NSFET structure paper, the authors proposed nanowires and nanosheets that possess hexagonal, rectangle, and oval shapes. Regarding the optimal shape for GAAFET channel structures, both academia and industry are actively searching for better structures to replace FinFETs in the near future.
From a structural point of view, GAAFETs are reported to be fabricated in two different styles: horizontal GAAFETs (H-FETs) and vertical GAAFETs (V-FETs). H-FETs are "horizontal" because the channel is parallel to the substrate surface [22]. For the general transistor shape, H-FET transistors are very similar to those of FinFETs. Thus, from the designers' perspective, the only difference in transistors is that NWs or NSs replace fins. Therefore, it is reasonable that foundries will soon adopt H-FETs since they do not require significant changes for circuit design and fabrication [23]. V-FETs are described by their name because their channels are constructed perpendicular to the substrate surface [24] (see Figure 1). The latest studies show that 5 nm diameter nanowires with nano-scale interconnects and 20 nm vertical nanosheets are manufacturable [25], [26]. Due to their unique channel formation, V-FET circuits do not follow conventional circuit design methodologies. In other words, V-FETs require circuits, such as standard cells and SRAMs, to be designed in a completely different manner. However, studies have shown that V-FET circuits that use the new design methodologies provide significant advantages over conventional FinFETs and H-FETs. For example, a V-FET D flip-flop footprint is reduced by 30%, and SRAM bit cells are 20-30% denser than H-FET SRAMs with 2.6× lower standby leakage current [27], [28]. Based on these reports, we expect that V-FETs will soon be the new type of transistor that follows H-FETs in the ongoing trend of transistor evolution.
Regarding V-FET circuits, H. Na and T. Endoh proposed a 12-transistor V-FET SRAM cell, which is 26% smaller than a conventional 8T-SRAM cell [29]. This size reduction is possible because of the vertical transistors that the authors have fabricated. The interesting part of this study is that the nanowires of this V-FET have two vertical gate layers for channel control. The device current flows when two gate switches are closed. A 1-tier V-FET is a device in which one gate layer covers the nanowire/nanosheet for current control ( Fig. 1(a)). A 2-tier V-FET provides two vertical gate layers to control the channel current. In the same transistor footprint, a 2-tier V-FET can be considered as two transistors placed on the top and bottom instead of one transistor. The proposed SRAM is reported to provide a 47% power reduction due to its advanced structure in spite of the increased transistor count. This study implies a very important point that the vertical positioning of transistor fabrication is possible (see Figure 1 (b)). This brings the full potential of a true 3-dimensional circuit/layout design in VLSI chips when V-FETs are utilized. Therefore, in this trend, we anticipate that V-FETs will be a bridge to designing actual 3-dimensional transistor structures for ultimate miniaturization and optimization.
In this paper, we extend the research of [30] and investigate the practical benefits of V-FETs when these vertical transistors are designed in many tiers. Few papers describe why GAAFETs should succeed over FinFETs at the layout and circuit level. Therefore, we study the advantages of GAAFETs (both H-FETs and V-FETs) over FinFETs by studying standard cells in the layout and circuit perspectives. Standard cells are vital components in digital VLSI design, and we demystify the impact of GAAFETs by focusing our efforts on V-FETs. Inspired by the study of [29], which leads to the possibility of true three-dimensional transistor design, we focus our efforts on investigating the advantages of fabricating devices on top of each other. In the path of 3-D circuitry, we study the practical concerns when digital circuits are designed in a two-tier vertical device fashion. The points below describe our contributions.
1) This paper is the first study on the impact of many-tier V-FETs for optimal design of standard cells.
2) We compare NWFETs and NSFETs to FinFETs from the perspective of parasitics. Our results report that NSFETs may produce greater capacitance levels than FinFETs under certain conditions.
3) We propose an interconnect structure for optimal standard cell design of many-tier V-FETs.
4) We propose a novel methodology and a set of algorithms for designing many-tier V-FET standard cells.
II. PARASITIC CAPACITANCE COMPARISON BETWEEN NANOWIRE FETs (NWFETs) AND NANOSHEET FETs (NSFETs)
This section discusses the characteristics of FinFETs, H-FETs, and V-FETs from the perspective of parasitics. In detail, we study how significant the parasitics of NWFETs and NSFETs are when these devices are used in an H-FET/V-FET inverter (INV). Devices and layouts are closely related in ICs, and device analyses that do not consider physical layouts are of limited value. By performing the INV comparison, we gain an understanding of how parasitics differ between these devices in various layouts. Note that our focus in this section is on parasitic capacitance and not on resistance, because parasitic resistances are nearly identical between FinFETs and H-FETs, and a resistance analysis between H-FETs and V-FETs was performed in [30]. Capacitances in VLSI can be sorted into three different categories: (1) device-to-device capacitance, (2) device-to-interconnect capacitance, and (3) interconnect-to-interconnect capacitance. We define parasitic capacitance as the two types of capacitance that form between device and interconnect and between interconnect and interconnect ((2) and (3)). We focus on parasitic capacitance because 1) it has a critical impact on standard cell performance (more capacitance leads to slower speeds and higher power consumption), and 2) it is highly dependent on the standard cell layout.
A. TECHNOLOGY SETTINGS
We designed a set of industrial-grade 5 nm standard cells for comparing FinFETs, H-FETs, and V-FETs. Figure 2 illustrates some important parameters for these FETs, and Table 1 shows the 30 standard cells that we designed. 3 The transistors and metals used for pin connections in our standard cells are based on the following references: [6], [7], [31]- [33]. Below are some details of our assumptions.
1) Our standard cell circuitry originates from the Silvaco 45nm Open Cell Library [34]. However, Silvaco 45nm cell designs are based on planar MOSFETs. Therefore, we heavily modified the netlist to fit the performance requirements for FinFETs, NWFETs, and NSFETs.
2) One minimum-sized transistor consists of two fins.
3) The height of a nanowire and the thickness of a nanosheet are identical to the thickness of a fin. Also, the spacings of stacked nanowires and nanosheets are based on [35].
4) All layouts and design parameters for an H-FET standard cell are identical to those of a FinFET. In an H-FET cell, nanowires and nanosheets replace an equivalent fin.
5) We use Synopsys QuickCap [36] for capacitance extraction.
In brief, we note that these experimental settings are similar to the conditions of [30], and these should enable a fair comparison to be conducted.
B. PARASITIC CAPACITANCE COMPARISON BETWEEN FinFET, H-FET AND V-FET
One key question in designing H-FET and V-FET standard cells is "How many nanowires/nanosheets do we need to supply sufficient current to a circuit?" Though the most straightforward method for all FinFETs, NSFETs, and NWFETs is to increase the layout area to increase the number of fins (or nanowires/nanosheets), this may not be a desirable solution from a layout perspective. FinFETs can increase their fin heights and drive stronger current without adjusting the layout area (see Figure 2 (a)). In H-FETs, devices can be stacked on top of each other. In addition to device stacking, NSFETs allow the width of their nanosheet channel to be wider for stronger driving current (Figure 2 (c)). Therefore, it is essential to understand how each current-increasing method would impact the total parasitic capacitance in a given standard cell layout.

1) H-FET

Figure 3 (a) compares the parasitic capacitance of horizontal NSFET/NWFET INVs with that of the reference FinFET INVs, which are indicated by dotted lines. Each yellow, red, and blue solid line describes the capacitance change in one-tier, two-tier, and three-tier horizontal NSFETs when the widths of their nanosheets change. NWFET capacitance can be considered as that of a nanosheet with 5 nm width, and we normalize all capacitance values to the baseline of a 3-fin FinFET layout. This experiment uses a 44 nm-height FinFET for comparison; the 44 nm value is based on the maximum width that our layout can support for NSFETs. Figure 3 (a) provides the following information: Unlike the performance enhancement NSFETs are reported to show at the device level, it is difficult to state that NSFETs are significantly better than FinFETs from the parasitics perspective. The ratio of capacitance increase in NSFETs is similar to that of FinFETs. In other words, if the channel count and the size of nanosheets are similar to those of fins, the total parasitic capacitance will be similar. Thus, unless NSFETs show significantly better current driving capability, a similarly-sized NSFET will have a similar parasitic capacitance to a FinFET in the layout. This is reasonable because a similar physical size (e.g., fin or nanosheet) will result in a similar capacitance.
On the other hand, NWFETs show significant capacitance reductions compared to FinFETs [30]. However, nanowires are reported to have less current-driving capability when compared to nanosheets. Therefore, foundries should consider the trade-offs between nanowires and nanosheets for the best current driving capability while considering layout parasitics. From this perspective, Figure 3 (a) shows how significant the capacitance contributed by nanosheets is in a given standard cell layout compared to FinFETs. For example, when a designer is designing an H-FET INV that has less capacitance than a 2-fin FinFET INV, they should use a device that is (1) a 3-tier-stacked nanosheet for which the width is smaller than 28 nm or (2) a 2-tier-stacked nanosheet for which the width is less than 44 nm. A 1-tier NSFET would also be better if it could drive a sufficient amount of current. Besides, a 3-tier stacked NS H-FET with 12 nm width has a similar total parasitic capacitance to a 1-tier NS H-FET with a 44 nm width. The total effective width of the channel is 36 nm (12*3) in the 3-tier NSFET, while the total effective width for the 1-tier NSFET is 44 nm, which is 8 nm longer. Also note that the parasitic capacitance increases by 36.2% (e.g., normalized capacitance: 1.41 to 1.92) in 1-tier NSFETs, while it increases by 66.2% (e.g., 1.48 to 2.46) in 2-tier NSFETs and by 89.2% (e.g., 1.58 to 2.99) in 3-tier NSFETs. These results show that the parasitic capacitance is affected differently in various layout scenarios. In brief, nanowires/nanosheets require a carefully chosen stacking and width-changing strategy, because these strategies change the capacitance of the total layout. Thus, delicate device engineering combined with layout design should be performed. We note that the change of device size in NS/NWFETs is critical to transistor performance (e.g., drive current). Besides, we also emphasize that the transistor drive current will scale with the change of channel dimensions, which is a critical factor in standard cell design. Standard cell design in NS/NWFETs should consider both device strengths and parasitic capacitances for optimal performance.
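To make the trade-off concrete, the short Python sketch below reproduces the arithmetic quoted above; the normalized capacitance values are those read off Figure 3 (a), and only the percentage calculations are ours.

```python
# Normalized-capacitance increases for horizontal NSFET INVs when the
# nanosheet width grows from its minimum to 44 nm (values from Figure 3 (a)).
def pct_increase(c_min, c_max):
    return 100.0 * (c_max - c_min) / c_min

nsfet_cases = {          # tiers: (capacitance at minimum width, at 44 nm width)
    1: (1.41, 1.92),
    2: (1.48, 2.46),
    3: (1.58, 2.99),
}
for tiers, (c_lo, c_hi) in nsfet_cases.items():
    print(f"{tiers}-tier NSFET: +{pct_increase(c_lo, c_hi):.1f}% capacitance")
    # prints 36.2%, 66.2%, and 89.2%, matching the text

# Effective channel width: three stacked 12 nm sheets vs one 44 nm sheet
print("3-tier effective width:", 3 * 12, "nm vs 1-tier:", 44, "nm")
```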
2) V-FET

Figure 3 (b) shows the parasitic capacitance comparison between a 1-fin FinFET and 1-tier V-FETs (nanowires and nanosheets) in an INV design. The design of the V-FET INV layout follows the guidelines of [30]. Note that a V-FET INV can be designed using 50% less area than a FinFET INV. The blue dotted line refers to the capacitance of a 1-fin FinFET. Each yellow, red, or blue solid line describes the capacitance change in one-tier, two-tier, or three-tier vertical NSFETs, respectively, when the widths of their nanosheets change. The capacitance values are based on the same 3-fin FinFET normalization used for Figure 3 (a) and (b). Figure 3 (b) provides the following information: First, V-FETs significantly reduce parasitic capacitance compared to H-FETs. Note that when the nanosheet widths increase, the capacitance of V-FETs does not increase as steeply as that of H-FETs. This is because the standard cell layout of a V-FET INV is only 50% of an H-FET INV, and the affected interconnect/device area is significantly smaller than that of the H-FET components. Therefore, V-FET circuit designers can design standard cells that outperform H-FETs in terms of parasitic capacitance. Second, H-FETs can stack nanowires/nanosheets without increasing the layout area. This stacking is due to the layout style that H-FETs offer in design. However, the concept of stacking nanowires/nanosheets is not possible in V-FET layouts, because each nanowire/nanosheet must consume a certain area. Nevertheless, we note that increasing the number of nanowires/nanosheets does not always lead to an increase in the design area. A standard cell typically leaves some amount of design margin in its layout. In other words, a standard cell may be designed to have the same layout area when it uses one nanowire/nanosheet or three nanowires/nanosheets in its design. Thus, we design our layouts with the same area for V-FETs with one nanowire/nanosheet to three nanowires/nanosheets. In summary, V-FETs provide significant capacitance reductions compared to H-FETs and FinFETs, but careful designs must be performed as stacking nanowires/nanosheets is not possible.
III. INTERCONNECT STRUCTURE AND CIRCUIT DESIGN FOR MANY-TIER VERTICAL FETs
This section discusses an optimal interconnect structure for many-tier V-FET logic cell design. We emphasize that this is the first study that discusses an optimal interconnect structure for many-tier V-FET logic circuits. We compare previously proposed interconnect structures to our new structure and discuss why our proposed structure is necessary for circuit designs. Then, we illustrate the assumptions behind our proposed structure and its implications for circuit design.
A. PROPOSED INTERCONNECT STRUCTURE
Few studies have mentioned interconnect structures for many-tier V-FET circuits [29], [37]. These two studies discussed how an optimal SRAM can be designed when a designer utilizes a 2-tier V-FET structure. According to these studies, the authors designed an interconnect structure in which the top and bottom devices are connected such that no metals are used between the devices (see Figure 4 (a)). However, a critical issue with this structure lies in circuit design. From the schematic point of view, this structure is identical to a two-input switch for which these two inputs are connected in series (Y = A * B). In circuit designs, however, many scenarios occur for which it is highly beneficial for designers to have the freedom to use the intermediate terminal for various purposes (see Figure 4 (b)). Therefore, we propose an interconnect structure in Figure 5 that allows 2-way routing for each source/drain terminal (e.g., SD1, SD2, and SD3 in Figure 4 (b)). Note that our structure supports 2-way routing even between two devices (SD2 using M2 in Figure 4 (b)).

FIGURE 4. (a): The interconnect structure of [29], [37]; (b): our structure and schematic. Transistors can be placed in both tiers (G1 and G2) or in one tier only (G1 or G2).
B. ASSUMPTIONS
Our target is to provide an interconnect structure that enables 2-tier V-FETs to use less area than 1-tier V-FETs in their standard cell designs. To fulfill this goal, our proposed interconnect structure is based on the following assumptions (considering the manufacturability of advanced nanowire/nanosheet FETs and interconnect structures in the sub-10 nm node [25], [26], we consider these assumptions for the advanced devices and interconnects to be reasonable):
1) Our structure requires a bidirectional interconnect (both x and y directions) for all of M0 (bottom), M2 (middle), and M4 (top), which is similar to [30]. Unidirectional interconnects in M0 and M2 lead to unwanted area overhead [27]. Our structure for the middle interconnect should (1) be metal for low resistance (O. Kilpi et al. successfully manufactured a metal interconnect in the middle of a V-FET channel in [38]; based on this reference, we assume implementing a metal interconnect inside the V-FET channel to be possible) and (2) support a bidirectional interconnect for routing. This is for the following reasons: First, we are very close to implementing bidirectional interconnects in advanced technology nodes [39]-[42], and the choice of the number of layers in the middle layer directly impacts the height of the nanowire/nanosheet. For example, when the middle layer uses two metal layers, it requires three intermediate dielectric layers. However, if the middle layer uses only one metal layer instead, it requires two dielectric layers. Thus, using two metal layers requires roughly 2× the height that one metal layer requires. Assuming that a dielectric or metal layer requires a few tens of nanometers for its height, using two metal layers would require a height of a few hundreds of nanometers (see the sketch after this list). Note that this is the intermediate height required before the second-tier poly is placed. Thus, two metal layers for the middle layer may require significant heights, which may lead to fabrication issues and high resistance increases in the nanowire/nanosheet channel. However, the number of metal layers on the top or bottom is not as critical as the number of layers in the middle layer. Our many-tier V-FET requires that all terminals have bidirectional interconnects for optimal standard cell designs.
2) We assume that V-FET transistors can be placed in both tiers of a pillar, but can also be placed separately in the 1st or 2nd tier only (we consider this possible because the SRAM design of [29] requires individual placement of V-FET transistors). Figure 6 illustrates two examples of transistors being placed on either the 1st tier or the 2nd tier in a 2-tier INV design. Details of this assumption will be discussed in Sec. III-C1.
3) Our interconnect inherits some assumptions and characteristics from 1-tier V-FET designs [30]. First, we assume poly (e.g., M1 and M3) to be used to create the gate layer for V-FETs. Due to the high resistance of the poly, we assume unidirectional metal layers in M1 and M3 that directly connect to these poly layers. Second, we assume separate vias connecting from M0 to M4 (via0 to via3). References regarding V-FET circuit designs mention their intermediate layers connecting to the top or bottom metal layers (e.g., [37]). Therefore, we assume that these structures are possible.
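The height argument in assumption 1) is simple layer counting; the sketch below only makes that arithmetic explicit. The 30 nm per-layer figure is our own illustrative assumption (the text says only "a few tens of nanometers").

```python
# Rough height estimate for the middle interconnect stack between the two
# device tiers: n metal layers need n + 1 surrounding dielectric layers,
# so the middle stack contains 2n + 1 layers in total.
LAYER_HEIGHT_NM = 30  # assumed; the paper only states "a few tens of nanometers"

def middle_stack_height_nm(n_metal_layers: int) -> int:
    n_dielectric = n_metal_layers + 1
    return (n_metal_layers + n_dielectric) * LAYER_HEIGHT_NM

print(middle_stack_height_nm(1))  # 90 nm  (1 metal + 2 dielectric layers)
print(middle_stack_height_nm(2))  # 150 nm (2 metal + 3 dielectric layers)
```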
C. PROPOSED INTERCONNECT STRUCTURE AND ITS IMPLICATIONS FOR CIRCUIT DESIGN
We discuss the physical and logical meaning of our proposed interconnect structure in the circuits' perspective and how it affects the standard cell designs for 2-tier V-FETs. Regarding the number of tracks a particular V-FET standard cell can use, we assume that one pair of transistors (P/N) can use one vertical routing track (see Figure 7). We note that our proposed structure can be extended to a many-tier V-FET interconnect structure based on our guidelines.
1) SELECTABLE TRANSISTORS FOR CIRCUIT DESIGN
As mentioned in Sec. III-A, we assume that transistors can be placed at any tier based on its design. For example, in Figure 4, a channel pillar can have G1 and G2, a separate G1, or a separate G2. In cases where a channel pillar consists of one transistor (e.g., G2 only), we assume the other part to be a regular conducting channel (e.g., the G1 part that does not have a gate). We emphasize that the freedom to design separate transistors in a 2-tier V-FET is very important. As shown in Figure 7 (a) and (b), two styles of INV can be designed by using poly in different tiers (see Figure 6 for side view). More importantly, since an INV requires only one PFET and one NFET, these INVs can be designed only if our assumption is valid.
2) LOGICAL STRUCTURE OF THE MANY-TIER V-FET
As Figure 4 (a) shows, a V-FET channel with two gates is logically identical to two transistors whose source and drain terminals are connected in series. This means that a netlist in which two transistors share one net is the best way to utilize a 2-tier V-FET structure. In other words, if a circuit's transistors rarely share nets, that design will not benefit from using a many-tier V-FET structure. A buffer (BUF) is an excellent example of this characteristic (Figure 7 (c)). A BUF design in a 2-tier V-FET does not show any area reduction compared to a 1-tier V-FET. A buffer consists of two inverters, and this circuitry does not share any net between the source and drain terminals of the PFET/NFET pairs. Thus, despite the potential for area reduction in 2-tier V-FETs, the BUF design does not benefit. Note that the logical structure of many-tier V-FETs also follows this concept. Although many-tier V-FETs allow N transistors to be placed vertically, this configuration is not fully advantageous unless the netlist of transistors is designed such that the source/drain terminals share the same net (i.e., N serially connected transistors). From this perspective, a BUF design will always consume a two-transistor footprint area in any type of many-tier V-FET design.

More transistors on the same footprint cause another unique routing issue that occurs in V-FETs. An H-FET footprint supports one net, whereas a 1-tier V-FET footprint supports two nets (e.g., top and bottom), and a 2-tier V-FET supports three nets (e.g., top, middle, and bottom). More nets on the same footprint have two consequences: First, the device layer requires more routing layers. We see that a 2-tier V-FET has three metal layers, whereas a 1-tier V-FET has only two. Second, escape routing becomes more challenging as the number of device routing layers increases. As shown in Figure 7 (a), input pin A is required to connect all the way from the top metal to the 1st-tier poly. It is crucial to guarantee a position for the I/O pins to perform escape routing from the top metal to their destinations. In complex cells, situations occur where cells must use more routing tracks because of the escape routing of these I/O pins.
4) IS BIDIRECTIONAL ROUTING NECESSARY IN THE MIDDLE METAL LAYER (M2)?
Although some references describe the latest interconnect structures with advanced widths and pitches, there is still some controversy regarding the forecast for bidirectional metal usage in advanced nodes. However, based on previous references that point to the possibility of bidirectional routing support [39]-[42], we describe how 2-tier V-FETs can benefit from advanced interconnect technologies. A comparison between Figure 7 (d) and (e) illustrates how a NAND3 gate reduces area through bidirectional routing in M2.
Thanks to its schematics, NAND3 is a gate that can utilize the 2-tier V-FET structure. With our proposed interconnect, a 2-tier V-FET NAND3 uses two-transistor footprints (Figure 7 (d)), while a 1-tier V-FET NAND3 uses three. However, if the interconnect of a 2-tier V-FET does not support bidirectional routing in M2, this requires the design to use one additional y-directional routing track, which should be considered identical to consuming one additional transistor footprint. Therefore, we emphasize that bidirectional routing is essential in 2-tier (and many-tier) V-FET designs for the minimum area.
IV. STANDARD CELL DESIGN METHODOLOGY FOR MANY-TIER V-FETs
This section describes the design methodology and a set of algorithms for 2-tier V-FET standard cell designs. We detail our proposed design methodology and algorithms so that we can generally expand to many-tier V-FET designs. Our design methodology is inspired by [30]. The following subsections discuss how the algorithms in this work are expandable and improve on the previous study.
A. GENERAL DESIGN METHODOLOGY
Our methodology follows the design order of Algorithm 1. First, given a standard cell netlist, this netlist is partitioned into clusters (see Sec.IV-A1 for details). Then, Cluster Placement extracts the ordering between clusters. Once we obtain the cluster order, we perform Mini-Cluster Placement to finalize the one-line ordering of the transistors. Then, we perform N-tier V-FET transistor placement and perform manual transistor placement. Finally, we perform Net Routing to finalize our standard cell design. Net Routing is a normal routing step that is performed in conventional standard cell designs. Our methodology assumes that the transistors are placed in one design track. Thus, in this one-track design, we assume that VDD (PFETs) is placed on the top, and VSS (NFETs) is placed on the bottom.
1) CLUSTER PARTITIONING
We define a cluster as a group of transistors (both PFET and NFET) gathered for a special purpose. The concept and formation of a cluster are identical to those in [30]. Given a standard cell netlist, transistors form a network of PFETs and NFETs to generate a specific output signal. For example, an INV requires one PFET and one NFET, and a NAND2 (or a NOR2) requires two PFETs and two NFETs to generate an output signal ZN. Two transistors (e.g., 1 PFET and 1 NFET) form a cluster in an INV gate, and four transistors (e.g., 2 PFETs and 2 NFETs) form a cluster in a NAND2 (or a NOR2) gate. We define these gates (INV, NAND2, and NOR2) as single-cluster gates.
Contrary to the concept of single-cluster gates, complex-cluster gates require two or more clusters to form a gate. For example, a BUF is a complex-cluster gate that requires two clusters for its gate design. In a BUF, an INV output is cascaded into another INV to export an output Z. Likewise, AND or OR gates are also complex-cluster gates because they require a NAND or NOR cluster cascaded with an INV. Therefore, a standard cell will be either a single-cluster gate or a complex-cluster gate.
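As a concrete illustration of the single- versus complex-cluster distinction, the sketch below counts the stage-output nets of a cell (the cell outputs plus any internal net that is both driven by source/drain terminals and used as a gate input); one stage output means a single-cluster gate. This is our own simplified reading of the definition, not the authors' code.

```python
def stage_outputs(transistors, primary_outputs, rails=("VDD", "VSS")):
    """Nets that terminate a cluster: the cell outputs plus any internal
    net that is both driven by a source/drain terminal and used as a gate."""
    driven = {t["source"] for t in transistors} | {t["drain"] for t in transistors}
    gated = {t["gate"] for t in transistors}
    internal = (driven & gated) - set(rails)
    return internal | set(primary_outputs)

def is_single_cluster(transistors, primary_outputs):
    # one stage output -> single-cluster gate (INV, NAND2, NOR2, ...)
    # several          -> complex-cluster gate (BUF, AND2, OR2, ...)
    return len(stage_outputs(transistors, primary_outputs)) == 1

# Example: a BUF (two cascaded inverters) has stage outputs {n1, Z}.
buf = [
    {"name": "P1", "gate": "A",  "source": "VDD", "drain": "n1"},
    {"name": "N1", "gate": "A",  "source": "VSS", "drain": "n1"},
    {"name": "P2", "gate": "n1", "source": "VDD", "drain": "Z"},
    {"name": "N2", "gate": "n1", "source": "VSS", "drain": "Z"},
]
print(is_single_cluster(buf, ["Z"]))   # False: a BUF needs two clusters
```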
B. CLUSTER PLACEMENT
Once we partition the clusters, we perform Cluster Placement based on Algorithm 2. A set of clusters can be treated as a graph, where the nets are edges and the clusters are vertices. Our goal in this placement is to minimize the wirelength between clusters. In our work, we propose a modified force-directed graph for cluster ordering. A force-directed graph [43], [44] performs a placement of vertices that finds equilibrium in the given solution plane based on two forces, which we refer to as the "Pullforce" and "Pushforce". A Pullforce is an attractive force that occurs between vertices connected by edges (e.g., 'nets' in a netlist). On the other hand, a Pushforce is a repulsive force between all vertices in the solution plane. The Pullforce follows Hooke's law, and the Pushforce follows Coulomb's law, as in Eq. 1 and 2.
In their basic forms, the Pullforce is F_pull = k_1 · x (Eq. 1, Hooke's law) and the Pushforce is F_push = k_2 · q_1 · q_2 / x^2 (Eq. 2, Coulomb's law), where x is the distance between clusters (in m), k_1 is the spring constant (in N/m), k_2 is 8.987 × 10^9 N·m^2·C^-2, and q_1 and q_2 are the charge magnitudes (in C).
We adjust Eq. 1 and 2 based on the following rules:
1) We consider each net between clusters to be a separate edge between vertices. In other words, clusters with more connections between each other will have stronger Pullforces.
2) We scale the strength of the Pullforce by the sizes of the clusters involved, which means that larger clusters (i.e., clusters with more transistors) have weaker Pullforces. This scaling prevents huge clusters from being placed very close to each other. Note that huge clusters consume significant design area; thus, providing opportunities for small clusters to be placed at their optimal locations may be more beneficial from the perspective of total wirelength.
With these adjustments, Eq. 1 and 2 become Eq. 3 and 4, where the sizes of the clusters are s_1 and s_2, the number of nets between the clusters is N_net, and k_3 and k_4 are adjustable constants. With these forces present in the solution plane, each vertex experiences a sum of forces from the other vertices. The Sumforce is the sum of the Pullforce and the Pushforce, which can be expressed as a vector in the x and y directions. Given an initial location, the vertices traverse the solution plane based on the Sumforces between vertices and edges. Eventually, when sufficient time has elapsed, the vertices settle into their equilibrium state.
In our algorithm, we fix one vertex at its initial location to reduce runtime. Unlike [30], we place an additional vertex (the Pulling Vertex) at the far end of the x-direction plane (−∞) that applies a very weak Pullforce; this Pulling Vertex only pulls the vertex that is farthest from the fixed (stable) vertex. This concept has the following advantages: First, the purpose of the force-directed graph is to obtain the ordering of the vertices, and this weak Pullforce helps the solution become naturally aligned in the x-direction. Second, our method requires less computation. [30] placed M vertices on the top and the bottom of the solution plane to guide the clusters in the x-direction, so the solution plane must calculate the Sumforce among N (clusters) + 2M vertices. Our methodology, in contrast, includes only one additional guiding vertex, i.e., N + 1 vertices instead of N + 2M. When the solution converges to equilibrium, we extract the x-coordinates and assign the cluster order accordingly.
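Since Eq. 3 and 4 are not reproduced in the text, the sketch below uses one plausible form consistent with the stated rules (Pullforce scaled up by the net count and down by the cluster sizes, Pushforce inversely proportional to squared distance); the exact expressions, constants, and convergence schedule are assumptions, not the authors' implementation.

```python
import math, random

def order_clusters(sizes, nets, k3=1.0, k4=0.05, steps=3000, dt=0.01):
    """Force-directed cluster ordering in the spirit of Algorithm 2.

    sizes: dict cluster -> transistor count
    nets:  list of (cluster_a, cluster_b) pairs; a repeated pair means more
           nets between the two clusters and therefore a stronger Pullforce.
    Returns the clusters sorted by their converged x coordinate.
    """
    ids = list(sizes)
    random.seed(0)
    pos = {c: [random.random(), random.random()] for c in ids}
    fixed = ids[0]
    pos[fixed] = [0.0, 0.0]                          # one vertex stays fixed

    net_count = {}
    for a, b in nets:
        key = frozenset((a, b))
        net_count[key] = net_count.get(key, 0) + 1

    for _ in range(steps):
        force = {c: [0.0, 0.0] for c in ids}
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                dx = pos[b][0] - pos[a][0]
                dy = pos[b][1] - pos[a][1]
                dist = math.hypot(dx, dy) or 1e-9
                ux, uy = dx / dist, dy / dist
                push = k4 / dist ** 2                # Coulomb-like repulsion
                n = net_count.get(frozenset((a, b)), 0)
                pull = k3 * n * dist / (sizes[a] * sizes[b]) if n else 0.0
                f = pull - push                      # net force along a -> b
                force[a][0] += f * ux; force[a][1] += f * uy
                force[b][0] -= f * ux; force[b][1] -= f * uy
        # weak Pulling Vertex far to the left: pulls only the vertex
        # farthest from the fixed vertex toward -x
        farthest = max(ids, key=lambda c: math.hypot(pos[c][0], pos[c][1]))
        force[farthest][0] -= 0.02
        force[fixed] = [0.0, 0.0]                    # keep the anchor in place
        for c in ids:                                # clamp steps for stability
            pos[c][0] += dt * max(-5.0, min(5.0, force[c][0]))
            pos[c][1] += dt * max(-5.0, min(5.0, force[c][1]))

    return sorted(ids, key=lambda c: pos[c][0])

# Example: three clusters; c0-c1 share two nets, c1-c2 share one net.
print(order_clusters({"c0": 4, "c1": 2, "c2": 4},
                     [("c0", "c1"), ("c0", "c1"), ("c1", "c2")]))
```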
C. MINI-CLUSTER PLACEMENT
We propose a Mini-cluster Placement process that determines the order of transistors once the Cluster Placement is complete. Algorithm 3 describes the general flow. Mini-clusters are gatherings of transistors in series or parallel that we consider as transistor groups. The Mini-cluster concept is to reduce the complexity of the netlist by archiving non-critical nets and transistors. The following are the rules for creating a Mini-cluster. 1) Each PFET or NFET forms a separate Mini-cluster.
2) The start of a Mini-cluster (=Level 0 parent vertex) is the output net (e.g., ZN = parent net) of a cluster.
3) A Level 0 parent vertex is the lowest level vertex.
A child vertex of the Level 0 parent vertex is a Level 1 parent vertex.
4) A Mini-cluster consists of a parent net and a child net.
5) From a parent net, a Mini-cluster can be a group of series or parallel transistors.
a) Until a child net diverges to two (or more) child transistors, or a child net becomes VDD/VSS in a series connection, those transistors form a Mini-cluster.
b) A group of parallel transistors can be a Mini-cluster if the source and drain terminals of the transistors are connected to the same parent and child nets.
The key concept of our placement is to convert as much parallel transistor data into serial transistor data as possible. A cluster will form a tree of Mini-clusters, as in Figure 8 (a), for both PMOS and NMOS. If the tree structure is that of (a), the Mini-clusters will be in one line, as shown in (b), because child nodes are always placed to the left of the parent node. However, the designer must still decide which child node should be placed closer to the parent node than the other. Note that in (a) there is no information on the Lv.1 nodes for determining which node should be placed closer to the parent node. However, a standard cell typically has a complementary network between PFETs and NFETs. Therefore, our algorithm scans through the complementary FETs (NFETs in this case) and searches for serial connections in the corresponding Mini-cluster. Figure 8 (c) shows an example of an actual transistor network, and (d) shows how we express this in Mini-clusters. Our design step of P/N Comparison traverses the P/NFETs and checks whether there is a corresponding complementary Mini-cluster pair. Note that in (d) the NFET network has two parallel Lv.0 nodes, but the PFET network has one Lv.0 node. Thus, our algorithm starts from the Lv.0 node in the PFET network for ordering. It starts from the lower-level node and checks whether the input pins are identical to those of its complementary node. For example, the NFET Lv.0-2 node has inputs (A and B) identical to the PFET Lv.0-1 node. Thus, NFET Lv.0-2 becomes the complementary node for PFET Lv.0-1. Once the target Mini-cluster and its complementary node are set, our algorithm checks for serial connections. Due to the complementary characteristics, one Mini-cluster or its complementary node will typically consist of a serial transistor connection, and the counterpart node follows the serial order of those transistors. In (d), NFET Lv.0-2 has a serial connection in which the closest transistor is the one with gate = A, followed by the one with gate = B. Thus, PFET Lv.0-1 follows the same order. In (d), the numbers after # indicate the order in which the placement should be performed in our algorithm. If our algorithm cannot find a serial connection in the Mini-cluster pair, it places the transistors in their numerical order. In summary, we generate Mini-clusters and scan for the serial information for transistor ordering.
The advantage of our Mini-cluster Placement is that (1) it is more intuitive for understanding the structure of transistors, and (2) it requires fewer calculations. [30] required an analysis that worked through all possible transistor ordering combinations inside a cluster, which was no different from brute-force tuning for the smallest wirelength. However, our methodology provides a method to analyze the netlist, and the transistor ordering process becomes significantly shorter.
D. N-TIER TRANSISTOR ALLOCATION AND MANUAL TRANSISTOR PLACEMENT
The one-line transistor ordering we obtained in the previous subsections is the solution required for placement in a 1-tier V-FET standard cell design. In fact, it is a universal solution because the goal of placing highly-connected transistors as closely as possible does not change even when we design V-FETs in many tiers. Our N-tier transistor allocation, proposed in Algorithm 4, describes how gathering closely-related transistors in the same footprint yields a more significant area reduction. First, given an order of transistors, we determine whether the M-th transistor and the following N − 1 transistors meet the requirements for placement on the same footprint. For example, if N = 2, the netlist of those two transistors should be identical to the structure in Figure 4. Likewise, V-FET designs with N ≥ 3 tiers follow the same methodology: any many-tier V-FET should meet the netlist structure requirement to be placed on the same footprint. Once all of the processes described in Sec. IV are complete, we manually select the location of each transistor for best performance and then perform regular routing. Some details of the manual design are discussed in Sec. V-B7.
Algorithm 4 N-Tier Transistor Allocation
Data: Transistor order of a standard cell
Result: Minimum footprint layout and transistor count per unit footprint
1: Generate Mini-clusters;
2: for each transistor TR_i in the given order do
3:     if TR_i ... TR_(i+N) matches the required pattern then
4:         Gather the TRs into one footprint;
5:     end
6: end
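A runnable rendering of this allocation step might look like the following; the "required pattern" check is our assumption that consecutive transistors must form a series chain (drain of the lower device feeding the source of the upper one, as in Figure 4), and the data layout is purely illustrative.

```python
from typing import Dict, List, Sequence, Tuple

Net = str
Terminals = Tuple[Net, Net, Net]          # (source, gate, drain)

def allocate_n_tier(order: List[str],
                    netlist: Dict[str, Terminals],
                    n: int = 2) -> List[List[str]]:
    """Greedy N-tier footprint allocation in the spirit of Algorithm 4."""
    def stackable(group: Sequence[str]) -> bool:
        # assumed pattern: drain of the lower transistor == source of the upper one
        return all(netlist[lo][2] == netlist[hi][0]
                   for lo, hi in zip(group, group[1:]))

    footprints, i = [], 0
    while i < len(order):
        group = order[i:i + n]
        if len(group) == n and stackable(group):
            footprints.append(list(group))     # N transistors share one footprint
            i += n
        else:
            footprints.append([order[i]])      # fall back to a 1-tier footprint
            i += 1
    return footprints

# Example: the series NFET pull-down chain of a NAND2 stacks into one footprint.
nand2_nfets = {"N1": ("mid", "A", "ZN"), "N2": ("VSS", "B", "mid")}
print(allocate_n_tier(["N2", "N1"], nand2_nfets, n=2))   # [['N2', 'N1']]
```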
V. RESULTS AND DISCUSSIONS
This section presents our results in comparison with [30]. We compare 1-tier and 2-tier V-FET standard cells in terms of area, wirelength, and parasitic capacitance. In the subsections, we describe the conditions of our experiment; the results and a detailed analysis with discussions for future study then follow. Our primary goal for the standard cell designs is a minimum-area layout.
A. EXPERIMENTAL CONDITIONS
To conduct a fair comparison between 1-tier V-FETs and 2-tier V-FETs, we set our experimental conditions similar to those of [30]. We describe some important details below: 1) The design conditions for the layout followed the details of [30]. The metal width and pitch were identical, and we designed both 1-tier and 2-tier V-FET standard cells to have five horizontal routing tracks in a 5 nm technology node. Vertical connections between two vias were considered as one routing track, and all detailed device/interconnect dimensions were the same as H-FETs for a fair comparison. 2) We compared the results among 30 standard cells.
These cells are noted in Table 1, and the GDSII layouts of DFF, AND2, and HA are described in Figure 9. Since all placements could be accomplished in less than one second, we did not report runtimes.
3) The number of channels used for transistor design was identical to [30]. A transistor consists of four nanowires. 4) The standard cell designs followed the flow of Sec. IV. Once we completed these steps, we manually designed the standard cells for the best performance. We illustrate the reasoning for this manual process in Sec. V-B7. 5) Once we designed the standard cells in a GDSII format, we wrote a parasitic technology file that analyzed the 3-D structure of this layout and extracted the parasitics. For this process, we used Synopsys QuickCap [36].
B. RESULTS ANALYSIS

1) AREA

Table 2 shows our comparison results. For the layout area, we see a significant reduction of 35.6% on average. However, it is important to understand that 2-tier V-FET standard cells do not show a 50% area reduction on average compared to 1-tier V-FETs. As mentioned in Sec. IV-D, the footprint reduction of a 2-tier V-FET occurs only under specific layout conditions (as in Figure 4), which not all transistors can meet. INV and BUF are good examples of 2-tier V-FET cells that do not result in any area reduction at all. Besides, some standard cells are formed as complex-cluster cells; the boundaries of these complex cells cannot meet the conditions because the cluster boundaries do not share a common SD2 net. Thus, a 50% area reduction is the theoretical maximum but is very challenging to achieve. Nevertheless, we emphasize that our result of −35.6% represents a significant area reduction achieved via a non-scaling approach. For reference, the area reduction from H-FETs to 1-tier V-FETs was −22.5%, and the area reduction from H-FETs to 2-tier V-FETs reaches −50.1%, which is highly significant.
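As a quick sanity check on the averaged figures (per-cell averages do not strictly compound, but the quoted numbers are consistent with each other):

```python
# H-FET -> 1-tier V-FET: -22.5%; 1-tier -> 2-tier V-FET: -35.6%
h_to_1tier = 1.0 - 0.225
tier1_to_2tier = 1.0 - 0.356
print(f"H-FET -> 2-tier area factor: {h_to_1tier * tier1_to_2tier:.3f}")
# prints ~0.499, i.e. the -50.1% reduction quoted for H-FET -> 2-tier V-FET
```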
2) WIRELENGTH -GENERAL
Unlike the significant area reduction, 2-tier V-FET standard cells show a 16.5% average wirelength increase compared to 1-tier V-FET cells. This result is contrary to the expectation that smaller cells would report smaller wirelength than larger cells. However, as the vertical design space becomes deeper in the z-direction, 2-tier V-FETs face more unexpected challenges (see Sec. V-B4 for details). From a general point of view, the increase of wirelength in 2-tier V-FETs provides the insight that 2-tier (or any type of many-tier) V-FETs may not be the optimal structure for wirelength reduction in certain standard cells. In addition, the vertical routing lengths from the top to bottom metal are now non-negligible heights. Still, we emphasize that the wirelength of 2-tier V-FETs is similar to that of H-FETs (−0.3%).
3) WIRELENGTH -BREAKDOWN
We provide the insight that the lateral wirelength of 2-tier V-FETs is similar to that of 1-tier V-FETs. Table 3 provides a breakdown of the wirelength. In addition to Table 2, we have examined the wirelength of the standard cells in two different categories: one is the wirelength for signal routing only, and the other is the signal routing wirelength that does not consider any vertical vias. This table provides the following information: First, the lateral signal wirelength of 2-tier V-FETs is similar to that of 1-tier V-FET designs. Thus, we note that vertical routing takes up a significant portion of 2-tier designs. However, while the average area of 2-tier V-FETs is reduced, the lateral wirelength stays almost the same; considering the metric of wirelength per area, 2-tier V-FET routing is not as efficient as 1-tier routing. Second, the usefulness of the 2-tier V-FET interconnects differs widely among standard cells. Typically, a 2-tier V-FET is very useful for reducing the size and wirelength of large-sized cells, but it is counter-effective in small-sized cells. For example, a 2-tier AOI211 shows a 68.4% increase in signal wirelength compared to that of a 1-tier V-FET. In contrast, a FA shows a 15.6% reduction in signal wirelength compared to its 1-tier counterpart. Finally, 2-tier V-FETs can be designed to be almost identical to 1-tier FETs based on the designer's decision. As shown in Fig. 7 (c), the BUF design in the 2-tier case is nearly identical to a 1-tier BUF. In certain cases, at the cost of additional area, a better design in terms of wirelength is possible for specific cells. However, this is a designer's choice for the trade-off between area and wirelength.

4) WIRELENGTH -DESIGN ISSUES

Figure 10 (a) describes some unwanted scenarios in an AND2 gate with 2-tier V-FET designs. Figure 10 (b) shows a 1-tier V-FET AND2 design for comparison. The following are some highlighted points. First, an AND2 design requires more connections (e.g., height) between the top and bottom metal. More connections translate to increased wirelength and vias. For VDD/VSS, a reinforced global VDD/VSS stripe using both the top and bottom metal will resolve this issue, but an adequate solution does not exist for signal nets. Second, some inefficient net connections occur due to routing area constraints. An internal net connected to the second gate in the AND2 uses both the top and middle metal. The best routing would be to connect the top and middle metal directly, but this is not possible in this AND2 because there is no space (poly1 blocks these two metals). Finally, net A1 faces two issues: (1) the 1st poly requires more vias to be exposed to the top metal, and (2) A1 requires a detour because the optimal space for escape routing is blocked by other nets. In summary, we emphasize that 2-tier V-FET standard cells face many design issues due to the reduced design space. Besides, we anticipate that these issues will become more severe in many-tier V-FET designs.
5) CAPACITANCE -GENERAL
As reported in Table 2, the capacitance of 2-tier V-FETs increases by 13.2% in general, despite the expectation that it would decrease. Here, we highlight some important points from our results. First, the capacitance trends differ between large-sized cells (e.g., DFF, FA, HA) and small-sized cells (e.g., INV, BUF, NAND). It is difficult to state that the total capacitance follows the general trend of total wirelength increase/decrease; however, we note that the total capacitance and the signal wirelength follow a similar trend. We also emphasize that 2-tier V-FET capacitances are reduced by 18.9% when compared to H-FET standard cells.
6) UNUSED GATES FROM THE PARASITICS PERSPECTIVE
Cells such as INV and BUF show increased capacitance even though there is no increase in area or wirelength. When a channel pillar uses only one transistor, the designer must understand that a 1-tier V-FET is more favorable than a 2-tier (or a many-tier) V-FET from the perspective of parasitics. Two factors support this statement: First, the channel pillar of a 1-tier V-FET is shorter than a 2-tier pillar. Thus, the resistance of the pillar is smaller in a 1-tier V-FET than in a 2-tier V-FET.
Second, 2-tier V-FETs are potentially exposed to more parasitic capacitance than 1-tier V-FETs. To explain this, we propose the concept of the ''design volume''. Given the same silicon footprint, the 3-dimensional device volume (=''x-width''×''y-width''×''height'') of a V-FET is greater in a 2-tier V-FET. Also, note that more metal layers are in the device volume in a 2-tier V-FET than in a 1-tier V-FET. The greater design volume means that the layout environment in a 2-tier V-FET has more obstacles than in a 1-tier environment. This environment leads to more inherent parasitic capacitance, which unexpectedly handicaps 2-tier designs. Given a standard cell that uses the same area as a 1-tier V-FET and 2-tier V-FET, a 2-tier V-FET will possibly suffer from more capacitance than in a 1-tier V-FET. In the BUF example, the total capacitance of a 2-tier BUF is 24% higher than in a 1-tier V-FET BUF design due to this. In brief, the 2-tier V-FET structure is not so advantageous when compared to a 1-tier V-FET from the perspective of parasitics unless the design footprint is reduced. We expect this trend to be similar for many-tier V-FETs.
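The "design volume" argument can be phrased as a small calculation; the numbers below are illustrative placeholders (the paper does not give absolute heights), intended only to show why the same footprint with roughly twice the stack height exposes a layout to more parasitic coupling.

```python
# Illustrative "design volume" comparison for the same silicon footprint.
footprint_x_nm, footprint_y_nm = 50, 100            # assumed cell footprint
stack_height_1tier_nm = 120                          # assumed 1-tier device + metal stack
stack_height_2tier_nm = 2 * stack_height_1tier_nm    # roughly doubled in a 2-tier pillar

def design_volume(x, y, h):
    return x * y * h

v1 = design_volume(footprint_x_nm, footprint_y_nm, stack_height_1tier_nm)
v2 = design_volume(footprint_x_nm, footprint_y_nm, stack_height_2tier_nm)
print(f"2-tier / 1-tier design volume: {v2 / v1:.1f}x")  # 2.0x: more metal, more obstacles
```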
7) HOW MANY DESIGN STEPS CAN WE AUTOMATE FOR MANY-TIER V-FET STANDARD CELLS?
Unlike 1-tier V-FET standard cell designs, we performed manual transistor placement in the last steps for 2-tier V-FET standard cells. A general rule of placing the I/O pins on the top metal and placing the internal nets in the lowest metal layer was the key idea for automation in 1-tier V-FET designs. However, cases occurred in 2-tier standard cells in which better designs were possible without following these rules. Also, the design freedom to select a tier for vertical transistor placement added complexity that our general optimization rules did not cover. Thus, we note that more studies will be necessary to define general design automation steps for many-tier V-FET standard cells.
VI. CONCLUSION
In this paper, we investigated the advantages of many-tier vertical GAAFETs (V-FETs) for logic cell designs. In particular, we investigated the advantages that 2-tier V-FETs provide compared to 1-tier V-FETs at the layout level. We proposed an optimal interconnect structure and a design methodology that optimizes arbitrary many-tier V-FET standard cells. We compared the area, wirelength, and capacitance of these standard cells and showed a 35.6% reduction in area, a 16.5% increase in wirelength, and a 13.2% increase in parasitic capacitance. Compared to H-FETs, our results reduced area, wirelength, and capacitance by 50.1%, 0.3%, and 18.9%, respectively. We report that these results are due to the advanced interconnect structure and optimized designs. However, we also report that the increases in wirelength are due to the unexpected increase in vertical interconnects and that the capacitance increases are due to the longer vertical channels compared to 1-tier V-FETs. We report a general trend that 2-tier V-FETs show better design metrics in larger standard cells but worse metrics in smaller standard cells; in terms of resistance and parasitics, small standard cells in 2-tier form are not as efficient as their 1-tier V-FET counterparts. Thus, careful design of interconnects and better devices should be supported for better designs and performance.
|
v3-fos-license
|
2024-02-06T17:29:34.474Z
|
2024-02-01T00:00:00.000
|
267480847
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/78C4EFC35C0F68D983112A3A565A1283/S2059866124000207a.pdf/div-class-title-diversifying-the-research-landscape-assessing-barriers-to-research-for-underrepresented-populations-in-an-online-study-of-parkinson-s-disease-div.pdf",
"pdf_hash": "55a5815a10485e4bce92fc386e219d1dfe5107c3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3021",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"sha1": "200375e300fedf88a8829b685e3be893adb248d9",
"year": 2024
}
|
pes2o/s2orc
|
Diversifying the research landscape: Assessing barriers to research for underrepresented populations in an online study of Parkinson’s disease
Despite federal regulations mandating the inclusion of underrepresented groups in research, recruiting diverse participants remains challenging. Identifying and implementing solutions to recruitment barriers in real time might increase the participation of underrepresented groups. Hence, the present study created a comprehensive dashboard of barriers to research participation. Barriers to participation were recorded in real time for prospective participants. Overall, 230 prospective participants expressed interest in the study but were unable to join due to one or more barriers. Awareness of the most common obstacles to research in real time will give researchers valuable data to meaningfully modify recruitment methods.
Introduction
Medical studies with human volunteers are necessary to evaluate interventions focused on new medicines, products, and therapeutic procedures for healthcare and health behavior [1]. Despite the importance of clinical trials in advancing medicine, recruiting participants for these studies remains a challenge for researchers and physicians, especially among minority populations [2,3]. Even when federal regulations mandate the inclusion of underrepresented groups (e.g., racial and ethnic minority populations, rural populations, low-income populations, and populations with low educational attainment) in research [4,5], the involvement and systematic inclusion of these communities in clinical investigation remains disproportionately low [6,7]. Recruiting Underrepresented Groups (URGs) to research studies is crucial to more comprehensively understand diseases and treatment development as well as improve health care delivery [8]. Furthermore, engaging URGs in research is critical to significantly reducing health disparities and driving equity in health care delivery [9,10]. Regardless of the increasing awareness of barriers to research, investigators have continued to focus on the underrepresentation of minority populations, rather than specific or comparative strategies to overcome these barriers [11-14].
Community-based research has shown that collective action communities, including social relations between local organizations and the surrounding community, help build social cohesion, social trust, and participation, all of which are fundamental to removing barriers to healthcare and research access [10,15,16]. Yet most previous research on diverse and inclusive research recruitment does not extend these findings to all URGs; nor does the extant literature develop strong, testable, and scalable engagement and recruitment frameworks for these individuals in these communities [17]. Critically and most surprisingly, to our knowledge no study has identified and implemented solutions to recruitment barriers in real time using the same prospective sample [17].
Hence, the Fostering Inclusivity in Research Engagement for Underrepresented Populations in Parkinson's Disease (FIRE-UP PD) Study [18], a multi-site online study funded by the Michael J. Fox Foundation (MJFF), created a dashboard of barriers to research participation to comprehensively capture the multifaceted and multilayered barriers prohibiting prospective participants from participating in the study. This study was designed to mimic barriers to research participation in Fox Insight, an online PD study with the Michael J. Fox Foundation, to understand whether research barriers can be manipulated through intervention to increase URG participation. The FIRE-UP PD Study was a stratified-randomized study in which eight sites were selected according to a proposed intervention. This study aimed to assess trust in and engagement toward PD research among URGs, increase participation of URGs in Fox Insight, and identify and disseminate methods and best practices to engage and recruit URGs in PD clinical research over eight months [18].
In this report, we outline barriers to recruitment in the FIRE-UP PD Study within the control and intervention groups. Through better understanding of the real-time barriers to clinical investigation in Parkinson's disease, researchers can more strategically prioritize health equity and diversify the traditional research landscape through community-based approaches by eliminating these common barriers to research that routinely obstruct members of underrepresented groups from participating. Research barriers specific to the FIRE-UP PD Study that have been overcome will be described in our upcoming manuscript, which will detail all study findings.
Methods
Methods for the FIRE-UP PD Study have been previously described in detail [18]. Briefly, the FIRE-UP PD Study asked eight participating sites to identify a URG or geographical region to measure changes in awareness and trust in Parkinson's Disease (PD) research, along with engagement and interest in PD protocols, through the use of several surveys. The FIRE-UP PD Study measured changes in awareness and trust in PD research using the Trust in Medical Research Scale. Research engagement was measured through use of the Patient Engagement in Research Scale. Finally, interest in three hypothetical PD clinical trial protocols was measured, leveraging surveys developed by Dr. Allison Willis, which were collectively titled the "PD Research Participation Survey" for the purpose of this study [18]. All participating sites were provided with tablets to perform surveys and collect data using the REDCap database, a secure survey platform. Questionnaires for these surveys can be found in Sanchez et al., "Designing the Fostering Inclusivity in Research Engagement for Underrepresented Populations in Parkinson's Disease study" [18]. Surveys were collected for all sites at two separate time periods, pre-intervention and post-intervention, allowing the study team to evaluate changes in awareness after interventions. Interventions included developing educational tools to engage local communities, building partnerships within local PD communities, and recruiting stakeholders to reimagine medical and research information for the community. Additional information about interventions can also be found in Sanchez et al. [18].
The FIRE-UP PD Study also aimed to increase participation in Fox Insight (an online study with MJFF). Sites were randomly assigned to either the intervention or control condition. Interventions included developing educational tools to engage local communities, building partnerships within local PD communities, and recruiting stakeholders to reimagine medical and research information for the community [18]. Researchers adopted the framework of Picillo et al. [19] to systematically outline barriers to recruitment in a research barriers dashboard. Barriers fell within five major categories: infrastructure, nature of the research, recruiter characteristics, participant characteristics, and community. The first four categories were developed by Picillo and colleagues [19]; the final category, community, was added by the current researchers to capture the team's focus on community-centered recruitment approaches. The majority of the recorded barriers concerned infrastructure and participant characteristics as drivers of willingness to participate. See Table 1 for more information.
Table 1 outlines the tracked study barriers, originally conceptualized by Picillo et al. and modified by the current authors through the addition of the "Community" category.
Prospective participants at all sites who were not recruited into the study had their barriers to participation recorded in real time via the research barriers dashboard developed for this project. The barriers dashboard did not require local Institutional Review Board approval because no personally identifying information was collected; researchers only asked prospective participants for their reason for declining participation in this study. Barriers tracked included those related to language needs, digital limitations, trust, time commitment, transportation, contact information, and privacy concerns, and participants were able to describe other limitations not captured under the aforementioned categories.
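To make the real-time tracking concrete, the sketch below shows one way such dashboard records could be tallied by barrier category. The field names and category labels are illustrative assumptions, not the actual FIRE-UP PD dashboard schema.

```python
from collections import Counter

# Hypothetical barrier categories; illustrative only, not the study's exact labels.
TRACKED_BARRIERS = {
    "language", "digital_limitations", "trust", "time_commitment",
    "transportation", "contact_information", "privacy", "other",
}

def tally_barriers(records):
    """Count how often each barrier was reported; one record per prospective
    participant, each listing one or more barriers."""
    counts = Counter()
    for record in records:
        for barrier in record.get("barriers", []):
            counts[barrier if barrier in TRACKED_BARRIERS else "other"] += 1
    return counts

# Example monthly export from a dashboard (invented data).
records = [
    {"site": "A", "barriers": ["transportation", "time_commitment"]},
    {"site": "B", "barriers": ["digital_limitations"]},
]
print(tally_barriers(records))
```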
Data from the research barriers dashboard were collected on a monthly basis by the Community Access and Research Engagement Research Center, located at Massachusetts General Hospital, which served as the Recruitment and Engagement Coordinating Center for the FIRE-UP PD Study.
Results
Multiple barriers to research participation were recorded for each prospective participant. A total of 488 participants were recruited to the FIRE-UP PD Study, with 295 participants recruited at intervention sites and 193 participants recruited at control sites. Two hundred and thirty prospective participants, the equivalent of 47% of all recruited participants, expressed interest in the FIRE-UP PD Study but were otherwise unable to participate due to one or more of the tracked barriers. Demographic data could not be collected from prospective participants who were unable to participate, as they were not enrolled in the study, but demographic data from recruited participants demonstrated that those able to participate were disproportionately White and highly educated, with an annual household income of $50,000 or higher. Sanchez et al. [18] describe the demographic composition of enrolled participants in the FIRE-UP PD Study in more detail.

The barriers recorded in real time in this study echo barriers reported in the literature [11-13,16,18,20], which disproportionately affect URG research participation. Consequently, by continuing to focus on recruitment as an outcome over patient-centered measures of engagement, trust, and/or empowerment, the notable lack of representation among URGs in clinical trials will likely persist. By enhancing comprehension and interpretation of the interconnected obstacles hindering prospective participants from becoming involved in research, investigators can be better equipped to modify research recruitment methods as needed and develop a more localized approach to reach the populations they hope to recruit, as there is no single activity that will allow for sustained or easy trial recruitment overall. Subsequently, by addressing and eliminating frequent barriers to clinical investigation, researchers may better promote and prioritize health equity in traditional research spaces by expanding studies to participants historically excluded through common research barriers. Through this intentional expansion of research to URGs via community-based methods, harmful patterns of exclusionary research may begin to be broken and recalibrated to address the needs of the community at large.
Author contributions. Dr. Angie V. Sanchez contributed to the conception and design of the work, the data collection, the data analysis and interpretation, as well as the manuscript draft and revisions. Juliana M. Ison contributed to the data analysis and interpretation as well as the manuscript draft and revisions. Helen Hemley and Dr. Jonathan D. Jackson contributed to the conception and design of the work, the data collection, and the manuscript draft. Dr. Sanchez takes responsibility for the manuscript as a whole.
Funding statement. The study was funded by The Michael J. Fox Foundation for Parkinson's Research (MJFF) under grant number 2018A009209.
Table 1. List of tracked barriers, as categorized by the modified Picillo et al. framework.
|
v3-fos-license
|
2021-04-10T06:16:47.272Z
|
2021-04-09T00:00:00.000
|
233190767
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/hex.13243",
"pdf_hash": "d66212822ab34f6cfe008573be287c0dc042edf9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3026",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "32506570ebbd9736addf2b738511965bcb30ad15",
"year": 2021
}
|
pes2o/s2orc
|
The story vs the storyteller: Factors associated with the effectiveness of brief video‐recorded patient stories for promoting opioid tapering
Abstract Background Narrative communication is often more persuasive for promoting health behaviour change than communication using facts and figures; the extent to which narrative persuasiveness is due to patients’ identification with the storyteller vs engagement with the story is unclear. Objective To examine the relative impacts of patient engagement, age concordance and gender concordance on perceived persuasiveness of video‐recorded narrative clips about opioid tapering. Methods Patient raters watched and rated 48 brief video‐recorded clips featuring 1 of 7 different storytellers describing their experiences with opioid tapering. The dependent variable was clips’ perceived persuasiveness for encouraging patients to consider opioid tapering. Independent variables were rater engagement with the clip, rater‐storyteller gender concordance and rater‐storyteller age concordance (<60 vs ≥60). Covariates were rater beliefs about opioids and opioid tapering, clip duration and clip theme. Mixed‐effects models accounted for raters viewing multiple clips and clips nested within storytellers. Results In multivariable models, higher rater engagement with the clip was associated with higher perceived persuasiveness (coefficient = 0.46, 95% CI 0.39‐0.53, P < .001). Neither age concordance nor gender concordance significantly predicted perceived persuasiveness. The theme Problems with opioids also predicted perceived persuasiveness. Conclusion Highly engaging, clinically relevant stories are likely persuasive to patients regardless of the match between patient and storyteller age and gender. When using patient stories in tools to promote health behaviour change, stories that are clinically relevant and engaging are likely to be persuasive regardless of storytellers’ demographics. Patient or public contribution Patients were involved as storytellers (in each clip) and assessed the key study variables.
| INTRODUCTION
Narrative, or story-based, communication is often more effective for promoting health behaviour change than didactic communication using facts and figures. For example, randomized clinical trials comparing narrative vs non-narrative patient education videos have found that narrative videos resulted in significantly better blood pressure control among African American patients with uncontrolled hypertension. 1 A randomized trial comparing narrative vs non-narrative films promoting cervical cancer screening found that women in both arms demonstrated improvements in knowledge and attitudes about cervical cancer screening. These improvements were significantly greater for women who watched the narrative video (vs the non-narrative video), and only women who watched the narrative video were significantly more likely to have undergone or scheduled cervical cancer screening 6 months later. 2 Narrative transportation theory posits that the persuasiveness of narrative communication is driven by the extent to which a narrative's audience feel 'transported' into the story while processing it, and that greater transportation is driven by emotional engagement with the narrative, identification with the storyteller and perceptions of narrative authenticity. [3][4][5] Recent meta-analyses have concluded that narrative messages decrease resistance significantly more than nonnarrative ones 6 and that narratives delivered via video or audio are typically more persuasive than written narratives. 7 However, literature on the effects of similarity between storytellers and patients on persuasiveness, including the effect of gender and age concordance, is mixed. 8 Ooms et al examined the effects of gender and age concordance on participants' intent to conduct cancer self-examinations after reading narrative health messages about gender-related cancers (ie breast or testicular cancer). They found that younger student participants identified significantly more with younger (vs older) storytellers, but that neither age nor gender concordance was associated with intent to perform cancer-related self-examinations or donate to cancer charities. 9 In contrast, when Chen et al conducted a similar experimental study to examine factors associated with perceived persuasiveness of a story about diabetes, gender concordance and age concordance were strongly associated with both participants' identification with the storyteller and the story's perceived persuasiveness. 10 Understanding the impact of age and gender concordance on narrative persuasiveness is particularly important in health communication, where there is a need for brief, effective interventions that can be incorporated into clinical workflows to promote patient health behaviour change. Patient education videos are commonly used in clinical studies, and a systematic review of such videos found that videos delivering story-based messages were more persuasive than non-narrative videos. 11 Storytellers' age and gender are easy to assess and so are often manipulated by health researchers as a way to increase patients' identification with a story and, by extension, its perceived persuasiveness. 1,12 Further research on the extent to which age concordance and gender concordance affect the perceived persuasiveness of narrative videos could inform future studies and health intervention development by clarifying the extent to which matching patient and storyteller demographics is an effective strategy for increasing persuasiveness. 
We also know little about the extent to which persuasiveness is associated with patients' identification with the storyteller vs their engagement with the story overall.
To examine the relative contribution of these factors, we analysed patient ratings of brief video clips showing patient stories that were collected to create a patient education video about opioid tapering. We focused on opioid tapering because reducing rates of opioid-related harms (including overdose and opioid use disorder) is an urgent public health priority in the United States. 13 In addition, reviews have found that narrative messages tend to be less effective at changing intention when encouraging behavioural cessation (eg reduce opioid consumption) vs when encouraging patients to initiate preventive or health screening behaviours. 7 Thus, identifying factors associated with perceived persuasiveness is particularly important for behavioural cessation messages, because researchers need as many tools as possible to optimize the persuasiveness of behavioural cessation messages when designing videos or other health interventions. Our primary objective was to investigate whether patients' engagement with the video clip, gender concordance between patient and storyteller, and age concordance between patient and storyteller were associated with patients' ratings of stories' overall persuasiveness. We also examined other factors that could impact stories' persuasiveness, including patients' attitudes towards opioids and opioid tapering, clip duration and clip theme (ie narrative content).
| Study design and population
This study was part of a larger project to create a patient education video about opioid tapering. We recruited both compelling storytellers (who were featured in narrative video clips) and raters who evaluated those clips. Storytellers and raters were recruited in a 2-phase process from the same population using identical recruitment procedures and eligibility criteria. All participants were adults at 13 primary care clinics in Northern California who reported moderate-to-severe chronic neck and/or back pain and had either tapered down or off long-term prescription opioids (defined as at least one opioid dose per day for at least three months) within the past year.
We recruited patients aged 35-85 years because our prior research found very few patients younger than 35 who took long-term opioids for chronic back or neck pain. 14 Participant exclusion criteria were pregnancy, active cancer treatment, being enrolled in hospice or palliative care, and being prescribed opioids by specialists rather than primary care clinicians. We used an electronic health record screening algorithm to identify potentially eligible patients who met these criteria, and then gave primary care clinicians lists of their patients identified by the algorithm and asked clinicians to identify their patients who were good candidates for opioid tapering, were in the process of tapering or had finished tapering within the past 12 months. Study personnel independently contacted the identified patients, assessed their eligibility via telephone and then invited interested patients to participate.
| Storyteller identification and clip selection
To identify compelling storytellers, in the first phase of recruitment we enrolled 21 participants who took part in 4 focus groups about opioid tapering (eighty-nine percent of patients who were screened and eligible for inclusion agreed to participate). 15 During each focus group, an investigator not conducting the focus group acted as an observer and took notes on group dynamics. Researchers then identified compelling storytellers by reviewing focus group audio recordings and transcripts, and notes from focus group observers.
Compelling storytellers were defined as participants who told stories about their experiences with opioid tapering that seemed authentic, coherent and engaged or kept the attention of other focus group participants. We invited 9 compelling storytellers to participate in an additional 30-minute 1-on-1 video-recorded interview, during which time they would recount and elaborate on their personal experiences with opioid tapering. Two participants declined to be interviewed (1 due to concerns about being video-recorded and 1 due to scheduling conflicts), leaving seven compelling storytellers. From these interviews, investigators selected 48 brief video clips (mean duration 42 seconds; SD = 12; range 19-70 seconds) for possible inclusion in the patient education video.
To select clips, researchers viewed all interviews and identified video segments that showed a storyteller recounting a brief, coherent story about their experiences related to opioid tapering.
Consistent with the overall project goal, we did not select clips that explicitly discouraged opioid tapering or explicitly encouraged opioid dose escalation. All identified segments were edited into clips that were stored as separate digital files. Two examples of video clips rated in this study can be viewed in the Videos S1 and S2.
| Clip ratings
For the second phase, patient raters were recruited from the same clinics using the same methods and eligibility criteria as storytellers.
We began recruiting raters when we had almost finished conducting focus groups. Participants who were unable to participate in focus groups were eligible to participate as raters. However, focus group participants were not eligible to be raters. In addition, rater recruitment was stratified by age (<60 vs ≥60 years) and gender. To our knowledge, no raters were acquainted with any of the storytellers.
Each rater rated 24 randomly selected narrative clips, for a total of 1152 ratings (12 ratings per clip; 3 per gender-by-age category).
The 48 clips were first randomly divided into two groups ('A' and 'B') of 24 clips each. Raters were then scheduled to watch the video clips in a series of small groups. Each small group watched either the 24 clips in group 'A' or the 24 clips in group 'B'. During each small group, raters were first informed of the study purpose and then watched and rated the 24 video clips on several Likert-type items. Raters watched each clip and then immediately recorded their ratings for that clip using paper questionnaires. Rating all 24 clips took approximately 90 minutes; raters were given a break midway through to prevent fatigue. Raters were seated to ensure that they could not observe how others rated the clips. During each small group, raters viewed clips in a different random order (using a random order generated by the study biostatistician) to minimize potential order effects. Raters were assigned to small groups to ensure balanced recruitment for each gender-by-age category. Prior to viewing the clips, raters provided data on demographic characteristics (Table 1) and attitude towards opioids ( Table 2).
| Measures
Raters' baseline questionnaires included the following rater characteristics: age, gender, race, ethnicity (Hispanic vs non-Hispanic), highest education attained, employment status, annual household income (US dollars), average pain severity, duration of chronic pain and opioid tapering status (finished tapering within the past year, in the process of tapering, clinician had recommended but not yet started tapering, or clinician had not recommended tapering).
Baseline questionnaires also included covariates related to raters' beliefs about opioids and opioid tapering (which could affect perceived persuasiveness). Raters' attitudes about opioid tapering were measured using 1 item from the Prescribed Opioid Difficulties Scale 16 (Table 2) analysed as a binary variable (disagree or strongly disagree with desire to taper (reference) vs agree, strongly agree or already tapered). Beliefs about opioid effectiveness were measured by the mean of 2 items asking about opioid effectiveness ( Table 2, Cronbach's alpha = 0.95). Beliefs about opioid-related side-effects were measured by the mean of 4 items from the Prescribed Opioid Difficulties Scale and the Current Opioid Misuse Measure 17 ( Table 2, Cronbach's alpha = 0.90).
Our dependent variable was perceived clip persuasiveness, which raters assessed after viewing each clip by answering the question, 'After watching this clip, how willing do you think other people with chronic pain would be to try taking less opioid pain medication?' (rated from 1 = 'not at all willing' to 5 = 'very willing'; mean rating 2.8; SD 0.3). We designed this variable using this particular wording because the planned patient education video was intended to target patients who were candidates for tapering, and we thought that asking patients in varying stages of the tapering process about other patients' likely willingness to taper would yield more comparable ratings than asking about their own willingness.

Our independent variables were patient engagement with the clip, age concordance between rater and storyteller, and gender concordance between rater and storyteller. Raters assessed their engagement with each clip after viewing it by answering 5 items related to engagement (rated from 1 = 'not at all' to 5 = 'very'; see Table 3) that we adapted from items used by other research teams to assess the persuasiveness of narrative videos. Engagement was operationalized as the mean of these 5 items (Cronbach's alpha = 0.87).
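As an illustration of how the engagement score and its internal consistency can be computed, the sketch below derives the per-rating mean of the five items and Cronbach's alpha from a ratings-by-items array. The data shown are invented and the computation is a generic one, not the authors' code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_ratings x n_items) array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 4 ratings on the 5 engagement items (1-5 scale).
ratings = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
])
engagement = ratings.mean(axis=1)   # per-rating engagement score (mean of 5 items)
print(engagement, cronbach_alpha(ratings))
```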
Age concordance and gender concordance were operationalized as binary variables indicating whether the rater and storyteller were in the same age category (<60 vs ≥60 years) and gender category, respectively.
Finally, covariates for clip characteristics were clip duration (in seconds) and clip theme. To evaluate clip theme, two investigators first viewed each clip and then coded whether the story in that clip included any of the following nine topics: reasons for tapering, opioid-related risks, fears about tapering, benefits of tapering, communicating with clinicians, managing pain, managing opioids, getting through the day (ie completing daily activities while tapering) and support for tapering (ie support from family or friends).
Disagreements were resolved by discussion. Topics were not mutually exclusive; the story in a single clip could be coded for more than one topic (median = 3, range 1 to 5). The most common topics were managing pain (52%) and getting through the day (52%), followed by reasons for tapering (40%). The least common topics were fears about tapering (8%) and benefits of tapering (10%).
| Power considerations
Our sample size of 48 raters (12 per gender-by-age category) was chosen to achieve the aims of the larger study. 18 Assuming the sum of all variances of interest (ie, between raters, clips and storytellers) accounts for 50% to 70% of total variance (ie, an intraclass correlation [ICC] between 0.5 and 0.7), the effective number of ratings ranges from 92 to 67. Power estimates were based on data from similar scales and patient populations. 16

When examining the effects of gender and age concordance, our sample was designed so that half of the ratings were from raters who had the same gender and/or age as the storyteller and half were from raters who did not. Using the same strategy as above, for an ICC of 0.7 the effective sample size is 33 per group, and the power to detect a 0.5-point difference in persuasiveness between clips rated by age- or gender-concordant raters and those rated by non-concordant raters is at least 82%. The power would be >92% if the ICC was 0.5.
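The effective-rating figures can be reproduced with the standard design-effect formula, n_eff = n / (1 + (m − 1) × ICC), assuming 24 ratings per rater; this is a plausible reconstruction of the calculation rather than the authors' exact procedure.

```python
# Effective number of ratings under clustering, assuming the standard
# design-effect formula n_eff = n / (1 + (m - 1) * ICC).
n_ratings = 1152      # 48 raters x 24 clips
m = 24                # ratings per rater (cluster size)

for icc in (0.5, 0.7):
    design_effect = 1 + (m - 1) * icc
    print(f"ICC={icc}: effective ratings ~ {n_ratings / design_effect:.0f}")
# ICC=0.5 -> ~92 effective ratings; ICC=0.7 -> ~67, matching the text.
```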
| Statistical analyses
Given the large number of topics and the substantial proportion of clips that included multiple topics, we used latent class analysis to classify clips into homogenous classes, or themes, based on the combination of topics that each clip included. We performed a latent class analysis to simplify how we analysed clip content and reduce the chances of type I error. We examined models with two to five themes (classes) and selected the optimal number of themes based on the proportion of clips assigned to each class, interpretability of results and several goodness-of-fit criteria (Bayesian information criterion, Akaike information criterion, entropy, and Lo-Mendell-Rubin and parametric bootstrapped likelihood-ratio tests). 21,22 The highest posterior probability from the optimal latent class analysis model was used to assign each clip to one theme. For statistical analyses, each theme was operationalized as a binary variable indicating whether a particular theme was present in each clip. Themes were mutually exclusive; each clip was assigned to only 1 theme.
We started by fitting separate linear mixed-effects models to examine unadjusted (bivariate) associations between each independent variable (patient engagement with the clip, age concordance between rater and storyteller, and gender concordance between rater and storyteller) and our dependent variable (perceived persuasiveness). 23 We then added all 3 independent variables into a single multivariable model. Three random effects (for storyteller, clips nested within storyteller and rater) were included to account for raters viewing multiple clips and clips clustered within storyteller.
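A simplified sketch of such a model in Python is shown below. The column names and data file are hypothetical, and the crossed random effect for raters used in the original analyses is omitted, so this approximates rather than reproduces the published specification (which was fitted in SAS).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Simplified sketch (column names are assumptions): persuasiveness regressed on
# engagement and the concordance indicators, with a random intercept for each
# storyteller and a variance component for clips nested within storytellers.
# The crossed random effect for raters is not modeled here.
df = pd.read_csv("clip_ratings.csv")   # hypothetical long-format file: one row per rating

model = smf.mixedlm(
    "persuasiveness ~ engagement + age_concordant + gender_concordant",
    data=df,
    groups="storyteller",                    # random intercept per storyteller
    re_formula="1",
    vc_formula={"clip": "0 + C(clip)"},      # clips nested within storytellers
)
result = model.fit()
print(result.summary())
```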
We also examined associations between covariates and our dependent variable. We first analysed each covariate separately in unadjusted linear mixed-effects models controlling only for raters viewing multiple clips and for clips being nested within storytellers.
We then added covariates with P < .2 in unadjusted analyses, one at a time, to the multivariable model containing our three independent variables and retained only those that remained significantly associated with perceived persuasiveness. We used Mplus version 8 24 to perform latent class analyses.
All other analyses were implemented using SAS version 9.4 (SAS Institute Inc). All tests were two-sided, with an alpha of 0.05. Due to our relatively small sample size, we did not explore mediators or moderators of perceived persuasiveness or examine statistical interactions.
| RESULTS
As planned, we recruited 48 raters (12 for each gender-by-age category); 79% of patients who were screened and eligible for inclusion agreed to participate. Table 1 shows rater characteristics.
We chose the three-class model as the best-fitting latent class analysis model for the clip topics in our data. Table 5 shows our primary results for both unadjusted and adjusted analyses. In unadjusted analyses of our independent variables, higher patient engagement was associated with higher perceived persuasiveness and age concordance was associated with lower perceived persuasiveness. However, gender concordance was not significantly associated with perceived persuasiveness. When all 3 independent variables were examined in the same multivariable model, only patient engagement remained significantly associated with perceived persuasiveness.
Among the covariates examined, beliefs about opioid tapering (but not beliefs about opioid effectiveness, opioid-related sideeffects or clip duration) were significantly associated with perceived persuasiveness in unadjusted analyses. Patient raters who endorsed a desire to taper or had already tapered rated clips as significantly more persuasive than raters who disagreed with a desire to taper.
Two of the 3 themes, Problems with opioids and Daily functioning, were also associated with perceived persuasiveness (P < .2) in unadjusted analyses and so were considered for inclusion in the final model (Table 5).

Table 3. Items used for assessing rater engagement with video clips.
| Discussion
This study examined factors correlated with perceived persuasiveness of video-recorded narrative clips collected during a larger project on opioid tapering. Patients were involved in this study both as storytellers (in each clip) and as raters assessing perceived persuasiveness. We found that higher rater engagement with brief narrative clips was strongly and significantly associated with greater perceived persuasiveness. In contrast, age concordance and gender concordance between patient raters and storytellers were not significantly associated with clips' perceived persuasiveness in multivariable models. These findings suggest that highly engaging stories or vignettes are likely persuasive to patients regardless of the storytellers' age and gender or patients' age and gender. One possible explanation of this finding is that, while a patient's identification with the storyteller is a key element of persuasion in narrative persuasion theory, 4 being of the same gender or age group, by itself, is often not sufficient to make patients identify with storytellers in narrative videos. This interpretation is consistent both with an older review that found similarity between participants and storytellers was not typically associated with persuasiveness, 8 and with the findings from the recent study by Ooms et al. 9 That study found limited effects of age and gender concordance on persuasion; however, when the authors fit structural equation models, measurements of perceived similarity did contribute significantly to persuasiveness.
If confirmed in other studies, an implication of this interpretation is that health researchers should consider multiple factors associated with patient-storyteller identification (eg storyteller authenticity, storyteller context) rather than just simple demographics when selecting storytellers for narrative videos.
One unexpected finding was that raters who agreed or strongly agreed that tapering was beneficial found clips to be much more persuasive than raters who disagreed that tapering was beneficial. An implication of this finding is that the video produced as part of the larger project may be more persuasive to patients who are at least open to the idea of tapering opioids vs patients who believe tapering will lead to worse pain control. This finding may relate to the consensus from prior reviews that it is particularly difficult to persuade patients to stop harmful or unwanted behaviour, compared with taking on or initiating healthy behaviour. 7,8 Our finding that clips assigned to the theme Problems with opioids were more persuasive than other clips suggests that using stories that recount patients' rationale for tapering, including concerns about opioid-related risks, is a promising strategy for future interventions related to opioid tapering and, potentially, other interventions focused on behavioural cessation.
Notably, in this analysis of 48 clips featuring 7 different storytellers, storyteller identity did not independently explain any of the variance in perceived persuasiveness (ie ICC for storyteller was 0).
Raters and storytellers were recruited from the same patient population using the same procedures, so one potential explanation for this finding is that persuasiveness was driven by raters' identification with the clinical scenarios described in the clips or problems with opioids described by the storytellers rather than by the demographic characteristics of the storytellers. These findings indicate that narrative content also impacts the perceived persuasiveness of patient narratives.
Our study has some limitations. The small number of unique storytellers in the clips and the sample size may have limited our ability to detect small effects or to reliably estimate the ICC for storyteller; however, our sample was balanced to optimize our ability to evaluate the effects of age and gender concordance. We could not examine racial concordance because most raters and storytellers identified as white. Our decision to measure whether raters perceived that clips were likely to be persuasive for others, rather than whether clips were persuasive to the raters themselves, as the dependent variable is a further limitation.

Table 4. Item response probabilities of topics given a latent class for the best-fitting latent class analysis model.
ACKNOWLEDGMENTS
We are grateful to Gary Weinberg for his assistance with data collection, patient recruitment and video clip selection.
CONFLICTS OF INTEREST
No authors have any conflicts of interest to disclose.
AUTHOR CONTRIBUTIONS
Henry conceived and designed the study, obtained funding, planned the analysis, interpreted the analysis results, edited the manuscript for critical content and approved the final submission.

Notes to Table 5: (a) From linear mixed-effects regression models fitted separately for each independent variable and controlling only for raters viewing multiple clips and for clips nested within storytellers. (b) From linear mixed-effects regression models controlling for all listed independent variables and covariates that have coefficients listed in the second column and accounting for raters viewing multiple clips and for clips being nested within storytellers. (c) Analysed as a binary variable: disagree or strongly disagree with desire to taper (reference) vs agree, strongly agree or already tapered. (d) Clip themes were analysed as binary variables indicating whether each clip contained the specific theme or not (reference).
This study was approved by the University of California Davis
Institutional Review Board (protocol number 897480).
DATA AVAILABILITY STATEMENT
A de-identified version of the statistical data set used in this study is available from the corresponding author upon reasonable request.
Video clips used in this study are not available for additional research because they cannot be de-identified.
|
v3-fos-license
|
2024-06-30T15:19:00.530Z
|
2024-06-28T00:00:00.000
|
270832124
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2504-4990/6/3/69/pdf?version=1721118121",
"pdf_hash": "0dd986232fe31f27729ebf3eeb9bd14237884777",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3027",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"sha1": "6db616e0086223ca0af2b9e824193c38562fd59a",
"year": 2024
}
|
pes2o/s2orc
|
Motor PHM on Edge Computing with Anomaly Detection and Fault Severity Estimation through Compressed Data Using PCA and Autoencoder
Motors are essential to manufacturing industries, but wear can cause unexpected failure. Prognostics and health management (PHM) for motors is therefore critical in manufacturing sites. In particular, data-driven PHM using deep learning methods has gained popularity because it reduces the need for domain expertise. However, the massive amount of data involved poses challenges to traditional cloud-based PHM, making edge computing a promising solution. This study proposes a novel approach to motor PHM on edge devices. Our approach integrates principal component analysis (PCA) and an autoencoder (AE) encoder, achieving effective data compression while preserving the integrity of fault detection and severity estimation. The compressed data are visualized using t-SNE, and their ability to retain information is assessed through clustering performance metrics. The proposed method is tested on a dataset from a custom-made experimental platform, demonstrating robustness across various fault scenarios and providing valuable insights for practical applications in manufacturing.
Introduction
Motors are essential components in various manufacturing processes. However, they can wear out over time, resulting in unexpected equipment failures. PHM (prognostics and health management) is necessary to prevent unexpected equipment failures. PHM uses sensor data to assess system health, detect anomalies, and predict performance over the remaining lifespan of the asset [1][2][3]. Due to the importance of motors in the manufacturing industry, many studies focus on PHM to provide early warnings for motor failures and enable effective maintenance strategies [4][5][6][7].
There are two primary approaches to PHM for motors: model-based PHM and data-driven PHM [8]. Model-based PHM, which uses mathematical models, can be challenging, especially for complex systems [9]. However, data-driven PHM has gained considerable attention with the development of the smart factory and advances in data processing technologies, such as Industrial Internet of Things (IIoT) devices and deep learning (DL) methods. With these advancements, data-driven PHM with DL is a valuable approach in manufacturing environments, especially because it can reduce the reliance on domain expertise.
A variety of deep learning techniques have been applied to prognostics and health management (PHM) of rotating machinery. Among these, self-supervised and semi-supervised learning have emerged as effective methods to overcome the challenge of limited labeled data. For example, Cui et al. employed a self-attention-based signal transformer to achieve a diagnosis accuracy of up to 92.81% on the CWRU dataset [10]. This approach combined self-attention with contrastive learning to enhance fault detection capabilities. Similarly, Ding et al. employed momentum contrast learning for self-supervised pretraining, demonstrating substantial improvements in detection accuracy and fault occurrence time on experimental datasets [11]. Wang et al. also achieved high precision, recall, and accuracy across different datasets using a 1D convolutional neural network for self-supervised signal representation learning [12]. Semi-supervised learning has also demonstrated potential in PHM, effectively leveraging both labeled and unlabeled data. Yu et al. demonstrated the efficacy of consistency regularization with a convolutional residual network (CRN), achieving an average accuracy of 99.16% [13]. Chen et al. integrated deep neural networks (DNNs) with a Gaussian mixture model (GMM), significantly improving performance metrics [14]. Miao et al. developed an attention-assisted cyclic attention neural network, achieving an average accuracy of 99.29% [15].
Supervised learning remains vital for PHM due to its high accuracy, clear model interpretation, and versatility with various data types. Huo et al. achieved up to 100% accuracy in gear fault diagnosis using VGG and CNNs on two datasets [29]. Zhu et al. combined CNNs and GRUs in Res-HSA, demonstrating high accuracy and low error on the IEEE-PHM-2012 and C-MAPSS datasets [30]. Sun et al. achieved 99.97% accuracy in rotating machinery fault diagnosis with a bi-channel CNN and optimized Hilbert curve images [31]. Wei et al. improved accuracy under noisy conditions with the WSAFormer-DFFN model, combining CNN and self-attention structures [32]. These studies highlight the effectiveness of supervised learning for fault diagnosis using hybrid models. Integrating CNNs with attention mechanisms improves feature extraction and pattern recognition, while combining different neural network architectures captures both spatial and temporal dependencies. This approach better handles complex and noisy datasets, increasing diagnostic accuracy and confidence.
Transfer learning has been leveraged to enhance PHM by utilizing knowledge from related domains. Zhang et al. explored feature-level transfer learning, demonstrating higher fault identification accuracy under complex working conditions [33,34]. This approach has proven effective in improving diagnostic performance and adapting to new fault scenarios with limited labeled data. Unsupervised learning, though less commonly applied, has also contributed to PHM by discovering underlying patterns in unlabeled data [35][36][37].
Recent studies on motor PHM using deep learning techniques are summarized in Table 1. These studies encompass a variety of methods, including supervised, unsupervised, and semi-supervised learning approaches. The focus has been on improving fault diagnosis accuracy and generalizability by leveraging large datasets and advanced DL models. However, the need to rely on substantial data volumes and high-performance computing resources presents significant challenges for real-time data analysis and the transmission of large datasets, as data volumes in traditional cloud-based PHM systems have grown to the zettabyte scale [38]. To mitigate these challenges, edge computing has emerged as a promising solution [39,40]. Edge computing facilitates real-time data analysis and reduces bandwidth requirements by processing data close to the source. This approach enables real-time analysis without the need to transfer large datasets across networks, reducing data storage and bandwidth demands [41][42][43].
In this study, we propose a novel approach for motor PHM on edge devices. Initially, we establish an experimental framework to simulate two distinct motor fault scenarios with varying severity. Vibration data are collected through high-resolution sensors and stored directly on the edge device. Given that edge devices generally have limited computational resources and memory, we devised an efficient data compression technique using principal component analysis (PCA) and an autoencoder (AE) encoder. Our research introduces a method for anomaly detection and fault severity estimation with minimal data, emphasizing an efficient data compression technique that maintains diagnostic accuracy.
Our study is distinct in its approach of collecting data from two representative fault types and utilizing these data to evaluate the performance of the compression methods. Specifically, we assess the degree of compression achieved by PCA and the AE encoder and their impact on distinguishing between normal and faulty states, as well as the clustering performance by fault type. Our findings indicate that the degree of compression significantly impacts performance, underscoring the need for optimal data compression to enhance fault detection and classification in edge computing. This approach enables real-time analysis with limited resources while maintaining high diagnostic accuracy, making it practical for modern industrial applications.
The primary contributions of this study are as follows:
1. We introduce an efficient data compression method for motor PHM on edge devices, addressing their limited computational resources and memory.
2. We analyze how different compression levels affect fault detection accuracy and severity classification, highlighting the trade-offs between data compression and diagnostic performance.
This article is structured as follows: Section 2 describes the experimental platform and data collection process. Section 3 details the proposed approach for data compression, unsupervised anomaly detection, and fault severity estimation. Section 4 presents the results. Finally, conclusions are drawn. Our source code was built with the scikit-learn [44] machine learning library and the TensorFlow library [45].
Experimental Platform and Data Acquisition
Figure 1 shows the experimental platform we built for edge-based PHM of a motor. We utilized a 0.75 kW three-phase induction motor with a squirrel-cage rotor manufactured by SIEMENS. The rotating body, which includes a hole for inserting weights, was positioned at the center of the shaft. The accelerometer used in this experiment was model VSA005 from IFM, which has a measuring range of ±25 g and a frequency range of 0-10 kHz. IFM's VSE004 diagnostic software was used for vibration measurement and monitoring. Throughout the experiments, the motor operated at a speed of 1000 rpm.
For effective monitoring, we selected a sensor (VSA005 from IFM) with a high sampling rate of 20 kHz. A downsampling technique was used to overcome the limitation of the software (VSE004 from IFM), which could only sample data at a rate of 100 kHz. By averaging every 5 data points from the original 100 kHz raw data, we effectively transformed it into a 20 kHz format. This approach allowed us to overcome the software limitation and contributed to the overall stability and reliability of the experiment.
We conducted experiments under three conditions: (1) normal condition, (2) eccentricity fault condition, and (3) bearing fault condition. The bearing fault was induced by intentionally drilling holes, and the severity of the fault increased with the number of holes, as shown in Figure 2. The eccentricity fault was induced by adding weights to the rotating body, and the fault severity was escalated by increasing the number of weights, as shown in Figure 3.
Methodology
In this section, we explain the methods used in this study. An implementation scheme of our proposed method is shown in Figure 4. The effects of motor faults appear as peaks in the frequency domain. Therefore, we first apply the fast Fourier transform (FFT), which is widely used in vibration analysis, to analyze the frequency domain [46]. We performed the FFT in 0.5 s increments and set the resolution of the FFT to 2 Hz. This resulted in each dataset having 5000 initial components. Given that tens of gigabytes of data are generated per sensor per day, compression is crucial to manage the bandwidth and storage requirements effectively.
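A minimal sketch of this windowing and FFT step is given below, assuming non-overlapping 0.5 s windows and magnitude spectra; the exact windowing and scaling used in the study are not specified, so these choices are illustrative.

```python
import numpy as np

FS = 20_000          # sampling rate in Hz (after downsampling)
WIN = FS // 2        # 0.5 s window -> 10,000 samples -> 2 Hz frequency resolution

def fft_features(signal):
    """Split a vibration signal into non-overlapping 0.5 s windows and return
    one-sided FFT magnitude spectra; dropping the DC bin leaves 5000 components
    per window."""
    n_win = len(signal) // WIN
    frames = np.asarray(signal[: n_win * WIN]).reshape(n_win, WIN)
    return np.abs(np.fft.rfft(frames, axis=1))[:, 1:]   # shape: (n_win, 5000)

# Example: 10 s of synthetic data -> 20 windows of 5000 spectral components each.
x = np.random.randn(FS * 10)
print(fft_features(x).shape)    # (20, 5000)
```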
We utilize PCA and an AE encoder for data compression. Due to the challenge of collecting fault data in manufacturing environments, we fit PCA using a dataset composed solely of normal data, as well as another dataset containing a mix of 1% anomaly data and 99% normal data. By training the AE exclusively on normal data, it becomes proficient at identifying deviations from typical patterns. To detect faults, we reconstruct the compressed data using the AE decoder and use the mean squared error (MSE) to distinguish between normal and abnormal data. Additionally, we use clustering performance metrics and the t-SNE technique, applied at different degrees of data compression, to evaluate the preservation of essential information.
Principal Component Analysis
PCA is an unsupervised statistical method that can be used for dimensionality reduction and feature extraction [47,48]. PCA aims to transform a high-dimensional dataset into a lower-dimensional space while preserving the most important information. The PCA process involves several steps [49]. First, the data are normalized using Equation (1):

Z = (X − µ) / σ, (1)

where X is the original data, µ is the mean, and σ is the standard deviation. Then, the covariance matrix is computed using Equation (2):

C = (1/(n − 1)) (X − µ)ᵀ (X − µ), (2)

where n is the number of data points, X is the normalized data, and µ is the mean. After that, the principal components are obtained by computing the eigenvectors and eigenvalues of the covariance matrix. Principal components are the directions in which the data vary the most. From the eigenvector matrix, we select the first k columns. The principal components are then used to project the data into a lower-dimensional space using Equation (3):

z = X V_kᵀ, (3)

where z is the projection, X is the normalized data, V_kᵀ is a matrix containing the k selected eigenvectors, and k is the number of eigenvectors that we choose.
We separate the PCA process into two stages: the fitting stage, where we obtain the principal components through eigenvalue decomposition (specifically, V_k, as used in Equation (3)); and the transformation stage, where the original data are projected onto the principal components to obtain a reduced-dimensional representation. In the fitting stage, we applied PCA in two scenarios: one involving only normal data, and the other with a 1% admixture of abnormal data to simulate manufacturing environments. After the PCA fitting process, we calculated the explained variance ratios and compared the results. This comparison allowed us to retain the principal components that explain a substantial portion of the dataset's variance, achieving effective dimensionality reduction while preserving the most informative aspects of the data.
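A minimal scikit-learn sketch of this two-stage procedure is shown below, assuming an (n_samples × 5000) array of FFT magnitudes from normal data and a cumulative explained-variance target; the placeholder data and exact preprocessing are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder for FFT magnitude spectra of healthy-motor windows
# (the study used 4800 windows of 5000 bins each).
spectra_normal = np.random.rand(500, 5000)

scaler = StandardScaler().fit(spectra_normal)          # normalization, as in Equation (1)
pca = PCA().fit(scaler.transform(spectra_normal))      # fitting stage

# Choose the smallest number of components reaching a target explained variance.
target = 0.70
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), target) + 1)

def pca_compress(spectra):
    """Transformation stage: project (new) data onto the first k principal components."""
    return pca.transform(scaler.transform(spectra))[:, :k]

print(k, pca_compress(spectra_normal).shape)
```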
Autoencoder
An autoencoder is an artificial neural network that compresses input data into a lower-dimensional latent space and then reconstructs it back to its original form. It consists of two components, an encoder and a decoder. The encoder consists of multiple hidden layers that progressively reduce the dimensionality of the input data [50]. The encoder compresses the high-dimensional input data x ∈ Rⁿ into latent vectors h ∈ Rᵐ by a function f_θ, which at each layer i applies

h⁽ⁱ⁾ = s_f(W_i h⁽ⁱ⁻¹⁾ + b_i), (4)

where h⁽ⁱ⁻¹⁾ denotes the output of the previous layer (with h⁽⁰⁾ = x), s_f is the activation function (we used the rectified linear unit (ReLU) function for the hidden layers and a linear activation function for the latent vectors in this study), W_i is the weight matrix for the i-th layer, and b_i is the bias vector for the i-th layer.
The decoder reconstructs the original input from the latent vectors using a function g_θ. At each layer i it applies the function

x̂⁽ⁱ⁾ = s_g(W′_i x̂⁽ⁱ⁻¹⁾ + b′_i), (5)

where s_g represents the activation function used in the decoder (we employed the ReLU function for the hidden layers and a linear activation function for the output layer in this study), W′_i is the weight matrix specific to the i-th decoder layer, and b′_i is the bias vector specific to the i-th decoder layer.
During training, the AE minimizes the difference between the input data and the reconstructed output [51]. This is typically achieved by adjusting the neural network parameters to reduce the mean squared error (MSE), which is calculated as

MSE = (1/n) Σ_{i=1}^{n} (x_i − x̂_i)², (6)

where n represents the number of data points, x_i denotes the original input data, and x̂_i represents the corresponding reconstructed output.
Anomaly detection is a powerful application of AEs [35,37,52-54]. By training the AE on normal data, it learns to reconstruct normal data accurately. However, when abnormal data are input during testing, the AE struggles to reconstruct them effectively, which leads to higher reconstruction errors. As a result, data with significant reconstruction errors can be detected as anomalies. In our case, we trained an AE using only compressed normal data for anomaly detection. After training, we used the AE encoder for data compression and the AE decoder for anomaly detection and fault severity estimation. We determined the threshold for anomaly detection as the top 5% of the loss values on normal data.
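The sketch below illustrates this training and thresholding logic with Keras; the layer sizes, latent dimension, and training settings are assumptions chosen for illustration rather than the configuration reported in the paper.

```python
import numpy as np
import tensorflow as tf

def build_autoencoder(n_inputs, n_latent):
    """Dense AE with one hidden layer on each side; sizes are illustrative."""
    inputs = tf.keras.Input(shape=(n_inputs,))
    h = tf.keras.layers.Dense(128, activation="relu")(inputs)
    latent = tf.keras.layers.Dense(n_latent, activation="linear")(h)
    h = tf.keras.layers.Dense(128, activation="relu")(latent)
    outputs = tf.keras.layers.Dense(n_inputs, activation="linear")(h)
    ae = tf.keras.Model(inputs, outputs)
    ae.compile(optimizer="adam", loss="mse")
    return ae

# Placeholder PCA-compressed normal data (e.g. 332 components at 70% variance).
x_normal = np.random.rand(4800, 332).astype("float32")
ae = build_autoencoder(n_inputs=332, n_latent=33)        # ~10% latent ratio
ae.fit(x_normal, x_normal, epochs=10, batch_size=64, verbose=0)

# Reconstruction error on normal data; the 95th percentile gives the top-5% rule.
recon_err = np.mean((x_normal - ae.predict(x_normal, verbose=0)) ** 2, axis=1)
threshold = np.percentile(recon_err, 95)
print(threshold)
```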
t-Distributed Stochastic Neighbor Embedding
t-SNE is an unsupervised dimensionality reduction technique aimed at mapping high-dimensional data into a lower-dimensional space while preserving local data structures and inter-point relationships [55]. Therefore, it is appropriate for evaluating the compression into latent vectors. t-SNE comprises several key steps, described below [56].
Initially, it applies stochastic neighbor embedding (SNE) to the dataset, transforming high-dimensional Euclidean distances between data points into conditional probabilities that capture similarities. The similarity between data point x_j and data point x_i is represented by the conditional probability p_{j|i}, defined in Equation (7):

p_{j|i} = exp(−‖x_i − x_j‖² / 2σ_i²) / Σ_{k≠i} exp(−‖x_i − x_k‖² / 2σ_i²). (7)

In the low-dimensional space, t-SNE uses a Student t-distribution with one degree of freedom, as expressed in Equation (8):

q_{j|i} = (1 + ‖y_i − y_j‖²)⁻¹ / Σ_{k≠i} (1 + ‖y_i − y_k‖²)⁻¹. (8)

t-SNE minimizes the cost function defined in Equation (9), which uses the Kullback-Leibler (KL) divergence to align the conditional probability distributions in the high- and low-dimensional spaces:

C = Σ_i KL(P_i ‖ Q_i) = Σ_i Σ_j p_{j|i} log(p_{j|i} / q_{j|i}), (9)
where P i represents the conditional probability distribution among all data points given a data point x i , and Q i corresponds to the conditional probability distribution over all other mapped points given mapped point y i [57].The optimization of this cost function is achieved using a gradient descent method.
The ability of an AE to generate a compressed latent space representation demonstrates its effectiveness in capturing essential information from the input data [58]. The t-SNE method can be employed to qualitatively evaluate and visualize the efficiency of data compression [59]. This study utilized the t-SNE method to visualize the distribution of data in the latent space compressed by the AE encoder as a function of the number of latent vectors. This visualization demonstrates the effectiveness of the compressed data in preserving the essential information of the original data.
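A short sketch of this qualitative check, assuming the AE latent vectors and condition labels are available as arrays, might be:

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholders: encoder outputs for each window and their condition labels
# (normal / fault severity); in practice these come from the trained AE encoder.
latent_vectors = np.random.rand(600, 33)
labels = np.repeat(["normal", "fault_1", "fault_2"], 200)

# Project the latent space to 2-D; points can then be plotted and colored by label
# to inspect how well the compressed representation separates the conditions.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(latent_vectors)
print(embedding.shape)   # (600, 2)
```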
Results
In this section, we discuss the results obtained from applying our methodology to the datasets. In manufacturing environments, there can be scenarios where fault data are either unavailable (due to lacking labels) or in limited supply. Therefore, we applied our methodology in two scenarios: one using only normal data during the PCA fitting stage, and another incorporating 1% anomaly data.
PCA Fitting with Only Normal Data
We first assumed that no fault data or fault labels were available and exclusively applied the PCA fitting stage to the normal data. After completing the PCA fitting process, we obtained the explained variance ratios for different numbers of components, as shown in Table 2. We collected data for 40 min with a sensor with a sampling rate of 20 kHz and performed FFT in 0.5 s increments. As a result, 4800 datasets were obtained for each scenario. Accordingly, when we performed PCA, we obtained 4800 components, which is the same as the number of datasets and is the maximum possible. To assess the trade-off between preserving information from the original data and achieving data compression, we conducted experiments using PCA-explained variance ratios of 50%, 60%, and 70%. Through this process, the data were compressed to 1.08%, 2.24%, and 6.64% of the FFT data size, respectively.

As a tool for anomaly detection and fault severity estimation, we used the AE decoder to analyze both the eccentricity fault datasets and the bearing fault datasets and to compute the MSE. The results are depicted in Figure 5, where the number after each label represents the fault severity, i.e., a label number of 1 (E. fault 1, B. fault 1) represents the mildest fault. For anomaly detection, as shown in Figure 5, it is possible to detect anomalies across all explained variance ratios of the dataset, with the MSE increasing as the fault severity increases. The fault detection criterion is the red dashed line, representing the top 5% MSE threshold based on normal data. However, when the explained variance ratio is 50% (Figure 5a,d), it is hard to distinguish between the two lower-severity faults (E. fault 1, 2 and B. fault 1, 2). When the PCA-explained variance ratio is increased to 60% (Figure 5b,e), the MSE difference between the two lower-severity faults becomes more pronounced. In Figure 5c,f, the separation of MSE based on fault severity is clearly visible. This demonstrates the trade-off between the PCA-explained variance ratio and fault severity identification, which is critical for balancing data compression with effective fault detection.

The PCA and AE encoder models, fitted and trained on the normal data, were sequentially applied to generate compressed datasets of eccentricity fault data and bearing fault data in order to visualize the distribution of the compressed data. This process included the following steps. First, we applied PCA to each dataset and adjusted the number of components to explain 50%, 60%, and 70% of the variance. Specifically, in the PCA model fitted with normal data, 54 components explained 50% of the variance, 124 components explained 60%, and 332 components explained 70% (refer to Table 2). Next, we trained an autoencoder on normal data, where the number of latent vectors was set to 10%, 20%, and 30% of the principal components. For instance, for the PCA with 332 components (70% variance explained), the latent vectors for the autoencoder were 33, 66, and 100, respectively. After training, we applied the autoencoder to compress each dataset (eccentricity fault data combined with normal data, and bearing fault data combined with normal data) and visualized the compressed data using t-SNE. The results of these visualizations are presented in Figures 6 and 7. The t-SNE visualization helps reduce uncertain estimates by showing the distribution of the compressed data. This qualitative evaluation demonstrates the trade-offs involved in selecting the appropriate level of compression to ensure data integrity. Following this visual analysis, we use clustering metrics to quantitatively validate the results and ensure the reliability of our methodology.
Figure 8 illustrates the clustering performance results of comparing normal data with the two types of fault data using the PCA and AE encoder compression models trained on normal data. Figure 8a shows the results for normal data and eccentricity fault data, while Figure 8b presents the results for normal data and bearing fault data. In this evaluation, data were compressed by adjusting the PCA-variance-explained ratio and the number of latent vectors in the AE encoder, followed by measuring clustering performance. Clustering performance was assessed using three metrics: the Adjusted Rand Index (ARI), normalized mutual information (NMI), and the Fowlkes-Mallows Index (FMI). High clustering scores indicate that, despite the data compression, the boundaries between normal and anomalous states, as well as the severity of faults, were clearly distinguished. This implies that the compression techniques maintained the integrity of the data necessary for accurate fault detection and severity classification.

Specifically, when clustering normal data and eccentricity fault data, the most distinct results were observed with a PCA-variance-explained ratio of 60% and AE encoder compression rates of 10% or 20%, corresponding to 25 latent vectors. At this setting, the data compression ratios were reduced to 0.24% and 0.5%, respectively, compared to the original vibration data post-FFT, while still achieving high performance. For normal data and bearing fault data, the clustering performance was relatively lower. However, our approach provides effective compression strategies even for data that are more challenging to cluster. For example, a compression method with an NMI of about 0.67 using only 0.22% of the data can be obtained with a PCA-variance-explained ratio of 50% and an AE encoder compression ratio of 20%. Alternatively, a compression method with an NMI of about 0.76 using 0.66% of the data can be obtained with a PCA-variance-explained ratio of 70% and an AE encoder compression ratio of 10%.
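The quantitative evaluation can be sketched as below; the clustering algorithm used here (k-means) is an assumption, since the text does not state which clustering method produced the labels scored by ARI, NMI, and FMI.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_rand_score,
                             normalized_mutual_info_score,
                             fowlkes_mallows_score)

# Placeholders: compressed latent vectors and the true condition labels
# (normal plus two fault severities).
latent_vectors = np.random.rand(900, 25)
true_labels = np.repeat([0, 1, 2], 300)

# Cluster the compressed data and score agreement with the true labels.
pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(latent_vectors)
print("ARI:", adjusted_rand_score(true_labels, pred))
print("NMI:", normalized_mutual_info_score(true_labels, pred))
print("FMI:", fowlkes_mallows_score(true_labels, pred))
```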
PCA with 1% Anomaly Data
We also considered the case where a small amount (1%) of fault data are added. During the PCA fitting stage, we utilized a dataset that combined 1% anomaly data with the remaining 99% normal data. For the PCA fitting using eccentricity fault data, we mixed the fault severity levels to reflect actual manufacturing conditions (E. fault 1: 50%, E. fault 2: 33.3%, E. fault 3: 16.7%). For the bearing fault data, there were significant deviations from the normal data, as shown in Figure 5d-f. Mixing data with such large deviations would lead to excessive data compression during the PCA process, resulting in excessive loss of information. Therefore, only the mildest defects (B. fault 1: 100%) were considered in the PCA fitting with the bearing fault data.
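One way to assemble such a mixed PCA fitting set is sketched below: 1% of the windows are drawn from the eccentricity fault data in the stated 50/33.3/16.7 severity split and combined with normal windows before fitting PCA. The array sources, their sizes, and the random seed are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def sample_rows(X: np.ndarray, n: int) -> np.ndarray:
    """Randomly draw n rows without replacement (assumes n <= len(X))."""
    idx = rng.choice(len(X), size=n, replace=False)
    return X[idx]

# Hypothetical arrays of FFT windows per condition.
X_normal = np.load("fft_normal.npy")
X_e1, X_e2, X_e3 = (np.load(f"fft_ecc_fault{i}.npy") for i in (1, 2, 3))

n_total = len(X_normal)              # keep the fitting-set size equal to the normal set
n_fault = int(0.01 * n_total)        # 1% anomaly data
n_e1 = round(0.500 * n_fault)        # E. fault 1: 50%
n_e2 = round(0.333 * n_fault)        # E. fault 2: 33.3%
n_e3 = n_fault - n_e1 - n_e2         # E. fault 3: remainder (~16.7%)

X_fit = np.vstack([
    sample_rows(X_normal, n_total - n_fault),   # 99% normal
    sample_rows(X_e1, n_e1),
    sample_rows(X_e2, n_e2),
    sample_rows(X_e3, n_e3),
])

pca = PCA(n_components=0.70, svd_solver="full").fit(X_fit)
print(f"{pca.n_components_} components explain 70% of the variance (cf. Table 3)")
```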
The explained variance ratios for each PCA result are presented in Tables 3 and 4. Despite adding only 1% fault data, a higher explained variance ratio is achieved with fewer principal components compared to using only normal data. The fault data introduce more variability and distinct features, making it easier for PCA to capture significant directions of variance. Fault data often have distinct patterns or anomalies that stand out compared to the homogeneous normal data, allowing PCA to capture these differences more effectively with fewer components. The performance evaluation results using PCA fitting with 1% fault data are shown in Figure 9. The MSE results (Figure 9a,b,e,f) effectively differentiate fault severity levels. The t-SNE visualization shows that information in the latent space is well preserved. For PCA fitting with eccentricity fault data, 70% explained variance and a 30% AE latent vector ratio resulted in 56 latent vectors and a 1.12% compression ratio (Figure 9c,d). For PCA fitting using the bearing fault dataset, only 70 principal components were needed to explain 70% variance, significantly lower than the 188 components for the 1% eccentricity fault dataset and 332 components for the normal dataset. Thus, we used 80% explained variance and a 30% AE latent vector count, resulting in 70 latent vectors and a data compression ratio of 1.4% (Figure 9g,h). The results in Figure 9 also demonstrate the versatility of our method in fault detection and data compression across different fault scenarios. Figure 9b,d show bearing fault data compressed using PCA fitted with 1% eccentricity fault data, while Figure 9e,g show eccentricity fault data compressed using PCA fitted with 1% bearing fault data. These results indicate that our method can effectively be applied to different fault scenarios even with data from only one fault type. This is because PCA captures significant variance directions that indicate fault patterns, which are applicable across different types of faults. By including a small amount of fault data, PCA adapts to capture key features distinguishing normal from faulty conditions. The subsequent AE compression preserves these features in the latent space, ensuring essential fault information is retained. Additionally, selecting an appropriate compression ratio is crucial to maintaining the balance between data reduction and information preservation, ensuring the compressed data are both efficient and effective for fault detection. This adaptability makes our method versatile for real-world manufacturing environments, where comprehensive labeled datasets for all fault types are impractical.
Conclusions
This study introduces an innovative approach for PHM of motors in manufacturing environments using edge computing. By addressing the challenges of data volume and processing limitations on edge devices, we developed a data compression method combining PCA and an AE encoder. Our experimental setup involved capturing vibration data under normal, eccentricity fault, and bearing fault conditions using high-resolution sensors. The methodology involved FFT followed by PCA and AE encoder compression. By fitting PCA on normal data and incorporating a minimal amount of fault data, we optimized the compression process. AE models, trained to accurately reconstruct normal data, allowed for effective anomaly detection through mean squared error (MSE) evaluation.
The results demonstrated that our approach could achieve significant data compression ratios while maintaining high accuracy in fault detection and severity estimation. Specifically, PCA combined with AE provided effective compression down to 1.12% of the original data size, enabling real-time analysis on edge devices without significant information loss. The t-SNE visualizations further validated the preservation of essential data characteristics in the compressed space, facilitating clear fault differentiation.
Figure 10 summarizes the clustering performance across different compression ratios and methods, offering both qualitative and quantitative analysis. These results were obtained by using PCA fitted with 0.5% eccentricity fault and 0.5% bearing fault data (E. fault 1: 25%, E. fault 2: 16.7%, E. fault 3: 8.3%, B. fault 1: 50%), demonstrating how well the compression methods preserve the structure of the data across different fault types and severities. Figure 10a shows t-SNE visualizations of data compressed using various PCA-variance-explained ratios and AE compression levels. Figure 10b presents clustering performance metrics, such as ARI, NMI, and FMI, for different PCA-variance-explained ratios and numbers of AE latent vectors. Figure 10c highlights three representative data points with excellent compression rates and high ARI clustering scores, marked in pink, yellow, and brown. The corresponding t-SNE visualizations in Figure 10a are highlighted in the same colors. The compression method highlighted in pink uses only 0.78% of the data in comparison to the FFT data. The yellow and brown methods utilize 0.44% and 0.08%, respectively. These methods demonstrate optimal compression while retaining a reasonable amount of essential information. This demonstrates that our methodology can effectively compress data in a manufacturing environment.
In conclusion, our research provides a viable solution for motor PHM on edge devices, striking a balance between data compression and diagnostic accuracy. By significantly reducing the storage space required and minimizing bandwidth usage for data transmission, our approach enhances the efficiency of data management in manufacturing processes. Additionally, our techniques improve clustering performance and facilitate better fault identification accuracy by leveraging an effective balance in compression degree. Furthermore, our approach includes data structure visualization, which aids in understanding the underlying patterns and relationships within the data. This advancement not only optimizes current manufacturing processes but also lays the groundwork for future innovations in predictive maintenance and operational efficiency.
Figure 1. Experimental platform used for data extraction.
Figure 2. Detail of bearing fault implementation.
Figure 4. An implementation scheme of our proposed method.
Figure 5. Calculated MSE of eccentricity and bearing fault datasets using PCA fitted only with normal data. (a-c) show MSE results for eccentricity faults (E. fault 1 to E. fault 4) at 50%, 60%, and 70% explained variance. (d-f) show MSE results for bearing faults (B. fault 1 to B. fault 3) at the same explained variance. Each graph includes a red dashed line indicating the top 5% MSE threshold based on normal data. The legend is consistent across all graphs.
Figure 6. t-SNE visualization of normal and eccentricity fault data after compression. (a,d,g) show results where the principal components were selected to explain 50% of the variance. (b,e,h) show PCA with 60% variance explained. (c,f,i) show PCA with 70% variance explained. AE encoder compression rates are 30% in (a-c), 20% in (d-f), and 10% in (g-i). The number of latent vectors is indicated in each subplot. The legend colors are consistent across all subplots.
Figure 7. t-SNE visualization of normal and bearing fault data after compression. (a,d,g) show results where the principal components were selected to explain 50% of the variance. (b,e,h) show PCA with 60% variance explained. (c,f,i) show PCA with 70% variance explained. AE encoder compression rates are 30% in (a-c), 20% in (d-f), and 10% in (g-i). The number of latent vectors is indicated in each subplot. The legend colors are consistent across all subplots.
Figure 8. Clustering performance evaluation using PCA and AE encoder compression models fitted and trained with normal data. (a) Normal data and eccentricity fault data. (b) Normal data and bearing fault data. The evaluation was performed by adjusting the PCA-variance-explained ratio and the number of latent vectors in the AE encoder. The legend colors are consistent between (a) and (b).
Figure 9. Calculated MSE and t-SNE visualizations of eccentricity and bearing fault datasets. (a-d) show results with PCA fitted using 1% eccentricity fault data and 99% normal data. Each graph includes a red dashed line indicating the top 5% MSE threshold based on normal data. (e-h) show results with PCA fitted using 1% bearing fault data and 99% normal data.
Figure 10. (a) t-SNE visualizations of data compressed using different PCA-variance-explained ratios (60% to 95%) and AE compression levels (1% to 30%). Numbers indicate the number of latent vectors after compression. (b) Clustering performance metrics (ARI, NMI, FMI) for varying PCA-variance-explained ratios and numbers of AE latent vectors. (c) Clustering performance as a function of the compression rate, with three representative points highlighted in pink, yellow, and brown. The pink method uses 0.78% of the FFT data, yellow uses 0.44%, and brown uses 0.08%. These points show high compression rates and appropriate accuracy, with corresponding t-SNE visualizations highlighted in (a).
Table 1. A summary of deep-learning-based fault diagnosis methods for rotating machinery.
Table 2. The number of principal components and the explained variance ratio with only normal data.
Table 3. The number of principal components and the explained variance ratio obtained by fitting PCA with 1% eccentricity fault data and 99% normal data.
Table 4. The number of principal components and the explained variance ratio obtained by fitting PCA with 1% bearing fault data and 99% normal data.
|
v3-fos-license
|
2020-10-28T19:20:18.455Z
|
2020-10-19T00:00:00.000
|
226346604
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adtp.202000179",
"pdf_hash": "3752bc375321df7e515c5a558fd4add4e28a4a5b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3030",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "e07ae15266923bea8a25bfa8ba7c88210dc4fb72",
"year": 2020
}
|
pes2o/s2orc
|
Peritumoral Delivery of Docetaxel‐TIPS Microparticles for Prostate Cancer Adjuvant Therapy
Abstract Recurrence of prostate cancer after radical prostatectomy is a consequence of incomplete tumor resection. Systemic chemotherapy after surgery is associated with significant toxicity. Improved delivery methods for toxic drugs capable of targeting positive resection margins can reduce tumor recurrence and avoid their known toxicity. This study evaluates the effectiveness and toxicity of docetaxel (DTX) release from highly porous biodegradable microparticles intended for delivery into the tissue cavity created during radical prostatectomy to target residual tumor cells. The microparticles, composed of poly(dl‐lactide‐co‐glycolide) (PLGA), are processed using thermally induced phase separation (TIPS) and loaded with DTX via antisolvent precipitation. Sustained drug release and effective toxicity in vitro are observed against PC3 human prostate cells. Peritumoral injection in a PC3 xenograft tumor model results in tumor growth inhibition equivalent to that achieved with intravenous delivery of DTX. Unlike intravenous delivery of DTX, implantation of DTX‐TIPS microparticles is not accompanied by toxicity or elevated systemic levels of DTX in organ tissues or plasma. DTX‐TIPS microparticles provide localized and sustained release of nontoxic therapeutic amounts of DTX. This may offer novel therapeutic strategies for improving management of patients with clinically localized high‐risk disease requiring radical prostatectomy and other solid cancers at high risk of positive resection margins.
Introduction
Prostate cancer is the second most commonly diagnosed cancer in men and a leading cause of cancer-related deaths. [1,2] DTX is a taxane-based chemotherapeutic that stops the growth of tumor cells by binding to intracellular β-tubulin, which disrupts microtubule function by promoting microtubule assembly, resulting in mitotic arrest and ultimately cell death. [10] Use of DTX as an adjuvant or neoadjuvant chemotherapeutic with radical prostatectomy in patients with clinically localized, high-risk prostate cancer may improve recurrence-free survival. [11,12] However, systemic administration of DTX is associated with a range of toxicity-related side effects including neutropenia, leukopenia, neurological toxic effects, diarrhea, alopecia, asthenia, and nausea. [13] Given the accessibility of residual cancer in the prostate bed or tissue cavity due to positive surgical margins following excision of the prostate, we have previously proposed the use of biodegradable microparticles to enable the targeted delivery of DTX as an adjuvant therapy during surgical removal of the prostate for cancer. [14] Our previous study established an optimized process for loading TIPS microparticles with DTX and demonstrated in vitro their potential utility for tumor growth inhibition over a period of 12 days. The antisolvent precipitation method used for loading TIPS microparticles with DTX differs from other methods used in previous drug-delivery studies with TIPS microparticles and results in drug being predominantly coated onto the microsphere surface. [15][16][17] Following on from our preliminary studies, here we have evaluated the in vivo performance of DTX-TIPS microparticles, including the pharmacokinetic profile, efficacy for inhibiting tumor growth, and tolerability, using a human prostate tumor xenograft model. The therapeutic index of DTX delivered via TIPS microparticles is shown to be improved compared with a conventional intravenous method of delivery, with inhibition of tumor growth achieved in the absence of systemic toxicity. TIPS microparticle delivery of DTX, together with the facile formulation method for coating the microparticles with drug, provides a promising strategy for effectively managing the risk of positive surgical margins associated with radical prostatectomy in prostate cancer and other localized, surgically resectable solid cancers.
Immobilization of DTX onto PLGA TIPS Microparticles
The amount of DTX loaded onto the TIPS microparticles was indirectly quantified by measuring the amount of DTX remaining in solution at different time points during the loading phase. Approximately 80% of the DTX was loaded onto the microparticles from the solution within 120 min of initiating mixing, matching the amount previously reported. [14] Scanning electron microscopy of the microparticles confirmed the presence of crystalline DTX on the surface of the microparticles following incubation in the DTX solution, which was absent in unloaded control TIPS microparticles (Figure 1).
Activity of DTX Released from the DTX-TIPS Microparticles
The in vitro activity of DTX released from TIPS microparticles was investigated in perfusate collected using a dynamic system. The perfusate containing DTX from TIPS microparticles was collected at regular intervals and evaluated for antineoplastic activity by assessing onset of apoptosis in cell-based assays using the PC3 prostate cancer cell line. Phase contrast microscopy revealed characteristic morphological features of apoptosis including membrane blebbing, nuclear fragmentation, vesicular structure formation, and fragmentation into membrane-bound apoptotic bodies (Figure 2a). A sustained cytotoxic effect was observed in PC3 cells incubated in the DTX-TIPS perfusate collected at different intervals over 15 days (Figure 2b). After 12 days incubation in fresh medium the cumulative toxic effects of DTX released into the perfusates collected at all time points resulted in fewer adherent cells remaining in the culture. The number of cells displaying apoptosis after 24 h incubation in perfusates collected at later time points was reduced when cultured in fresh medium at day 1 but increased to 100% after 12 days in fresh medium for all perfusates.
Further evaluation of the cytotoxicity of DTX in perfusates collected at time points beyond 15 days was conducted using 3D spheroid cultures composed of PC3 cells incubated with the perfusates collected over 35 days, with the culture medium being replenished at regular intervals corresponding to the time point of perfusate collection. The diameter of the spheroids incubated with medium conditioned with DTX-TIPS microparticles, measured from images acquired as the experiment progressed, was significantly reduced compared with the diameter of the spheroids at corresponding time points in the control group incubated in nonconditioned complete medium (p < 0.0001; Figure 2c,d). The diameter of the spheroids in the treated group progressively decreased until day 24, before plateauing until the end of the experiment at day 35. In contrast, the diameter of spheroids in the control group incubated in nonconditioned complete medium continued to increase in size until, at time points beyond 14 days, it exceeded the field of view, so that measurements could no longer be collected at later time points.
Analysis of the DTX release profile in vivo was investigated in plasma collected from nontumor bearing BALB/cAnNCrl mice. Detectable levels of DTX released from TIPS microparticles were observed at 1 and 24 h postadministration. DTX was not detectable in the plasma thereafter. Plasma samples collected from the group receiving I.V. DTX once weekly for three weeks contained significantly higher levels of DTX at 1 h (102-fold increase) and 24 h (8-fold increase) postadministration compared to the DTX-TIPS group (p < 0.01) (Figure 3). The administration of I.V. DTX resulted in detectable levels remaining in the plasma at 48 and 72 h and also at day 10 and day 15, corresponding to the second and third doses of I.V. DTX delivered at day 7 and day 14.
The antineoplastic activity and systemic toxicity of DTX-TIPS microparticles were investigated in vivo using a human prostate tumor xenograft model in NSG mice. PC3 cells were subcutaneously transplanted into immunocompromised mice. At day 14 postcell injection, palpable tumors had formed, measuring 0.03-0.05 cm³. Peritumoral delivery of TIPS microparticles (+/− DTX) and retention at the site of implantation was confirmed by histology, with the microparticles remaining in situ for the duration of the study (Figure 4a). Following administration of the microparticles (+/− DTX) or I.V. DTX or saline, tumor growth was assessed over 33 days. Bioluminescence radiance, indicative of the tumor size, did not increase between day 0 and day 10 in the groups treated with I.V. DTX or DTX-TIPS, whereas the signal was significantly increased in the groups receiving either saline or TIPS microparticles only (p < 0.0001) (Figure 4b,c).
Tumor volume measurements in mice receiving I.V. DTX did not increase during the study (Figure 5a). In mice receiving DTX-TIPS microparticles, there was no significant increase in tumor volume compared with the mice receiving I.V. DTX until day 21 (p < 0.01). At this time point, the increase in tumor volumes in the group treated with DTX-TIPS microparticles was approximately sixfold lower compared with the group treated with saline only at the same time point. In mice receiving either TIPS microparticles only or I.V. saline, the tumor volume was significantly increased at day 16 (p < 0.05) and day 14 (p < 0.05), respectively, compared with the mice receiving I.V. DTX. Qualitative macroscopic assessment of the resected tumors at the end of the study revealed visibly smaller tumors collected from the groups treated with DTX (Figure 5b).
Significant toxicity was associated with I.V. delivery of DTX, exhibited by progressive weight loss from day 7 onward, leading to −7.0% ± 2.3% at day 28 and −12.7% ± 3.5% at day 35 compared with the starting weight (p < 0.0001; weight loss exceeded 15% at day 35 in 3/7 mice). In contrast, no significant weight loss was observed in the groups receiving DTX-TIPS microparticles, TIPS microparticles only, or I.V. saline (Figure 5c).
Tissue levels of DTX in explanted organs (heart, kidney, lung, spleen, and liver) were analyzed at days 1, 10, and 35 postadministration (Table 1). Elevated levels of DTX were detected in all organs at all time points following I.V. administration of DTX compared with DTX-TIPS administration, with the exception of the liver and spleen at day 35, where DTX was not detectable for either treatment group.
Clinical utility of taxane-based drug formulations in cancer is often limited by chemotherapy dose-limiting toxicity, nonresponders, and resistance. [18] Conventional modes of delivery for many chemotherapeutics involve intravenous administration of multiple doses, resulting in much of the drug not reaching the diseased tissue. New approaches for drug delivery that involve delivery directly to the target site would alter the pharmacokinetic profile and biodistribution of the chemotherapeutic agent and improve its therapeutic index. This approach would be beneficial for high-risk patients with localized disease where there is a higher likelihood of a positive resection margin following surgical extirpation. [9] To mitigate the risk of residual cancer cells remaining due to positive surgical margins following radical prostatectomy, delivery of therapeutic quantities of a chemotherapeutic agent, such as DTX, into the postsurgical tissue cavity would directly target these residual tumor cells and potentially reduce disease recurrence whilst avoiding the toxicity associated with systemic administration. Furthermore, increasing the bioavailability of DTX at the tumor site may also reduce development of resistance. [19] We envisage that delivery of highly porous biodegradable TIPS microparticles coated with DTX into the postsurgical tissue cavity following radical prostatectomy in all patients could provide an effective prophylactic measure to mitigate the risk of residual tumor cells that may remain and thus diminish the risk of disease recurrence. For this approach to be clinically adopted, it will require evidence that the DTX released is effective against the tumor cells but does not cause the systemic toxicity associated with I.V. delivery. The current study explored this concept further and demonstrated that DTX-TIPS microparticles can deliver sustained amounts of DTX targeted to tumors that are effective in attenuating tumor growth without causing systemic toxicity. The level of tumor growth inhibition achieved following peritumoral delivery of a single dose of DTX-TIPS microparticles was similar to that achieved following I.V. delivery of three doses of DTX at weekly intervals. Intravenous DTX for the treatment of metastatic prostate cancer is often given over six cycles.
The antineoplastic (and cytotoxic) activity of DTX is well established and involves binding to intracellular β-tubulin and activation of apoptosis. [10] This activity was confirmed in vitro using 2D and 3D model systems, which exhibited signs of apoptosis and cytotoxicity similar to those reported in our previous study. [14] The findings from the current in vitro study demonstrated that the antineoplastic effect of DTX released from TIPS microparticles was sustained and lasted up to at least 35 days. Interestingly, the potency of the cytotoxic effect observed in the 2D model after 1 day incubation in fresh medium postperfusate treatment was diminished at later perfusate collection time points (indicated by fewer cells displaying features of apoptosis). However, further incubation of the treated cells in fresh medium up to 12 days coincided with all of the cells displaying features of apoptosis. This suggests that although the amount of DTX released from DTX-TIPS was reduced at later time points, the concentration was still sufficient to activate apoptotic pathways.
The sustained activity of DTX released from TIPS microparticles in vitro corresponded with the tumor growth inhibitory effect observed in vivo. DTX-TIPS microparticles significantly attenuated tumor growth up to day 21 postadministration. It is worth noting that at day 21, the group receiving I.V. DTX had received three doses of DTX (the most recent being at day 14), which would account for the continued inhibition of tumor growth in this group beyond day 21. The I.V. dose of DTX chosen for the current study was equivalent to the dose previously reported for murine xenografted tumor growth inhibition studies. [20,21] The I.V. formulation consisted of DTX dissolved in ethanol together with the nonionic surfactant excipient Polysorbate 80, producing a formulation that is similar to the clinically approved injection formulations of DTX (e.g., Taxotere) that also contain Polysorbate 80 to stabilize aqueous formulations for parenteral administration. Although mice in the I.V. DTX group displayed sustained inhibition of tumor growth, at time points beyond day 21 all mice in this group showed signs of progressive toxicity, demonstrated by significant weight loss at days 28 and 35. In contrast, mice that received DTX-TIPS did not exhibit weight loss at the later time points, while still displaying evidence of tumor inhibition compared with the control groups receiving TIPS microparticles only or I.V. saline. Systemic toxicity following the I.V. dosing regimen corresponded with increased levels of DTX present in the plasma and organ tissues over an extended period of time, which was absent in the DTX-TIPS microparticle treated group. The detection of DTX in organ tissues but not in plasma at day 35 following I.V. delivery was a surprising observation. A possible explanation for this is binding of DTX to plasma proteins, which would prevent it from being detected by liquid chromatography-mass spectrometry analysis without further sample extraction procedures. This effect could also account for the level of DTX in the plasma at 24 h being <5% of the quantity detected at 1 h postadministration.
Recent years have seen a shift toward the development of various taxane-based nanoformulations (liposome, micelle, nanoparticle, and nanoemulsion), with many incorporating advanced functionality to assist drug binding to specific sites following I.V. delivery. However, challenges with nanotechnology include quality control, physicochemical stability, storage conditions, scale-up manufacture, in vivo metabolism, and off-target migration. [22] The current proof-of-concept study demonstrates the feasibility of using microparticles for targeted delivery of toxic chemotherapeutic agents. Such microscale delivery systems offer several advantages when targeting positive surgical margins. The study demonstrates that the microparticles are small enough for delivery into the tissue, for example through a needle or cannula, whilst being large enough to prevent migration from the site of delivery. The latter feature is an important attribute for ensuring the drug release remains localized. The composition of PLGA TIPS microparticles used in the current study has been scaled up for investigation in other clinical conditions. These studies have demonstrated that TIPS microparticles have long-term stability if stored under dry conditions (6 years plus; data not shown).
Findings from the current study show that DTX-TIPS microparticles are compatible with dry storage but will require further investigation to establish the long-term stability of the combined formulation.
A variety of acute and long-term side effects are associated with commercial formulations of DTX. These include infusion reactions, febrile neutropenia, fatigue, pneumonitis, and peripheral neuropathy. [13] To facilitate DTX loading of TIPS microparticles via antisolvent precipitation, the formulation of DTX used in the current study omitted the inclusion of Polysorbate 80, which has been implicated in causing systemic and injection site adverse events. [23] Efficacious clinical use would require the DTX-TIPS microparticles to release an adequate amount of DTX to destroy all remaining tumor cells in the prostate bed. The necessary antineoplastic potency might be achieved by further refining the DTX-TIPS microparticles to carry a greater quantity of DTX on their surface. This could deliver a higher payload of DTX to the diseased tissue, but care would need to be taken to avoid elevated concentrations entering the systemic circulation and causing toxicity. Our previous study showed that ≈95% of DTX is released from the current composition of DTX-TIPS microparticles during the first 5 days, with approximately one third being released during the first 24 h. [14] An alternative approach for achieving enhanced potency would be to alter the composition of the TIPS microparticles to a polymer that degrades more slowly and releases DTX over a longer period. However, histology at the end of the current study indicated that TIPS microparticles were largely intact and retained in the peritumoral location of delivery, indicating that higher drug loading rather than slower degradation would be beneficial.
The DTX-TIPS microparticles represent an integral drug-device combination product, where the primary mode of action of the product is pharmacological, through the antineoplastic action of DTX. In addition to the action of the DTX, it is likely that the TIPS microparticle component of the product will also provide a temporary tissue scaffold that might facilitate tissue healing after prostate removal. Since the current study demonstrated persistence of TIPS microparticles at a heterotopic site, further investigation will be required to evaluate the likely safety and benefit of the microparticle device component of the product following radical prostatectomy. As part of their development process, we have confirmed compliance with ISO 10993 (biocompatibility), with data demonstrating a lack of antigenicity and minimal tissue reaction. If DTX-TIPS microparticles are to be delivered into the prostate cavity at the time of surgery for all patients undergoing radical prostatectomy, early phase clinical safety testing will be necessary to demonstrate avoidance of immediate toxicity, bleeding, infection, anastomotic leak, and impediment of healing.
A caveat of the experimental approach used in the current study is the peritumoral delivery of DTX-TIPS microparticles around intact tumors in the xenograft model instead of the proposed clinical approach consisting of delivery into a tissue cavity containing residual tumor cells that exists in patients with positive resection margins following tumor excision. The size and location of the xenografted tumors made it impossible to simulate the presence of residual cancer cells in a tissue cavity following tumor resection. It is possible, therefore, that the model used to demonstrate the antineoplastic activity of DTX-TIPS microparticles may reflect an overrepresentation of what is required for the intended clinical use of targeting fewer residual tumor cells following radical prostatectomy. Furthermore, previous studies have shown extensive binding of DTX to plasma proteins, including α1-acid glycoprotein, albumin, and lipoproteins. [24] Therefore, the therapeutic index of DTX released from the microparticles into an environment immediately following surgery may also be influenced by the milieu of tissue fluid and plasma proteins, as well as the inflammatory response arising from tissue trauma associated with surgical extirpation.
The findings from the current study indicate DTX-TIPS microparticles could lead to improved patient outcomes if similar DTX concentrations can be achieved in the prostate bed with limited distribution to plasma and normal tissue. Localized, sustained delivery of the chemotherapeutic agent using the approach reported would avoid the need for pelvic radiotherapy and genitourinary side effects, and reduce the amount of healthcare time and costs, as well as patient inconvenience, associated with follow-up clinical visits. Early phase clinical testing will be needed in a small group of patients to enable evaluation of safety of DTX-TIPS microparticles delivered during surgery for high-risk patients undergoing radical prostatectomy (e.g., reduced incidence of severe (CTCAE G3+) treatment-related toxicity relative to historical control rates).
Conclusion
Collectively, the results from the current study indicate DTX-TIPS microparticles provide a safe and efficacious approach for delivering highly toxic chemotherapeutic agents whilst avoiding systemic toxicity. The combination product offers a new approach to adjuvant chemotherapy that is facile in execution for targeting residual tumor cells in situ resulting from a positive resection margin. This might allow patients to receive clinically transformative doses of DTX without the toxicity and side effects of conventional systemically administered chemotherapy. Although the current study has focused on use in prostate cancer, the use of DTX-TIPS microparticles poses minimal barriers to clinical adoption. Furthermore, its use may extend beyond prostate cancer and include patients undergoing surgical resection of other solid cancers where there is a risk of positive surgical margins, or in anatomical locations where there is a need to minimize the extent of surgery to preserve tissue function.
Experimental Section
Fabrication of TIPS Microparticles: TIPS microparticles composed of PLGA were prepared as previously described. [14] PLGA PURASORB 7507 (75:25) polymer (Corbion, Amsterdam, Netherlands) was dissolved in dimethyl carbonate (Sigma Aldrich, Dorset, UK) overnight using magnetic stirring to produce a 10% (w/v) polymer solution. The polymer solution then was fed into a Nisco Encapsulator Unit (Nisco Engineering, Zurich, Switzerland; Frequency: 2.75 kHz, Amplitude: 70%) by a syringe pump (Harvard Apparatus, Kent, UK), at a constant flow rate of 2 mL min −1 . The polymer droplets were formed using a 100 µm sapphire nozzle and collected in liquid nitrogen. Residual solvent was removed from the frozen polymer droplets by lyophilization for 48 h. The dried PLGA TIPS microparticles were sieved to a size range of 250-350 µm and stored at room temperature in rubber stoppered glass vials under vacuum.
Loading TIPS Microparticles with DTX: DTX (Cayman Chemical, USA) was loaded onto TIPS microparticles using an antisolvent precipitation method. [14] 5 mg of PLGA TIPS microparticles was transferred into 20 mL clear type 1B borosilicate glass vials and sealed with a butyl injection stopper. 4.5 mL of ultrapure water was added to the vial and vortexed for 10 s. 0.5 mL of 1 mg mL⁻¹ DTX in ethanol was added using a 1 mL syringe with a 25 G needle through the rubber stopper. The vial was then vortexed for 10 s and placed on a roller mixer (IKA Roller 6 digital; 60 rpm) at room temperature for 2 h. DTX loading efficiency (DLE) onto the TIPS microparticles after 2 h incubation was calculated according to the following equation, based on the amount of DTX that remained unbound in solution: DLE (%) = (total DTX added − free DTX remaining in solution) / total DTX added × 100. The amount of free DTX left in the solution was measured by UV spectroscopy at the wavelength of 229 nm using a Nanodrop 2000c spectrophotometer (Thermo Scientific, Waltham, MA).
Unbound DTX and ethanol solution was removed from the microparticles by washing 3 × 5 mL ultrapure water, followed by desiccation under vacuum. Samples of the dried microparticles were coated with gold for 60 s using a Q150R ES gold coater (Quorum Technologies, Oxford, UK). Scanning electron microscopy (SEM; Hitachi S3400N scanning electron microscope) was used to confirm the presence of DTX on the surface of TIPS microparticles.
In Vitro Activity of DTX Released from the DTX-TIPS Microparticles-In Vitro Collection of DTX-TIPS Microparticle Conditioned Perfusate: Sustained release of DTX from the TIPS microparticles in vitro and its biological activity was investigated using a dynamic perfusion system to simulate release in a physiological environment when delivered following radical prostatectomy. DTX-loaded microparticles were mixed with 100 µL of 70% (v/v) GranuGel (Convatec, UK) diluted in ultrapure water and the mixture was placed between two 25 mm circular filter papers (Whatman qualitative cellulose filter paper, Grade 1), where their positions were held by a Swin-Lok plastic membrane filter holder. A hypodermic needle (18G × 40 mm) connected to the outlet of the filter holder, was inserted through the lid of a 50 mL polypropylene container to collect the perfusate. The perfusion system was placed inside an incubator at 37°C and complete culture medium (Ham's F12-K medium (Kaighn's modification) (Invitrogen) supplemented with 10% (v/v) fetal bovine serum (FBS) and 1% antibiotics) was used as the perfusate to simulate physiological tissue fluid. The perfusate was pumped through the filter holder using a peristaltic pump (Harvard Apparatus) at a flow rate of 0.01 mL min −1 . Conditioned perfusate was sampled at specified intervals and used to evaluate its biological activity in vitro.
In Vitro Activity of DTX Released from the DTX-TIPS Microparticles-In Vitro Biological Activity of DTX-TIPS Microparticle Conditioned Perfusate:
Human prostate cancer cells (PC3, American Type Culture Collection) were used to test the activity of DTX released from the DTX-TIPS microparticles. PC3 cells were maintained in complete culture medium. Cells were cultured at 37°C under 5% CO 2 atmosphere in a humidified incubator.
To assess the longevity of biologically active DTX released from TIPS microparticles, morphological changes indicative of apoptosis (including membrane blebbing, nuclear fragmentation, vesicular structure formation, and fragmentation into membrane-bound apoptotic bodies) were quantified in PC3 cells maintained in 2D culture following incubation for 24 h in the perfusate. The conditioned complete medium was collected over a 24 h period at different perfusion time points (days 1, 4, 8, and 12) and added to PC3 cells seeded in six-well plates at 1.5 × 10 5 per well. After 24 h incubation, the medium was replaced with fresh complete medium, which was replenished every 48 h for up to 15 days. Images of cell morphology were acquired using phase contrast microscopy using a Zeiss Primovert microscope and at least 25 cells in each group were analyzed to calculate the number of cells displaying apoptosis.
3D spheroids of PC3 cells were generated using methylcellulose as a scaffold, as previously described. [25] PC3 were seeded at a concentration of 2 × 10 4 cells/200 µL complete culture medium containing 20 wt% methylcellulose in 96-well ultralow attachment u-bottom plates. The cells were incubated for 2 days at 37°C under 5% CO 2 atmosphere in a humidified incubator until the spheroids had formed. The medium was replaced with 200 µL perfusate conditioned complete medium that was replaced with medium collected from the perfusate system at the corresponding time point. Images of the spheroids were acquired for each day of the culture. The dimensions of the imaged spheroids were measured using Image J. Feret's diameter was used to estimate the mean diameter of spheroids and plotted against time (GraphPad Prism Version 8.0; GraphPad Software, San Diego, USA).
In Vitro Activity of DTX Released from the DTX-TIPS Microparticles-Measurement of DTX Released In Vivo from the DTX-TIPS Microparticles:
To determine the concentrations of DTX present in the circulation post-treatment, DTX-TIPS microparticles were implanted subcutaneously into nontumor bearing BALB/cAnNCrl mice (n = 5) (7-8 weeks, 17-20 g, Charles River). Control animals received either control TIPS microparticles (5 mg TIPS microparticles after mixing into a suspension in 100 µL of 70% (v/v) GranuGel; n = 5) or I.V. DTX (10 mg kg −1 via tail vein delivery once weekly for three weeks (n = 5). (The I.V. formulation of DTX was prepared by dissolving 10 mg DTX in 500 µL of absolute ethanol. Once fully dissolved, 500 µL of Polysorbate 80 was added and gently mixed to obtain a presolution that was finally diluted 1:10 (v/v) in saline to produce 1 mg mL −1 DTX.) For measurement of circulating DTX in plasma, blood samples were collected via the tail vein at day 0 (predose), and days 1, 2, 3, 7, and 10 then every 5 days until day 35 immediately before the end of in life phase. Blood samples were collected into 10 µL capillaries containing sodium heparin (Hirschmann, Eberstadt, Germany) and placed in wells of a 2 mL deep well plate that was stored at −80°C until extraction.
Liquid chromatography-mass spectrometry analysis was carried out with a Waters system.
In Vitro Activity of DTX Released from the DTX-TIPS Microparticles-In Vivo Tumor Growth Inhibitory Activity: Female NSG immunodeficient mice (NOD.Cg-PrkdcscidIl2rgtm1Wjl/SzJ; Charles River), aged 6-7 weeks and 18-20 g in body weight, were acclimatized for 1 week prior to injection of tumor cells. PC3 cells (5.0 × 10⁶ cells in 100 µL phosphate buffered saline (PBS)) were injected subcutaneously into the right flank of each mouse. The dimensions of the tumors were measured three times per week using digital calipers and the tumor volume was calculated from the measured dimensions. At day 14 postcell injection, the mice were randomized into four groups: Group 1 intravenous (I.V.) DTX (10 mg kg⁻¹ via tail vein delivery once weekly for three weeks; n = 7); Group 2 peritumoral injection of DTX-TIPS microparticles (5 mg of TIPS microparticles loaded with 500 µg DTX; n = 7); Group 3 peritumoral injection of control TIPS microparticles (5 mg of control TIPS microparticles; n = 7); Group 4 I.V. saline (10 µL g⁻¹ body weight; n = 4). Mice in Groups 2 and 3 received TIPS microparticles (+/− DTX) after mixing the microparticles into a uniform suspension in 100 µL of 70% (v/v) GranuGel. The suspension of TIPS microparticles in GranuGel was delivered subcutaneously via 1 × 100 µL depot using a 1 mL syringe and 16G needle around the periphery of the tumor. Mice were monitored for signs of toxicity (weight loss, body condition, and uncoordinated movement) over the duration of the study. Mice were euthanized if weight loss exceeded 15% of the starting body weight or they showed excessive signs of toxicity. At the end of the in-life phase (days 1, 10, and 35 post-treatment) the mice were euthanized by overdose of CO₂, followed by cervical dislocation, and organs (heart, liver, kidneys, lungs, and spleen) were collected, weighed, and immediately frozen in liquid nitrogen before storage at −80°C until further analysis. Evaluation of DTX concentrations in tissue was carried out by liquid chromatography tandem mass spectrometry following homogenization in 3 mL g⁻¹ (spleens 5 mL g⁻¹) 10 × 10⁻³ m PBS using a Precellys 24 homogenizer (Bertin Technologies, Montigny-le-Bretonneux, France). 45 µL tissue homogenate was spiked with 5 µL DMSO, samples were mixed, and protein was precipitated with 150 µL acetonitrile containing DTX-d9 as internal standard (25 × 10⁻⁹ m). Samples were mixed and centrifuged. Supernatant was taken and diluted 40:60 with 0.5% sodium acetate (20 × 10⁻⁶ m) in ammonium acetate (10 × 10⁻³ m, pH 5). Blanks, calibration curve, and QCs were prepared as above using tissues obtained from the same strain of mice (NSG) and spiked with DMSO, working calibration standard, and working QC, respectively. Blanks consisted of protein precipitated with acetonitrile alone; blank+, standards, and QCs were protein precipitated with acetonitrile containing DTX-d9.
The tumors were explanted and processed for histology. Tissues were fixed in 10% formalin, dehydrated and embedded in low-melting point wax (Paraplast X-TRA, Sigma). Tissue sections cut from the wax-embedded tissue were stained with hematoxylin and eosin.
Generation of Luc-GFP Expressing PC3 Cell Line: Lentivirus encoding firefly luciferase and GFP was produced in HEK 293 T cells using a calcium phosphate precipitation protocol adapted from that described by Tiscornia et al., [26] using the transfer plasmid pSEW-Flagx3-FLuc-2A-GFP (which was a kind gift from Dr. Martin Pule, UCL Cancer Institute), together with packaging plasmids, Gag-pol (pCMV-R8.74; Addgene Plasmid # 22036) and VSV-G (pMD2.G; Addgene Plasmid # 12259). To improve viral titers, sodium butyrate (1 × 10 −3 m) was added to the media, 24 h prior to lentiviral harvest. [27] Lentivirus was harvested into PC3 culture medium, passed through a 20 µm syringe filter, and added directly to PC3 cells growing at 40% confluence for transduction. After 24 h PC3 cells were changed into fresh media, and successful transduction noted after a further 24 h with over 90% +ve GFP expression seen using fluorescence microscopy.
Whole Body Bioluminescence Imaging: For bioluminescence imaging (BLI) of the tumor, PC3 cells were transduced with recombinant lentivirus to express the luciferase gene, as described above. Luciferase ± PC3 cells (5 × 10 6 in 100 µL PBS) were injected subcutaneously into the right flank of female NSG immunodeficient mice. At day 14 postcell injection, the mice were randomized into four groups: Group 1 intravenous (I.V.) DTX (10 mg kg −1 via tail vein delivered once; n = 5); Group 2 peritumoral injection of DTX-TIPS microparticles (n = 4); Group 3 peritumoral injection of control TIPS microparticles (n = 4); Group 4 I.V. saline (10 µL g −1 body weight; n = 3). BLI was performed using IVIS Lumina (PerkinElmer USA). Mice were injected intraperitoneally with 75 mg kg −1 d-luciferin (Promega) in 200 µL of PBS. Sequential BLI images were acquired 5 min after luciferin injection using 1 s exposure time with 1 min delay between two consecutive acquisitions. BLI images were analyzed using Aura Imaging Software (Spectral Imaging, USA) where a free draw region of interest (ROI) around the whole body and control circular background ROI was placed on the first image and subsequently pasted over each new image acquired until all ROIs reached their maximum intensity. The photon signal in the maximal signal ROI was quantified as Radiance (photons/s/cm 2 /sr). Representative images underwent thresholding to the same scale and presented using radiance (photons/s/cm 2 /sr) as color scale by utilizing the same software.
All in vivo experiments were performed under a UK Home Office license (PLN: 70/8421), in compliance with the 1986 United Kingdom Home Office Animals (Scientific Procedures) Act and with the approval of the University College London local ethics committee.
Statistical Analysis: Data were tested for statistical significance using GraphPad Prism software (version 8.0; GraphPad Software San Diego, CA), with sample size (n) and values indicated in the figure legends. Statistical evaluation of differences between groups was performed by two-way analysis of variance (ANOVA) with Sidak's multiple comparisons test with a single pooled variance used to compare differences between two or more groups. Data values are plotted as mean ± standard deviation (SD) unless stated otherwise in the figure legend.
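The two-way ANOVA with Sidak-corrected comparisons was performed in GraphPad Prism; an approximate open-source equivalent is sketched below for reference, using statsmodels for the ANOVA and a Sidak adjustment of pairwise p-values. The data-frame columns and file name are placeholders, and Prism's Sidak test uses a single pooled variance whereas this sketch uses per-comparison t-tests, so results may differ slightly.

```python
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests

# One row per measurement with columns: "volume" (tumor volume), "group" (treatment), "day".
df = pd.read_csv("tumor_volumes.csv")          # hypothetical file

# Two-way ANOVA: treatment, time point, and their interaction.
model = ols("volume ~ C(group) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))

# Pairwise group comparisons at each day, Sidak-adjusted across all comparisons.
pvals, labels = [], []
for day, sub in df.groupby("day"):
    groups = list(sub["group"].unique())
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            a = sub.loc[sub["group"] == groups[i], "volume"]
            b = sub.loc[sub["group"] == groups[j], "volume"]
            pvals.append(stats.ttest_ind(a, b).pvalue)
            labels.append((day, groups[i], groups[j]))
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="sidak")
for lab, p, r in zip(labels, p_adj, reject):
    print(lab, f"p_adj={p:.4f}", "significant" if r else "ns")
```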
|
v3-fos-license
|
2019-04-27T13:13:45.026Z
|
2019-03-29T00:00:00.000
|
134332677
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://revistas.ufpr.br/floresta/article/download/53994/37964",
"pdf_hash": "6b8a4645f369e30493f4c257b6a4aafc4f1d2aa7",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3031",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "6b8a4645f369e30493f4c257b6a4aafc4f1d2aa7",
"year": 2019
}
|
pes2o/s2orc
|
RELATIONSHIP BETWEEN THE STREAMFLOWS AND PRECIPITATIONS IN ITAPEMIRIM RIVER BASIN
The present paper analyzed the relationship between streamflow and rainfall in the watercourses of the Itapemirim River Watershed (BHRI). The minimum streamflow with a duration of seven days (Q7), the average annual streamflow (Qmed), and the annual maximum streamflow (Qmax) of 11 BHRI sub-basins were determined. Trends in the rainfall and streamflow series were analyzed with the Mann-Kendall and Pettitt nonparametric tests. Regression analyses between rainfall and streamflow were established. Results showed no trends in the rainfall and Q7 series. Trends in Qmed were found in only one sub-basin and trends in Qmax were found in two sub-basins. The total rainfall was correlated with the streamflow of all the sub-basins. It was found that the annual rainfall amount (Pa) explained considerable variation in Qmed and Q7. Rainfall amounts of the rainiest month (Pmax) showed good correlation with Qmax.
INTRODUCTION
Although water is a renewable natural resource, the environmental problems related to its distribution and availability are increasingly present today. Among the problems that aggravate stress and water scarcity in many regions of the planet are changes in water availability and increased demand associated with global changes, as well as extreme hydrological events that increase the vulnerability of the human population and compromise food security (HOEKSTRA et al., 2012).
The flow regime is capable of integrating several components of a river basin. This hydrological variable varies over time and can be directly influenced by the climate (mainly by precipitation), the physical characteristics of the basin, and anthropic alterations, such as changes in land use and occupation and the damming of river waters (DIAS et al., 2015; FREDERICE; BRANDÃO, 2016; RODRIGUEZ et al., 2010; SALEMI et al., 2013). Knowledge of the flow regime is a fundamental input for planning and decision making regarding the rational use and conservation of water resources (CRUZ; TUCCI, 2008).
In river basins, the supply of water from precipitation triggers a series of internal processes, the main ones being infiltration, surface and subsurface storage in the soil, surface drainage, runoff, weather conditions, and erosive processes, which constantly interact with one another. Regarding the river flow regime, the precipitation volume strongly influences flow dynamics, which is reflected in flow behavior during periods of drought or flood. Some studies have verified that trends observed in river basin flows were explained by variations in rainfall regimes (ESPINOZA VILLAR et al., 2009; KREPPER et al., 2008; SANTOS et al., 2010).
In recent years, accumulated annual rainfall in the state of Espírito Santo, Brazil, has been below average in many regions, leading to losses of many kinds, from agriculture to domestic and industrial supplies. In response, some measures to ensure supply were taken, such as water supply companies prioritizing service to the population (as recommended in Article 1 of Law 9,433) and reviewing their contracts with the industrial sector, in addition to the restriction or prohibition of irrigation in some municipalities during droughts. In this sense, the Itapemirim River Basin (BHRI), like others in the State, has had its water availability altered by the recent droughts. In the BHRI, some municipalities decreed a water emergency in 2015.
Given the foregoing, the present study aims to analyze the behavior of the BHRI watercourse flows and to relate them to the behavior of rainfall.
Area under study
The study area comprises the Itapemirim river basin, located in the southern region of the state of Espírito Santo (Figure 1A), between the meridians 40°48' and 41°52' West longitude and the parallels 20°10' and 21°15' South latitude. It has a total area of approximately 5,920 km², covering eighteen municipalities in the state of Espírito Santo and part of the municipality of Lajinha, in the state of Minas Gerais.
According to the Köppen classification, the predominant climatic types in the BHRI are "Cwa", a subtropical climate with dry winter (with temperatures below 18ºC) and hot summer (with temperatures over 22ºC), and "Cwb", a subtropical altitude climate with dry winter and warm summer (ALVARES et al., 2013).
The vegetation in the study area is within the areas of the Atlantic Forest biome, in the following formations: Submontane Dense Ombrophylous Forest, Montane and Altomontane; and Semidecidual Seasonal Forest of Lowlands and Submontane.
Hydrological data
The daily historical series of 11 fluviometric stations (Figure 1B) located in the BHRI and 29 rainfall stations (Figure 1C) in the BHRI and adjacent areas were obtained from the Hydrological Information System of ANA, Hidroweb (Agência Nacional de Águas, 2016). The data series covered the hydrological years from 1984/1985 to 2013/2014. The BHRI was subdivided into sub-basins corresponding to the contribution areas of each of the 11 fluviometric stations.
For each of the fluviometric stations, the average minimum flow with a duration of seven days (Q7), the mean annual flow (Qmed), and the maximum annual flow (Qmax) were obtained for each hydrological year. These flows were determined using the software HIDRO 1.2.1 (Agência Nacional de Águas, 2016), considering the hydrological year as beginning in October and ending in September. The rainfall data for each sub-basin were used to obtain, for each hydrological year, the following rainfall totals: maximum monthly (Pmax), minimum monthly (Pmin), rainiest semester (Psc), summer (Psummer), and annual total (Pa).
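Where HIDRO is not available, the same annual statistics can be computed directly from the daily series. The sketch below is a minimal pandas version; the file and column names are placeholders, and the 7-day moving average is handled in a simplified way at hydrological-year boundaries.

```python
import numpy as np
import pandas as pd

def annual_flow_stats(daily_q: pd.Series) -> pd.DataFrame:
    """Q7, Qmed and Qmax per hydrological year (October to September) from a daily flow series."""
    df = daily_q.to_frame("q")
    # Label each day with its hydrological year (the year in which that October falls).
    df["hydro_year"] = np.where(df.index.month >= 10, df.index.year, df.index.year - 1)
    # 7-day moving average, whose annual minimum gives Q7.
    df["q7ma"] = df["q"].rolling(window=7).mean()
    return df.groupby("hydro_year").agg(Q7=("q7ma", "min"),
                                        Qmed=("q", "mean"),
                                        Qmax=("q", "max"))

# Example (hypothetical station file and column names):
# daily = pd.read_csv("station.csv", parse_dates=["date"], index_col="date")["flow"]
# stats = annual_flow_stats(daily["1984-10-01":"2014-09-30"])
```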
Statistical analysis
In order to verify the occurrence of significant trends and/or abrupt changes in the flow series (Q7, Qmed, and Qmax) and in the precipitation series (Pmax, Pmin, Psc, Psummer, and Pa), the non-parametric Mann-Kendall test, as recommended by the World Meteorological Organization (SENA; LUCENA, 2013), and the Pettitt test (PETTITT, 1979), both at a 1% significance level, were applied to each sub-basin.
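Both tests can be reproduced from standard formulas; the sketch below is a minimal implementation for one annual series stored as a NumPy array. It uses the usual two-sided normal approximation for Mann-Kendall (ignoring ties for brevity) and the approximate p-value of the Pettitt statistic, which is adequate for 30-year series but is not the exact procedure of any particular software package.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (two-sided p-value) and Sen's slope; ties ignored for brevity."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pairs = list(combinations(range(n), 2))
    s = sum(np.sign(x[j] - x[i]) for i, j in pairs)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    slope = np.median([(x[j] - x[i]) / (j - i) for i, j in pairs])   # Sen's slope per year
    return p, slope

def pettitt(x):
    """Pettitt change-point test: approximate p-value and index of the most likely rupture."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    u = np.array([np.sum(np.sign(x[t + 1:, None] - x[:t + 1])) for t in range(n - 1)])
    k = np.max(np.abs(u))
    t_change = int(np.argmax(np.abs(u)))
    p = 2 * np.exp(-6 * k ** 2 / (n ** 3 + n ** 2))
    return min(p, 1.0), t_change

# Example with a hypothetical 30-year annual flow series:
# p_mk, sen = mann_kendall(qmax_series)
# p_pt, idx = pettitt(qmax_series)
# trend_significant = p_mk < 0.01          # 1% significance level
```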
Subsequently, associations were evaluated for each sub-basin under study by regression analysis between the dependent variables (Qmed, Q7, and Qmax) and the independent variables (Pmax, Pmin, Psc, Psummer, and Pa), as was done in other studies (LATUF, 2007; SANTOS et al., 2010). Only the following associations of dependent variables with independent variables were analyzed: Qmed with Pa; Q7 with Pmin, Psc, Psummer, and Pa; and Qmax with Pmax. In the regressions, linear and quadratic fits were tested in order to verify in which of the functions the relationship between the variables was significant by the F-test at 1% probability.
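For the regression step, a compact sketch using NumPy/SciPy is shown below; it fits linear and quadratic models of a flow variable against a rainfall variable and reports the coefficient of determination and the overall F-test p-value of each fit. The arrays `qmed` and `pa` are placeholders for one sub-basin's annual series.

```python
import numpy as np
from scipy.stats import f as f_dist

def fit_and_ftest(y, x, degree):
    """Least-squares polynomial fit of y on x; returns coefficients, R^2 and the overall F-test p-value."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    n, p = len(y), degree                       # p = number of predictors (x, x^2, ...)
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    sse = np.sum((y - y_hat) ** 2)              # residual sum of squares
    ssr = np.sum((y_hat - y.mean()) ** 2)       # regression sum of squares
    r2 = 1 - sse / np.sum((y - y.mean()) ** 2)
    f_stat = (ssr / p) / (sse / (n - p - 1))
    p_value = 1 - f_dist.cdf(f_stat, p, n - p - 1)
    return coeffs, r2, p_value

# Example for one sub-basin (hypothetical 30-year arrays):
# coeffs, r2, p = fit_and_ftest(qmed, pa, degree=1)          # linear Qmed vs Pa
# coeffs2, r2_2, p2 = fit_and_ftest(q7, psummer, degree=2)   # quadratic Q7 vs Psummer
# significant = p < 0.01                                      # F-test at 1%
```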
RESULTS
Tables 1 and 2 show the results of the non-parametric Mann-Kendall test (p-value and trend slope) for the precipitation and flow series of each sub-basin studied. In these tables, a p-value greater than or equal to 0.01 indicates that there is no statistically significant monotonic trend in the series at the 1% probability level. Regarding the trend slope, calculated according to the method of Sen (SEN, 1968), positive values indicate an increasing trend and negative values a decreasing trend in precipitation or flow over the years.
Table 3 shows the results of the regression analysis between flows and precipitations. Only the relationships in which the precipitation variable had a significant relation with the flow, at the 1% level of the F-test, are presented.
Table 3. Regression equations to estimate the average annual streamflow (Qmed), the minimum streamflow with a duration of seven days (Q7), and the annual maximum streamflow (Qmax) of the Itapemirim river sub-basins for which significant correlations with rainfall were found (F-test at 1%).
DISCUSSION
The non-parametric Mann-Kendall and Pettitt tests indicate that all precipitations analyzed (Pmax, Pmin, Psc, Psummer, and Panual) had a p value equal to or greater than 0.01.That is, during the analyzed period, the series of these precipitations were stationary, with no significant trends occurring or abrupt changes at a significance level of 1%.
Evaluating the Mann-Kendall test results for the flows, Q7 was stationary in all sub-basins. Qmed was non-stationary only for Fazenda Cacheta, showing an upward trend of 0.17 m³ s⁻¹ year⁻¹, which corresponds to approximately 2.5% of Qmed in this sub-basin. Qmax was non-stationary only for two sub-basins: Castelo (CAS), with an increasing trend of 3.5 m³ s⁻¹ year⁻¹, and Iúna (IUN), with an increasing trend of 1.07 m³ s⁻¹ year⁻¹, corresponding to 2.1% and 1.6% of the maximum flows, respectively. Since there was no significant trend in precipitation in these sub-basins, such increases are believed to result from several factors, especially changes in land use.
The Qmax analysis shows that, although a significant trend was found only for CAS and IUN, the trend slope values of all sub-basins were positive, indicating an increasing behavior of this variable. Along the same line, a study evaluating maximum flows in river basins worldwide, using the Mann-Kendall test at a 10% significance level (DO et al., 2017), found a predominance of significant increasing trends in Qmax in smaller basins (less than 500 km²).
Regarding the Pettitt test results, with only one exception, none of the three flow variables in any sub-basin showed significant abrupt changes (p-value > 0.01). The exception was the behavior of Qmax in the Fazenda Cacheta (FCA) sub-basin, in which a rupture of the series occurred in 1995, as shown in Figure 2. A new application of the Mann-Kendall test, using the data before and after the rupture year, does not show a significant trend in these flows (p-values of 0.38 and 0.90 before and after the rupture, respectively). The difference, since it is not associated with changes in precipitation, may be due to changes in land use or the construction of hydraulic works, for example. Figure 2. Behavior of the maximum flow in the Fazenda Cacheta sub-basin, in which the means before (38.4 m³ s⁻¹) and after (77.3 m³ s⁻¹) 1995 are statistically different according to the Pettitt test.
Regarding the regression equations presented in Table 3, in ten of the 11 sub-basins Pa satisfactorily explained the variation of Qmed (p-value < 0.01). The sub-basin of the Iúna fluviometric station was the only one in which the correlation between Qmed and Pa was not significant at the 1% F-test level. In general, changes in Qmed over a large part of the BHRI corresponded directly to the behavior of the annual rainfall over the analyzed period. The determination coefficients of these sub-basins ranged from 0.36 (FCA) to 0.83 (Usina Paineiras - UPA), all showing a positive relationship between Qmed and Pa, such that higher average flows were explained by higher annual rainfall. This correlation is expected because precipitation is the primary source of water for the maintenance of hydrological processes in river basins, especially for streamflow generation. Since the average flow integrates the hydrological behavior over the whole year, a high correlation with the total annual rainfall was indeed expected.
In the same way as presented here, other studies found significant and positive correlations of Qmed and Pa, as in Paraná River basins (KREPPER et al., 2008) and in the Cerrado Biome basins: Rio Preto and Ribeirão Entre Ribeiros (LATUF, 2007) and Ribeirão João Leite (SANTOS et al., 2010).Similarly, although the same correlation analysis of the present study was not done, some studies suggest a positive relationship between these variables, such as: for the Carmo River basin (BARBOSA et al., 2005) in the vicinity of the Itapemirim river basin; for the sub-basin of the Itacaiúnas River, in the Amazon (ARAUJO; OLIVEIRA, 2013); and for the Alto Amazonas basins (ESPINOZA VILLAR et al., 2009).Finally, a compilation of studies on the impact of climate change on water resources (MARENGO, 2008) shows the clear influence of increased precipitation on the increase in average flows in South America, especially in the southeast portion, where the basin under study is located.
In relation to the results of Table 3, among the 11 sub-basins analyzed, Q7 showed a significant correlation (at the 1% level) with the precipitation variables as follows: eight sub-basins with Pa, eight with Psc, seven with Psummer and none with Pmin. The minimum flows in rivers, which occur during drought periods, reflect the aquifer recharge conditions in previous periods. Thus, the rainfall regime and accumulated precipitation, together with soil water infiltration conditions, transpiration of the vegetation cover (SANTOS et al., 2010) and local hydrogeology (COELHO et al., 2015), directly influence these flows. In the BHRI, the lack of association of Q7 with Pmin and the better correlations with Psummer, Psc and Pa are consistent with aquifer recharge being driven by the rainfall that occurs throughout the year, with greater influence of the rainfall concentrated in the rainy season.
As the rainfall depth accumulated over the year (Pa) has a higher magnitude than the other variables, it was the precipitation characteristic that contributed most to the variation of Q7, being strongly related to this flow (p-value < 0.01) and with the highest R², in the eight sub-basins where a correlation between rainfall and Q7 was detected. Rainfall of the rainy periods (Psc and Psummer), of lower magnitude than Pa, also correlated with Q7 in most sub-basins, although with lower R² (data not shown in Table 3). Cumulative rainfall over longer periods (Pa or its sum over several years) appears to be so important for the magnitude of Q7 that Mello et al. (2008) proposed its use for estimating the impact of climate change on water availability in the Paracatu River basin. Similar results, with a positive association of Q7 with Pa, were found in the previously mentioned studies of the Rio Preto and Ribeirão Entre Ribeiros basins (LATUF, 2007) and the Ribeirão João Leite basin (SANTOS et al., 2010). For the Rio do Carmo basin (belonging to the Rio Doce basin), a correlation of Q7 with Pa and with Psc was also found (BARBOSA et al., 2005).
None of the rainfall variables analyzed showed a significant correlation with Q7 in the FCA, IUN and Usina Fortaleza (UFO) sub-basins, with p-values always higher than 0.09. These sub-basins have the lowest average Q7 values among those studied: 1.9 m³ s⁻¹ in FCA, 3.4 m³ s⁻¹ in IUN and 1.2 m³ s⁻¹ in UFO. The lower water availability of these areas may reflect several factors, such as geology, slope, soils or dams, which may hinder water storage in the subsoil and therefore prevent increases in Q7 during droughts. However, further studies are needed on this question, as it is not yet possible to identify the factors that caused this behavior of Q7 in FCA, IUN and UFO. Studies characterizing the biotic and abiotic factors of these basins are therefore encouraged, so that the most influential environmental parameters of the river basins can be identified and, among them, those amenable to integrated management aimed at soil and water conservation, which in turn will favor water availability during drought periods.
Although the accumulated annual rainfall, or its distribution over the year, explained most of the variation in Q7 in the other sub-basins, this does not mean that such a relationship does not exist in the FCA, IUN and UFO sub-basins. Since the BHRI rainfall data consist of interpolated average values, some caution is required before stating that the analyzed rainfall did not influence the flows of these sub-basins.
The data presented in Table 3 show that the maximum monthly rainfall (Pmax) correlated significantly with Qmax in eight of the 11 sub-basins. Only in the IUN, Ibitirama (IBI) and São Miguel (USM) sub-basins did Pmax not correlate significantly with Qmax. In a similar result, Latuf (2007) also verified that Pmax was the most explanatory variable for the changes in the regime of maximum flows at six of the eight fluviometric stations of the Rio Preto and Ribeirão Entre Ribeiros basins.
The significant correlation found was expected, since the largest rainfall totals occurring over short time intervals are responsible for generating the greatest volumes of surface runoff, which in turn rapidly raise the flows of watercourses. As an example, recent studies in Brazil have found positive and statistically significant correlations between flood events (when Qmax occurs) and daily and cumulative five-day maximum rainfall (ÁVILA et al., 2016).
In the Ribeirão João Leite basin, in the Cerrado Biome, Pa, together with the percentage of natural vegetation, showed a better correlation with Qmax (SANTOS et al., 2010). Similarly, although the same correlation analysis of the present study was not performed, a correlation of Qmax with Pa was reported for the Rio do Carmo basin (BARBOSA et al., 2005).
The results point to the possibility of using these rainfall characteristics in flow regionalization for the BHRI. Although studies in this direction already exist (GASPARINI, 2014; REIS et al., 2013), the present study extended the analysis by correlating precipitation characteristics with each of the flows (Qmed, Q7 and Qmax). An alternative to improve the performance of the regionalization equations would be the incorporation of Pa, which showed a significant correlation in all the stations studied.
Finally, the relationships between flows and rainfall can be complex due to flow attenuation effects, which are more pronounced in larger basins than in smaller ones. The variability of local meteorological conditions in the BHRI, influenced by the orographic effect of the Serra do Caparaó (MELO JÚNIOR et al., 2006), by climatic differences (ALVARES et al., 2013) and by the different air masses involved in rainfall formation (MINUZZI et al., 2007), also contributes to the variability of flows in the sub-basins and, consequently, to the greater complexity of rainfall-runoff studies. It would be worthwhile to deepen this study by correlating the flows not only with precipitation but also with other climatic variables, such as evapotranspiration, as well as with land use and ecophysiographic and morphometric factors. In this way, decision-making for the conservation and availability of water resources, with the regularity and sufficiency needed to meet the required demands, can be supported more adequately.
CONCLUSIONS
Based on the results presented, it is concluded that, for the analyzed period in the BHRI:
• The precipitation series were stationary, with no significant trends of change.
• Q7 was stationary, with no significant trends of change.
• Qmed was non-stationary only for the Fazenda Cacheta sub-basin, with an increasing trend of approximately 2.5% per year.
• Qmax was non-stationary only for the Castelo and Iúna (IUN) sub-basins, with increases of 2.1% and 1.6% per year, respectively.
• The total precipitation exerted a strong influence on the flows of the studied sub-basins.
• Annual rainfall (Pa) significantly explained the variations of Qmed and of Q7 in approximately 91% and 73% of the sub-basins, respectively.
• Maximum monthly rainfall (Pmax) significantly explained the variations of Qmax in approximately 73% of the sub-basins.
Active Uptake of Oxycodone at Both the Blood-Cerebrospinal Fluid Barrier and The Blood-Brain Barrier without Sex Differences: A Rat Microdialysis Study
Background Oxycodone active uptake across the blood-brain barrier (BBB) is associated with the putative proton-coupled organic cation (H+/OC) antiporter system. Yet, the activity of this system at the blood-cerebrospinal fluid barrier (BCSFB) is not fully understood. Additionally, sex differences in systemic pharmacokinetics and pharmacodynamics of oxycodone has been reported, but whether the previous observations involve sex differences in the function of the H+/OC antiporter system remain unknown. The objective of this study was, therefore, to investigate the extent of oxycodone transport across the BBB and the BCSFB in female and male Sprague-Dawley rats using microdialysis. Methods Microdialysis probes were implanted in the blood and two of the following brain locations: striatum and lateral ventricle or cisterna magna. Oxycodone was administered as an intravenous infusion, and dialysate, blood and brain were collected. Unbound partition coefficients (Kp,uu) were calculated to understand the extent of oxycodone transport across the blood-brain barriers. Non-compartmental analysis was conducted using Phoenix 64 WinNonlin. GraphPad Prism version 9.0.0 was used to perform t-tests, one-way and two-way analysis of variance followed by Tukey’s or Šídák’s multiple comparison tests. Differences were considered significant at p < 0.05. Results The extent of transport at the BBB measured in striatum was 4.44 ± 1.02 (Kp,uu,STR), in the lateral ventricle 3.41 ± 0.74 (Kp,uu,LV) and in cisterna magna 2.68 ± 1.01 (Kp,uu,CM). These Kp,uu values indicate that the extent of oxycodone transport is significantly lower at the BCSFB compared with that at the BBB, but still confirm the presence of active uptake at both blood-brain interfaces. No significant sex differences were observed in neither the extent of oxycodone delivery to the brain, nor in the systemic pharmacokinetics of oxycodone. Conclusions The findings clearly show that active uptake is present at both the BCSFB and the BBB. Despite some underestimation of the extent of oxycodone delivery to the brain, CSF may be an acceptable surrogate of brain ISF for oxycodone, and potentially also other drugs actively transported into the brain via the H+/OC antiporter system. Supplementary Information The online version contains supplementary material available at 10.1007/s11095-023-03583-0.
Introduction
Several initiatives have been directed towards utilization of active uptake transporters for enhancement of drug CNS exposure [1,2].In this regard, the putative proton-coupled organic cation (H + /OC) antiporter system (henceforth also referred to as the antiporter system) is a promising target for brain drug delivery due to its association with the phenomenon of active uptake at the blood-brain barrier (BBB) [3][4][5].The active uptake, characterized by higher unbound brain interstitial fluid (ISF) concentrations in relation to unbound plasma concentrations, has been documented in rats and was associated with the antiporter system for several marketed drugs such as oxycodone [6,7], pyrilamine [7,8], diphenhydramine [9][10][11], varenicline [12], tramadol [13], memantine [14][15][16], and bupropion [17].In spite of almost 20 years of investigation of the phenomenon of active uptake via the H + /OC antiporter system, the exact mechanisms of drug transport, also across the blood-cerebrospinal fluid barrier (BCSFB), and its kinetics is still poorly understood.The functional presence of this system in humans remains to be elucidated and is primarily restricted by the inaccessibility to human brain tissue during pharmacokinetic (PK) studies.This implies the use of surrogate matrices of brain ISF such as cerebrospinal fluid (CSF) and blood for assessment of brain exposure in humans.Yet, the dynamic relationship between the brain ISF and CSF concentrations for substrates of the H + /OC antiporter system has not been investigated systematically.Hence, understanding of CNS drug disposition of substrates of the H + /OC antiporter system and, in particular, CSF exposure, also in relation to its sampling site, is considered one of the critical aspects needed for successful translation from preclinical species to patients and vice versa.
CSF drug exposure is governed by complex interrelated PK processes taking place on blood-brain ISF and bloodchoroidal CSF interfaces as well as in the brain parenchyma.A drug from the systemic circulation can reach CSF either directly via passage across the choroid plexus, i.e., the BCSFB, or indirectly by passage across the BBB followed by diffusion and/or convection transport from the brain ISF to CSF in ventricles and cisterna magna (CM) entering the CSF circulation [18].The BBB has the largest surface area for blood-brain exchange and, consequently, has the biggest contribution to brain drug delivery [19][20][21].In contrary, the BCSFB has a smaller surface area and its function regarding drug delivery has not been studied as extensively.Given the fact that there are structural and functional differences between the BBB and the BCSFB, also governed by dissimilarities in the molecular composition including transporter expression, the function of the antiporter system at the BCSFB needs investigation [19,[22][23][24].To understand the antiporter system's contribution to drug CNS disposition and, in particular, to CSF exposure, oxycodone, a known substrate of the antiporter system, was used as a model drug in this study [6,7].
Microdialysis provides unique possibilities to collect data on drug transport longitudinally and from multiple sites.Moreover, microdialysis has been recognized as the most powerful tool for the collection of unbound (free) drug in vivo [25][26][27].Simultaneous measurements of unbound concentrations in the brain ISF and blood allows assessment of the extent of drug delivery to the brain across BBB by means of the unbound partition coefficient (K p,uu,brain ) [28,29].The blood-brain ISF and blood-choroidal CSF interfaces are in the present paper investigated simultaneously, allowing for a side-by-side comparison of the extent of BBB and BCSFB transport over time, without affecting the CSF hydrodynamics and allowing for a systematic investigation of the relationship between drug concentrations in brain ISF and CSF in different locations, i.e., lateral ventricles (LV) and CM.
Another less investigated factor is the function of the antiporter system in females, as historically, the majority of animal and clinical studies reported are performed in males [30].Yet, potential sex differences in the function of the antiporter system may impact not only drug PK but also indirectly its pharmacodynamic effect.Studies on oxycodone PK in both sexes are few, and the results are inconsistent [31].In an early clinical study, lower oxycodone metabolism in women was documented and also attributed to the higher analgesic effect of oxycodone in women compared with that in men [32].Additionally, oxycodone PK and analgesic effect have been reported to differ between female and male rats [33], and prevailing among female rats in the estrus cycle [34].Higher clearance in male rats compared to that in females, resulting in a higher systemic exposure in females, was also documented [33].It is unclear whether there are true discrepancies between females and males in terms of oxycodone CNS exposure and effect.However, differences in the activity of the antiporter system would explain the observed differences in oxycodone CNS exposure and effect.Hence, in the present study, both female and male rats were included to investigate potential sex differences in oxycodone uptake across the brain barriers.
In summary, this study was conducted with the overarching aim to increase the knowledge on the antiporter system's contribution to drug CNS disposition with focus on CSF exposure, using the model drug, oxycodone.More specifically, the aims were to: 1) Compare the function of the H + /OC antiporter system by using the extent of the BBB and the BCSFB transport as a proxy, 2) Study potential sex differences in oxycodone systemic and CNS PK, and 3) Investigate the utility of CSF as a surrogate for brain ISF for assessment of unbound oxycodone concentrations.Hereby, evidence of active uptake of oxycodone, not only across the BBB, but also across the BCSFB is provided.K p,uu values in brain ISF and CSF were sex-independent and on average 46% higher at the BBB than at the BCSFB.Cisternal CSF concentrations underestimated the brain ISF concentrations by approximately 30% and were, thereby, within a drug industry-accepted three-fold range for the prediction of the observed data and indication of equivalence between methods.
Animals
All experiments were performed on drug-naïve female (n = 8) and male (n = 11) Sprague-Dawley rats (Taconic, Lille Skensved, Denmark) weighing 270-330 g.The rats were group housed by sex at 20-21°C and 45-65% humidity under a 12-hour light-dark cycle with free access to water and food, and acclimatized for one week before experiment.The experiments were in accordance with guidelines from the Swedish National Board for Laboratory Animals and was approved by the Animal Ethics Committee of Uppsala, Sweden (Ethical Approval Dnr.5.8.18-12230/2019).
The minimal sample size per group required for a two-tailed t-test study was estimated to be four to six, given the probability level (α = 0.05), the anticipated effect size (Cohen's d = 1.2-1.5),and the desired statistical power level (0.8) [35].Rats were not randomized, and the study was not blinded.
Vascular Catheterization and Microdialysis Probe Implantation
Anesthesia was induced by inhalation of 5% isoflurane and maintained by inhalation of 2.5% isoflurane.Additionally, during surgery the inhalation gas was supplemented with 3 L/min oxygen and 0.5 L/min nitrous oxide for sufficient oxygen supply and anesthesia, respectively.The rat was placed on a heating pad and the body temperature was continuously measured via a rectal thermometer (CMA 450 temperature controller; CMA Microdialysis AB, Kista, Sweden).All instruments used during surgery were sterilized at 230-250°C for 15 seconds using a glass bead dry sterilizer (Simon Keller AG, Switzerland).For drug administration, a catheter of polyethylene (PE)-50 tubing fused with a PE-10 tubing connected to silastic tubing was inserted into the left femoral vein (approximately 90% of the rats), or the left jugular vein (approximately 10% of the rats).For blood sampling, a PE-50 tubing fused with a PE-10 tubing was inserted into the left femoral artery.To avoid clotting, the catheters were filled with 100 IU/mL heparinized saline solution.To monitor unbound drug concentrations in plasma, a 10 mm CMA 20 Elite probe (CMA Microdialysis AB, Kista, Sweden) was inserted into the right jugular vein through a guide cannula, and fixed to the pectoral muscles with two sutures.To monitor unbound concentrations of oxycodone in the brain, probes were placed in two of the following locations: striatum (STR), and LV or CM, using a stereotaxic instrument for rats (David Kopf Instruments, Tujunga, CA).A midsagittal incision was made to expose the skull and lidocaine was administered onto the periosteum as local anesthesia.In accordance with The Rat Brain in Stereotaxic Coordinates [36], a guide cannula were implanted into STR with the coordinates +0.8 mm anteroposterior and − 2.7 mm lateral to bregma, and − 3.8 mm dorsoventral to the surface of the brain.The LV coordinates were − 0.9 mm anteroposterior, +1.6 mm lateral and − 2.9 mm dorsoventral relative to bregma, and the CM coordinates were − 1.93 mm anteroposterior, +3.15 mm lateral and − 8.1 mm dorsoventral, at an angle of 25° anterior from the dorsoventral axis, 18° lateral from the anteroposterior axis, relative to lambda.After insertion, the guide cannulas were anchored to the skull with anchor screws (flat tip, 2 mm long, 1 mm diameter; AgnTho's, Lidingö, Sweden) and dental cement.Thereafter, probes were inserted into the guide cannulas.The probes used were: a 3 mm CMA 12 Elite probe for STR and a 1 mm CMA 12 Elite probe for LV and CM (CMA Microdialysis AB, Kista, Sweden).The STR probe was always placed in the right STR.All ends of the cannulas and catheters were put into a plastic cup placed on the posterior surface of the neck out of reach from the rat.The venous and arterial catheters were passed subcutaneously to the neck, and placed into the plastic cup.After surgery, the rat was subcutaneously administered 5 mL/kg isotonic saline solution supplemented with buprenorphine at a dose of 0.01 mg/kg to maintain a healthy hydration and provide postoperative pain relief.Thereafter, the rat was placed in a CMA/120 system for freely moving animals (CMA Microdialysis AB, Kista, Sweden) with free access to food and water, and was allowed to recover for one day prior to experiment.The surgeries and experiments were started between 7 and 10 a.m. for all rats.
Microdialysis and Dosing Regimens
A brief description of the study design is presented in Fig. 1.The microdialysis experiment started approximately 24 h after surgery.The stabilization period lasted for 60 min and included perfusion of the probes with Ringer solution (145 mM NaCl, 0.6 mM KCl, 1.2 mM CaCl 2 , 1 mM MgCl 2 , 0.2 mM ascorbic acid, KH 2 PO 4 and K 2 HPO 4 ; pH 7.4) filtered using a 0.45 μm filter (Acrodisc® syringe filter 0.45 μm GHP membrane; Pall Corporation, Port Washington, NY, USA).The recovery across the probe membrane was determined in vivo with retrodialysis by calibrator throughout the experiment [37].The Ringer solution contained 44 ng/mL of the calibrator oxycodone-D3 and was perfused at a flow rate of 1 μL/min using a CMA 400 Syringe Pump (CMA Microdialysis AB, Kista, Sweden).
Dosing Regimen I
Dosing regimen I was performed to investigate the PK parameters in blood and CNS compartments allowing for assessment of the extent of oxycodone transport across the BBB and BSCFB in rats (n female = 8, n male = 11).After the stabilization period, the experiment was divided into two periods: a 60 min constant-rate infusion followed by a 120 min or 180 min wash-out period (Fig. 1).The intravenous (i.v.) infusion rate of oxycodone was 0.3 mg/kg/h administered using a Harvard 22 pump (Harvard Apparatus Inc., Holliston, MA).
Dosing Regimen II
In addition to Dosing regimen I, five rats (n female = 3, n male = 2) were also given Dosing regimen II.Dosing regimen II was performed to determine 1) the in vivo-assessed intra-brain distribution by estimation of the unbound volume of distribution in the brain (V u,brain ), 2) the fraction of unbound drug in blood (f u,blood ), and 3) confirm the K p,uu values to those obtained by Dosing regimen I.After the 180 min post-infusion period in Dosing regimen I, Dosing regimen II was initiated as a combination of an i.v.loading dose of 0.24 mg/kg administered over 2 min, and a maintenance dose of 0.54 mg/kg/h, given as an i.v.infusion over 60 min (Fig. 1).The targeted steady-state concentration was determined based on the maximum unbound concentration in blood obtained by Dosing regimen I.
Sampling of Dialysate, Blood and Brain
Throughout the experiment, dialysate samples were collected in pre-weighed polypropylene microvials with polyurethane caps (AgnTho's, Lidingö, Sweden), in 10-or 20-min intervals.At the end of each dialysate collection interval, the dialysate samples were immediately weighed, capped and stored at 6°C pending bioanalysis.
From each rat, blood samples were collected in heparinized (5 μL 5000 IU/mL heparin) polypropylene Eppendorf tubes (Eppendorf, Hamburg, Germany), before the start of infusion and at 5, 15, 25, 45, 55, 75, 85, 115, and 170 or 175 min, and for Dosing regimen II also at 275 min after the start of infusion.The maximal volume for each blood sample was 180 μL and a maximum of 2 mL blood was sampled from each rat before termination of the experiment.Up to 12 mL of terminal blood was sampled from the heart at 180 or 300 min using vacuum blood collection tubes (BD Vacutainer®, EDTA; Mediq, Utrecht, Netherlands) before isolation of the brain.After sample collection, the blood samples were immediately centrifuged Fig. 1 Schematic overview of the experimental design and Dosing regimens.Upon arrival, rats were acclimatized [1] for one week.On Day 1, approximately 24 hours prior the start of experiment, surgery [2] was performed, including catheterization of blood vessels for drug administration and blood sampling, followed by implantation of the CNS and blood probes.The experiment [3] was initiated the next day by a 60 min stabilization period, which implies perfusion of the probes.The rats were administered oxycodone using Dosing regimen I followed by a washout period (n = 14), or Dosing regimen I followed by a washout period and a second administration of oxycodone using Dosing regimen II (n = 5).Throughout the experiment, the probes were perfused with Ringer solution containing the calibrator oxycodone-D3 in 1 μL/min.The dialysate (Ringer solution exiting the probe) was sampled in 10-or 20-min intervals, and blood was sampled at the specified time points.The brain was isolated terminally.
(MicroStar12 centrifuge; VWR International AB, Stockholm, Sweden; or IEC Centra CL2, Labora; IEC international equipment company, Needham Heights, MA, USA) at 10,000 rpm for 5 min, and the plasma was transferred to Eppendorf tubes and stored at −20°C pending bioanalysis.
All brains were isolated and examined to ensure that there was no extensive bleeding around the probes, and to visually confirm the correct positions of the probes.The right STR (location of probe placement) and left STR, as well as the remaining whole brain (without STR; referred to as WB), were collected for bioanalysis of oxycodone to investigate the impact of probe placement on oxycodone total concentrations.
In Vivo Recovery Calculation
To monitor the probe recovery across the probe membranes during the experiment, retrodialysis by calibrator was performed throughout the experiment [37]. The recovery of oxycodone was assumed to be the same as the recovery of the calibrator oxycodone-D3, since the two compounds are almost identical. The probe recovery of the calibrator was calculated as:

$Recovery = (C_{in} - C_{out}) / C_{in}$ (1)

where C in is the average calibrator concentration in the perfusion solution entering the probe, sampled from the Ringer solution-containing syringes before and after the experiment, and C out is the average concentration of the calibrator in the dialysate samples leaving the CNS and blood probes, collected from each probe throughout the experiment. Average recoveries were calculated for the blood probes (10 mm), the STR probes (3 mm) and the LV and CM probes (1 mm), respectively.
Measurement of Oxycodone Partition into Blood Cells
The partition of oxycodone between blood and plasma (C b /C p ) has previously been studied, and the ratio of oxycodone concentration in blood to that in plasma was 1.3 ± 0.3 [6]. To confirm the partition between blood and plasma in this study, the in vivo C b /C p was measured in a separate group of rats (n = 3) after administration of an i.v. loading dose of 0.24 mg/kg oxycodone over 2 min, followed by an i.v. maintenance dose of 0.54 mg/kg/h oxycodone, for 240 min. Blood was sampled from the heart by Vacutainer® tubes before decapitation. Blood was centrifuged at 10,000 rpm for 5 min. Thereafter, 25 μL of plasma and 25 μL of blood cells (BC) were individually transferred to new Eppendorf tubes. Samples were stored at 6°C until bioanalysis the following day. To estimate the hematocrit, blood was collected in heparinized microhematocrit capillary tubes (VWR International, Radnor, USA). The capillary tubes were centrifuged for 5 min (Adams Readacrit® micro-hematocrit centrifuge, model CT-3400). The results were read immediately after centrifugation, by measurement of the length (cm) of the BC column in the capillary and of the full length of the blood column in the capillary. The hematocrit (Ht) was calculated as follows:

$Ht = L_{BC} / L_{plasma+BC}$ (2)

where L BC is the capillary length of the BC column and L plasma+BC is the capillary length of plasma and BC together. The partition to blood cells (C b /C p ) was calculated as follows [38]:

$C_b / C_p = (1 - Ht) + Ht × (C_{BC} / C_p)$ (3)

where C BC and C p are the oxycodone concentrations in BC and plasma, respectively, and Ht is the rat hematocrit.
Measurement of In Vitro Plasma Protein Binding and Brain Tissue Binding of Oxycodone
The equilibrium dialysis technique was performed to assess the fraction of unbound oxycodone in plasma (f u,plasma ) and in whole brain and striatal homogenates (f u,brain ) in vitro, as previously described [39-41]. Briefly, undiluted plasma or brain homogenate (1:9, w:v, diluted in phosphate-buffered saline, PBS, pH 7.4) from drug-naïve rats was spiked with oxycodone at concentrations of 30 ng/mL and 300 ng/mL, respectively. To perform the equilibrium dialysis, a Teflon 96-well plate with a semipermeable membrane was used (molecular weight cut-off: 12-14 kDa; Model HTD96b, HTDialysis, Gales Ferry, CT, USA). The spiked plasma or brain homogenate was dialyzed against equal volumes of PBS for 6 hours at 37°C with orbital shaking at 200 rpm in a MaxQ4450 incubator (Thermo Fisher Scientific, NinoLab, Sweden). Terminally, plasma or brain homogenate, and PBS, were sampled and stored at −20°C pending bioanalysis. Matrix matching was applied by adding PBS to the plasma and brain tissue homogenate samples, and vice versa adding plasma or brain tissue homogenate to the PBS samples. Samples were stored at −20°C pending bioanalysis. The ratio of drug concentration in buffer (C u,buffer ) to that in plasma (C p ) was calculated to estimate f u,plasma :

$f_{u,plasma} = C_{u,buffer} / C_p$ (4)

The ratio of drug concentration in buffer to that in brain was used to estimate f u,brain :

$f_{u,brain} = (1/D) / ((1/f_{u,D} - 1) + 1/D)$ (5)

where D is the dilution factor and f u,D is the fraction of unbound drug in the diluted brain tissue, calculated as:

$f_{u,D} = C_{u,buffer} / C_{tot,tissue}$ (6)

where C u,buffer and C tot,tissue are the drug concentrations in the buffer and the diluted brain tissue homogenate, respectively. f u,brain , measured using the in vitro equilibrium dialysis technique, describes non-specific and specific oxycodone binding to brain cellular components.
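As a numerical illustration of Eqs. 5-6, the sketch below applies the standard dilution correction for a 1:9 (w:v) homogenate (D = 10); the function names are hypothetical and only show the arithmetic implied above.

```python
# Sketch of the undiluted fraction-unbound calculation from a diluted brain homogenate.
def fu_diluted(c_u_buffer: float, c_tot_tissue: float) -> float:
    return c_u_buffer / c_tot_tissue                  # Eq. 6: fu in the diluted homogenate

def fu_brain(fu_d: float, dilution: float = 10.0) -> float:
    # Eq. 5: correct the measured fu,D back to undiluted tissue (1:9 w:v -> D = 10)
    return (1.0 / dilution) / ((1.0 / fu_d) - 1.0 + 1.0 / dilution)
```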
Measurement of In Vitro Intra-Brain Distribution of Oxycodone
The brain slice method was performed to assess the in vitro relationship between total drug amount in the brainand the unbound brain ISF concentration (V u,brain , mL × g brain −1 ) in whole coronal brain slices and in the striatal area, as previously described [42][43][44].Briefly, six 300 μm slices were obtained from the brain of drug naïve rats (n = 3 per group) using a Leica VT1200 microtome slicer (Leica Microsystems AB, Sweden).The slices were then incubated in 15 mL of artificial extracellular fluid (aECF, pH 7.6 at room temperature) containing 30 ng/mL oxycodone.To match unbound ISF concentration measured in vivo in the present microdialysis study, an additional group was included with oxycodone aECF concentration of 300 ng/mL (n = 3 rats).The beaker was covered with a custom fabricated lid fitted with a Teflon fluorinated ethylene-propylene film (Teflon FEP film 50 Å, 12.7 μm).The incubation was performed under 75-80 mL/min oxygen at 45 rpm and 37°C for 5 hours (MaxQ4450 Thermo Fisher Scientific, NinoLab, Sweden).After incubation, aECF was sampled and the slices were cut into two halves, with one half undergoing micro-dissection to obtain STR.Consequently, the intact half of the brain slice (called collectively whole brain, WB) as well as the micro-dissected striatal area from the contralateral half were dried on a filter paper, weighed and homogenized individually in 9 volumes of aECF (w/v) with an ultrasonic processor (VCX-130; Sonics, Chemical Instruments AB, Sweden).The matrices were normalized for bioanalysis by adding blank aECF to the brain slices, and blank brain homogenate (1:4, w:v) to aECF containing samples.Samples were stored at −20°C pending bioanalysis.The in vitro V u,brain , was calculated as follows: (5) where A brain is the amount of the drug in the brain slice, C u,buffer is the measured aECF concentration of the drug and V i is the volume of the remaining aECF layer around the brain slices, here estimated to 0.133 mL × g brain −1 using [ 14 C]sucrose.
Bioanalysis
Quantitative analysis of oxycodone, oxycodone-D3 (only in dialysate samples), and oxycodone-D6 as internal standard (IS), in dialysate, plasma, BC and brain samples, respective blanks, as well as standards and QCs prepared in the respective matrices, was achieved by ultraperformance liquid chromatography-tandem mass spectrometry (UPLC-MS/ MS).The calibration curves included standards of 0.5, 1, 5, 10, 50, 100 and 150 ng/mL, and QCs of 2, 25 and 75 ng/mL for all matrices.
Sample Preparation
Dialysate samples (approximately 9 μL, estimated by the weight), and 8 μL of each of the following: standards (0.5-150 ng/mL), QCs (2, 25 and 75 ng/mL) and blank Ringer solution, were diluted with 25 μL of MilliQ water spiked with an IS concentration of 2-10 ng/mL.Before placing the samples on a 96-well plate, the samples were vortexed (Vortex-Genie® 2; Scientific Industries Inc., Bohemia, NY, USA) for 2 min and spun-down at 10,000 rpm for 15-45 s (ScanSpeed mini centrifuge; LaboGene, Lillerød, Denmark).Plasma samples from both the microdialysis and the equilibrium dialysis studies were prepared as previously described [45].Briefly, samples, standards (0.5-150 ng/ mL), QCs (2, 25 and 75 ng/mL) and blank plasma were thawed and vortexed for 2 min.A volume of 25 μL of sample was precipitated in 50 μL acetonitrile spiked with an IS concentration of 2-10 ng/mL and vortexed for 2 min, and then centrifuged at 10,000 rpm for 5 min (ScanSpeed mini centrifuge; LaboGene, Lillerød, Denmark).A supernatant volume of 40 μL was transferred to a new microvial and diluted with 80 μL of MilliQ water.The vials were then vortexed for 2 min and put on a 96-well plate.BC samples were included in the plasma sample analysis.
The brain samples were thawed, weighed and homogenized in MilliQ water (1:4, w:v) using an ultrasonicator (Sonics vibra cell, Chemical instruments AB; Sonic materials Inc., Newtown, USA).Brain samples from the microdialysis and brain slice and the equilibrium dialysis studies were treated using the same procedure as described above for plasma samples.The sample volume injected onto the column was 5 μL.
Quantification of Oxycodone and Oxycodone-D3
The system for chemical analysis consisted of The calibration curves were constructed using linear regression and a weighing function of 1/X 2 , which resulted in an even residual distribution and similar importance for all the concentrations in the calibration range.The coefficient of determination (R 2 ) was ≥0.99.The lower limit of quantification of oxycodone and oxycodone-D3 was set to be equal to the lowest level of the calibration curve, i.e., 0.5 ng/mL, in all matrices.Respective blank matrix samples with and without IS were included in the analytical runs to confirm the absence of contamination and carry-over.
Data Analysis for PK Parameters
To estimate the unbound oxycodone concentration in blood, STR, LV and CM from the measured microdialysis concentrations, the measured concentration was divided by the recovery of the probe, according to the following equation:

$C_u = C_{dialysate} / recovery$ (8)

where C u is the unbound drug concentration at the probe location in vivo, C dialysate is the concentration in the dialysate sample, and recovery is the calibrator recovery across the specific probe membrane (Eq. 1). The average probe recoveries for the blood probe (10 mm), STR probe (3 mm), and LV and CM probe (1 mm) were 60.6 ± 16.0% (n = 22), 10.4 ± 4.4% (n = 12), and 3.38 ± 1.21% (n = 3), respectively. The recoveries were relatively stable over time and across dosing regimens, although with some time-independent fluctuations (Table S1).
To estimate total oxycodone concentrations in blood (C blood ) from the measured total concentrations in plasma, the total concentration in plasma (C p ) was multiplied by the C b /C p value, according to the following equation:

$C_{blood} = C_p × (C_b / C_p)$ (9)

PK parameters were estimated by non-compartmental analysis using Phoenix 64 WinNonlin (Certara, New Jersey, USA) [46], and denominated as in the software, i.e., the AUC from time 0-180 min (AUC last ) and that extrapolated to infinity (AUC inf_obs ), clearance (CL _obs ), volumes of distribution (V z_obs and V ss_obs ) and terminal half-life (t 1/2 ) of unbound oxycodone in blood, STR, LV and CM. Parameters estimated based on the last observed concentration were indicated with _obs and parameters estimated for steady-state were indicated with ss. AUC inf_obs and AUC last were estimated using the linear log trapezoidal method, from the time of dosing extrapolated to infinity (inf) based on the last observed concentration (175-235 min) or from time 0 to the last sampled timepoint (last), and the first-order rate constant associated with the terminal part of the log-linear curve. The rate constant (Lambda_z) was estimated by linear regression of log concentration between 75 and 240 min. Mean unbound concentrations at steady-state in blood, STR, LV and CM were calculated from samples collected 35 to 55 min after the initiation of steady-state (Dosing regimen II). Mean total blood concentrations at steady-state were calculated from at least two samples collected a minimum of 35 min after initiation of steady-state. Steady-state clearance (CL ss ) was estimated based on the rate of infusion (R 0 ) and the mean steady-state concentration (C ss ):

$CL_{ss} = R_0 / C_{ss}$ (10)

The fraction of unbound drug in blood was calculated as follows:

$f_{u,blood} = C_{u,blood,ss} / C_{tot,blood,ss}$ (11)

where C u,blood,ss is the mean unbound concentration in blood at steady-state and C tot,blood,ss is the mean total concentration in blood at steady-state. The unbound partition coefficients, describing the extent of drug delivery to striatum (K p,uu,STR ), lateral ventricle (K p,uu,LV ) and cisterna magna (K p,uu,CM ), were estimated as follows:

$K_{p,uu} = AUC_{u,brain} / AUC_{u,blood}$ (12)

and

$K_{p,uu} = C_{u,brain,ss} / C_{u,blood,ss}$ (13)

where AUC u,brain is the AUC of the unbound drug concentrations in striatal ISF, or lateral ventricular or cisternal CSF, AUC u,blood is the AUC of the unbound drug concentrations in blood, C u,brain,ss is the mean unbound concentration in striatal brain ISF or lateral ventricular or cisternal CSF at steady-state, and C u,blood,ss is the mean unbound concentration in blood at steady-state, measured at 35, 45 and 55 min after the initiation of the loading dose.
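For illustration, the sketch below applies Eqs. 8, 12 and 13 to hypothetical dialysate data. It uses simple trapezoidal AUCs rather than the log-linear trapezoidal method of Phoenix WinNonlin, so it is an outline of the calculations, not a re-implementation of the software.

```python
# Sketch: recovery-corrected unbound concentrations and Kp,uu from AUCs or Css.
import numpy as np

def unbound_conc(c_dialysate: np.ndarray, recovery: float) -> np.ndarray:
    return c_dialysate / recovery                    # Eq. 8

def auc_trapezoid(t: np.ndarray, c: np.ndarray) -> float:
    return float(np.trapz(c, t))                     # simple linear trapezoids

def kp_uu_from_auc(t, c_brain_dialysate, rec_brain, c_blood_dialysate, rec_blood):
    auc_brain = auc_trapezoid(t, unbound_conc(c_brain_dialysate, rec_brain))
    auc_blood = auc_trapezoid(t, unbound_conc(c_blood_dialysate, rec_blood))
    return auc_brain / auc_blood                     # Eq. 12

def kp_uu_from_css(c_u_brain_ss: float, c_u_blood_ss: float) -> float:
    return c_u_brain_ss / c_u_blood_ss               # Eq. 13
```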
To compare the extent of transport between two CNS sites (site 1 and site 2), the relative extent was determined based on the ratio of the respective K p,uu values:

$Relative\ extent = K_{p,uu,site1} / K_{p,uu,site2}$ (14)

The partition coefficient between total brain concentration and total blood concentration, K p,brain , was estimated as follows:

$K_{p,brain} = C_{tot,brain,ss} / C_{tot,blood,ss}$ (15)

where C tot,brain,ss and C tot,blood,ss are the total concentration in brain (WB, right and left STR) and the total concentration in blood at steady-state, respectively.
The in vivo apparent unbound volume of distribution in brain (V u,brain ), describing intra-brain distribution, was calculated as follows:

$V_{u,brain} = A_{brain} / C_{u,brainISF}$ (16)

where A brain is the amount of drug in the brain tissue, corrected for dilutions during analysis and for residual blood in the brain (Eq. 17), and C u,brainISF is the unbound concentration in brain ISF. A brain was calculated according to Fridén et al. [47] (Eq. 17), where C brain , C p and C er are the total drug concentrations in brain, plasma and erythrocytes, respectively. V er and V water are the erythrocyte volume and the apparent plasma water space in brain, respectively. A V er value of 2.13 μL × g brain −1 and a V water value of 10.3 μL × g brain −1 were used [47]. V eff is the effective plasma space of the drug in the brain (Eq. 18), where f u,plasma is the fraction of unbound drug in plasma, V water is the apparent plasma water space and V protein is the apparent vascular space of plasma proteins. The f u,plasma value used was 0.89, and the V protein value used was 7.99 μL × g brain −1 , which was previously estimated using 14 C-dextran with a molecular weight of 70 kDa [47].
Conclusions regarding intra-brain distribution were drawn based on the V u,brain value and physiological volumes in brain tissue, where a V u,brain value around 0.2 mL × g brain −1 indicates restricted drug distribution in the brain ISF, and a V u,brain value above 1.0 mL × g brain −1 indicates brain tissue binding, active uptake into cells and/or distribution to subcellular organelles.The higher the V u,brain value, the more extensive brain tissue binding and/or intracellular distribution [48].V u,brain values measured in vivo were compared with in vitro intra-brain distribution parameters obtained in the brain slice (V u,brain ) and the brain homogenate (f u,brain ) assays.Inherent inverse correlation between the parameters, i.e., V u,brain ≈ 1/f u,brain was also applied for comparison [49].
Inclusion and Exclusion Criteria
In all stages of the study, rats were included if they were considered healthy based on Uppsala University assessment guidelines for pain and distress in experimental animals. Rats were placed for acclimatization when the body weight was within the specified range of 270-330 g. Thereafter, rats were included in the experiment if catheterization of blood vessels and placement of probes were successful during surgery (post-mortem verification of correct placement of the CNS probes was determined by visual examination of the brain). Bioanalytical runs were accepted if the precision was within 15% and the accuracy within 15%, except at the lower limit of quantification (LLOQ) (0.5 ng/mL), where an accuracy of ±20% was accepted. The accepted IS recovery of standards and quality control (QC) samples was 80 to 120%. The linearity of the calibration curve was considered acceptable when the coefficient of determination (R 2 ) was above 0.99. The AUC inf_obs was used for calculations of K p,uu , except in one case where AUC last was used, as the extrapolated area (from the last time point to infinity) was above 20% of the total estimated area [50].
Statistical Analysis
Statistical analysis was performed using GraphPad Prism version 9.0.0 for Windows (GraphPad Software, San Diego, California, USA, www.graphpad.com). Normal Gaussian distribution of the data was confirmed using the Shapiro-Wilk normality test. For comparisons of AUCs, CL, V, t 1/2 and V u,brain between female and male rats, and between in vitro f u,brain in STR and WB, a two-tailed unpaired t-test was used. For comparison of K p,uu in STR, LV and CM, as well as of t 1/2 in blood, STR, LV and CM, one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test was used. K p,uu comparisons between female and male rats, as well as between K p,uu obtained by AUC inf_obs and by C ss , were performed using two-way ANOVA followed by Šídák's multiple comparison test. Paired one-way ANOVA followed by Tukey's multiple comparison test was used to compare total brain concentrations in right STR (location of probe placement), left STR, and whole brain. As ANOVA requires complete data without missing values, data sets with missing values were analyzed by fitting a mixed effects model. The determination coefficients (R 2 ) of the ratios of unbound concentration in striatum and lateral ventricle to that in blood (C u,STR /C u,blood and C u,LV /C u,blood ) versus time, and of the ratios versus concentration, were obtained by computing a two-tailed correlation. Data are presented as mean ± SD, unless otherwise specified. Differences were considered significant at p < 0.05.
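A sketch of how the main group comparison could be reproduced outside GraphPad Prism is given below, using SciPy and statsmodels; the input arrays are hypothetical individual K p,uu values per CNS site and the output is printed rather than tabulated.

```python
# Sketch: normality check, one-way ANOVA across the three CNS sites, and
# Tukey's post-hoc multiple comparison test.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_sites(kpuu_str, kpuu_lv, kpuu_cm, alpha=0.05):
    for name, g in [("STR", kpuu_str), ("LV", kpuu_lv), ("CM", kpuu_cm)]:
        w, p = stats.shapiro(g)                       # Shapiro-Wilk normality test
        print(f"{name}: Shapiro-Wilk p = {p:.3f}")
    f, p = stats.f_oneway(kpuu_str, kpuu_lv, kpuu_cm) # one-way ANOVA
    print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
    values = np.concatenate([kpuu_str, kpuu_lv, kpuu_cm])
    groups = ["STR"] * len(kpuu_str) + ["LV"] * len(kpuu_lv) + ["CM"] * len(kpuu_cm)
    print(pairwise_tukeyhsd(values, groups, alpha=alpha))  # Tukey's comparisons
```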
Active Uptake of Oxycodone at the BBB and the BCSFB
Active uptake of oxycodone was present also at the BCSFB, indicating that the same transporters are present both at the BBB and at the BCSFB.This was shown by higher unbound concentrations in both LV and CM than those in blood, with K p,uu,LV of 3.40 and K p,uu,CM of 2.68, while K p,uu,STR was 4.44 (Eq.12, Table I and Fig. 2A).The CSF exposure was independent of site of sampling as there were no difference between LV and CM (p = 0.23, Table S2).
The higher exposure in striatal ISF compared to CSF gives a relative extent of oxycodone delivery to STR compared with that to LV of 1.46 ± 0.3, showing that the extent across the BBB was 46% higher compared to that across the BCSFB at the LV site (p = 0.036, n = 9, Fig. 2A, Table S2).The relative extent of oxycodone delivery to STR to that in CM was 1.73 ± 1.01 suggesting 73% higher exposure in STR compared to that in CM (p = 0.017, n = 8, Fig. 2A, Table S2).
No sex differences were observed in the uptake across the barriers in female and male rats (Fig. 2B, Table I, Table S3).As expected, the estimations of partition coefficients were independent of the method of assessment, i.e., by AUC or C ss (Eqs.12-13, Table I, Table S4).
In addition to differences in exposure between the LV and STR, the t 1/2 in LV of 45.3 min was approximately 30% longer than those in blood and STR, estimated to 34.7 min (p = 0.0036) and 36.8 min (p = 0.0029), respectively.There were no significant differences in the t 1/2 in CM estimated to 50.1 min compared to those in blood, STR and LV, due to a high variability (Table S5).
Pharmacokinetics of Oxycodone
We did not find any sex differences in the systemic blood PK of oxycodone.The unbound blood concentration-time profiles were similar in female and male rats.Thus, there were no sex differences in clearance or volume of distribution, resulting in no difference in terminal t 1/2 estimations (Fig. 3A and Table II).Generally, the inter-individual variability measured as coefficient of variation (CV %) was similar between females and males.For instance, CV of K p,uu,STR was 27% in females and 20% in males, and CV of AUC inf_obs was 12% in females and 27% in males (Table S6).
Rapid Equilibrium of Oxycodone Uptake to Striatum, Lateral Ventricle and Cisterna Magna
There was a very rapid equilibration of oxycodone uptake to all three sites of the brain.The unbound concentration ratios across the BBB and the BCSFB (C u,STR /C u,blood and C u,LV / C u,blood , respectively) were initially over 10 times higher than during later time-points, yet, the ratios quickly stabilized (Fig. 4A-C, Eqs.12-13).A similar pattern was observed in CM.Already at the second sampling period, the values approached mean K p,uu values.There was a tendency to a time-dependent increase in unbound oxycodone concentration in LV after the end of infusion, but not in STR (R 2 = 0.82, p < 0.0001, and R 2 = 0.043, p = 0.37, in LV and STR, respectively).There is, however, no relationship between the ratios and the unbound concentration in blood (R 2 = 0.014, p = 0.049 in STR, and R 2 = 0.16, p < 0.0001 in LV, Fig. 4D).
Intra-Brain Distribution of Oxycodone
The intra-brain distribution of oxycodone showed no site-specific differences in total concentrations or total brain-to-blood concentration ratio, K p between left and right STR, indicating no influence of the microdialysis probe placement on the distribution and binding of oxycodone (Table S7).Individual total concentrations and the individual unbound striatal ISF concentrations measured with microdialysis in right STR resulted in an average V u,brain of 2.11 ± 0.95 mg × g brain −1 (n = 4, Eq. 16, Table S7), indicating distribution into brain parenchymal cells.V u,brain estimated in the brain slice assay at 300 ng/mL, matching average unbound striatal ISF concentration at steady-state, was 3.88 ± 0.20 mL × g brain −1 (n = 3, Eq.7) and was 1.8-fold higher than the in vivo measured unbound volume of distribution (p = 0.035 for unpaired t test with Welch's correction).The in vitro-estimated V u,brain values at 30 ng/mL were 4.05 ± 0.33 (N = 3, n = 2) and 4.4 ± 0.1 (N = 3, n = 2) in WB and STR, respectively, and were slightly higher compared to V u,brain values obtained at 300 ng/mL.The in vitro-estimated fraction of unbound oxycodone in brain (f u,brain ) was 0.45 ± 0.12 (N = 3, n = 1-2) and 0.34 ± 0.015 (N = 3, n = 1-2) in WB and STR, respectively, without a difference between the two (p > 0.99).The in vitro-estimated f u,brain measured for striatum corresponding to a V u,brain of 2.94 mL × g brain −1 calculated from the 1/f u,brain relationship.
Discussion
This study provides novel in vivo evidence that the H + /OC antiporter system is present and functioning at the BCSFB, based on an oxycodone K p,uu,LV of 3.41, as well as confirming its presence at the BBB [6], based on a K p,uu,STR of 4.4. On average, the individual relative extent of oxycodone uptake across the BBB was 46% higher than that across the BCSFB. Although the extent of uptake is significantly different at the two barriers, the data suggest that drugs that are actively taken up at the BBB by this system will likely also have higher concentrations in CSF than in blood, yet CSF concentrations will somewhat underestimate brain ISF concentrations. As the K p,uu ratios are within three-fold, CSF measurements could be used as a surrogate for brain delivery. This is relevant for oxycodone and likely also for other drugs that are actively taken up via the H + /OC antiporter system, such as diphenhydramine, tramadol, memantine and others.

Fig. 2 Scatter dot plots of K p,uu estimates. Unbound oxycodone partition coefficient (K p,uu ) estimates in striatum (STR, gray, N female = 7, N male = 10), lateral ventricle (LV, blue, N female = 4, N male = 6) and cisterna magna (CM, green, N female = 5, N male = 4) in (A) rats of both sexes, and (B) female (filled) and male (pattern) rats, obtained by AUC (Eq. 12). The mean is indicated by the top of the bar and the whiskers represent the SD. Data and detailed statistics are presented in Table I.
From the conceptual point of view, it is important to bear in mind that in in vivo conditions the assessment of the contributive role of each brain barrier, including BBB and BCSFB, to drug CSF exposure is very challenging.In case of oxycodone, the entry into the CSF could occur via transport across the BCSFB and the BBB followed by drainage from brain ISF.There are several studies showing evidence of ISF bulk flow draining into CSF, and that a small proportion of CSF is recycling into ISF [51][52][53][54].However, the extent of the ISF-CSF exchange is not fully understood from both physiological and PK points of view.In the light of this study, the transport of oxycodone to the CSF from the blood across BCSFB is likely higher than that from the brain ISF as no delay was observed in the oxycodone concentration-time profile for both LV and CM, with parallel profiles across all investigated compartments.Fig. 3 Concentration-time profiles of unbound oxycodone in females and males after administration of Dosing regimen I. Semilogarithmic concentrationtime profiles in (A) blood (N female = 6, N male = 10), (B) striatum (STR) (N female = 6, N male = 10), (C) lateral ventricle (LV) (N female = 3, N male = 6), and (D) cisterna magna (CM) (N female = 3, N male = 4).Data (A-C) are presented as mean ± SD or as individual data points (D).The dotted line at 60 min represents the stop of infusion.
Table II Pharmacokinetic Parameters of Unbound Oxycodone in Blood
Mean ± SD.Comparisons between female and male rats were performed using two-tailed unpaired t-tests with the following results: AUC inf_obs (p = 0.67), AUC last (p = 0.74), CL _obs (p = 0.38), V z_obs (p = 0.69), V ss_obs (p = 0.44) and t 1/2 (p = 0. Higher extent of oxycodone in brain ISF than in CSF in both the LV and CM, may indicate a lower expression of the transporter at the BCSFB than at the BBB.In spite of strong evidence from in vivo and in vitro investigations supporting the active uptake phenomenon [55], the gene(s) coding for the antiporter system has not yet been confirmed.Remarkably, functional involvement of two protein components, transmembrane 7 superfamily member 3 (TM7SF3) and LHFPL tetraspan subfamily member 6 (LHFPL6), has been found in a recent proteomics-based transporter identification study performed in in vitro cell lines [5].The expression levels of the antiporter system proteins at CNS barriers are to our knowledge not known.However, in an early study, in vivo and in vitro experiments of isolated rabbit choroid plexus indicated carrier-mediated uptake of diphenhydramine at the BCSFB [56].In mouse brain endothelial cells, the TM7SF3 mRNA expression level was higher than that of breast cancer resistance protein (BCRP), and in human brain endothelial cells, the LHFPL6 mRNA expression level was higher than that of glucose transporter 1 (GLUT1) and L-type amino acid transporter 1 (LAT1) [12].
Observed dissimilarities in absolute concentrations of oxycodone between ISF and CSF are critical for the evaluation of CNS exposure in humans, as CSF is often used as a surrogate for brain ISF [18,57]. Given varying conclusions on the predictive ability of CSF for brain ISF of drugs with active transport mechanisms [23,58,59], it is crucial to examine this relationship. In this study, the similar CSF exposure observed in the LV and CM suggests that the CSF concentration of oxycodone is independent of the sampling location between these two sites. However, CSF is usually sampled by lumbar puncture in humans, whereas it is sampled from the CM in research animals [57]. Remarkably, the ratio of oxycodone AUC in lumbar CSF to that in total plasma was reported to be 1.18 in humans [60]. After correcting for plasma protein binding (f u,plasma of 0.55 based on the FDA label) and assuming negligible drug binding in CSF, it can be concluded that there is active uptake of oxycodone into the CSF in humans, with an estimated K p,uu,CSF of approximately 2, which is within the range of K p,uu,CM (mean value of 2.68) observed in rats in this study. Yet, potential differences in oxycodone concentration between lumbar CSF and CM or LV CSF should be considered when evaluating the extent of BCSFB transport in humans.
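The human estimate quoted above can be reproduced with a short back-of-the-envelope calculation (treating drug binding in CSF as negligible, as stated in the text):

K p,uu,CSF ≈ (AUC CSF / AUC plasma) / f u,plasma = 1.18 / 0.55 ≈ 2.1

which is consistent with the rat K p,uu,CM mean of 2.68 reported here.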
In our study, initially higher concentration ratios were observed at both the BBB and the BCSFB, independent of dosing regimen (Fig. 4A-C). The lack of a relationship between the oxycodone C u,STR /C u,blood and C u,LV /C u,blood ratios and the unbound blood concentration was confirmed by the low coefficients of determination (Fig. 4D). The rapid brain uptake of oxycodone, with an initial C u,STR /C u,blood above 10 followed by stabilization at 3.03, was previously documented [6], with similar trends also found for diphenhydramine [61] and bupropion [17]. The latter could reflect saturation of oxycodone transport at higher plasma concentrations. It was previously suggested that an overshoot in K p,uu may also be observed due to a reduced proton gradient over time [17], as the antiporter system is pH sensitive [62]. However, we do not see any obvious reason to suspect such a rapid shift in pH. Another potential explanation for the initially higher single-time-point concentration ratios is that the initially lower unbound concentrations have larger uncertainties, owing to the allowance of ±20% deviation in the quantification of lower concentrations. If this were the case, however, the uptake should increase again when oxycodone concentrations in blood decrease over time, which is not observed at the BBB. Additionally, not all concentration ratios at low blood concentrations are above the average K p,uu values.
As the conclusion of active uptake into the CNS depends on a correct estimation of unbound concentrations, the determination of recovery is crucial to correct for the incomplete recovery across the microdialysis probe. To minimize the uncertainties in recoveries and unbound concentration estimations, retrodialysis by calibrator was applied. The recovery was monitored continuously in each individual in vivo, which has been suggested as a reliable method to assess probe recovery [37], and the recovery was generally stable over time. The average recoveries of the 10 mm and 3 mm probes were both above 10%, while that of the 1-mm probes was 3.38%, which is considered low. A low recovery results in uncertainties in the unbound concentration estimations. However, in the LV and CM, this short probe length is required. A decreased flow rate could have increased the recovery; however, the frequent 10-min sampling interval applied with 1 μL/min was needed with respect to temporal resolution and the sample volume required for analysis [63].
Our results did not reveal any sex differences in either oxycodone transport across the CNS barriers or systemic PK, which is contrary to previously reported sex and/or estrus cycle differences in oxycodone analgesic effect, brain exposure, clearance, metabolism and oral bioavailability [33,34]. The analgesic effect of oxycodone was not examined in this study; still, analgesia has previously been documented to correlate with unbound oxycodone concentrations in brain [34]. Estrus cycle was also not assessed in this study. However, as the hormonal cycle in rodents is rapid, with a 4-day cycle [64], and as our experiments were conducted on eight females from different litters and at different time points, it is likely that the results represent the overall cycle. Even though it has been suggested that pooling females in different cycle phases may mask differences dependent on the estrus cycle [64], the observed variability in oxycodone CNS delivery and systemic PK was similar or lower among females than among males, and no outliers were detected. Important differences between our study and previously reported studies include oxycodone dose and route of administration, which may contribute to the inconsistent results [33,34]. Sex differences in analgesia were reported to be present at higher but not at lower oxycodone doses in mice [33]. Also, the oral bioavailability of oxycodone is very low in rats and is subject to sex differences, with females having a 5-fold higher bioavailability [33]. The brain exposure of unbound oxycodone was reported to differ between females in diestrus and male rats, despite similar plasma levels of oxycodone [34]. This may indicate differences in oxycodone brain delivery; however, the sex- and estrus cycle differences in oxycodone CNS exposure and analgesia were attributed by the authors to differences in brain CYP2D metabolism of oxycodone. These observations are difficult to evaluate, as oxycodone was administered orally and blood PK was not characterized, being limited to determination of plasma concentration at a single time point. Altogether, the interpretation of previously observed sex differences in oxycodone CNS exposure and analgesia in the light of our own findings is challenging due to multiple confounding factors inherent to the study designs, including oxycodone dose, route of administration with low oral bioavailability, and the sampling of blood and brain. Yet, our findings from a dedicated PK study with longitudinal sampling of unbound oxycodone concentrations in blood and several brain compartments after administration of clinically relevant doses rule out the existence of sex differences in the extent of oxycodone transport across brain barriers in healthy rats.
Conclusions
This study provides novel in vivo evidence of the functional presence of the putative H + /OC antiporter system at the BCSFB, albeit with somewhat lower K p,uu values than at the BBB for oxycodone. Thus, CSF concentrations slightly underestimated striatal ISF concentrations. Still, CSF could be an acceptable surrogate for brain ISF for drugs handled by this transporter, as the deviation is within three-fold. The lack of a sex difference in the active uptake of oxycodone into STR, LV and CM suggests that the antiporter system likely contributes equally to CNS exposure in female and male rats. A high extent of drug delivery to the brain is beneficial for CNS drug delivery and reduces the risk of peripheral side effects. These findings therefore also widen the possibilities for characterization of the antiporter system as a viable drug delivery target, to enable discovery and development of novel CNS drug candidates. Hence, targeting the H + /OC antiporter system in CNS drug development holds great promise for brain-specific drug delivery, and the present study indicates that measuring CSF may be a good alternative in species where brain tissue sampling or microdialysis cannot be performed, provided that these species express the antiporter system to the same extent as rats.
Fig. 4 Unbound concentration ratios in striatum and lateral ventricle versus time and unbound blood concentration. The relationships of the single time point ratios of the unbound oxycodone concentration in striatum and lateral ventricle to that in blood (C u,STR /C u,blood, gray cross; C u,LV /C u,blood, blue empty diamonds), and (A) time during infusion (Dosing regimen I, N STR = 17, N LV = 10), (B) time after infusion (Dosing regimen I), (C) time during infusion (Dosing regimen II, N STR = 4, N LV = 3) and (D) unbound blood concentration (Dosing regimen I). The horizontal dotted lines represent the mean K p,uu in STR (gray) and LV (blue) obtained by Dosing regimen I (A, B, D) and Dosing regimen II (C), and the shadowed areas represent the respective SD. The goodness of fit is described by the coefficients of determination (R 2), which were obtained by computing a two-tailed correlation.
An ACQUITY Ultra-Performance Liquid Chromatography (UPLC) instrument coupled to a Xevo TQ-S Micro mass spectrometer (Waters Corporation, Milford, Massachusetts, USA) was used. The software used for quantification of analytes was MassLynx version 4.2 with TargetLynx (Waters Corporation, Milford, Massachusetts, USA). Chromatographic separation of analytes was performed on an ACQUITY UPLC BEH C18 column (1.7 μm, 2.1 × 50 mm) coupled with a VanGuard pre-column made of the same material (Waters Corporation, Milford, Massachusetts, USA). The column was maintained at 40°C for optimal peak shape. The sample manager temperature was kept at 5°C. An elution gradient was used for optimal separation, with an initial composition of 5% mobile phase B (MPB) and 95% mobile phase A (MPA). MPB was increased to 45% within 2.5 minutes and maintained until 3.5 min. Thereafter, MPB was increased to 90% until 4 min, reversed to 5% until 4.5 min, and then maintained until 4.8 min. MPA consisted of 0.1% formic acid in MilliQ water, and MPB consisted of 0.1% formic acid in acetonitrile. The mobile phases were chosen based on low background and high sensitivity, and the elution gradient was chosen based on low background, a reasonable retention time and negligible carry-over. The UPLC retention times of oxycodone were 2.26 ± 0.034, 2.24 ± 0.029 and 2.20 ± 0.0062 min in dialysate, plasma and brain homogenate, respectively. A constant flow rate of 0.3 mL/min was used.
To minimize salt exposure of the MS/MS, the flow from the UPLC was diverted to waste for the first 1.8 min and after 2.5 min following sample injection. The ionization source was an electrospray ionization probe in positive mode. Quantification was performed in multiple reaction monitoring mode, monitoring parent → daughter ion (m/z) transitions. The transitions for oxycodone, oxycodone-D3 and oxycodone-D6 were 316.11 → 298.1 m/z, 319.11 → 301.1 m/z, and 322.18 → 304.1 m/z, respectively. The source-dependent parameters maintained for the analytes were as follows:
Table I K p,uu in Striatum, Lateral Ventricle and Cisterna Magna Estimated in Female and Male Rats. K p,uu estimated using Eq. 12 for Dosing regimen I and Eq. 13 for Dosing regimen II. Mean ± SD. Statistical comparisons are presented in Tables S2 and S3. *p < 0.05, ns p > 0.05.
Postoperative Complications of Hip Fractures Patients on Chronic Coumadin: A Comparison Based on Operative International Normalized Ratio
In current clinical practice, orthopedic surgeons often delay surgical intervention in geriatric hip fracture patients in order to optimize the international normalized ratio (INR) and thereby decrease the risk of postoperative hematological complications. However, some evidence suggests that full reversal protocols may not be necessary, especially for patients with a prior thromboembolic history. Our study aims to compare the surgical outcomes of patients with normal versus elevated INR values. We conducted a retrospective chart review of 217 patients who underwent surgery for hip fractures at two academic trauma centers. We found that in our group (n = 124) of patients with an INR value of 1.5–3.0, there was only one reoperation for a hematoma, but there was a trend toward more blood transfusions. There was no statistically significant difference in the odds of reoperation or overall complications. Nevertheless, there were significantly more events of postoperative anemia in this high INR patient group.
Introduction
Early operative intervention for hip fractures has been shown to decrease both morbidity and mortality in the geriatric population [1][2][3][4][5][6][7][8]. However, orthopedic surgeons often delay surgical intervention to optimize the therapeutic international normalized ratio (INR) value of patients, a protocol aimed at decreasing the risk of bleeding complications and improving overall outcomes [9][10][11]. The clinical practice of delay is based on the philosophy that decreasing the INR at the time of surgery will decrease estimated blood loss (EBL), transfusion risk, and bleeding complications [12]. There is some evidence to the contrary, as some authors have suggested operating through antiplatelet therapy [13] or giving only a single dose of vitamin K [14,15]. Yet, the risk of peri-operative bleeding and complications when operating on hip fracture patients with an elevated INR is relatively unknown.
In the general population of the US, there is a 1.5% prevalence of chronic therapeutic anticoagulation, and this is even higher in the geriatric population [16]. In the hip fracture population, it has been estimated that a minimum of 4-8% of patients are on chronic coumadin [17]. Reversing or discontinuing therapeutic anticoagulation is not without potential complications. Patients with atrial fibrillation or a history of a prior thromboembolic event have a lower risk of an adverse event from temporary discontinuation of warfarin than patients with a mechanical heart valve [18][19][20][21][22][23]. The rationale for avoiding reversal of anticoagulation to a sub-therapeutic range (an INR value below 1.5) is to avoid a supra-therapeutic 'rebound' INR from warfarin therapy, which could theoretically lead to a bleeding complication. Avoiding reversal also prevents the need for large therapeutic bridging doses of low-molecular-weight heparin products, which could themselves increase bleeding, and avoids a prolonged period of sub-therapeutic INR values post-operatively that could elevate the risk of a thromboembolic event. Additionally, the morbidity associated with surgical delay may be avoided [1,2,5-8,12,24].
At our institutions, patients who sustained hip fractures were routinely taken to the operating room for open reduction internal fixation, cephalomedullary nailing, or arthroplasty procedures without an aggressive reversal protocol and at times with no reversal if the INR was less than three. This practice has evolved over time and thus the purpose of this study was to evaluate the perioperative outcomes and safety of surgical intervention in hip fracture patients with therapeutic and sub-therapeutic INR values. Our hypothesis was that there would be no difference in complication rate following surgical intervention for patients with therapeutic versus sub-therapeutic INR values.
Study Patients
We performed an IRB-approved retrospective chart review of patients who underwent surgical intervention for hip fractures from 2005 to 2013 at two level II trauma centers. All geriatric hip fracture patients aged 65 years or older who were on chronic warfarin were included in our study. All hip fracture types (femoral neck, intertrochanteric, and subtrochanteric fractures) were included. Surgical interventions included closed reduction percutaneous screw fixation (CRPP), open reduction internal fixation (ORIF), short and long cephalomedullary nailing (CMN), and arthroplasty. Patients were excluded if their fracture was managed non-operatively or if their INR was greater than 3.0 at the time of surgical intervention.
Data Collection
Baseline data including age, type of fixation, days to surgery, INR at the time of surgery, and the use of vitamin K and/or fresh frozen plasma (FFP) were reviewed. The INR value measured immediately before surgery was taken as the INR at the time of surgery. Patients were grouped into low and high INR groups according to whether their INR at the time of surgery was below 1.5 or between 1.5 and 3.0, respectively.
The primary outcomes were the need for re-operation within 30 days for drainage of a postoperative hematoma or a major medical complication including cardiac events (atrial fibrillation, myocardial infarction, congestive heart failure), infection (urinary tract infection, pneumonia), thromboembolic complication, or acute kidney injury. Secondary outcomes were the rate of blood transfusions, time to operative intervention from admission, and length of stay. There was no standard protocol for postoperative packed red blood cell administration and the decision for transfusion was based on the attending surgeon's professional judgment at each institution.
Statistical Analysis
Fisher's exact test and the t-test were used to compare baseline characteristics and outcomes between the two study groups, with an alpha level of 0.05.
Baseline Analysis
After reviewing 728 consecutive hip fracture cases, we identified 216 hip fracture patients (29%) on chronic warfarin therapy during the study period. Sixty-one patients received both vitamin K and FFP preoperatively, while eight patients received FFP only. There were 37 total hip arthroplasty (THA) procedures performed in the low INR group and 12 in the high INR group. Patients in the low INR group (81.29 years) were statistically significantly younger than patients in the high INR group (83.47 years) (p = 0.04).
Outcome Analysis
Two patients in the 1.5-3.0 (high INR) group had a major complication of hematoma formation (1.6%). One patient, who underwent a hemiarthroplasty procedure, was treated non-operatively with medical management and observation. The other patient, who underwent cephalomedullary nailing, required operative hematoma evacuation, giving one reoperation for hematoma (0.8%). We identified fifteen re-operations after surgical intervention, only two of which were potentially related to bleeding complications: one superficial wound infection and the hematoma evacuation mentioned above. The superficial wound infection occurred 6 weeks after hemiarthroplasty in a patient with an operative INR of 1.7, while the hematoma evacuation occurred one week after cephalomedullary nailing in a patient with an operative INR of 2.3. The other 13 reoperations consisted of loss of fixation (7), malunion or nonunion (5), and painful hardware (1) (Table 1), all of which occurred more than 30 days after the initial surgery. No other secondary outcome measure differed statistically significantly between the low and high INR groups (Table 2). There were also no statistically significant differences in the rate of medical complications between the two groups (Table 2).
Discussion
Orthopedic surgeons must weigh the benefits and risks of expedient surgical repair in therapeutically anticoagulated hip fracture patients. Numerous studies have identified the risks of surgical delay, and the geriatric population is the most susceptible to the morbidity and mortality caused by delay [2][3][4][5][6][8][24][25][26]. To our knowledge, this is the first investigation in the literature to evaluate the safety of surgical intervention in hip fracture patients on chronic warfarin with therapeutic INR values between 1.5 and 3.0.
Collinge et al. retrospectively reviewed the rates of post-operative bleeding complications in hip fracture patients who were on aspirin, clopidogrel, or warfarin therapy [13]. They did not find any increase in the rate of complications in their series, but they limited their sample to patients with an operative INR value below 1.5. Similar to these investigations, we considered postoperative hematoma requiring operative decompression to be an important outcome measure that captures significant morbidity for patients with therapeutic INR values. Nydick et al. reviewed post-operative complications in patients on clopidogrel therapy who underwent non-elective orthopedic procedures [27]. They found no increased risk in these patients while on clopidogrel. In a case-control study, Feely et al. did not find any difference in perioperative complications, rate of blood transfusions, or mortality between hip fracture patients on clopidogrel and control patients [28]. In their retrospective comparison of hip fracture patients prescribed clopidogrel undergoing surgical intervention versus controls, Wallace et al. found no difference in estimated blood loss; however, they did report a higher rate of blood transfusion (56% vs. 31%, p = 0.0121) [29]. Similar to our investigation, there was no standardized methodology regarding blood transfusion criteria in their review.
Our combined results from both institutions demonstrated no significant clinical difference in hematomas or medical complications between the high and low INR groups. This is in contradistinction to other, non-orthopedic procedures cited in the cardiothoracic literature, which demonstrated increased blood loss and transfusion in patients with higher INR values [9]. We did observe two hematomas in the high INR group; however, only one resulted in operative irrigation and debridement. Furthermore, no statistically significant differences were observed with regard to 30-day mortality, post-operative blood units administered, time to surgery, or length of surgery. In fact, the low INR group had a slightly longer length of stay, presumably as a result of the INR needing to be either optimized for surgery or returned to a therapeutic level. Additionally, there were similar numbers of transfusions, anemia events, and cardiac events.
The limitations of this study include the inherent bias and limitations of a retrospective design. As a retrospective review, our investigation reflected the established practice philosophies and tactics of multiple surgeons at two academic medical centers. This may have introduced bias, as we have an interest in justifying our existing practice. Furthermore, some may believe that expeditious surgical treatment of hip fracture patients with elevated INR values is not the standard of care in their practice environment. Additionally, we did not consider the effectiveness of vitamin K and FFP: some, but not all, high INR patients received these treatments, and we did not obtain a post-treatment INR value to confirm normalization of the INR. Surgeries proceeded regardless of whether the vitamin K and FFP had actually taken effect. Although we examined average blood loss, EBL is an imperfect metric for evaluating blood loss anemia and its physiologic effects. Statistically, our results were limited by the unadjusted analysis; our sample size was not large enough to support a multivariate regression analysis. A prospective, multicenter trial with a power analysis would be more helpful in definitively answering this question. Patients in the low and high INR groups could have had differing medical optimization and nuanced comorbid pathologies. Additionally, these results cannot be extended to newer anticoagulant medications with different mechanisms of action, such as selective factor Xa inhibitors or direct thrombin inhibitors, as these were not identified. It would be speculative to extend our results to patients anticoagulated with these medications instead of warfarin.
The delay in the surgical repair of patients with hip fractures increases the morbidity and mortality and thus surgery should be performed as soon as medically possible [4][5][6][7][8]11]. Warfarin anticoagulation has been identified as a factor contributing to delayed operative repair despite limited data in the orthopedic literature that suggest that operative INR values between 2.0 and 3.0 increase the risk profile of surgery [24]. Recognizing the danger in operative delay, some authors have written protocols to expeditiously reverse and manage therapeutically anticoagulated patients [30].
In our series, surgical repair of hip fracture patients with high INR values demonstrated no increased risk of complications, blood transfusions, or mortality. We found a relatively low rate of post-operative hematoma formation requiring drainage (0.8%) in this series. Since there are theoretical risks of delaying surgery, we feel justified in continuing our practice of acutely operating on proximal femur fracture patients with INR values of 1.5-3 but need to monitor these patients to ensure it is a safe practice. A larger-scale, prospective cohort study is necessary in order to investigate the impact of this clinical protocol for patients on chronic anticoagulants with hip fractures. Similarly, reversing therapeutic INR before hip fracture repair may be unnecessary and potentially delay surgical intervention. The management of the patient on chronic anticoagulation with a hip fracture is a complex medical and surgical event and to reduce the risks of medical and surgical complications, a better understanding of safe practice is necessary. With newer anticoagulation agents becoming more popular, understanding the risk of bleeding complications in the chronically anticoagulated patient is important for the hip fracture population.
Time Credits and Time Receipts in Iris
We present a machine-checked extension of the program logic Iris with time credits and time receipts, two dual means of reasoning about time. Whereas time credits are used to establish an upper bound on a program's execution time, time receipts can be used to establish a lower bound. More strikingly, time receipts can be used to prove that certain undesirable events, such as integer overflows, cannot occur until a very long time has elapsed. We present several machine-checked applications of time credits and time receipts, including an application where both concepts are exploited.
Introduction
A program logic, such as Hoare logic or Separation Logic, is a set of deduction rules that can be used to reason about the behavior of a program.To this day, considerable effort has been invested in developing ever-more-powerful program logics that control the extensional behavior of programs, that is, logics that guarantee that a program safely computes a valid final result.A lesser effort has been devoted to logics that allow reasoning not just about safety and functional correctness, but also about intensional aspects of a program's behavior, such as its time consumption and space usage.
In this paper, we are interested in narrowing the gap between these lines of work.We present a formal study of two mechanisms by which a standard program logic can be extended with means of reasoning about time.As a starting point, we take Iris [13,11,14,12], a powerful evolution of Concurrent Separation Logic [3].We extend Iris with two elementary time-related concepts, namely time credits [1,9,4] and time receipts.
Time credits and time receipts are independent concepts: it makes sense to extend a program logic with either of them in isolation or with both of them simultaneously.They are dual concepts: every computation step consumes one time credit and produces one time receipt.They are purely static: they do not exist at runtime.We view them as Iris assertions.Thus, they can appear in the correctness statements that we formulate about programs and in the proofs of these statements.
Time credits can be used to establish an upper bound on the execution time of a program. Dually, time receipts can be used to establish a lower bound, and (as explained shortly) can be used to prove that certain undesirable events cannot occur until a very long time has elapsed.
Until now, time credits have been presented as an ad hoc extension of some fixed flavor of Separation Logic [1,9,4].In contrast, we propose a construction which in principle allows time credits to be introduced on top of an arbitrary "base logic", provided this base logic is a sufficiently rich variety of Separation Logic.In order to make our definitions and proofs more concrete, we use Iris as the base logic.Our construction involves composing the base logic with a program transformation that inserts a tick () instruction in front of every computation step.As far as a user of the composite logic is concerned, the tick () instruction and the assertion $1, which represents one time credit, are abstract: the only fact to which the user has access is the Hoare triple {$1} tick () {True}, which states that "tick () consumes one time credit".
There are two reasons why we choose Iris [12] as the base logic.First, in the proof of soundness of the composite logic, we must exhibit concrete definitions of tick and $1 such that {$1} tick () {True} holds.Several features of Iris, such as ghost state and shared invariants, play a key role in this construction.Second, at the user level, the power of Iris can also play a crucial role.To illustrate this, we present the first machine-checked reconstruction of Okasaki's debits [18] in terms of time credits.The construction makes crucial use of both time credits and Iris' ghost monotonic state and shared invariants.
Time receipts are a new concept, a contribution of this paper.To extend a base logic with time receipts, we follow the exact same route as above: we compose the base logic with the same program transformation as above, which we refer to as "the tick translation".In the eyes of a user of the composite logic, the tick () instruction and the assertion ȯ 1, which represents one time receipt, are again abstract: this time, the only published fact about tick is the triple {True} tick () {ȯ 1}, which states that "tick () produces one time receipt".
Thus far, the symmetry between time credits and time receipts seems perfect: whereas time credits allow establishing an upper bound on the cost of a program fragment, time receipts allow establishing a lower bound.This raises a pragmatic question, though: why invest effort, time and money into a formal proof that a piece of code is slow?What might be the point of such an endeavor?Taking inspiration from Clochard et al. [5], we answer this question by turning slowness into a quality.If there is a certain point at which a process might fail, then by showing that this process is slow, we can show that failure is far away into the future.More specifically, Clochard et al. propose two abstract types of integer counters, dubbed "one-time" integers and "peano" integers, and provide a paper proof that these counters cannot overflow in a feasible time: that is, it would take infeasible time (say, centuries) for an execution to reach a point where overflow actually occurs.To reflect this idea, we abandon the symmetry between time credits and time receipts and publish a fact about time receipts which has no counterpart on the time-credit side.This fact is an implication: ȯ N False, that is, "N time receipts imply False".The global parameter N can be adjusted so as to represent one's idea of a running time that is infeasible, perhaps due to physical limitations, perhaps due to assumptions about the conditions in which the software is operated.In this paper, we explain what it means for the composite program logic to remain sound in the presence of this axiom, and provide a formal proof that Iris, extended with time receipts, is indeed sound.Furthermore, we verify that Clochard et al.'s ad hoc concepts of "one-time" integers and "peano" integers can be reconstructed in terms of time receipts, a more fundamental concept.Finally, to demonstrate the combined use of time credits and receipts, we present a proof of the Union-Find data structure, where credits are used to express an amortized time complexity bound and receipts are used to prove that a node's integer rank cannot overflow, even if it is stored in very few bits.
In summary, the contributions of this paper are as follows:
1. A way of extending an off-the-shelf program logic with time credits and/or receipts, by composition with a program transformation.
2. Extensions of Iris with time credits and receipts, accompanied with machine-checked proofs of soundness.
3. A machine-checked reconstruction of Okasaki's debits as a library in Iris with time credits.
4. A machine-checked reconstruction of Clochard et al.'s "one-time" integers and "peano" integers in Iris with time receipts.
5. A machine-checked verification of Union-Find in Iris with time credits and receipts, offering both an amortized complexity bound and a safety guarantee despite the use of machine integers of very limited width.
All of the results reported in this paper have been checked in Coq [17].
2 A user's overview of time credits and time receipts
Time credits
A small number of axioms, presented in Figure 1, govern time credits. The assertion $n denotes n time credits. The splitting axiom, a logical equivalence, means that time credits can be split and combined. Because Iris is an affine logic, it is implicitly understood that time credits cannot be duplicated, but can be thrown away.
The axiom timeless($n) means that time credits are independent of Iris' step-indexing. In practice, this allows an Iris invariant that involves time credits to be acquired without causing a "later" modality to appear [12, §5.7]. The reader can safely ignore this detail.
The last axiom, a Hoare triple, means that every computation step requires and consumes one time credit. As in Iris, the postconditions of our Hoare triples are λ-abstractions: they take as a parameter the return value of the term. At this point, tick () can be thought of as a pseudo-instruction that has no runtime effect and is implicitly inserted in front of every computation step. As an example, consider a specification of a sorting routine (sketched below). Here, array(a, xs) asserts the existence and unique ownership of an array at address a, holding the sequence of elements xs. This Hoare triple guarantees not only that the function call sort(a) runs safely and has the effect of sorting the array at address a, but also that sort(a) runs in at most 6n log n time steps, where n is the length of the sequence xs, that is, the length of the array. Indeed, only 6n log n time credits are provided in the precondition, so the algorithm does not have permission to run for a greater number of steps.
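Since Figure 1 is not reproduced in this excerpt, here is a hedged sketch of the time-credit interface and of the sort specification discussed above; the exact notation, in particular the postcondition of sort, is an assumption based on the surrounding prose:

$(n1 + n2) ≡ $n1 ∗ $n2        timeless($n)        {$1} tick () {True}

{array(a, xs) ∗ $(6n log n)} sort(a) {λ_. ∃ys. array(a, ys) ∗ "ys is a sorted permutation of xs"}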
Time receipts
In contrast with time credits, time receipts are a new concept, a contribution of this paper.We distinguish two forms of time receipts.The most basic form, exclusive time receipts, is the dual of time credits, in the sense that every computation step produces one time receipt.The second form, persistent time receipts, exhibits slightly different properties.Inspired by Clochard et al. [5], we show that time receipts can be used to prove that certain undesirable events, such as integer overflows, cannot occur unless a program is allowed to execute for a very, very long time-typically centuries.In the following, we explain that exclusive time receipts allow reconstructing Clochard et al.'s "one-time" integers [5, §3.2], which are so named because they are not duplicable, whereas persistent time receipts allow reconstructing their "peano" integers [5, §3.2], which are so named because they do not support unrestricted addition.
Exclusive time receipts
The assertion ȯ n denotes n time receipts. Like time credits, these time receipts are "exclusive", by which we mean that they are not duplicable. The basic laws that govern exclusive time receipts appear in Figure 2. They are the same laws that govern time credits, with two differences. The first difference is that time receipts are the dual of time credits: the specification of tick, in this case, states that every computation step produces one time receipt. The second difference lies in the last axiom of Figure 2, which has no analogue in Figure 1, and which we explain below.
In practice, how do we expect time receipts to be exploited? They can be used to prove lower bounds on the execution time of a program: if the Hoare triple {True} p {ȯ n} holds, then the execution of the program p cannot terminate in less than n steps. Inspired by Clochard et al. [5], we note that time receipts can also be used to prove that certain undesirable events cannot occur in a feasible time. This is done as follows. Let N be a fixed integer, chosen large enough that a modern processor cannot possibly execute N operations in a feasible time. The last axiom of Figure 2, ȯ N ⊢ False, states that N time receipts imply a contradiction. This axiom informally means that we won't compute for N time steps, because we cannot, or because we promise not to do such a thing. A consequence of this axiom is that ȯ n implies n < N: that is, if we have observed n time steps, then n must be small.

Adopting this axiom weakens the guarantee offered by the program logic. A Hoare triple {True} p {True} no longer implies that the program p is forever safe. Instead, it means that p is (N − 1)-safe: the execution of p cannot go wrong until at least N − 1 steps have been taken. Because N is very large, for many practical purposes, this is good enough.
How can this axiom be exploited in practice? We hinted above that it can be used to prove the absence of certain integer overflows. Suppose that we wish to use signed w-bit machine integers as a representation of mathematical integers. (For instance, let w be 64.) Whenever we perform an arithmetic operation, such as an addition, we must prove that no overflow can occur. This is reflected in the specification of the addition of two machine integers (sketched below). Here, the variables x1 and x2 denote machine integers, while the auxiliary variables n1 and n2 denote mathematical integers, and the function ι is the injection of machine integers into mathematical integers. The conjunct −2^{w−1} ≤ n1 + n2 < 2^{w−1} in the precondition represents an obligation to prove that no overflow can occur.
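A hedged reconstruction of that addition specification, following the description just given (the exact presentation in the paper may differ):

{ι(x1) = n1 ∗ ι(x2) = n2 ∗ (−2^{w−1} ≤ n1 + n2 < 2^{w−1})} add (x1, x2) {λx. ι(x) = n1 + n2}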
Suppose now that the machine integers x1 and x2 represent the lengths of two disjoint linked lists that we wish to concatenate. To construct each of these lists, we must have spent a certain amount of time: as proofs of this work, let us assume that the assertions ȯ n1 and ȯ n2 are at hand. Let us further assume that the word size w is sufficiently large that it takes a very long time to count up to the largest machine integer; that is, let us make the large word size assumption (made precise below). For example, with N = 2^63 and w = 64, this holds. Then, we can prove that the addition of x1 and x2 is permitted. This goes as follows. From the separating conjunction ȯ n1 ∗ ȯ n2, we get ȯ(n1 + n2). The existence of these time receipts allows us to deduce 0 ≤ n1 + n2 < N, which implies 0 ≤ n1 + n2 < 2^{w−1}. Thus, the precondition of the addition operation add (x1, x2) is met.
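The elided "large word size assumption" presumably amounts to requiring that the time bound N fits under the overflow threshold, which is exactly what the derivation above uses:

N ≤ 2^{w−1}        so that        0 ≤ n1 + n2 < N ≤ 2^{w−1}

With N = 2^63 and w = 64, the inequality 2^63 ≤ 2^63 indeed holds.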
In summary, we have just verified that the addition of two machine integers satisfies an alternative specification in which the overflow proof obligation is discharged by the time receipts ȯ n1 and ȯ n2. This can be made more readable and more abstract by defining a "clock" to be a machine integer x accompanied with ι(x) time receipts; the above specification of addition can then be reformulated in terms of clocks (a sketch of both is given below). In other words, clocks support unrestricted addition, without any risk of overflow. However, because time receipts cannot be duplicated, neither can clocks: clock (x) does not entail clock (x) ∗ clock (x). In other words, a clock is uniquely owned. One can think of a clock x as a hard-earned integer: the owner of this clock has spent x units of time to obtain it.
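A hedged sketch of the clock abstraction and of the clock-level addition specification described above (the exact definitions in the paper may differ in minor details, for instance in how the result's receipts are stated):

clock (x) ≜ ∃n. ι(x) = n ∗ ȯ n

{clock (x1) ∗ clock (x2)} add (x1, x2) {λx. ι(x) = ι(x1) + ι(x2) ∗ clock (x)}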
Clocks are a reconstruction of Clochard et al.'s "one-time integers" [5], which support unrestricted addition but cannot be duplicated. Whereas Clochard et al. view one-time integers as a primitive concept, and offer a direct paper proof of their soundness, we have just reconstructed them in terms of a more elementary notion, namely time receipts, and in the setting of a more powerful program logic, whose soundness is machine-checked, namely Iris.

Persistent time receipts

In addition to exclusive time receipts, it is useful to introduce a persistent form of time receipts. The axioms that govern both exclusive and persistent time receipts appear in Figure 3; a sketch is given below. We write ȫ n for a persistent receipt, a witness that at least n units of time have elapsed. (We avoid the terminology "n persistent time receipts", in the plural form, because persistent time receipts are not additive. We view ȫ n as one receipt whose face value is n.) This assertion is persistent, which in Iris terminology means that once it holds, it holds forever. This implies, in particular, that it is duplicable: ȫ n ≡ ȫ n ∗ ȫ n. It is created just by observing the existence of n exclusive time receipts, as stated by the axiom ȯ n ⊢ ȯ n ∗ ȫ n, also listed in Figure 3. Intuitively, someone who has access to the assertion ȫ n is someone who knows that n units of work have been performed, even though they have not necessarily "personally" performed that work. Because this knowledge is not exclusive, the conjunction ȫ n1 ∗ ȫ n2 does not entail ȫ(n1 + n2); instead, Figure 3 provides a weaker combination rule. More subtly, the specification of tick in Figure 3 is stronger than the one in Figure 2. According to this strengthened specification, tick () does not just produce an exclusive receipt ȯ 1. In addition to that, if a persistent time receipt ȫ n is at hand, then tick () is able to increment it and to produce a new persistent receipt ȫ(n + 1), thus reflecting the informal idea that a new unit of time has just been spent. A user who does not wish to make use of this feature can pick n = 0 and recover the specification of tick in Figure 2 as a special case.
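A hedged sketch of the Figure 3 interface, reconstructed from the prose above; the combination rule for persistent receipts and the exact form of the strengthened tick specification are assumptions:

ȫ n is persistent (hence ȫ n ≡ ȫ n ∗ ȫ n)
ȯ n ⊢ ȯ n ∗ ȫ n
ȫ n1 ∗ ȫ n2 ≡ ȫ max(n1, n2)        (assumed combination rule)
{ȫ n} tick (v) {λw. w = v ∗ ȯ 1 ∗ ȫ(n + 1)}        (assumed form)
ȫ N ⊢ False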
Finally, because ȫ n means that n steps have been taken, and because we promise never to reach N steps, we adopt the axiom ȫ N ⊢ False, also listed in Figure 3. It implies the earlier axiom ȯ N ⊢ False, which is therefore not explicitly shown in Figure 3.
In practice, how are persistent time receipts exploited? By analogy with clocks, let us define a predicate snapclock (x) for a machine integer x accompanied with ι(x) persistent time receipts (a sketch follows this paragraph). By construction, this predicate is persistent, therefore duplicable. We refer to this concept as a "snapclock", as it is not a clock, but can be thought of as a snapshot of some clock. Thanks to the axiom ȯ k ⊢ ȯ k ∗ ȫ k, every clock yields a corresponding snapclock. Furthermore, snapclocks have the valuable property that, by performing just one step of extra work, a snapclock can be incremented, yielding a new snapclock that is greater by one; the corresponding Hoare triple is sketched below. The proof is not difficult. Unfolding snapclock (x) in the precondition yields ȫ n, where ι(x) = n. As per the strengthened specification of tick, the execution of tick () then yields ȯ 1 ∗ ȫ(n + 1). As in the case of clocks, the assertion ȫ(n + 1) implies 0 ≤ n + 1 < 2^{w−1}, which means that no overflow can occur. Finally, ȯ 1 is thrown away and ȫ(n + 1) is used to justify snapclock (x′) in the postcondition.
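A hedged sketch of the snapclock definitions and triples referred to above (the exact phrasing of the one-step increment operation is an assumption):

snapclock (x) ≜ ∃n. ι(x) = n ∗ ȫ n
snapclock (x) ≡ snapclock (x) ∗ snapclock (x)
clock (x) ⊢ clock (x) ∗ snapclock (x)
{snapclock (x)} add (x, 1) {λx′. snapclock (x′)}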
Adding two arbitrary snapclocks x1 and x2 is illegal: from the sole assumption snapclock (x1) ∗ snapclock (x2), one cannot prove that the addition of x1 and x2 won't cause an overflow, and one cannot prove that its result is a valid snapclock. However, snapclocks do support a restricted form of addition. The addition of two snapclocks x1 and x2 is safe, and produces a valid snapclock x, provided it is known ahead of time that its result is less than some preexisting snapclock y (a sketch of this triple is given below). Snapclocks are a reconstruction of Clochard et al.'s "peano integers" [5], which are so named because they do not support unrestricted addition. Clocks and snapclocks represent different compromises: whereas clocks support addition but not duplication, snapclocks support duplication but not addition. They are useful in different scenarios: as a rule of thumb, if an integer counter is involved in the implementation of a mutable data structure, then one should attempt to view it as a clock; if it is involved in the implementation of a persistent data structure, then one should attempt to view it as a snapclock.
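A hedged guess at the bounded-addition triple just described, with the bound expressed through a preexisting snapclock y:

{snapclock (x1) ∗ snapclock (x2) ∗ snapclock (y) ∗ (ι(x1) + ι(x2) ≤ ι(y))} add (x1, x2) {λx. snapclock (x)}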
HeapLang and the tick translation
In the next section ( §4), we extend Iris with time credits, yielding a new program logic Iris $ .We do this without modifying Iris.Instead, we compose Iris with a program transformation, the "tick translation", which inserts tick () instructions into the code in front of every computation step.In the construction of Iris ȯ , our extension of Iris with time receipts, the tick translation is exploited in a similar way ( §5).In this section ( §3), we define the tick translation and state some of its properties.
Iris is a generic program logic: it can be instantiated with an arbitrary calculus for which a small-step operational semantics is available [12].Ideally, our extension of Iris should take place at this generic level, so that it, too, can be instantiated for an arbitrary calculus.Unfortunately, it seems difficult to define the tick translation and to prove it correct in a generic manner.For this reason, we choose to work in the setting of HeapLang [12], an untyped λ-calculus equipped with Booleans, signed machine integers, products, sums, recursive functions, references, and shared-memory concurrency.The three standard operations on mutable references, namely allocation, reading, and writing, are available.A compare-and-set operation CAS(e 1 , e 2 , e 3 ) and an operation for spawning a new thread are also provided.As the syntax and operational semantics of HeapLang are standard and very much irrelevant in this paper, we omit them.They appear in our online repository [17].
The tick translation transforms a HeapLang expression e to a HeapLang expression e tick. It is parameterized by a value tick. Its effect is to insert a call to tick in front of every operation in the source expression e. The translation of a function application, for instance, is sketched below. For convenience, we assume that tick can be passed an arbitrary value v as an argument, and returns v. Because evaluation in HeapLang is call-by-value and happens to be right-to-left, the definition means that, after evaluating the argument e2 tick and the function e1 tick, we invoke tick, then carry on with the function call. This translation is syntactically well-behaved: it preserves the property of being a value, and commutes with substitution. This holds for every value tick.
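The elided translation rule for application presumably has the following shape, matching the right-to-left evaluation order described above (the bracket notation is a best guess; the paper's own notation is lost in this excerpt):

⟨e1 e2⟩ ≜ (tick ⟨e1⟩) ⟨e2⟩        ⟨v⟩ is again a value, and ⟨e[v/x]⟩ = ⟨e⟩[⟨v⟩/x]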
As far as the end user is concerned, tick remains abstract (§2). Yet, in our constructions of Iris $ and Iris ȯ, we must provide a concrete implementation of it in HeapLang. This implementation, named tick c, appears in Figure 4. A global integer counter c stores the number of computation steps that the program is still allowed to take. The call tick c () decrements the global counter c if this counter holds a nonzero value, and otherwise invokes oops ().
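Figure 4 is not reproduced in this excerpt; the following OCaml-flavoured sketch models the behaviour just described (a global budget counter that tick decrements, falling back to oops when the budget is exhausted). The CAS-style retry loop is an assumption about how the HeapLang code handles concurrent access; the actual definition in the paper may differ.

(* Hypothetical model of tick_c; not the paper's HeapLang code. *)
let tick_c (c : int ref) (oops : unit -> unit) : 'a -> 'a =
  (* A non-atomic stand-in for HeapLang's CAS(c, expected, desired). *)
  let cas r expected desired =
    if !r = expected then (r := desired; true) else false
  in
  let rec tick v =
    let n = !c in
    if n = 0 then (oops (); v)        (* budget exhausted: defer to oops *)
    else if cas c n (n - 1) then v    (* consume one unit of budget, return v *)
    else tick v                       (* lost a race with another thread: retry *)
  in
  tick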
At this point, the memory location c and the value oops are parameters. We stress that tick c plays a role only in the proofs of soundness of Iris $ and Iris ȯ. It is never actually executed, nor is it shown to the end user.
Once tick is instantiated with tick c , one can prove that the translation is correct in the following sense: the translated code takes the same computation steps as the source code and additionally keeps track of how many steps are taken.More specifically, if the source code can make n computation steps, and if c is initialized with a value m that is sufficiently large (that is, m ≥ n), then the translated code can make n computation steps as well, and c is decremented from m to m − n in the process.
Lemma 1 (Reduction Preservation). Assume there is a reduction sequence of n steps in the source program (a sketch of the formal statement is given below). Assume c is fresh for this reduction sequence. Let m ≥ n. Then, there exists a corresponding reduction sequence of the translated program. In this statement, the metavariable T stands for a thread pool, while σ stands for a heap. The relation →tp is HeapLang's "threadpool reduction". For the sake of brevity, we write just e for e tick c, that is, for the translation of the expression e, where tick is instantiated with tick c. This notation is implicitly dependent on the parameters c and oops.
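Based on the informal description in the preceding paragraph, the two elided reduction sequences presumably relate as follows (the number of steps taken by the translated program is left unspecified here, since each source step gives rise to extra tick steps):

If (T, σ) →tp^n (T′, σ′), c is fresh for this sequence, and m ≥ n, then ( T , σ [c ← m]) →tp* ( T′ , σ′ [c ← m − n]).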
The above lemma holds for every choice of oops. Indeed, because the counter c initially holds the value m, and because we have m ≥ n, the counter is never about to fall below zero, so oops is never invoked.
The next lemma also holds for every choice of oops.It states that if the translated program is safe and if the counter c has not yet reached zero then the source program is not just about to crash.By combining Lemmas 1 and 2 and by contraposition, we find that safety is preserved backwards, as follows: if, when the counter c is initialized with m, the translated program e is safe, then the source program e is m-safe.
Iris with time credits
The authors of Iris [12] have used Coq both to check that Iris is sound and to offer an implementation of Iris that can be used to carry out proofs of programs.The two are tied: if {True} p {True} can be established by applying the proof rules of Iris, then one gets a self-contained Coq proof that the program p is safe.
In this section, we temporarily focus on time credits and explain how we extend Iris with time credits, yielding a new program logic Iris $ .The new logic is defined in Coq and still offers an end-to-end guarantee: if {$k} p {True} can be established in Coq by applying the proof rules of Iris $ , then one has proved in Coq that p is safe and runs in at most k steps.
To define Iris $, we compose Iris with the tick translation. We are then able to argue that, because this program transformation is operationally correct (that is, it faithfully accounts for the passing of time), and because Iris is sound (that is, it faithfully approximates the behavior of programs), the result of the composition is a sound program logic that is able to reason about time.
In the following, we view the interface TCIntf as explicitly parameterized over $ and tick .Thus, we write "TCIntf ($) tick " for the separating conjunction of all items in Figure 1 except the declarations of $ and tick .
We require the end user, who wishes to perform proofs of programs in Iris $, to work with Iris $ triples, which are defined as follows. Definition 1 (Iris $ triple). An Iris $ triple {P} e {Φ} $ is syntactic sugar for an Iris triple about the instrumented expression, universally quantified over the predicate $ and the value tick (a sketch is given below). Thus, an Iris $ triple is in reality an Iris triple about the instrumented expression e tick. While proving this Iris triple, the end user is given an abstract view of the predicate $ and the instruction tick. He does not have access to their concrete definitions, but does have access to the laws that govern them.
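A hedged reconstruction of the elided right-hand side of Definition 1, based on the description above and on the interface notation TCIntf ($) tick (the precise connective used is an assumption):

{P} e {Φ} $  ≜  ∀ $, tick. TCIntf ($) tick −∗ {P} e tick {Φ}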
We prove that Iris $ is sound in the following sense. Theorem 1 (Soundness of Iris $). If {$n} e {True} $ holds, then the machine configuration (e, ∅), where ∅ is the empty heap, is safe and terminates in at most n steps.
In other words, a program that is initially granted n time credits cannot run for more than n steps. To establish this theorem, we proceed roughly as follows:
1. we provide a concrete definition of tick;
2. we provide a concrete definition of $ and prove that TCIntf ($) tick holds;
3. this yields {$n} e tick {True}; from this and from the correctness of the tick translation, we deduce that e cannot crash or run for more than n steps.
Step 1.Our first step is to provide an implementation of tick .As announced earlier ( §3), we use tick c (Figure 4).We instantiate the parameter oops with crash, an arbitrary function whose application is unsafe.(That is, crash is chosen so that crash () reduces to a stuck term.)For the moment, c remains a parameter.
With these concrete choices of tick and oops, the translation transforms an out-of-time-budget condition into a hard crash.Because Iris forbids crashes, Iris $ , which is the composition of the translation with Iris, will forbid out-oftime-budget conditions, as desired.
For technical reasons, we need two more lemmas about the translation, whose proofs rely on the fact that oops is instantiated with crash.They are slightly modified or strengthened variants of Lemmas 2 and 3. First, if the source code can take one step, then the translated code, supplied with zero budget, crashes.Second, if the translated code, supplied with a runtime budget of m, does not crash, then the source code terminates in at most m steps.
Lemma 5 (Safety Preservation, Strengthened).If for every location c the configuration ( T , σ [c ← m]) is safe, then (T, σ) is safe and terminates in at most m steps.
Step 2. Our second step, roughly, is to exhibit a definition of $ : N → iProp such that TCIntf ($) tick c is satisfied. That is, we would like to prove something along the lines of: ∃($ : N → iProp) TCIntf ($) tick c. However, these informal sentences do not quite make sense. This formula is not an ordinary proposition: it is an Iris assertion, of type iProp. Thus, it does not make sense to say that this formula "is true" in an absolute manner. Instead, we prove in Iris that we can make this assertion true by performing a view shift, that is, a number of operations that have no runtime effect, such as allocating a ghost location and imposing an invariant that ties this ghost state with the physical state of the counter c. This is stated as follows. Lemma 6 (Time Credit Initialization). For every c and n, the following Iris view shift holds (a reconstruction is sketched after the description of its two sides). In this statement, on the left-hand side of the view shift symbol, we find the "points-to" assertion c ↦ n, which represents the unique ownership of the memory location c and the assumption that its initial value is n. This assertion no longer appears on the right-hand side of the view shift. This reflects the fact that, when the view shift takes place, it becomes impossible to access c directly; the only way of accessing it is via the operation tick c.
On the right-hand side of the view shift symbol, beyond the existential quantifier, we find a conjunction of the assertion TCIntf ($) tick c , which means that the laws of time credits are satisfied, and $n, which means that there are initially n time credits in existence.
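Putting together the two sides just described, the elided view shift of Lemma 6 presumably reads as follows (the Iris view-shift symbol is rendered here as ⇛):

c ↦ n ⇛ ∃($ : N → iProp). TCIntf ($) tick c ∗ $n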
In the interest of space, we provide only a brief summary of the proof of Lemma 6; the reader is referred to Appendix A for more details.In short, the assertion $1 is defined in such a way that it represents an exclusive contribution of one unit to the current value of the global counter c.In other words, we install the following invariant: at every time, the current value of c is (at least) the sum of all time credits in existence.Thus, the assertion $1 guarantees that c is nonzero, and can be viewed as a permission to decrement c by one.This allows us to prove that the specification of tick in Figure 1 is satisfied by our concrete implementation tick c .In particular, tick c cannot cause a crash: indeed, under the precondition $1, c is not in danger of falling below zero, and crash () is not executed-it is in fact dead code.
Step 3. In the last reasoning step, we complete the proof of Theorem 1. The proof is roughly as follows. Suppose the end user has established {$n} e {True} $. By Safety Preservation, Strengthened (Lemma 5), to prove that (e, ∅) is safe and runs in at most n steps, it suffices to show (for an arbitrary location c) that the translated expression e, executed in the initial heap ∅ [c ← n], is safe. To do so, beginning with this initial heap, we perform Time Credit Initialization, that is, we execute the view shift whose statement appears in Lemma 6. This yields an abstract predicate $ as well as the assertions TCIntf ($) tick and $n. At this point, we unfold the Iris $ triple {$n} e {True} $, yielding an implication (see Definition 1), and apply it to $, to tick c, and to the hypothesis TCIntf ($) tick. This yields the Iris triple {$n} e {True}. Because we have $n at hand and because Iris is sound [12], this implies that e is safe. This concludes the proof. This last step is, we believe, where the modularity of our approach shines. Iris' soundness theorem is re-used as a black box, without change. In fact, any program logic other than Iris could be used as a basis for our construction, as long as it is expressive enough to prove Time Credit Initialization (Lemma 6). The last ingredient, Safety Preservation, Strengthened (Lemma 5), involves only the operational semantics of HeapLang, and is independent of Iris.
This was just an informal account of our proof. For further details, the reader is referred to the online repository [17].
Iris with time receipts
In this section, we extend Iris with time receipts and prove the soundness of the new logic, dubbed Iris ȯ. To do so, we follow the scheme established in the previous section (§4), and compose Iris with the tick translation.
From here on, let us view the interface of time receipts as parameterized over ȯ, ȫ, and tick. Thus, we write "TRIntf (ȯ) (ȫ) tick" for the separating conjunction of all items in Figure 3 except the declarations of ȯ, ȫ, and tick.
As in the case of credits, the user is given an abstract view of time receipts:

Definition 2 (Iris ȯ triple). An Iris ȯ triple {P} e {Φ} ȯ is syntactic sugar for an implication analogous to the one in Definition 1, universally quantified over ȯ, ȫ, and tick, with the hypothesis TRIntf (ȯ) (ȫ) tick in place of TCIntf ($) tick.

Theorem 2 (Soundness of Iris ȯ). If {True} e {True} ȯ holds, then the machine configuration (e, ∅) is (N − 1)-safe.
As indicated earlier, we assume that the end user is interested in proving that crashes cannot occur until a very long time has elapsed, which is why we state the theorem in this way.⁸ Whereas an Iris triple {True} e {True} guarantees that e is safe, the Iris ȯ triple {True} e {True} ȯ guarantees that it takes at least N − 1 steps of computation for e to crash. In this statement, N is the global parameter that appears in the axiom ȫ N False (Figure 3). Compared with Iris, Iris ȯ provides a weaker safety guarantee, but offers additional reasoning principles, leading to increased convenience and modularity.
Step 1. In this step, we keep our concrete implementation of tick, namely tick c (Figure 4). One difference with the case of time credits, though, is that we plan to initialize c with N − 1. Another difference is that, this time, we instantiate the parameter oops with loop, where loop () is an arbitrary divergent term.⁹

Step 2. The next step is to prove that we are able to establish the time receipt interface. We prove the following:

Lemma 7 (Time Receipt Initialization). For every location c, the following Iris view shift holds:

c → N − 1 ⇛ ∃ȯ ȫ. TRIntf (ȯ) (ȫ) tick c

We provide only a brief summary of the proof of Lemma 7; for further details, the reader is referred to Appendix B. Roughly speaking, we install the invariant that c holds N − 1 − i, where i is some number that satisfies 0 ≤ i < N. We define ȯ n as an exclusive contribution of n units to the current value of i, and define ȫ n as an observation that i is at least n. (i grows with time, so such an observation is stable.) As part of the proof of the above lemma, we check that tick c satisfies its specification. In contrast with the case of time credits, in this case, the precondition ȫ n does not guarantee that c holds a nonzero value. Thus, it is possible for tick () to be executed when c is zero. This is not a problem, though, because loop () is safe to execute in any situation: it satisfies the Hoare triple {True} loop () {False}. In other words, when c is about to fall below zero and therefore the invariant i < N seems about to be broken, loop () saves the day by running away and never allowing execution to continue normally.
Step 3. In the last reasoning step, we complete the proof of Theorem 2. Suppose the end user has established {True} e {True} ȯ .By Safety Preservation (Lemma 3), to prove that (e, ∅) is (N −1)-safe, it suffices to show (for an arbitrary location c) that e , executed in the initial heap ∅ [c ← N − 1], is safe.To do so, beginning with this initial heap, we perform Time Receipt Initialization, that is, we execute the view shift whose statement appears in Lemma 7.This yields two abstract predicates ȯ and ȫ as well as the assertion TRIntf (ȯ) (ȫ) tick .At this point, we unfold {True} e {True} ȯ (see Definition 2), yielding an implication, and apply this implication, yielding the Iris triple {True} e {True}.Because Iris is sound [12], this implies that e is safe.This concludes the proof.For further detail, the reader is again referred to our online repository [17].
Marrying time credits and time receipts
It seems desirable to combine time credits and time receipts in a single program logic, Iris $ȯ. We have done so [17]. In short, following the scheme of §4 and §5, the definition of Iris $ȯ involves composing Iris with the tick translation. This time, tick serves two purposes: it consumes one time credit and produces one exclusive time receipt (and increments a persistent time receipt). Thus, its specification combines the two previous ones: its precondition requires $1, and its postcondition provides one new exclusive time receipt. Let us write TCTRIntf ($) (ȯ) (ȫ) tick for the combined interface of time credits and time receipts. This interface combines all of the axioms of Figures 1 and 3, but declares a single tick function¹⁰ and proposes a single specification for it, which is the one just described.
Iris $ȯ allows exploiting time credits to prove time complexity bounds and, at the same time, exploiting time receipts to prove the absence of certain integer overflows.Our verification of Union-Find ( §8) illustrates these two aspects.
Guéneau et al. [7] use time credits to reason about asymptotic complexity, that is, about the manner in which a program's complexity grows as the size of its input grows towards infinity.Does such asymptotic reasoning make sense in Iris $ȯ , where no program is ever executed for N time steps or beyond?It seems to be the case that if a program p satisfies the triple {$n} p {Φ} $ ȯ , then it also satisfies the stronger triple {$min(n, N )} p {Φ} $ ȯ , therefore also satisfies {$N } p {Φ} $ ȯ .Can one therefore conclude that p has "constant time complexity"?We believe not.Provided N is considered a parameter, as opposed to a constant, one cannot claim that "N is O(1)", so {$min(n, N )} p {Φ} $ ȯ does not imply that "p runs in constant time".In other words, a universal quantification on N should come after the existential quantifier that is implicit in the O notation.We have not yet attempted to implement this idea; this remains a topic for further investigation.
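To spell out the quantifier-ordering point made above, one can write cost_N(p, n) for the number of computation steps p takes on an input of size n when the global parameter is N (this formalization is ours, not the paper's):

```latex
\[
\underbrace{\exists c.\; \forall N.\; \forall n.\; \mathrm{cost}_N(p, n) \le c}_{\text{``$p$ runs in constant time''}}
\qquad\text{versus}\qquad
\underbrace{\forall N.\; \exists c.\; \forall n.\; \mathrm{cost}_N(p, n) \le c}_{\text{trivially true, with witness } c := N}
\]
```

Only the first statement, where the constant is chosen before N, deserves to be called a constant-time bound; the triple with precondition $min(n, N) supports only the second.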
Application: thunks in Iris $
In this section, we illustrate the power of Iris $ by constructing an implementation of thunks as a library in Iris $ .A thunk, also known as a suspension, is a very simple data structure that represents a suspended computation.There are two operations on thunks, namely create, which constructs a new thunk, and force, which demands the result of a thunk.A thunk memoizes its result, so that even if it is forced multiple times, the computation only takes place once.
Okasaki [18] proposes a methodology for reasoning about the amortized time complexity of computations that involve shared thunks. For every thunk, he keeps track of a debit, which can be thought of as an amount of credit that one must still pay before one is allowed to force this thunk. A ghost operation, pay, changes one's view of a thunk, by reducing the debit associated with this thunk. force can be applied only to a zero-debit thunk, and has amortized cost O(1). Indeed, if this thunk has been forced already, then force really requires constant time; and if this thunk is being forced for the first time, then the cost of performing the suspended computation must have been paid for in advance, possibly in several installments, via pay. This discipline is sound even in the presence of sharing, that is, of multiple pointers to a thunk. Indeed, whereas duplicating a credit is unsound, duplicating a debit leads to an over-approximation of the true cost, hence is sound. Danielsson [6] formulates Okasaki's ideas as a type system, which he proves sound in Agda. Pilkiewicz and Pottier [19] reconstruct this type discipline in the setting of a lower-level type system, equipped with basic notions of time credits, hidden state, and monotonic state. Unfortunately, their type system is presented in an informal manner and does not come with a proof of type soundness.
We reproduce Pilkiewicz and Pottier's construction in the formal setting of Iris $ .Indeed, Iris $ offers all of the necessary ingredients, namely time credits, hidden state (invariants, in Iris terminology) and monotonic state (a special case of Iris' ghost state).Our reconstruction is carried out inside Coq [17].
Concurrency and reentrancy
One new problem that arises here is that Okasaki's analysis, which is valid in a sequential setting, potentially becomes invalid in a concurrent setting.Suppose we wish to allow multiple threads to safely share access to a thunk.A natural, simple-minded approach would be to equip every thunk with a lock and allow competition over this lock.Then, unfortunately, forcing would become a blocking operation: one thread could waste time waiting for another thread to finish forcing.In fact, in the absence of a fairness assumption about the scheduler, an unbounded amount of time could be wasted in this way.This appears to invalidate the property that force has amortized cost O (1).
Technically, the manner in which this problem manifests itself in Iris $ is in the specification of locks.Whereas in Iris a spin lock can be implemented and proved correct with respect to a simple and well-understood specification [2], in Iris $ , it cannot.The lock () method contains a potentially infinite loop: therefore, no finite amount of time credits is sufficient to prove that lock () is safe.This issue is discussed in greater depth later on ( §9).
A distinct yet related problem is reentrancy.Arguably, an implementation of thunks should guarantee that a suspended computation is evaluated at most once.This guarantee seems particularly useful when the computation has a side effect: the user can then rely on the fact that this side effect occurs at most once.However, this property does not naturally hold: in the presence of heapallocated mutable state, it is possible to construct an ill-behaved "reentrant" thunk which, when forced, attempts to recursively force itself.Thus, something must be done to dynamically reject or statically prevent reentrancy.In Pilkiewicz and Pottier's code [19], reentrancy is detected at runtime, thanks to a three-color scheme, and causes a fatal runtime failure.In a concurrent system where each thunk is equipped with a lock, reentrancy is also detected at runtime, and turned into deadlock; but we have explained earlier why we wish to avoid locks.
Fortunately, Iris provides us with a static mechanism for forbidding both concurrency and reentrancy. We introduce a unique token, which can be thought of as "permission to use the thunk API", and set things up so that pay and force require and return this token. This forbids concurrency: two operations on thunks cannot take place concurrently. Furthermore, when a user-supplied suspended computation is executed, the token is not transmitted to it. This forbids reentrancy.¹¹ The implementation of this token relies on Iris' "nonatomic invariants" (§7.4). With these restrictions, we are able to prove that Okasaki's discipline is sound.
Implementation of thunks
A simple implementation of thunks in HeapLang appears in Figure 5. A thunk can be in one of two states: White f and Black v. A white thunk is unevaluated: the function f represents a suspended computation. A black thunk is evaluated: the value v is the result of the computation that has been performed already.

Fig. 6. A simple specification of thunks in Iris $

Two colors are sufficient: because our static discipline rules out reentrancy, there is no need for a third color, whose purpose would be to dynamically detect an attempt to force a thunk that is already being forced.
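For readers who prefer ML syntax, the following OCaml sketch mirrors the two-state thunk just described (the paper's actual implementation is the HeapLang code of Figure 5; this sketch deliberately ignores the concurrency and reentrancy concerns of §7.1):

```ocaml
(* A thunk is a mutable cell that is either White (unevaluated,
   holding the suspended computation) or Black (evaluated,
   holding the memoized result). *)
type 'a state = White of (unit -> 'a) | Black of 'a
type 'a thunk = 'a state ref

let create (f : unit -> 'a) : 'a thunk =
  ref (White f)

let force (t : 'a thunk) : 'a =
  match !t with
  | Black v -> v                  (* already forced: constant time *)
  | White f ->
      let v = f () in             (* first forcing: run the suspended computation *)
      t := Black v;               (* memoize, so later calls are cheap *)
      v
```

For example, with a placeholder computation expensive, `let t = create (fun () -> expensive ()) in force t; force t` runs expensive () only once.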
Specification of thunks in Iris $
Our specification of thunks appears in Figure 6.It declares an abstract predicate isThunk t n Φ, which asserts that t is a valid thunk, that the debt associated with this thunk is n, and that this thunk (once forced) produces a value that satisfies the postcondition Φ.The number n, a debit, is the number of credits that remain to be paid before this thunk can be forced.The postcondition Φ is chosen by the user when a thunk is created.It must be duplicable (this is required in the specification of force) because force can be invoked several times and we must guarantee, every time, that the result v satisfies Φ v.
The second axiom states that isThunk t n Φ is a persistent assertion.This means that a valid thunk, once created, remains a valid thunk forever.Among other things, it is permitted to create two pointers to a single thunk and to reason independently about each of these pointers.
The third axiom states that isThunk t n Φ is covariant in its parameter n.Overestimating a debt still leads to a correct analysis of a program's worst-case time complexity.
Next, the specification declares an abstract assertion, and provides the user with one copy of this assertion. We refer to it as "the thunderbolt".
The next item in Figure 6 is the specification of create.It is higher-order: the precondition of create contains a specification of the function f that is passed as an argument to create.This axiom states that, if f represents a computation of cost n, then create (f ) produces an n-debit thunk.The cost of creation itself is 3 credits.This specification is somewhat simplistic, as it does not allow the function f to have a nontrivial precondition.It is possible to offer a richer specification; we eschew it in favor of simplicity.
Next comes the specification of force.Only a 0-debit thunk can be forced.The result is a value v that satisfies Φ.The (amortized) cost of forcing is 11 credits.The thunderbolt appears in the pre-and postcondition of force, forbidding any concurrent attempts to force a thunk.
The last axiom in Figure 6 corresponds to pay.It is a view shift, a ghost operation.By paying k credits, one turns an n-debit thunk into an (n − k)-debit thunk.At runtime, nothing happens: it is the same thunk before and after the payment.Yet, after the view shift, we have a new view of the number of debits associated with this thunk.Here, paying requires the thunderbolt.It should be possible to remove this requirement; we have not yet attempted to do so.
Proof of thunks in Iris $
After implementing thunks in HeapLang (§7.2) and expressing their specification in Iris $ (§7.3), it remains to prove that this specification can be established. We sketch the key ideas of this proof.
Following Pilkiewicz and Pottier [19], when a new thunk is created, we install a new Iris invariant, which describes this thunk. In this invariant, γ is a ghost location, which we allocate at the same time as the thunk t. It holds elements of the authoritative monoid Auth(N, max) [12]. The variable nc, for "necessary credits", is the cost of the suspended computation: it appears in the precondition of f. The variable ac, for "available credits", is the number of credits that have been paid so far. The disjunction inside the invariant states that: either the thunk is white, in which case we have ac credits at hand; or the thunk is black, in which case we have no credits at hand, as they have been spent already.
The predicate isThunk t n Φ is then defined in terms of this invariant. The non-authoritative assertion ◦ (nc − n) γ that it contains, confronted with the authoritative assertion • ac γ that can be obtained by acquiring the invariant, implies the inequality nc − n ≤ ac, therefore nc ≤ ac + n. That is, the credits paid so far (ac) plus the credits that remain to be paid (n) are sufficient to cover the actual cost of the computation (nc). In particular, in the proof of force, we have a 0-debit thunk, so nc ≤ ac holds. In the case where the thunk is white, this means that the ac credits that we have at hand are sufficient to justify the call f (), which requires nc credits.
The final aspect that remains to be explained is our use of NaInv(• • •), an Iris "nonatomic invariant".Indeed, in this proof, we cannot rely on Iris' primitive invariants.A primitive invariant can be acquired only for the duration of an atomic instruction [12].In our implementation of thunks (Figure 5), however, we need a "critical section" that encompasses several instructions.That is, we must acquire the invariant before dereferencing t, and (in the case where this thunk is white) we cannot release it until we have marked this thunk black.Fortunately, Iris provides a library of "nonatomic invariants" for this very purpose.(This library is used in the RustBelt project [10] to implement Rust's type Cell.)This library offers separate ghost operations for acquiring and releasing an invariant.Acquiring an invariant consumes a unique token, which is recovered when the invariant is released: this guarantees that an invariant cannot be acquired twice, or in other words, that two threads cannot be in a critical section at the same time.The unique token involved in this protocol is the one that we expose to the end user as "the thunderbolt".
Application: Union-Find in Iris $ȯ

As an illustration of the use of both time credits and time receipts, we formally verify the functional correctness and time complexity of an implementation of the Union-Find data structure. Our proof [17] is based on Charguéraud and Pottier's work [4]. We port their code from OCaml to HeapLang, and port their proof from Separation Logic with Time Credits to Iris $ȯ. At this point, the proof exploits just Iris $, a subset of Iris $ȯ. The mathematical analysis of Union-Find, which represents a large part of the proof, is unchanged. Our contribution lies in the fact that we modify the data structure to represent ranks as machine integers instead of unbounded integers, and exploit time receipts in Iris $ȯ to establish the absence of overflow. We equip HeapLang with signed machine integers whose bit width is a parameter w. Under the hypothesis log log N < w − 1, we are able to prove that, even though the code uses limited-width machine integers, no overflow can occur in a feasible time. If for instance N is 2⁶³, then this condition boils down to w ≥ 7. Ranks can be stored in just 7 bits without risking overflow.
As in Charguéraud and Pottier's work, the Union-Find library advertises an abstract representation predicate isUF D R V, which describes a well-formed, uniquely-owned Union-Find data structure. The parameter D, a set of nodes, is the domain of the data structure. The parameter R, a function, maps a node to the representative element of its equivalence class. The parameter V, also a function, maps a node to a payload value associated with its equivalence class. We do not show the specification of every operation. Instead, we focus on union, which merges two equivalence classes. We establish an Iris $ȯ triple for union whose statement involves updated functions R and V, which differ from the original ones only on the two merged equivalence classes. The hypotheses x ∈ D and y ∈ D and the conjunct isUF D R V in the precondition require that x and y be two nodes in a valid Union-Find data structure. The postcondition λz. … describes the state of the data structure after the operation and the return value z.
The conjunct $(44α(|D|) + 152) in the precondition indicates that union has time complexity O(α(n)), where α is an inverse of Ackermann's function and n is the number of nodes in the data structure.This is an amortized bound; the predicate isUF also contains a certain number of time credits, known as the potential of the data structure, which are used to justify union operations whose actual cost exceeds the advertised cost.The constants 44 and 152 differ from those found in Charguéraud and Pottier's specification [4] because Iris $ȯ counts every computation step, whereas they count only function calls.Abstracting these constants by using O notation, as proposed by Guéneau et al. [7], would be desirable, but we have not attempted to do so yet.
The main novelty, with respect to Charguéraud and Pottier's specification, is the hypothesis log log N < w − 1, which is required to prove that no overflow can occur when the rank of a node is incremented.In our proof, N and w are parameters; once their values are chosen, this hypothesis is easily discharged, once and for all.In the absence of time receipts, we would have to publish the hypothesis log log n < w − 1, where n is the cardinal of D, forcing every (direct and indirect) user of the data structure to keep track of this requirement.
For the proof to go through, we store n time receipts in the data structure: that is, we include the conjunct ȯ n, where n stands for |D|, in the definition of the invariant isUF D R V. The operation of creating a new node takes at least one step, therefore produces one new time receipt, which is used to prove that the invariant is preserved by this operation. At any point, then, from the invariant, and from the basic laws of time receipts, we can deduce that n < N holds. Furthermore, it is easy to show that a rank is at most log n. Therefore, a rank is at most log N. In combination with the hypothesis log log N < w − 1, this suffices to prove that a rank is at most 2^(w−1) − 1, the largest signed machine integer, and therefore that no overflow can occur in the computation of a rank.
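The chain of inequalities behind this argument can be spelled out as follows (our restatement of the reasoning above):

```latex
\[
\mathit{rank} \;\le\; \log n \;\le\; \log N ,
\qquad
\log\log N \;<\; w - 1 \;\Longrightarrow\; \log N \;<\; 2^{\,w-1} .
\]
\[
\text{Since a rank is an integer, } \mathit{rank} < 2^{\,w-1} \text{ gives } \mathit{rank} \;\le\; 2^{\,w-1} - 1 ,
\]
```

so a rank always fits in a signed machine integer of width w.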
Clochard et al. [5, §2] already present Union-Find as a motivating example among several others.They write that "there is obviously no danger of arithmetic overflow here, since [ranks] are only obtained by successive increments by one".This argument would be formalized in their system by representing ranks as either "one-time" or "peano" integers (in our terminology, clocks or snapclocks).This argument could be expressed in Iris $ȯ , but would lead to requiring log N < w − 1.In contrast, we use a more refined argument: we note that ranks are logarithmic in n, the number of nodes, and that n itself can never overflow.This leads us to the much weaker requirement log log N < w − 1, which means that a rank can be stored in very few bits.We believe that this argument cannot be expressed in Clochard et al.'s system.
Discussion
One feature of Iris and HeapLang that deserves further discussion is concurrency.Iris is an evolution of Concurrent Separation Logic, and HeapLang has shared-memory concurrency.How does this impact our reasoning about time?At a purely formal level, this does not have any impact: Theorems 1, 2, 3 and their proofs are essentially oblivious to the absence or presence of concurrency in the programming language.At a more informal level, though, this impacts our interpretation of the real-world meaning of these theorems.Whereas in a sequential setting a "number of computation steps" can be equated (up to a constant factor) with "time", in a concurrent setting, a "number of computation steps" is referred to as "work", and is related to "time" only up to a factor of p, the number of processors.In short, our system measures work, not time.The number of available processors should be taken into account when choosing a specific value of N : this value must be so large that N computation steps are infeasible even by p processors.With this in mind, we believe that our system can still be used to prove properties that have physical relevance.
In short, our new program logics, Iris $ , Iris ȯ , and Iris $ȯ , tolerate concurrency.Yet, is it fair to say that they have "good support" for reasoning about concurrent programs?We believe not yet, and this is an area for future research.The main open issue is that we do not at this time have good support for reasoning about the time complexity of programs that perform busy-waiting on some resource.The root of the difficulty, already mentioned during the presentation of thunks ( §7.1), is that one thread can fail to make progress, due to interference with another thread.A retry is then necessary, wasting time.In a spin lock, for instance, the "compare-and-set" (CAS) instruction that attempts to acquire the lock can fail.There is no bound on the number of attempts that are required until the lock is eventually acquired.Thus, in Iris $ , we are currently unable to assign any specification to the lock method of a spin lock.
In the future, we wish to take inspiration from Hoffmann, Marmar and Shao [9], who use time credits in Concurrent Separation Logic to establish the lock-freedom of several concurrent data structures.The key idea is to formalize the informal argument that "failure of a thread to make progress is caused by successful progress in another thread".Hoffmann et al. set up a "quantitative compensation scheme", that is, a protocol by which successful progress in one thread (say, a successful CAS operation) must transmit a number of time credits to every thread that has encountered a corresponding failure and therefore must retry.Quite interestingly, this protocol is not hardwired into the reasoning rule for CAS.In fact, CAS itself is not primitive; it is encoded in terms of an atomic { . . .} construct.The protocol is set up by the user, by exploiting the basic tools of Concurrent Separation Logic, including shared invariants.Thus, it should be possible in Iris $ to reproduce Hoffmann et al.'s reasoning and to assign useful specifications to certain lock-free data structures.Furthermore, we believe that, under a fairness assumption, it should be possible to assign Iris $ specifications also to coarse-grained data structures, which involve locks.Roughly speaking, under a fair scheduler, the maximum time spent waiting for a lock is the maximum number of threads that may compete for this lock, multiplied by the maximum cost of a critical section protected by this lock.Whether and how this can be formalized is a topic of future research.
The axiom ȫ N False comes with a few caveats that should be mentioned.The same caveats apply to Clochard et al.'s system [5], and are known to them.
One caveat is that it is possible in theory to use this axiom to write and justify surprising programs.For instance, in Iris ȯ , the loop "for i = 1 to N do () done" satisfies the specification {True} -{False}: that is, it is possible to prove that this loop "never ends".As a consequence, this loop also satisfies every specification of the form {True} -{Φ}.On the face of it, this loop would appear to be a valid solution to every programming assignment!In practice, it is up to the user to exhibit taste and to refrain from exploiting such a paradox.In reality, the situation is no worse than that in plain Iris, a logic of partial correctness, where the infinite loop "while true do () done" also satisfies {True} -{False}.
Another important caveat is that the compiler must in principle be instructed to never optimize ticks away.If, for instance, the compiler was allowed to recognize that the loop "for i = 1 to N do () done" does nothing, and to replace this loop with a no-op, then this loop, which according to Iris ȯ "never ends", would in reality end immediately.We would thereby be in danger of proving that a source program cannot crash unless it is allowed to run for centuries, whereas in reality the corresponding compiled program does crash in a short time.In practice, this danger can be avoided by actually instrumenting the source code with tick () instructions and by presenting tick to the compiler as an unknown external function, which cannot be optimized away.However, this seems a pity, as it disables many compiler optimizations.
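As an illustration of this instrumentation strategy (the names below are hypothetical; nothing here comes from the paper), in an OCaml-like source language one could expose tick as an opaque external function:

```ocaml
(* "instr_tick" is a hypothetical C stub. Because the compiler knows
   nothing about it, every call to tick () is treated as an effectful
   operation and cannot be recognized as a no-op and deleted. *)
external tick : unit -> unit = "instr_tick"

(* The instrumented loop from the text: even though its body does no
   useful work, the n calls to tick () keep it from being optimized away. *)
let wait (n : int) : unit =
  for _i = 1 to n do
    tick ()
  done
```

The price, as noted above, is that treating tick as an unknown external call also blocks legitimate optimizations around it.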
We believe that, despite these pitfalls, time receipts can be a useful tool.We hope that, in the future, better ways of avoiding these pitfalls will be discovered.
Related work
Time credits in an affine Separation Logic are not a new concept.Atkey [1] introduces them in the setting of Separation Logic.Pilkiewicz and Pottier [19] exploit them in an informal reconstruction of Danielsson's type discipline for lazy thunks [6], which itself is inspired by Okasaki's work [18].Several authors subsequently exploit time credits in machine-checked proofs of correctness and time complexity of algorithms and data structures [4,7,21].Hoffmann, Marmar and Shao [9], whose work was discussed earlier in this paper ( §9), use time credits in Concurrent Separation Logic to prove that several concurrent data structure implementations are lock-free.
At a metatheoretic level, Charguéraud and Pottier [4] provide a machinechecked proof of soundness of a Separation Logic with time credits.Haslbeck and Nipkow [8] compare three program logics that can provide worst-case time complexity guarantees, including Separation Logic with time credits.
To the best of our knowledge, affine (exclusive and persistent) time receipts are new, and the axiom ȫ N False is new as well.It is inspired by Clochard et al.'s idea that "programs cannot run for centuries" [5], but distills this idea into a simpler form.
Our implementation of thunks and our reconstruction of Okasaki's debits [18] in terms of credits are inspired by earlier work [6,19].Although Okasaki's analysis assumes a sequential setting, we adapt it to a concurrent setting by explicitly forbidding concurrent operations on thunks; to do so, we rely on Iris nonatomic invariants.In contrast, Danielsson [6] views thunks as a primitive construct in an otherwise pure language.He equips the language with a type discipline, where the type Thunk, which is indexed with a debit, forms a monad, and he provides a direct proof of type soundness.The manner in which Danielsson inserts tick instructions into programs is a precursor of our tick translation; this idea can in fact be traced at least as far back as Moran and Sands [16].Pilkiewicz and Pottier [19] sketch an encoding of debits in terms of credits.Because they work in a sequential setting, they are able to install a shared invariant by exploiting the anti-frame rule [20], whereas we use Iris' nonatomic invariants for this purpose.The anti-frame rule does not rule out reentrancy, so they must detect it at runtime, whereas in our case both concurrency and reentrancy are ruled out by our use of nonatomic invariants.
Madhavan et al. [15] present an automated system that infers and verifies resource bounds for higher-order functional programs with thunks (and, more generally, with memoization tables).They transform the source program to an instrumented form where the state is explicit and can be described by monotone assertions.For instance, it is possible to assert that a thunk has been forced already (which guarantees that forcing it again has constant cost).This seems analogous in Okasaki's terminology to asserting that a thunk has zero debits, also a monotone assertion.We presently do not know whether Madhavan et al.'s system could be encoded into a lower-level program logic such as Iris $ ; it would be interesting to find out.
Conclusion
We have presented two mechanisms, namely time credits and time receipts, by which Iris, a state-of-the-art concurrent program logic, can be extended with means of reasoning about time.We have established soundness theorems that state precisely what guarantees are offered by the extended program logics Iris $ , Iris ȯ , and Iris $ȯ .We have defined these new logics modularly, by composing Iris with a program transformation.The three proofs follow a similar pattern: the soundness theorem of Iris is composed with a simulation lemma about the tick translation.We have illustrated the power of the new logics by reconstructing Okasaki's debit-based analysis of thunks, by reconstructing Clochard et al.'s technique for proving the absence of certain integer overflows, and by presenting an analysis of Union-Find that exploits both time credits and time receipts.
One limitation of our work is that all of our metatheoretic results are specific to HeapLang, and would have to be reproduced, following the same pattern, if one wished to instantiate Iris $ȯ for another programming language. It would be desirable to make our statements and proofs generic. In future work, we would also like to better understand what can be proved about the time complexity of concurrent programs that involve waiting. Can the time spent waiting be bounded? What specification can one give to a lock, or a thunk that is protected by a lock? A fairness hypothesis about the scheduler seems to be required, but it is not clear yet how to state and exploit such a hypothesis. Hoffmann, Marmar and Shao [9] have carried out pioneering work in this area, but have dealt only with lock-free data structures and only with situations where the number of competing threads is fixed. It would be interesting to transpose their work into Iris $ and to develop it further.

A Time Credit Initialization

This appendix sketches the proof of Lemma 6 (Time Credit Initialization). The construction is an archetypal example of what can be done with Iris, so, assuming familiarity with Iris, the construction is not difficult.
The first step is to allocate a ghost location γ, whose content is an element of the "authoritative monoid" Auth(N, +) [12]. This step gives us the ability to express two forms of assertions about γ. The "nonauthoritative" assertion ◦ m γ represents a contribution of m units to γ, whereas the "authoritative" assertion • n γ represents a guarantee that the sum of all contributions to γ currently in existence is at most n. We initialize the ghost location γ with the value n, so that, initially, we hold the conjunction • n γ ∗ ◦ n γ.

The second step is to install an Iris invariant that ties the ghost location γ and the physical counter c together. Our invariant is as follows: there exists some m such that both c → m and • m γ hold. This invariant asserts that, at all times, the value stored in the counter c is at least as high as the sum of all (ghost) contributions ◦ · γ in existence. By installing this invariant, we lose the points-to assertion c → n and the authoritative assertion • n γ. In return, once installed, an Iris invariant becomes shared: every thread can access the invariant for the duration of an atomic instruction. This is exploited (below) in the proof that tick c satisfies its specification.

The previous two steps, together, form the view shift that appears in the statement of Lemma 6. The next step in the proof of the lemma is to provide a witness for the existential quantification over $. Our concrete definition of the Iris predicate $ is as follows: $n ≜ ◦ n γ. That is, a time credit is just a (ghost) contribution to γ. The nonauthoritative assertion ◦ n γ, which we still have at hand at this point, allows us to justify the conjunct $n in the right-hand side of the view shift.

There remains to justify the conjunct TCIntf ($) tick c in the right-hand side of the view shift. It is easy to check that the three axioms of time credits are satisfied. The law $(n1 + n2) ≡ $n1 ∗ $n2, in particular, follows immediately from the basic law ◦ (n1 + n2) γ ≡ ◦ n1 γ ∗ ◦ n2 γ.

The last point that remains to be checked is the fact that tick c, whose code appears in Figure 4, satisfies the specification of tick, which appears in Figure 1. There, the invariant is opened once, to justify the read operation !c. The precondition $1 is just ◦ 1 γ, which, together with the invariant, guarantees that the result k of this read operation is nonzero. This implies that oops (in this case, crash) is not executed. The invariant is then opened again, to justify the CAS operation CAS(c, k, k − 1). In the case where this operation succeeds, a frame-preserving update is performed so as to update the ghost location γ (giving up the contribution ◦ 1 γ) and reestablish the invariant.
B Time Receipt Initialization
This appendix provides a sketch of the proof of Lemma 7.
The ingredients are the same as in our earlier proof of Lemma 6 (§A). This time, we allocate two ghost locations γ and δ, whose values inhabit the monoids Auth(N, +) and Auth(N, max), respectively. We install an invariant that ties this ghost state with the physical state of the counter c. In short, the counter c stores the number of steps that can still be taken before tick c executes loop () and diverges. This number is always of the form N − n − 1, where n is the number of steps that have already been taken. The above invariant guarantees that, at all times, the following properties hold: 1. n is at least the sum of all exclusive receipts ȯ m currently in existence; 2. n is at least the value m of any persistent receipt ȫ m currently in existence; 3. n is less than N.
In light of the above invariant and definitions, it is straightforward to check that the interface TRIntf (ȯ) (ȫ) tick c (Figure 3) is satisfied.
For instance, to check that the law ȫ N False holds, it suffices to open the invariant, confront the assertions ◦ N δ and • n δ to obtain N ≤ n, and combine this inequality with the inequality n < N so as to derive a contradiction. The fact that tick c (Figure 4) satisfies the specification in Figure 3 is also easily verified. In contrast with the case of time credits, where k could not be zero and oops was never executed, in the present case, k can be zero, and oops can be executed. Fortunately, here, oops is loop, whose postcondition is False, so, in the case where k is zero, tick c trivially satisfies its specification. In the case where k is nonzero, we get N − n − 1 > 0, that is, n + 1 < N, so that, after decrementing c, which amounts to incrementing n, we are able to re-establish the invariant.
Lemma 2 (Immediate Safety Preservation). Assume c is fresh for e. Let m > 0. If the configuration (e, σ [c ← m]) is safe, then either e is a value or the configuration (e, σ) is reducible.
|
v3-fos-license
|
2018-04-03T06:11:49.381Z
|
2017-04-13T00:00:00.000
|
26058091
|
{
"extfieldsofstudy": [
"Engineering",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/abb/2017/5868695.pdf",
"pdf_hash": "75a6f4d89cb90a6536af3f81e92bbfea0505083b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3037",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "4e9f8f2ee5ece458de406718b150fba3ac3dca49",
"year": 2017
}
|
pes2o/s2orc
|
An Ultrasound Imaging-Guided Robotic HIFU Ablation Experimental System and Accuracy Evaluations
In recent years, noninvasive thermal treatment by using high-intensity focused ultrasound (HIFU) has high potential in tumor treatment. The goal of this research is to develop an ultrasound imaging-guided robotic HIFU ablation system for tumor treatment. The system integrates the technologies of ultrasound image-assisted guidance, robotic positioning control, and HIFU treatment planning. With the assistance of ultrasound image guidance technology, the tumor size and location can be determined from ultrasound images as well as the robotic arm can be controlled to position the HIFU transducer to focus on the target tumor. After the development of the system, several experiments were conducted to measure the positioning accuracy of this system. The results show that the average positioning error is 1.01 mm with a standard deviation 0.34, and HIFU ablation accuracy is 1.32 mm with a standard deviation 0.58, which means this system is confirmed with its possibility and accuracy.
Introduction
Cancer is a major public health problem all over the world. According to statistics, common cancers in both sexes occur in the lungs, breasts, colorectum, prostate, stomach, liver, and cervix uteri [1]. Take liver tumors as an example. Early diagnosis and treatment of liver disease are important to avoid deterioration. Apart from biochemical tests such as GOT/GOP or α-globulin, ultrasound scanning is usually adopted for first-line screening and diagnosis. If the disease needs further treatment, tissue biopsy, percutaneous ethanol injection, or RF ablation will usually be performed under ultrasound guidance. For serious cases, open or minimally invasive liver dissection treatment will be necessary. However, all of these are invasive treatments.
Recently, a noninvasive high-intensity focused ultrasound (HIFU) thermal treatment has demonstrated high potential on tumor treatment. The physical principle of this interventional approach is to apply focused ultrasound waves to the tumor tissue such that the heating of the tissue causes its necrosis [2]. The concepts of noninvasive surgery using HIFU had been proposed by Lynn et al. in 1942 [3]. Later in 1960, W. J. Fry and F. J. Fry used HIFU to treat the patients who were suffering from various neurological disorders. Their study shows that the HIFU beams can be used to produce changes in practically any desired brain structure. And the changes can be induced without adversely affecting the intervening brain structure and without interrupting the vascular system even within the site in which irreversible or permanent changes in the neural components are produced [4]. During the past decade, HIFU therapy has been successfully delivered clinically to many lesions, including the prostate tumor [5,6], uterine fibroids [7,8], and liver tumor [9].
During HIFU treatment, we need information about the target tumor in order to determine the size and location of the ablation zone. Kim et al. used magnetic resonance imaging (MRI) to guide HIFU ablation in 33 uterine fibroid patients.
Targeting accuracy values (displacements in absolute values) were 0.9 ± 0.7 mm, 1.2 ± 0.9 mm, and 2.8 ± 2.2 mm in the LR, CC, and AP directions, respectively. Of 527 sonications, 99.8% (526 of 527) were within acceptance ranges [10]. Holbrook et al. focused on the targeting error in liver tumors caused by respiration; they built MRI into their system to guide the HIFU focal point and track the moving phantom [11]. Besides MRI, ultrasound imaging (USI) is another choice for researchers. Sakuma et al. developed a HIFU treatment system for minimally invasive liver surgery that integrated a three-dimensional USI system in 2002. Their study shows that navigation errors were within 3 mm [12]. Later, in 2015, Peng et al. investigated the value of microbubble contrast-enhanced ultrasound (CEUS) in evaluating the treatment response of uterine fibroids to HIFU ablation and compared it with MRI. The results show that CEUS can clearly depict the size of the fibroids and the nonperfused areas of the fibroid, and that the results from CEUS correlated well with those obtained from MRI [13]. From this literature, both MRI and USI can be chosen to guide HIFU ablation. MRI has advantages in offering better images as well as temperature monitoring in the target zone [14]. However, the treatment time of an MRI-guided HIFU procedure is longer than that of USI-guided HIFU, and the cost of MRI-guided HIFU is also much higher than that of USI-guided HIFU [15]. In addition, USI can provide real-time images during HIFU treatment, which is an advantage for HIFU therapy.
Additionally, since the target tumor is usually much larger than the HIFU focal point, treating the entire tumor volume is not practical with a handheld HIFU transducer. Moreover, HIFU treatment needs 0.5-5 s to ablate a single point, so the stability and positioning accuracy of the HIFU focal point are serious concerns. Chauhan and ter Haar developed a HIFU treatment system named FUSBOT, which was combined with a robotic arm to achieve stability and positioning accuracy of the HIFU focal point. According to their study, the navigation errors were within 0.5 mm [16,17]. Masamune et al. developed another HIFU positioning robot treatment system, which integrated the HIFU transducer, an ultrasound probe, and a robotic arm with 4 degrees of freedom, for the treatment of fetal sacrococcygeal teratoma. The positioning errors of their robotic arm in the X, Y, and Z directions were −0.2 ± 0.3 mm, −0.1 ± 0.1 mm, and −0.0 ± 0.1 mm, respectively [18]. As shown above, most of these studies combined their systems with a robotic arm (with different degrees of freedom or mechanisms) to improve the stability and accuracy of the HIFU treatment system.
Ultimately, it is quite difficult to assess the quality of this noninvasive therapy, and there is a dire need for a high-accuracy system supporting the planning, conduct, and monitoring of such treatment. Therefore, this research aimed to study and develop an ultrasound imaging-guided robotic HIFU ablation system for tumor treatment. Instead of building a huge, solid, and expensive system, our HIFU ablation system is combined with existing ultrasound imaging equipment to achieve the HIFU ablation function. Preliminary studies of this system were presented at the Biomedical Electronics and Devices conference in 2015 [19].
Material and Methods
2.1. The Structure of the Ultrasound Imaging-Guided Robotic HIFU System. As shown in Figure 1, the ultrasound imaging-guided robotic HIFU system integrates the ultrasound imaging system (ALOKA, Prosound Alpha 6), the HIFU ablation system (Sonic H-106 probe with Instek, GFG-8255 signal generator and AR, and 150A100B power amplifier), the robotic arm (YAMAHA, YK400XG), the optical tracker (Northern Digital, Polaris Spectra), and a notebook (Dell, M4500) into this system. The ultrasound probe scans the tumor phantom to obtain the location of the tumor. The movement of the ultrasound probe is controlled by the motor-driven linear slide and detected by the optical tracker through the DRF (dynamic reference frame, a tool with three IR-reflective marker spheres, as shown in Figure 2), which is a reference coordinate frame tracked by the optical tracker. Through coordination transformation described below, the position of the tumor phantom relative to the ultrasound image frame can be transferred and represented by the robot frame. The robotic arm is thus able to bring the focal point of the HIFU transducer to aim at the tumor phantom. The signal generator and power amplifier are used to enable the HIFU transducer to generate high-intensity sound power for thermal therapy.
The blocks in the right column of Figure 3 show the working procedures of the ultrasound imaging-guided robotic HIFU system. Green blocks in the left column represent the preliminary works before starting the system. They are also the key points of this study which will be described clearly in the next sections.
Coordinate Transformation between the Optical Tracker and the Ultrasound Image

Figure 4 illustrates our method for determining the coordinate transformation matrix T U I between the ultrasound probe frame O U and the ultrasound image frame O I . A mountain-type calibration template with three plates is fixed at the bottom of the water tank, while a DRF O D is also mounted on the upper corner of the water tank. The position P D of the target point P relative to the tank DRF frame O D is calibrated prior to the experiment. A DRF O U is also attached to the ultrasound probe for position tracking of the probe. The calibration template is scanned by the ultrasound probe, and the image coordinate P I of the target point P is determined from the ultrasound image. The position of the target point P relative to the optical tracker frame can be expressed either through the tank DRF frame (as T T D P D) or through the ultrasound probe frame (as T T U T U I P I); equating these two expressions gives (1), where I represents the ultrasound image frame; U represents the ultrasound probe frame; T represents the optical tracker frame; D represents the tank DRF frame; and T T D, T T U, P D, and P I are known.
The transformation matrix T U I can be determined by bringing the tracker and image coordinates of the target point P at three or more positions P i, with coordinates (P Di, P Ii), i = 1, 2, …, N, N ≥ 3, into (1), and solving by an optimization method such as the least-squares algorithm. After the transformation matrix T U I has been determined, the coordinates of any target tumor detected by the ultrasound probe can be transferred and expressed relative to the optical tracker frame, as described by (2). The calibration template used for the registration of the optical tracker frame and the ultrasound image frame is shown in Figure 5. Since the ultrasound scan beam has a slice thickness (elevational direction), it is necessary to determine the middle plane of the slice so that the subsequent positioning calibration will be more precise. Therefore, a three-layer template is designed to make sure that the ultrasound scan is correctly located on the middle plate, which will have brighter and clearer boundary images than the other two plates. Figure 6 shows the coordinate transformation relationship between the optical tracker and the robotic arm. A tracking device mounted with a DRF (coordinate frame E) and a pin 10 cm in length (pinpoint P represents the focal point of the HIFU transducer) is designed and mounted at the end effector of the robotic arm. A DRF is fixed on the robot base and used to define the world coordinate frame W in case the optical tracker is moved during the registration. The robot coordinate frame is defined as frame R. The transformation matrices T T W and T T E can be determined directly by the optical tracker. The transformation matrix T R W will be solved so that the coordinates of the optical tracker frame can be transformed to the robot frame. In other words, according to the mathematical relationships obtained from solving these matrices, the coordinates of any target point detected by the ultrasound probe can be transformed to the robot frame through the optical tracker. O E represents the position of the origin of the coordinate frame E, and (3) shows the relationship between O E and O W (the coordinate frame W).
Coordinate Transformation between the Robotic Arm and the Optical Tracker.
If the robotic arm is manipulated to move around, the coordinates of the point O E relative to the coordinate frames R and W are calculated by the robotic arm controller and by (3), respectively. The transformation matrix T R W between the robotic arm and the world coordinate frame W can then be determined from these two sets of coordinates, as in (5). Because both O W and O R are not square matrices, we use a least-mean-squares algorithm to solve for T R W.
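One plausible way to make this least-squares step concrete (our formulation, not a verbatim reproduction of the paper's numbered equations) is to stack the homogeneous coordinates of the sampled poses of O E as columns of O W (world frame) and O R (robot frame) and fit the transform that best maps one onto the other:

```latex
\[
O_R \;\approx\; T^{R}_{W}\, O_W
\qquad\Longrightarrow\qquad
T^{R}_{W} \;=\; O_R\, O_W^{\top}\left(O_W\, O_W^{\top}\right)^{-1},
\]
```

where the pseudo-inverse form is the least-squares solution, valid when the sampled positions are sufficiently non-degenerate (at least four non-coplanar points).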
Finally, after completing the registrations between the ultrasound image and the optical tracker as well as between the optical tracker and the robotic arm, the coordinates of the target tumor scanned and detected by the ultrasound system can be transformed and represented in the robot frame. The transformation relationship is defined by (6), where P I is the image coordinate of the target tumor. Figure 7 illustrates the whole coordinate transformation method and also shows that the HIFU transducer has been mounted on the end effector of the robotic arm for HIFU thermal treatment.
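As a hedged reading of the chain of frames described above (our notation, not a verbatim reproduction of equation (6)), the overall mapping from image coordinates to robot coordinates composes the individual registrations:

```latex
\[
P^{R} \;=\; T^{R}_{W}\,\left(T^{T}_{W}\right)^{-1} T^{T}_{U}\, T^{U}_{I}\, P^{I} ,
\]
```

reading right to left: image frame to probe frame (T U I), probe frame to optical tracker frame (T T U), tracker frame to world frame (via the inverse of T T W, the tracked base DRF), and finally world frame to robot frame (T R W).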
Experiment Results and Discussions
After building the whole ultrasound imaging-guided robotic HIFU system and finishing the preliminary work, we conducted three experiments. The first evaluates the coordinate transformation accuracy between the ultrasound image and the optical tracker. The second evaluates the coordinate transformation accuracy between the ultrasound image and the robotic arm. The third tests the positioning accuracy of the entire system by ablating a phantom.
Accuracy Evaluation I: The Positioning Accuracy of Target Points. An experiment has been conducted to verify the positioning measurement error of the coordinate transformation between the ultrasound image and the optical tracker frame. The mountain-type template was seated at depths of 3 cm, 7 cm, and 12 cm. The template at each depth was scanned three times by the ultrasound probe. The distance error is defined as the difference between the coordinate of the target point under the ultrasound image coordinate system (P I) and the coordinate of the target point under the optical tracker coordinate system (P T). The distance errors of the three peak points (of the mountain-type template) at depths of 3 cm, 7 cm, and 12 cm are 0.67 ± 0.27 mm, 1.02 ± 0.26 mm, and 1.24 ± 0.24 mm, respectively. Table 1 lists the experimental data for the cases at 3 cm, 7 cm, and 12 cm depth.
Accuracy Evaluation II: The Positioning Accuracy of the Robotic Arm. The robotic arm was commanded to move to ten positions in order to calculate the transformation matrix T R W by (5). After that, the calibration template was also seated at depths of 3 cm, 7 cm, and 12 cm and scanned by the ultrasound probe. There is a pin mounted on the end effector of the robotic arm (Figure 8). Then the robotic arm was commanded to move pinpoint P (the end of the pin) to the three peak points of the template (as shown in Figure 8). The distance errors between the peak points and the pinpoint P are listed in Table 2. The distance errors at depths of 3 cm, 7 cm, and 12 cm are 0.72 ± 0.26 mm, 1.02 ± 0.26 mm, and 1.31 ± 0.23 mm, respectively.
Accuracy Evaluation III: The Positioning Accuracy of the Ultrasound Imaging-Guided Robotic HIFU System with Ablation of a Phantom. The ultrasound imaging-guided robotic HIFU treatment experiment was conducted by commanding the robotic arm to move the HIFU focal point to ablate the four corner points of a phantom, which were detected by ultrasound images. Figure 9 shows that the HIFU focal point can be positioned at the target (corner) points for thermal ablation. The average distance error is 1.32 ± 0.58 mm, and the distance error of each corner point is listed in Table 3.
Conclusions
This study proposes an ultrasound imaging-guided robotic HIFU experimental system for thermal ablation of tumors. By using this system, the positioning coordinates of targets (which are determined by the ultrasound imaging system) are transformed to the robot coordinate frames so that the robotic arm can move the HIFU transducer to ablate the target tumors. Instead of building the huge, solid, and costly system, this system tries to combine with the existing ultrasound imaging equipment to achieve HIFU ablation function.
The positioning accuracy evaluation results in Section 3 show that the distance error of the ultrasound imaging-guided robotic HIFU system is 1.32 ± 0.58 mm. However, for clinical use, this system still requires many improvements. So far, this study has built an experimental HIFU treatment system and confirmed its feasibility and accuracy. The next step of this research is to consider the path-planning issue and the respiration problem (respiration might cause tumor movement during HIFU treatment [14]) in order to get closer to a real HIFU treatment situation.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.

Figure 9: Positioning experiment of the HIFU thermal ablation.
|
v3-fos-license
|
2023-12-05T06:18:00.013Z
|
2023-12-03T00:00:00.000
|
265607206
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.202306457",
"pdf_hash": "6024c70438f25e96b361a8f3995cd34083924c4a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3039",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "bba55dd311bc8655ce2d1713bfcc45d4eaf10127",
"year": 2023
}
|
pes2o/s2orc
|
Dying to Defend: Neutrophil Death Pathways and their Implications in Immunity
Abstract Neutrophils, accounting for ≈70% of human peripheral leukocytes, are key cells countering bacterial and fungal infections. Neutrophil homeostasis involves a balance between cell maturation, migration, aging, and eventual death. Neutrophils undergo different death pathways depending on their interactions with microbes and external environmental cues. Neutrophil death has significant physiological implications and leads to distinct immunological outcomes. This review discusses the multifarious neutrophil death pathways, including apoptosis, NETosis, pyroptosis, necroptosis, and ferroptosis, and outlines their effects on immune responses and disease progression. Understanding the multifaceted aspects of neutrophil death, the intersections among signaling pathways and ramifications of immunity will help facilitate the development of novel therapeutic methods.
Mature neutrophils do not undergo cell division, possess lobulated nuclei, and have relatively short lifespans. Their continual replacement occurs in the bone marrow via granulopoiesis. Under steady-state conditions, humans generate ≈1 billion neutrophils daily for every kilogram of body weight, and this number can increase to 10 billion in inflammatory cases. [6] Granulopoiesis relies on the cytokine granulocyte colony-stimulating factor. After entering the bloodstream, mature neutrophils have a half-life of less than 24 h. [7] Interestingly, granulopoiesis appears to depend on the phagocytosis of apoptotic neutrophils by tissue dendritic cells and macrophages. As apoptotic neutrophils become engulfed, a subsequent decline in interleukin-23 (IL-23) and IL-17 levels is observed. This reduction curtails the production of granulocyte colony-stimulating factor, thereby affecting the production of neutrophils. [8]

Neutrophil homeostasis depends on a finely tuned balance among cell maturation, release from the bone marrow, migration through vascular and tissue channels, aging, and death. [7] This trajectory is determined by factors ranging from inherent diurnal cell rhythms and interactions with both commensal and pathogenic microbes to environmental exposures and cellular senescence. Neutrophils can undergo various death modalities, such as apoptosis, NETosis, pyroptosis, necroptosis, and ferroptosis. Each modality distinctly modulates the immune responses and disease resolution pathways. Given the large cytotoxic repertoire of neutrophils, rigorous regulation of their production and clearance is necessary to avert unintended inflammatory repercussions in the host. Under physiological conditions, apoptotic neutrophil death modalities prevail, and apoptotic cells exert immunosuppressive effects by regulating T cell activation or phagocytosis by macrophages/dendritic cells. [9] However, certain infectious or inflammatory stimuli induce the lytic neutrophil death pathways, resulting in the release of damage-associated molecular patterns (DAMPs) and expulsion of cytotoxic granules (Figure 1). [10] When dysregulated, this process aggravates the localized inflammatory responses and propagates the injury to adjacent tissues. A comprehensive understanding of the intricacies of neutrophil death pathways and the distinctions and intersections among various cell death patterns is imperative for the development of precise therapeutic approaches for associated diseases. In this review, we discuss the different modes of neutrophil death, emphasizing their key roles in host antimicrobial and inflammatory responses, with special attention to the convergent and divergent signaling mechanisms and biological outcomes of each death modality.

Figure 1. Neutrophil homeostasis and multifaceted death pathways in steady-state and inflammatory conditions. Neutrophils are generated in the bone marrow through granulopoiesis and subsequently enter the circulatory system. Depending on the specific microenvironment, neutrophils undergo various mechanisms of cell death, [7] encompassing both non-lytic apoptosis and lytic death modalities, including necroptosis, pyroptosis, ferroptosis, and NETosis. These lytic processes are accompanied by the release of cytotoxic cellular proteases, cell-free DNA, and chromatin into the microenvironment. Each death pathway operates through distinct molecular mechanisms and regulatory networks, ultimately resulting in either immunosuppressive or pro-inflammatory outcomes. [10] Defects in the clearance of apoptotic neutrophils and the accumulation of cellular remnants contribute to the onset of inflammatory diseases and autoimmune disorders.
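As a purely illustrative, back-of-the-envelope sketch (not a figure reported in the cited studies), the daily production rate and half-life quoted above can be combined under the simplifying assumption that circulating neutrophils form a single well-mixed pool with first-order clearance, taking the half-life at its upper bound of 24 h:

$$
N_{\mathrm{ss}} \;=\; P\,\frac{t_{1/2}}{\ln 2} \;\approx\; \frac{10^{9}\ \text{cells kg}^{-1}\,\text{day}^{-1}\times 1\ \text{day}}{0.693} \;\approx\; 1.4\times10^{9}\ \text{cells kg}^{-1},
$$

that is, on the order of $10^{11}$ neutrophils for a 70 kg adult. Because this sketch ignores marginated and tissue pools, the diurnal rhythm of release, and the surge in granulopoiesis during inflammation, it should be read only as an order-of-magnitude consistency check rather than as measured physiology.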
Neutrophil Apoptosis
Apoptosis plays a key role in orchestrating neutrophil homeostasis and regulating inflammation. As neutrophils mature and migrate to peripheral tissues, apoptosis is crucial for maintaining the equilibrium of circulating neutrophils and facilitating the efficient resolution of inflammatory responses. [11]
Molecular Mechanisms and Regulatory Pathways Governing Neutrophil Apoptosis
Apoptotic neutrophils exhibit distinctive morphological attributes. The alteration in cellular structure results in a reduced size, imparting a nearly spherical shape to the cells. This transformation is concomitant with or promptly followed by an extended phase of dynamic plasma membrane vesiculation, [12] wherein phosphatidylserine translocates from the inner leaflet to the outer leaflet of the plasma membrane. Simultaneously, the chromatin undergoes condensation, leading to DNA fragmentation within the cell. Concomitantly, discernible shifts in organelle morphology occur. [13] The mitochondrial transition from a tubular network to a clustered configuration is accompanied by depolarization, and a multitude of vacuoles emerge within the cytoplasm. [14,15] Ultimately, these cells give rise to apoptotic bodies, which are subsequently engulfed and cleared by phagocytes.

Figure 2. a) Extrinsic apoptosis: triggered by the engagement of death receptors such as FAS, TNF-R1, and TRAIL receptors, the extrinsic apoptotic pathway begins with the activation of Caspase-8, which also promotes mitochondrial outer membrane permeabilization (MOMP), ultimately leading to Caspase-3 activation, responsible for the execution phase. [18] Concurrently, the generation of reactive oxygen species (ROS) by NADPH oxidase serves as a complementary factor in this pathway. [19] b) Intrinsic apoptosis: within the intrinsic apoptosis pathway, pro-apoptotic dimers from the Bcl-2 family, Bax and Bak, embed themselves in the mitochondrial outer membrane, inducing MOMP. The release of cytochrome c into the cytoplasm then initiates the activation of Caspase-9, eventually leading to Caspase-3-mediated apoptosis. [16] c) Pathogen-induced cell death (PICD): this specialized process integrates the phagocytic elimination of microbial pathogens with the initiation of apoptosis. Following their antimicrobial actions, neutrophils are targeted for clearance via efferocytosis, typically carried out by macrophages. Effective efferocytosis ensures the timely removal of apoptotic neutrophils, preventing their progression to secondary necrosis. Inefficient clearance of apoptotic neutrophils may lead to the extracellular release of toxic granules and damage-associated molecular patterns (DAMPs), thus amplifying local inflammatory responses and perpetuating tissue injury. [29] Figure created with BioRender.com.
Similar to other cell types, neutrophil apoptosis is initiated via two distinct signaling pathways, intrinsic and extrinsic (Figure 2). [16] Within the intrinsic pathway of apoptosis, pro-apoptotic dimers of the Bcl-2 family, specifically Bax and Bak, are embedded in the mitochondrial outer membrane. This action results in the induction of mitochondrial outer membrane permeabilization (MOMP), leading to the dissipation of the mitochondrial membrane potential. Consequently, cytochrome c is released into the cytoplasm, which initiates the activation of Caspase-9. Subsequently, Caspase-9 drives the activation of Caspase-3, which is the central effector of the apoptotic process (Figure 2b). [15,17] Conversely, the extrinsic pathways are activated via the engagement of cell surface death receptors, such as FAS, tumor necrosis factor (TNF)-receptor 1 (R1), and TNF-related apoptosis-inducing ligand (TRAIL) receptors, from the TNF receptor superfamily. This engagement activates Caspase-8, which promotes MOMP, ultimately leading to the activation of Caspase-3 responsible for the execution phase (Figure 2a). [18] The extrinsic apoptotic pathway in neutrophils relies heavily on reactive oxygen species (ROS) generated by NADPH oxidase. This has been corroborated using neutrophils isolated from patients with chronic granulomatous disease, in which a mutation at the NOX2 locus impedes typical NOX-mediated ROS production. [19] Specific intrinsic factors render the neutrophils susceptible to apoptosis. Maturation of neutrophils decreases the transcription and translation rates, consequently diminishing the expression of anti-apoptotic proteins, though the expression of apoptotic effector proteins is preserved. [20] Apoptosis is regulated by various factors, such as the mitochondrial apoptosis-inducing factors. [21,22] Neutrophils are prolific generators of ROS. [23] Their distinctive granule system, which houses abundant proteases that activate Caspases, contributes to the initiation of apoptosis. [24,25] The intricate molecular signaling underlying spontaneous neutrophil apoptosis necessitates further investigation to unravel its origin. Unveiling these diverse molecular mechanisms can provide insights into the intricate balance governing neutrophil survival and death.
Pathogen-Induced Neutrophil Apoptosis, Efferocytosis, and their Implications in Inflammation and Disease
In addition to classical death receptors, neutrophil apoptosis is also induced via pathogen-induced cell death, a significant form of neutrophil apoptosis that intertwines microorganism ingestion and killing with apoptosis induction (Figure 2c). [26,27] Specialized phagocytes effectively clear neutrophils via efferocytosis. [28] Ineffectual removal of apoptotic neutrophils leads to cell necrosis and the subsequent extracellular release of toxic granules and DAMPs, aggravating the local inflammatory responses and perpetuating tissue damage. [29] Pathogens such as Chlamydia pneumoniae and Klebsiella pneumoniae support their own survival and replication by obstructing neutrophil apoptosis and efferocytosis. [30,31] Similar to death receptor-induced neutrophil apoptosis, pathogen-induced cell death relies on ROS production, as illustrated by chronic granulomatous disease, in which certain bacterial and fungal infections persist. [32]

Deficient neutrophil efferocytosis contributes to the pathogenesis of atherosclerosis in humans. Accumulation of oxidized lipoproteins and atherogenic factors leads to an influx of leukocytes into the inflamed sites. [33] Oxidized lipoproteins impair c-Mer tyrosine kinase (MerTK)-mediated macrophage receptor efferocytosis, thereby hampering the clearance of damaged apoptotic cells. [34] Failure to curb inflammation causes macrophages to transform into "foam cells," releasing proinflammatory cytokines and attracting more neutrophils, culminating in a highly inflammatory "necrotic core," characteristic of advanced atherosclerotic plaques. [35]

In summary, efferocytosis is a vital pathway for the clearance of apoptotic neutrophils, fostering the resolution of inflammation in inflammation-related ailments. Pathogen-induced cell death is beneficial for resolving acute inflammation and infection. Pathogen survival is partly attributed to delays in neutrophil apoptosis and cell lysis. Prolonged neutrophil survival is linked to diverse inflammatory and immune diseases, such as the systemic inflammatory response syndrome and autoimmune and rheumatic diseases. [36,37] Enhanced understanding of the molecular mechanisms governing neutrophil apoptosis is imperative to identify novel therapeutic targets and develop strategies to address pathologies associated with the dysregulation of neutrophil apoptosis.
Neutrophil Apoptosis-Targeting Therapeutic Strategies for Inflammatory Diseases
Neutrophil apoptosis is a key mechanism in maintaining neutrophil homeostasis. Deficiencies in neutrophil apoptosis are associated with various inflammatory diseases and significantly influence the prognosis of infectious diseases. For instance, increasing alveolar neutrophil apoptosis mitigates inflammation and expedites clinical stability in patients with community-acquired pneumonia. [38] Neutrophil apoptosis is triggered by several mechanisms, including growth factor deprivation, death ligand binding, ROS generation, and pathogen-induced events. Key regulators inhibiting neutrophil apoptosis include members of the BCL-2 family and the inhibitor of apoptosis protein (IAP) family. [39]

Within the realm of apoptosis regulated by BCL-2, members of the BCL-2 family predominantly oversee mitochondrial outer membrane permeabilization via intricate protein interactions. These members share a hydrophobic BH3 domain that is crucial for mediating these interactions. The interplay between BH3-only proteins and anti-apoptotic BCL-2 proteins serves as the foundation for the development of small-molecule inhibitors mimicking BH3 protein-binding properties. The BH3-only protein BIM provides relief in arthritis models, primarily by inducing apoptosis in macrophages and Gr1-positive bone marrow cells. [40] Specific MCL-1 inhibitors show promising potential in arthritis models due to their role in promoting neutrophil apoptosis. Strategies targeting MCL-1 and BCL-XL, including the use of R-roscovitine, a cyclin-dependent kinase (CDK) inhibitor that reduces MCL-1 protein levels and triggers neutrophil apoptosis, have successfully ameliorated arthritis in mouse models. [41,42] Neutrophils exhibit high sensitivity to apoptosis induced by FAS ligand (FASL) and by elevated levels of TNF-α. However, findings regarding TRAIL are still a subject of debate. Clinically approved human intravenous immunoglobulin (IVIg) formulations have been demonstrated to induce neutrophil apoptosis, with reactive anti-FAS antibodies identified as the active components. [43] Moreover, stimulatory anti-Siglec-9 antibodies found in IVIg formulations have been shown to induce neutrophil death via both apoptotic and non-apoptotic pathways. [44] Notably, the IVIg preparations did not induce cell death in mouse neutrophils in the same manner as that in humans, highlighting a significant interspecies difference. [45]

Furthermore, advancements in nanotechnology and biotechnology have paved the way for innovative approaches targeting the specific treatment of pro-inflammatory neutrophils for the management of inflammatory diseases. The novel strategy of using nanoparticles that target activated inflammatory neutrophils to deliver pro-apoptotic drugs offers an alternative to the systemic inhibition associated with current anti-inflammatory medications. [46]

Figure 3. a) Suicidal (lytic) NETosis: the process commences with the disassembly of actin cytoskeletal structures within the neutrophil, followed by nuclear delobulation, which involves the reorganization of nuclear components. [47] Subsequent histone citrullination facilitates chromatin decondensation, allowing deagglutinated chromatin to mix with cytoplasmic granular components. The process culminates in plasma membrane rupture, releasing NETs into the extracellular milieu. [50] b) Vital NETosis (non-lytic NETosis): in contrast, vital NETosis enables neutrophils to form NETs without accompanying cell death. [48] During this process, NETs are extruded from neutrophils while maintaining membrane integrity. This non-lytic mode of NET formation maintains the neutrophil's functional capacity for tasks such as phagocytosis, allowing them to engage in microorganism engulfment and contribute to host defense while simultaneously releasing NETs. [49]
NETosis
NETosis, a distinctive form of regulated cell death, remained obscure until 2004, when its crucial role in neutrophil function was revealed. Brinkmann et al. showed that neutrophils release chromatin loaded with granular antimicrobial proteins. [47] This process not only restricts bacterial dissemination by enhancing their adhesion within host tissues but also enables fibrous chromatin structures to neutralize virulence factors and eliminate bacteria, augmenting the innate immune system's defenses against acute infections. These fibrous extracellular constructs, now called neutrophil extracellular traps (NETs), were initially linked to cell death, and the term NETosis was coined. Notably, not every pathway leading to NET formation results in cell death, even under controlled in vitro conditions, which has fueled the debate over the scope of NETosis. Two distinct forms of NETosis have been identified: classical or suicidal NETosis, which leads to cell death, and "vital NETosis," which maintains cellular viability along with a range of effector functions (Figure 3). [48,49]
Complexity of NETosis: Mechanisms, Triggers, and Pathophysiological Implications
NETosis involves a series of morphological alterations in neutrophils, such as arrested actin dynamics, depolarization, and loss of nuclear lobules. Subsequently, histone citrullination leads to chromatin decondensation. Consequently, the nuclear membrane disintegrates into vesicles containing dispersed chromatin, resulting in numerous vacuoles within the cells. [50] These vacuoles differ from the cellular blistering observed during pyroptosis, as they originate from the nuclear envelope and contain condensed chromatin. [51] Then, breakdown of the nuclear vesicles and cytoplasmic particles occurs, enabling the de-agglutinated chromatin to mix with cytoplasmic granular components, such as lactoferrin, BPI, LL-37, and histones. These elements are crucial for the formation of NETs that trap and eliminate bacteria. [52,53] Ultimately, the plasma membrane disintegrates, releasing chromatin into the extracellular space, where it forms extracellular traps known as NETs (Figure 3). This process enables the neutrophils to continue their bactericidal function beyond their limited lifespan, thereby extending their antimicrobial activity.
NETosis can be induced by many factors, including pathogens, lipopolysaccharides (LPS), antibodies, immune complexes, specific cytokines such as IL-8 and TNF-α, [54,55] complement C3, [56] cholesterol crystals, [57] microcrystals, [58] and pharmacological agents such as phorbol myristate acetate (PMA) and calcium or potassium ionophores. Notably, the presence of NETs has been observed in vivo in experimental dysentery and spontaneous human appendicitis, [59] underscoring that NETs are more than a consequence of PMA toxicity and represent a potentially relevant physiological process. Furthermore, NET release is generally regarded as an active cellular process initiated by intrinsic pathways activated by external stimuli. [60] These findings implicate NETs in various physiological processes (Figure 4).
Initially, many studies investigating the molecular mechanisms of NETs focused on an induction model utilizing PMA. This compound directly activates protein kinase C (PKC), [61] leading to the phosphorylation of NADPH oxidase (NOX2) and the resultant generation of ROS. [62] Downstream of PKC, a multitude of kinases are activated, including c-Raf, MEK, PI3K-Akt, and ERK. [63,64] Nonetheless, the precise function of these kinases remains unclear, with conflicting evidence. It is plausible that the overlap of downstream PKC signals, which can directly activate NADPH oxidase, contributes to this discrepancy. Exploration of the roles of these kinases in inducing NETosis has also highlighted the involvement of autophagy in the release of NETs. PI3K is indispensable for both NETosis and autophagy. NET release is hindered in the absence of ATG5/ATG7, essential autophagy proteins. [65,66]

Activation of NETosis depends on NOX2-mediated ROS production, which triggers myeloperoxidase (MPO) activation (Figure 4). This activation initiates the translocation of neutrophil elastase (NE) from the azurosome to the nucleus. In the nucleus, NE aids histone proteolysis and disrupts chromatin packaging. Subsequently, MPO binds to chromatin and collaborates with NE to decondense the chromatin, independent of its enzymatic activity. [67] A segment of MPO binds to NE, forming a complex called the azurosome. This complex liberates NE from the azurosome particles via hydrogen peroxide. [68] Subsequently, NE is released into the cytoplasm, where it binds to and degrades F-actin filaments before entering the nucleus. [69] Moreover, NE may contribute to nuclear membrane degradation by promoting the release of chromatin into the cytoplasm. This process involves the fusion of various granular contents with deagglutinated chromatin, rupture of the plasma membrane, and release of NETs into the extracellular space. The introduction of exogenous ROS to the neutrophils of patients with chronic granulomatous disease can stimulate NET formation, confirming the important role of ROS in NET formation. [50]

Figure 4. Elevated intracellular calcium ions are indispensable for NETosis and act as activators of PAD4, which in turn catalyzes histone citrullination and chromatin decondensation. [52] In synergy with PKC and CDK4/6, PAD4 facilitates the disassembly of nuclear architecture during NETosis. [70] The azurosome harbors key enzymes such as myeloperoxidase (MPO), neutrophil elastase (NE), and cathepsin G (CG), essential for NET biogenesis. Upon neutrophil activation, these enzymes translocate to the nucleus, furthering chromatin relaxation. The release of NE into the cytosol is ROS- and MPO-mediated and precedes its nuclear translocation, wherein it targets and degrades F-actin. Both pathogens and pathogen-associated molecular patterns are potent NET inducers. [69] Dysregulated NET formation or defective clearance can result in pathological NET accumulation, exacerbating inflammation and autoimmune diseases. Furthermore, NETs can interact with platelets, presenting potential complications such as vascular or catheter obstructions. [111] This figure serves as a comprehensive synopsis of the multifaceted processes underpinning NET formation, underscoring its relevance in host defense, inflammatory regulation, and disease etiology. ACPAs, autoantibodies to citrullinated protein antigens; I/R injury, ischemia-reperfusion injury; RA, rheumatoid arthritis; SLE, systemic lupus erythematosus.

In addition to NOX2, activation of cyclin-dependent kinases (CDKs), which contribute to cell cycle entry, can trigger NETosis. [70] CDK4/6 and PKC mediate the disruption of the structural rigidity of the nucleus governed by the lamin A network, releasing chromatin during NETosis (Figure 4). NADPH oxidase-independent triggers of NETosis include immune complexes and nicotine. These triggers rely on the production of mitochondrial ROS. [71,72] Furthermore, vital non-lytic NET production is believed to occur independently of ROS. [48,49] Therefore, different pathological stimuli may induce NETs via distinct pathways.
Histone citrullination and its proteolytic cleavage are considered essential for chromatin disassembly and the subsequent release of DNA during NETosis. One enzyme that plays a key role in this process is peptidylarginine deiminase 4 (PAD4), which removes positive charges from histones by converting arginine residues into citrulline (Figure 4). This conversion results in chromatin decondensation, ultimately leading to rupture of the nuclear envelope. [52,73] The involvement of PAD4 in NET formation remains controversial. In light of calcium ionophore-induced NETosis, PAD4 has been proposed as a Ca2+-activated enzyme responsible for histone citrullination, serving as a link between ROS production and chromatin decondensation during NETosis. [52] Inhibition of PAD4 has been shown to reduce NET formation in response to specific stimuli. [75] However, other studies have indicated that PMA-induced NET formation is unaffected by PAD4 inhibition. [59] This suggests that PAD4 is not the sole determinant of NET formation. These variations in the role of PAD4 in NET formation may stem from our limited understanding of the diverse functions of PAD4 in neutrophil biology. Studies on the role of citrullination in NETosis, particularly NETosis induced by ionophores, have primarily analyzed histone proteins. [71] Interestingly, calcium ionophores induce extensive protein citrullination within cells. Beyond its role in promoting protein citrullination, the catalytic activity of PAD4 hinders the assembly and activation of NOX2. [76] Therefore, when PAD4 is considered a driving factor for NETosis, its function must be assessed with multiple detection methods.
NETosis in Host-Pathogen Interactions: Molecular Mechanisms, Pathogen Counterstrategies, and Clinical Implications
NETosis, a crucial mechanism for combating fungal and bacterial infections, uses NETs to entrap and eliminate pathogens directly via granule-derived proteins. Early foundational studies established that Staphylococcus aureus triggers ROS-induced NETosis through NOX2. [50] As a result, neutrophils in patients with chronic granulomatous disease are unable to undergo NETosis in response to Staphylococcus aureus or PMA. [77] However, approaches such as gene therapy and the introduction of exogenous ROS are capable of inducing NET formation and restoring antibacterial efficacy. [50,77] Remarkably, in a mouse model of Staphylococcus aureus skin infection, prompt release of NETs by neutrophils effectively thwarts systemic bacterial transmission via the bloodstream. [50] Staphylococcus aureus-induced NET release can also occur through a NADPH oxidase-independent pathway, signifying the capacity of neutrophils to sustain transient function while navigating toward the infection site, characterized as the non-lytic form of NETosis. [78] The mechanisms by which bacteria induce NET formation vary between studies. [81] Neutrophils can discern the size of microorganisms, prompting NETosis in response to larger pathogens, such as Candida albicans mycelia and bacterial aggregates. [80] Conversely, smaller microorganisms are engulfed by intracellular phagosomes and undergo fusion with azurosomes. This suppresses NE expression and inhibits chromatin decondensation. [69,80] Selective induction of NETosis serves as the key mechanism for mitigating superfluous tissue damage in response to infections.
However, several small extracellular and intracellular pathogens, including viruses, can also induce NETosis. [82] Many of these microorganisms have evolved mechanisms to survive and evade phagosomes. Consequently, NETosis may be reserved for smaller microorganisms that impede phagosome-mediated killing. Neisseria gonorrhoeae, for instance, disrupts phagosome and azurosome fusion to impede its bactericidal effects, thereby triggering NETosis. [83] Aggregation is a strategy used by smaller microbes to evade phagocytosis. Extensive aggregates of Mycobacterium bovis and clusters of Bacillus Calmette-Guérin induce NETosis in a size-dependent manner. [67,80] Taken together, microbe-induced NETosis hinges on the size and pathogenicity of the microbes. This mechanism enhances our understanding of host defense strategies and offers insights into the fascinating interplay between microbial size and behavior and host immune responses.
NETs play pivotal roles in combating specific infections; however, the intricate molecular mechanisms underlying their antimicrobial efficacy remain unclear. Key molecules, such as NADPH oxidase, MPO, and PAD4, are integral to inducing NETosis. Notably, patients deficient in glucose 6-phosphate dehydrogenase, the first enzyme of the oxidizing branch of the pentose phosphate pathway, exhibit diminished NADPH and ROS levels, making them more susceptible to bacterial infections. [84] Additionally, MPO deficiency in humans is primarily linked to recurrent fungal infections, [85] which is supported by similar findings in MPO-deficient mice. This finding reinforces the key roles of NETs in fungal infections. In contrast, host susceptibility to bacteria and survival in septic mice remain unchanged in PAD4-deficient mice. [86] Nonetheless, PAD4-deficient neutrophils exhibit impaired antibacterial activity against pathogens, such as Shigella flexneri and group A Streptococcus pyogenes, consequently exacerbating necrotizing fasciitis. [73] These investigations underscore the significance of NETs in combating virulent bacterial and fungal infections.
Extracellular trap formation is an ancient and evolutionarily conserved defense mechanism. [87] However, bacteria have developed strategies to evade NETosis-induced killing. Numerous microorganisms express endonucleases that effectively degrade NETs. [88] Pseudomonas aeruginosa and group A Streptococcus suppress NETosis via Siglec-9 by coating themselves with host sialylated glycoproteins. [89,90] Similarly, group B Streptococcus deploys molecules resembling sialic acids to dampen the ROS burst and reduce NETosis. [91] NET release has also been observed in response to viruses, such as the human immunodeficiency and respiratory syncytial viruses. In particular, HIV-1 virions stimulate dendritic cells to generate IL-10, thereby safeguarding the virus from NET-related protease degradation. [82] Additionally, the hepatitis B virus exploits its HBV E and HBV C proteins to curtail ROS production in neutrophils, effectively inhibiting NETosis (Figure 4). [92]
Janus-Faced Role of NETs in Immune Regulation, Autoimmunity, and Thrombosis
In addition to their antimicrobial functions, NETs play pivotal roles in coordinating the adaptive immune microenvironment and modulating inflammatory cytokines as key regulators of the immune response. [93,94] The roles of NET components in tissue damage during aseptic disease have been widely reported.
In addition to triggering immune system activation, NETs amplify the efficacy of antibacterial agents by concentrating them within the fiber network. This, in turn, mitigates the harm inflicted on host tissues upon exposure to these components. [95] Another important part of the regulatory effect of NETs on inflammation is the degradation of cytokines and chemokines by various proteases present in NETs. [96,97] In mouse models of gout induced by monosodium urate crystals, inflammation and gouty arthritis are more pronounced in ROS-deficient mice that are unable to induce NETs. [96] This underscores the anti-inflammatory properties of NETs. NETs also exert anti-inflammatory effects by modulating various immune cells. For instance, they regulate the functioning of dendritic cells and orchestrate the inflammatory response by increasing the expression of TH2 cytokines while dampening the synthesis of TH1 and TH17 cytokines. [98]

However, excessive NET formation due to dysregulation and impaired elimination mechanisms can lead to detrimental outcomes, such as inflammation, autoimmune pathology, and vascular or catheter obstruction (Figure 4). Sterile inflammatory conditions often lead to the release of NETs, which can promote thrombosis, intensify local tissue damage, and result in organ failure and even death. [99] A notable example of organ dysfunction linked to excessive NET formation is the rapid onset of acute respiratory distress syndrome (ARDS). [100] In some conditions, such as sepsis and acute injury, mitigating NETosis through NE or PAD4 deficiency has been shown to alleviate liver injury in mice. [101] Central to this is the role of NET-bound histones in mediating cytotoxicity. [102] These free histones are not only cytotoxic, damaging cell membrane integrity, but also activate TLR2 and TLR4 within NETs. [103] Furthermore, in a mouse model studying ischemia-reperfusion injury, NETs exacerbated inflammation and liver damage. Interventions using DNase or PAD4 inhibitors effectively reduce inflammation and liver damage. [104]

Systemic lupus erythematosus (SLE) is an autoimmune disease characterized by spontaneous NETosis in low-density neutrophils. NETs contain increased levels of autoantigens and mtDNA, which intensify the autoimmune and inflammatory processes. [105] Rheumatoid arthritis, which is characterized by autoantibodies against citrullinated protein antigens, features externalized citrullinated autoantigens and immunostimulatory molecules within NETs. The pathogenesis of rheumatoid arthritis involves accelerated NETosis, with NETs exacerbating the inflammatory response, thereby amplifying disease progression. [106] Additionally, in a rheumatoid arthritis model, fibroblast-engulfed NETs prompted the generation of anti-citrullinated histone antibodies. [107-110]

NETosis plays a pivotal role in the pathogenesis of thrombosis and is associated with an exaggerated innate immune response. Notably, the interplay between neutrophils and platelets contributes to thrombosis. [111] When activated by stimuli such as LPS, histones, and arachidonic acid, platelets can potently induce NETs. This induction can occur through direct interactions facilitated by the P-selectin-Mac-1 axis [112] or via the release of soluble mediators, such as high-mobility group protein B1, thromboxane A2, platelet factor 4, von Willebrand factor, and CCL5. [112,113] Additionally, platelets can release oxidized mitochondrial DNA, [114] which further stimulates NETosis.
Heme release is an indicator of the progression of pathological injury in lung disease. [115,116] Moreover, heme-induced platelet activation not only enhances platelet-neutrophil aggregation but also leads to the accumulation of NETs within the lungs, thereby exacerbating lung injury. Notably, a significant reduction in the likelihood of thrombosis was observed in a PAD4 knockout mouse model with inferior vena cava stenosis. [117]

In conclusion, immunothrombosis serves as a protective response that facilitates the capture and elimination of pathogens. However, excessive accumulation of NETs, along with their interactions with platelets and endothelial cells, fuels disease cascades, culminating in the development of substantial aggregates. This, in turn, contributes to the genesis of atherosclerotic plaques and exacerbates the pathological processes associated with ischemic stroke and myocardial infarction.
Neutrophil Pyroptosis
In contrast to apoptosis, a non-inflammatory process that encapsulates cellular contents within intact vesicles, pyroptosis is a unique lytic cell death pathway inextricably linked to inflammation. Pyroptosis is primarily driven by host defense mechanisms in response to external invasions or environmental stressors, with the overarching goals of promoting tissue repair and halting the spread of infection. [118] Pyroptosis induces cell membrane rupture, leading to the release of cytosolic contents, notably the pro-inflammatory cytokines IL-1β and IL-18. This exodus serves two purposes: an immediate defense against intracellular pathogens and a signal to attract additional immune cells to the site of infection or injury. [119] Notably, neutrophils, as central immune players, are significant contributors of IL-1β and other cytokines, attributed to their rapid mobilization to infection and inflammation sites. [120]
Neutrophil Pyroptosis: Molecular Mechanisms, Triggers, and Morphological Insights
In response to exogenous pathogens, such as LPS from Gram-negative bacteria, which belongs to the pathogen-associated molecular patterns (PAMPs), or endogenous stimuli (e.g., cellular damage or stress-induced release of DAMPs, such as ATP, uric acid, and nucleic acids), immune cells such as neutrophils and macrophages use pattern recognition receptors (PRRs) to identify these molecular patterns. Recognition triggers inflammasome assembly, commonly canonical complexes involving a nucleotide-binding domain and leucine-rich repeat receptor (NLR) or absent in melanoma 2 (AIM2), the adaptor protein ASC, and the effector molecule Caspase-1. [121] Caspase-1 processes the pro-inflammatory cytokines IL-1β and IL-18, releasing their mature forms (Figure 5). A groundbreaking discovery in 2015 revealed that inflammatory Caspases (Caspase-1, -4, -5, and -11) activate Gasdermin D (GSDMD), which perforates the cell membrane and initiates pyroptosis upon cleavage. [122] GSDMD comprises an ≈30 kDa amino-terminal fragment (GSDMD-NT) and an ≈20 kDa carboxy-terminal fragment (GSDMD-CT); the full-length protein is kept in a stable, autoinhibited state by the binding of GSDMD-CT to GSDMD-NT, thereby preventing GSDMD-NT-mediated pyroptosis. [123]

Morphologically, pyroptosis embodies characteristics of both necrosis and apoptosis. Unlike the abrupt disintegration observed in necrotic cell death, pyroptosis exhibits cytoplasmic flattening owing to leakage of the plasma membrane. [124] During pyroptosis, perforation of the cell membrane by GSDMD leads to subtle cellular swelling. This is attributed to the influx of water into the cytoplasm, induced by intracellular non-ionic penetrants. As a result, bubble-like protrusions resembling apoptotic bodies, termed "pyroptotic bodies," emerge on the cell membrane. The cell then experiences pervasive pore formation in its membrane, leading to the loss of membrane integrity and chromatin damage. In the final stage, the membrane breaks down entirely, leaving cytoplasmic vacuoles. [125] The ensuing release of cellular contents acts as a catalyst for inflammation.
Neutrophil Inflammasome Activation and Pyroptosis: An Intricate Balance
Neutrophils are frontline defenders against infections and are rich in PRRs. However, whether inflammasome activation induces pyroptosis in neutrophils remains unclear. Several studies argue against Caspase-1 instigating pyroptosis in neutrophils. [126,127] Instead, neutrophils appear to evade Caspase-1-mediated pyroptosis, ensuring sustained cytokine production at infection sites while maintaining their primary antimicrobial capabilities. [126] In both human and mouse neutrophils, the TLR4 agonist LPS upregulates NLRP3 and NLRC4, leading to significant IL-1β secretion. However, even after Caspase-1 activation following inflammasome engagement, neutrophils do not undergo lytic death. Similar findings were observed in Salmonella-infected mice. [126,128] GSDMD-NT, the product of GSDMD cleavage, does not increase neutrophil membrane permeability or induce pyroptotic death. Potential reasons include: (1) neutrophil-derived GSDMD-NT targeting neutrophilic granules and autophagosomes, limiting its impact on cell membranes; [129] (2) the necessity for stringent regulation of cell death, inflammation, and pyroptosis in neutrophils: molecules such as the Serpin family B members (SERPINB) inhibit inflammatory Caspase and neutrophil serine protease (NSP) activity, preventing unwarranted neutrophil pyroptosis and ensuring their anti-infection function; [130,131] (3) the triggering of membrane repair mechanisms by GSDMD perforation, such as ESCRT-III, which might support neutrophil resistance to GSDMD-caused cytoplasmic membrane perforation. [132,133] Notably, mature IL-1β in neutrophils is found in plasma membrane infoldings, hinting at a GSDMD-independent release mechanism. [134] Additionally, cytokines may be released via autophagosome-dependent secretion routes or via exosomes independent of GSDMD. [135] This may account for the release of crucial inflammatory factors by neutrophils without undergoing pyroptosis.

Figure 5. a) Pattern recognition receptors (PRRs) enable neutrophils to sense exogenous pathogens, culminating in inflammasome assembly and Caspase-1 activation. [126] This activation cascade results in the release of mature IL-1β without invoking neutrophil pyroptosis. [128] Several regulatory pathways contribute to this resistance: (1) neutrophil-derived GSDMD-NT targets azurosomes and autophagosomes, limiting its impact on cell membranes; [129] (2) serine protease inhibitors of the SERPINB family impede inflammatory Caspases and neutrophil serine protease (NSP) activity, preventing unwarranted neutrophil pyroptosis; [130] (3) membrane repair mechanisms triggered by GSDMD perforation, such as ESCRT-III, may support neutrophil resistance to GSDMD-caused cytoplasmic membrane disruption. [133] b) Neutrophil pyroptosis-induced NET pathway: intracytoplasmic LPS and bacterial agents activate Caspases, including Caspase-11 and Caspase-4/5, resulting in GSDMD-dependent neutrophil death. These Caspases act in concert with GSDMD to facilitate nuclear membrane penetration and histone degradation, processes integral to NET formation. [136] Neutrophil granules harbor specialized serine proteases, such as neutrophil elastase (NE) and cathepsin G (CTSG), capable of uniquely cleaving GSDMD, thus contributing to neutrophil pyroptosis. [138]
In summary, although neutrophils do not completely block GSDMD-induced membrane cleavage, as evidenced by azurosome membrane fragmentation, there is likely to be a regulatory mechanism governing GSDMD production and quantity. Exposure of neutrophils to cytoplasmic LPS or Gram-negative bacteria activates non-canonical inflammasome (Caspase-4/5/11) signaling and triggers GSDMD-dependent neutrophil lysis. [136,137] Aging neutrophils undergo NE-dependent GSDMD cleavage and neutrophil death in vitro, validating GSDMD-mediated plasma membrane fragmentation. [138] Intriguingly, Caspase-11 and GSDMD led to NET extrusion by mediating synergies between nuclear membrane permeabilization and histone degradation (Figure 5). Caspase-11 activation propels NETosis in neutrophils exposed to monosodium urate and in those from septic mice and patients. [139,140] This underscores the potential interplay between neutrophil pyroptosis and NETosis, suggesting that GSDMD-induced cytoplasmic membrane rupture may lead to NET release.
Regulation of Neutrophil Pyroptosis: The Intricate Interplay between Granzymes and NSPs
The unique granzyme system of neutrophils significantly influences pyroptosis regulation via specific serine proteases capable of activating GSDMD in neutrophils. Elastase, an NSP stored within cytoplasmic granules, cleaves at C268, which is seven residues upstream of the D275 Caspase-1 cleavage site in hGSDMD. [138] This generates a functional GSDMD-NT fragment that punctures the plasma membrane. The absence of Elastase or GSDMD prolongs the neutrophil lifespan, suggesting that Elastase-mediated GSDMD activation induces pyroptosis in neutrophils. [138] Another GSDMD-cleaving NSP is Cathepsin G, which targets L274. Cleaved GSDMD triggers pyroptosis and releases inflammatory cytokines, such as IL-1β, from neutrophils. Serpinb1 and Serpinb6 negatively regulate GSDMD cleavage by Cathepsin G, underscoring the strict control over neutrophil death pathways and cytokine release. [130,131] GSDMD-NT can specifically target azurosome membranes while sparing other granular membranes, possibly due to affinity variations among distinct lipid structures. The dynamic evolution of the unique granule system of neutrophils during development suggests that developing neutrophils may be more susceptible to the regulation of pyroptosis. These insights suggest the existence of activation and inhibition events contingent on proteases and gasdermin proteins; however, the intricacies of neutrophil protease-gasdermin interactions remain unknown.
GSDMD in Neutrophil Death: Bridging Pyroptosis and NETosis in Immune Responses
GSDMD plays a multifaceted role in various neutrophil death pathways, including pyroptosis-induced intracellular trap-driven efferocytosis [141] and NETosis. Pyroptosis and NETosis often occur in similar disease scenarios, necessitating immune responses against microbes or sterile inflammation. Exposure of neutrophils to cytoplasmic LPS or Gram-negative bacteria activates non-classical (Caspase-4/11) inflammasomes, culminating in GSDMD-dependent neutrophil death and NET release. Caspase-11 and GSDMD are pivotal for neutrophil plasma membrane rupture during the final stages of NET extrusion. Additionally, they facilitate early NETosis by mediating nuclear membrane penetration and histone degradation during DNA decondensation. [136] The involvement of Caspase-11 in NET formation has been validated in mouse models of acute gouty arthritis and in humans with sepsis. [136,139] GSDMD-mediated NETosis depends on the mode of GSDMD cleavage and the disease context. NETs play dual roles, combating bacteria while potentially inflicting tissue damage. Hence, the activation of GSDMD and the consequent generation of NETs should be tightly regulated. Numerous questions remain unresolved, including which pathways dictate GSDMD fragment processing leading to cell lysis and which pathways are employed to fend off microbial attacks or sterile inflammation. [10]

In summary, GSDMD-mediated neutrophil death offers a distinct target for anti-inflammatory and antimicrobial therapies, although further research is required to elucidate its precise role.
Neutrophil Pyroptosis in Inflammatory Diseases and Potential Therapeutic Targets
In normal physiological processes, pyroptosis plays a pivotal role in host defense against pathogenic infections. However, excessive pyroptosis can lead to an overwhelming and sustained inflammatory response that has been implicated in the pathogenesis of various inflammatory diseases. While the existing literature on pyroptosis has predominantly focused on monocytes and macrophages, recent studies have shed light on the significance of pyroptosis in neutrophils.
Neutrophils undergo pyroptosis during infection, contributing to inflammation, and are considered a primary pathological factor in conditions such as sepsis. [142] Studies have revealed that sepsis in mice is associated with a substantial down-regulation of neutrophil N-acetyltransferase 10 (NAT10). Neutrophil-specific NAT10 overexpression mitigates neutrophil pyroptosis and reduces sepsis-related mortality in mice by reversing the ULK1-STING-NLRP3 axis. [143] Pyroptosis-induced cell death is recognized as a factor in various diseases, including cardiovascular diseases, neurological disorders, and liver conditions. A substantial portion of the ongoing research is centered on devising treatments for these diseases by targeting key inflammasome signals, such as NLRP3, Caspase-1, or GSDMD, to inhibit pyroptosis. [144] However, neutrophil pyroptosis follows non-canonical inflammasome pathways or pyroptotic pathways mediated by the granule system. Additionally, the GSDMD protein in neutrophils plays a crucial role in NET formation. The release of NETs facilitated by the lytic activity of GSDMD in neutrophils may represent a pivotal event in ARDS development. [145] Consequently, blocking the GSDMD-mediated release of NETs might offer a promising approach for treating associated inflammatory diseases. Further investigations are necessary to ascertain whether neutrophil pyroptosis can be effectively targeted as a therapeutic approach.
Neutrophil Necroptosis
Historically, necrosis has been considered an unregulated and spontaneous form of cell death. However, recent advancements in genetic, biochemical, and functional studies have redefined this process. Rather than being a mere accidental event, necrosis is now regarded as an active cell death pathway, termed necroptosis. [146] In relation to human diseases linked to neutrophils, such as cutaneous vasculitis, ulcerative colitis, and psoriasis, neutrophils migrate to inflammation sites and activate the RIPK3-mixed lineage kinase domain-like (MLKL) protein pathway, leading to necroptosis (Figure 6). [147] Given the pivotal roles that neutrophils play in various inflammatory processes and diseases and their abundance in the body, necroptosis may play fundamental roles in the onset and perpetuation of various conditions.
Necroptosis in Neutrophils: Mechanisms, Regulators, and Pathophysiological Implications
Unlike extrinsic apoptosis, necroptosis operates independently of apoptotic signaling. It is predominantly orchestrated by the synergistic activity of the RIPK3 and MLKL proteins. [148,149] Necroptosis in neutrophils is predominantly triggered by various factors, such as TNF, TLRs, granulocyte-macrophage colony-stimulating factor (GM-CSF), and the engagement of adhesion receptors, such as CD44, CD11b, CD18, and CD15. Furthermore, the presence of monosodium urate crystals and phagocytosis of Staphylococcus aureus contribute to this process. [147,150,151]

Cellular morphological alterations during necroptosis bear striking similarities to those observed during necrosis. As neutrophils progress through necroptosis, the evident role of ROS is manifested in the degradation of organelle and azurosome membranes. This degradation sequence subsequently triggers endosomal and autophagosomal fusion, leading to the formation of expansive cytoplasmic vacuoles. [151] Concomitant with organelle swelling, there is a noticeable increase in cell volume. Subsequently, the nuclear membrane ruptures, chromatin undergoes slight condensation, and the plasma membrane becomes permeable, thereby facilitating the release of intracellular DAMPs. [152] Neutrophils demonstrating vacuolation undergo rapid cell death and release DAMPs in the process, thereby potentiating the pro-inflammatory characteristics of necroptosis.

Figure 6. In the absence of XIAP, a crucial member of the IAP family, the inhibition of Caspase-8 triggers a shift in TNF-induced neutrophil cell death from apoptosis to RIPK1-RIPK3-MLKL-dependent necroptosis. [158] This transition represents a critical regulatory juncture in determining cell fate. Additionally, CA-MRSA stimulates autocrine production of TNF in neutrophils, further amplifying the necroptotic cascade. [150,155] In the presence of granulocyte-macrophage colony-stimulating factor (GM-CSF), ligation of adhesion receptors activates the RIPK1-RIPK3-MLKL-p38 MAPK-PI3K axis, culminating in ROS production via NADPH oxidase. [147] This represents another pathway of neutrophil necroptosis.
Necroptosis is initiated when apoptotic Caspase activity is blocked. Since 2005, the significant roles of receptor-interacting protein kinase-1 (RIPK1) and its kinase inhibitor, necrostatin-1, in Caspase-independent necrosis have been reported. [146,153] RIPK3 has been identified as a crucial protein for necroptosis, [148,154] and MLKL, downstream of RIPK3, has been recognized as a necroptosis effector protein. [149] Although neutrophil necroptosis has received less attention, possibly due to the short lifespan of neutrophils, a breakthrough was achieved in 2014 when Greenlee-Wacker et al. reported that community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) survives after being ingested during neutrophil phagocytosis. CA-MRSA promotes the necroptosis of neutrophils, undermines macrophage-mediated phagocytosis of neutrophils, and exacerbates infections. [150] This team subsequently ascertained that necrostatin-1, originally thought to inhibit cell lysis, exhibits off-target effects. Furthermore, neutrophils that had phagocytosed CA-MRSA were found to undergo a novel lytic programmed cell death driven by RIPK3 activity, independent of RIPK1 or MLKL activity. This form of cell death differs from traditional necroptosis. [155] Some cases of necrosis may also be unrelated to RIPK1 and MLKL, as observed in myocardial necrosis induced by ischemia and oxidative stress, which acts through the RIPK3-Ca2+-calmodulin-dependent protein kinase pathway. [156] Therefore, neutrophil lysis caused by CA-MRSA could also be interpreted as RIPK3-dependent necroptosis (Figure 6). The ascertainment of RIPK3 involvement and this unique lytic programmed cell death paradigm underscore the multifarious cell death responses elicited by CA-MRSA.
Our understanding of necroptosis mainly stems from TNF signaling research. The TNF-triggered RIPK1-RIPK3-MLKL signaling pathway is the most extensively examined extracellular inducer of necroptosis (Figure 6). TNF plays a key role in initiating inflammation in response to infections and tissue damage. Within the realm of neutrophil death classification, X-linked IAP (XIAP), a member of the IAP family, holds significant sway, dictating whether neutrophils undergo apoptosis or necroptosis. Similar to its counterpart cIAP, XIAP ubiquitinates RIPK1, thereby extending cellular survival. [157] In the absence of XIAP, Caspase inhibition prompts a shift from apoptosis to RIPK3- and MLKL-dependent necroptosis. Moreover, the omission of XIAP sensitizes neutrophils to TNF-α-induced killing. [158] Research indicates that necroptosis induction can be attributed to CA-MRSA, as it prompts neutrophils to undergo autocrine TNF stimulation. In this context, CA-MRSA exhibits the capacity to induce neutrophil necroptosis by activating formyl peptide receptor 2-mediated TNF-α autocrine signaling. This activation, driven by phenol-soluble modulins, leads to MLKL phosphorylation and increased lactate dehydrogenase release (Figure 6). [159]
Necroptosis in neutrophils, induced by the ligation of adhesion receptors and exposure to GM-CSF, depends on the ability of NADPH oxidase to generate ROS. These ROS, in turn, activate the RIPK1-RIPK3-MLKL-p38 MAPK-PI3K signaling axis (Figure 6). [147] Remarkably, necroptosis can also occur independently of RIPK1 expression. For instance, during innate immune responses to viral and bacterial infections, the Toll/IL-1 receptor domain-containing adaptor protein can induce necroptosis without reliance on RIPK1. This mechanism has been observed in several studies. [160,161] Activation of the Toll/IL-1 receptor domain-containing adaptor protein has emerged as a potential contributor to adhesion receptor-triggered neutrophil necroptosis. Although the roles of p38 and PI3K in NADPH oxidase activation during neutrophil apoptosis have been documented, [162] the precise interplay between the RIPK3-MLKL complex and p38 MAPK activation remains to be fully elucidated. Notably, in patients with chronic granulomatous disease, neutrophils fail to undergo necroptosis upon adhesion receptor stimulation, [147] underscoring ROS as a crucial mediator in this form of necroptotic cell death.
Dual Roles of Neutrophil Necroptosis in Bacterial Infections: Protective Mechanism and Pathogenic Consequences
Neutrophils play a crucial role in eradicating pathogens through the necroptosis pathway, clearing the replication niches of pathogens and eliminating infected cells to exert bactericidal effects. Additionally, necroptosis may contribute to the suppression of pro-inflammatory signaling. In a Staphylococcus aureus infection model, wild-type mouse neutrophils effectively eliminated bacteria at the infection site. In contrast, neutrophils lacking MLKL and human neutrophils from patients with chronic granulomatous disease exhibit diminished microorganism-killing capabilities. [152,163]

Nevertheless, there are instances in which bacteria can trigger neutrophil necroptosis, exacerbating histopathological damage during infection. For instance, CA-MRSA pneumonia, which is known for its drug resistance, virulence, and transmissibility, is associated with substantial morbidity and mortality. Infection scenarios can involve bacteria surviving after neutrophil phagocytosis, [164] disrupting initial phagocyte-mediated defenses and fostering persistent infection and inflammation. Neutrophil necroptosis results in the release of viable Staphylococcus aureus and intracellular DAMPs, thereby exacerbating the spread of infection due to localized tissue damage. [150] Furthermore, CA-MRSA can secrete virulence factors that activate MLKL phosphorylation, trigger neutrophil necroptosis, and increase lactate dehydrogenase release, ultimately causing severe lung injury. [155]

Regarding Klebsiella pneumoniae infections, neutrophil necroptosis arises not predominantly from interference with apoptotic activation but from direct necroptosis induction. Pharmacological intervention employing RIPK1 and RIPK3 inhibitors to block necroptosis enhances efferocytosis of Klebsiella pneumoniae-infected neutrophils in vitro and ameliorates disease phenotypes in mouse models of Klebsiella pneumoniae pneumonia. [31] In conclusion, bacteria-induced neutrophil necroptosis can be detrimental to the host response to infection.
Neutrophil Necroptosis in Inflammatory and Autoimmune Diseases: A Double-Edged Sword for Immunopathology and Treatment
Neutrophil necroptosis results in a substantial release of DAMPs from disrupted cell membranes. This characteristic has led to necroptosis being widely regarded as a pro-inflammatory form of cell death. Conversely, what is frequently disregarded is that, upon TNF stimulation, the majority of cells instigate a rapid and vigorous pro-inflammatory reaction, and necroptosis may diminish the production of pro-inflammatory cytokines by curtailing neutrophil lifespan. As a result, the overall inflammatory response prompted by TNF or LPS is curtailed, ultimately restraining the inflammatory cascade. [165]

Characterization of the neutrophil necroptosis signaling pathway holds significant promise for the better management of tissue damage and excessive inflammation resulting from neutrophil dysfunction. This may assist in identifying suitable drug targets for neutrophil-related diseases, including cutaneous vasculitis, ulcerative colitis, and psoriasis. [147] Despite extensive research on necroptosis, relatively little attention has been paid to its occurrence in neutrophils. The exact timing of neutrophil necroptosis remains elusive, as it can transpire during migration or at sites of inflammation.
Neutrophil necroptosis plays a pivotal role in the pathogenesis of gout. In murine models of gouty arthritis induced by monosodium urate (MSU) crystals, gout-like inflammation is induced through neutrophil necroptosis. This process can be mitigated by downregulating RIPK3 or administering necrostatin-1. [166] In addition, studies have indicated that, in the joints of patients with rheumatoid arthritis, neutrophils activate RIPK1, RIPK3, and MLKL under the influence of CD44 and GM-CSF, leading to necroptosis [167] and thereby exacerbating tissue damage. Notably, treatment with necrostatin-1 in murine models of rheumatoid arthritis results in a reduction in osteoclast numbers, a decrease in Th1 and Th17 cell populations, and an increase in Th2 and Treg cell populations. [168] This therapeutic approach also demonstrated the potential to decelerate disease progression.
Neutrophil Necroptosis and NETosis: Interplay, Inhibitors, and Implications in Immune Responses
Necroptosis in neutrophils results in permeabilization of the plasma membrane and release of chromatin, a phenomenon potentially linked to neutrophil NETosis. Chemical inhibitors of the necroptotic pathway, such as the RIPK1 inhibitor necrostatin-1 or the MLKL inhibitor necrosulfonamide, are used to suppress necroptosis and NET release in PMA- or monosodium urate crystal-stimulated neutrophils. [168] Classical models of NETosis involve PMA and monosodium urate crystals, which trigger ROS production, subsequently activating RIPK3 and leading to MLKL phosphorylation. Hence, ROS is regarded as a catalyst for PMA- and monosodium urate crystal-induced necroptosis, which is intricately tied to the release of NETs.
NET liberation may thus be an outcome or secondary event resulting from neutrophil necroptosis. However, the literature also presents conflicting evidence: neither RIPK3-deficient mouse neutrophils nor MLKL-inhibited human neutrophils exhibit anomalous NET formation upon activation or exposure to low PMA concentrations. [169] These findings indicate that NET formation can be independent of RIPK3 and MLKL signal transduction. NET release during necroptosis may instead be the passive expulsion of chromatin associated with necrosis. [170] In cases where NET formation depends on the antimicrobial response of neutrophils, necroptosis is generally perceived to have detrimental effects on the host. The precise role of neutrophil necroptosis in driving NET production remains unclear. Therefore, efficient NET detection techniques are required to differentiate between NET formation and neutrophil necroptosis.
Neutrophil Ferroptosis
Ferroptosis is a distinct type of programmed cell death characterized by the iron-dependent accumulation and peroxidation of polyunsaturated fatty acid-containing phospholipids to lethal levels in cell membranes. [171] The term "ferroptosis" was introduced by Brent R. Stockwell in 2012, highlighting the central role of glutathione peroxidase 4 (GPX4) in this process. [172] Although the exploration of ferroptosis has flourished, the understanding of its impact on neutrophils remains in its infancy. Several studies have indicated that neutrophils, via mechanisms involving NETs and MPO, can trigger ferroptosis in tissues or tumor cells during inflammatory diseases, leading to adverse effects. [173,174]
Neutrophil Ferroptosis in Disease: Implications for Systemic Lupus Erythematosus and Tumor Microenvironment Dynamics
Neutrophils play key roles in regulating iron homeostasis and nutritional immunity during infection. Ferroptosis of neutrophils can be regarded as the result of a disorder of iron homeostasis, and bacteria use free iron ions to synthesize their own substances, replicate, and aggravate the infection. [183] Despite advancements, the regulation of neutrophil ferroptosis in associated diseases remains relatively unexplored. Notably, ferroptosis of neutrophils has been documented in patients with SLE. [178] In patients with SLE, autoantibodies and IFN-α augment ferroptosis in neutrophils by intensifying the binding of the transcriptional suppressor CREM to the GPX4 promoter through activation of calcium/calmodulin-dependent protein kinase IV (CaMK IV), thereby reducing GPX4 expression and subsequently amplifying lipid ROS (Figure 7). Mice with neutrophil-specific GPX4 haploinsufficiency recapitulate the key clinical features of human SLE, and treatment with ferroptosis inhibitors substantially reduces disease severity in lupus-prone mice, underscoring the role of neutrophil ferroptosis in lupus pathogenesis. [178]

Figure 7. In patients with SLE, autoantibodies and IFN-α augment ferroptosis in neutrophils by intensifying the binding of the transcriptional suppressor cAMP response element modulator alpha (CREM) to the GPX4 promoter through activation of calcium/calmodulin-dependent protein kinase IV (CaMK IV), thereby reducing GPX4 expression and subsequently amplifying polyunsaturated fatty acid-containing phospholipid hydroperoxides (PL-PUFA-OOH). [178] Neutrophils significantly contribute to the synthesis of PUFAs via fatty acid transport protein 2 (FATP2)-mediated uptake of arachidonic acid. [186] Key enzymatic players, including acyl-CoA synthetase long-chain family member 4 (ACSL4), lysophosphatidylcholine acyltransferase 3 (LPCAT3), and arachidonate lipoxygenases (ALOXs), amplify the generation of PL-PUFA-OOH, serving as potent inducers of neutrophil ferroptosis. Neutrophil ferroptosis within the tumor microenvironment fosters tumor growth by establishing an immunosuppressive milieu, thereby impeding T-cell-mediated antitumor responses. [187] Neutrophil ferroptosis can lead to the release of Fe2+ and PL-PUFA-OOH, thereby contributing to a range of immune regulatory mechanisms.

Iron metabolism, redox pathways, and lipid metabolism collectively regulate ferroptosis. Compelling evidence suggests that genetic and pharmacological manipulation of these pathways can effectively govern cellular ferroptosis. Given the unique attributes of neutrophils, targeting the redox pathway is pivotal for modulating ferroptosis. The System Xc−-GSH-GPX4 pathway assumes a central role in ROS scavenging. Consequently, numerous drug development efforts have focused on modulating iron-induced cell death by targeting key molecules in this pathway. The Nrf2 pathway responds to cellular oxidation by activating the transcription of genes involved in redox reactions. Targeting the KEAP1-Nrf2 axis holds promise as a viable strategy for regulating iron-induced cell death. [184]

Given the prominence of neutrophils in cancer research, interest in neutrophil ferroptosis within tumors has been pronounced, with this cell subset being termed polymorphonuclear myeloid-derived suppressor cells (PMN-MDSCs). PMN-MDSCs exhibit a dual role, both promoting and inhibiting tumor growth, a dichotomy further emphasized by cutting-edge single-cell techniques that reveal neutrophil functional heterogeneity within tumors. [185]
Recent discoveries have revealed spontaneous ferroptosis-driven death of PMN-MDSCs in the tumor microenvironment, where these cells exert substantial immunosuppressive effects. In the tumor microenvironment, easily oxidized polyunsaturated fatty acid-containing phospholipids undergo peroxidation, initiating ferroptosis and generating immunosuppressive lipid peroxidation products. Both ACSL4 and FATP2 play roles in the regulation of ferroptosis and the generation of immunosuppressive signals in PMN-MDSCs. [186,187] The main mechanism of FATP2-mediated suppressive activity involves arachidonic acid uptake and prostaglandin E2 synthesis. Pharmacological inhibition of FATP2 can eliminate the immunosuppressive activity of PMN-MDSCs, thereby significantly halting tumor progression. Combining FATP2 inhibition with immune checkpoint blockade has emerged as a promising strategy against tumor progression. [186] In immunocompetent mice, both genetic and pharmacological inhibition of ferroptosis negates the suppressive activity of PMN-MDSCs, thereby limiting tumor growth, whereas induction of ferroptosis appears to promote tumor expansion. [187] Thus, ferroptosis is unveiled as a manipulable immunosuppressive mechanism in PMN-MDSCs within the tumor microenvironment and presents a potential therapeutic target to counteract tumor progression.
PANoptosis: Unraveling the Multifaceted Cell Death Pathway
Although traditionally viewed as mechanistically distinct, it is now widely acknowledged that some cell death mechanisms exhibit extensive overlap. In some cases, multiple modes of cell death may coexist within one cell, leading to the conceptualization of an integrated cell death modality known as "PANoptosis." PANoptosis integrates features of several types of cell death, including pyroptosis, apoptosis, and necroptosis. [188,189] Genetic evidence establishes PANoptosis as a unique innate immune inflammatory programmed cell death pathway governed by a cytoplasmic multimeric protein complex called the PANoptosome. [192] Malfunctions of crucial components of the PANoptosome have been linked to various human conditions, including neurodegenerative disorders, cancer, and increased susceptibility to infections. [193] How the different modes of neutrophil death interplay in SLE presents a fascinating research avenue. SLE pathogenesis involves various neutrophil death modalities, with NETosis as the key player. Whether neutrophils exhibit distinct PANoptotic pathways remains unknown. Moreover, the mechanisms by which key molecules such as MLKL and GSDMD, which are central to necroptosis and pyroptosis, influence the occurrence of NETosis remain unclear. Recent studies have revealed a significant correlation between key PANoptosis-related genes and neutrophils in patients with SLE. [194] However, the occurrence of PANoptosis in neutrophils remains controversial because of the unique nature of their cell death. Potential variations in the mechanism of PANoptosis-induced cell death between neutrophils and other cell types, such as macrophages, complicate this issue. TGF-β-activated kinase 1 (TAK1), a key regulator of PANoptosis, plays contrasting roles in neutrophils and macrophages. In TAK1-deficient macrophages, cytoplasmic PANoptosome assembly is increased, leading to PANoptosis. However, TAK1-deficient neutrophils exhibit hyperproliferation and increased production of inflammatory cytokines. [190,195] A more comprehensive understanding of the molecular underpinnings of this intricate crosstalk among different cell death modalities is crucial to facilitate the development of targeted inhibitors and activators of cell death pathways as well as therapeutic interventions for diseases.
Conclusions and Future Perspectives
Neutrophils participate in various death modalities that significantly influence their antimicrobial and inflammatory responses. Given their inherent cytotoxic capabilities, stringent regulation of neutrophils is imperative to prevent unintended inflammation. Non-lytic cell death mechanisms are predominant under homeostatic conditions. However, specific stimuli activate the lytic pathways, leading to the release of inflammatory agents and causing tissue damage. Manipulation of neutrophil death pathways may facilitate the development of novel therapies for infectious, autoimmune, and congenital neutrophil disorders.
When mammalian cells encounter irreparable disruptions in their intracellular or extracellular microenvironments, various death signal transduction pathways may be triggered, ultimately culminating in cell death. Each modality regulating cell death is initiated and perpetuated via distinct molecular mechanisms that engage in various forms of communication. Neutrophils, with their distinct functions and cell death characteristics, have a potent mechanism for ROS generation to combat invading pathogens. ROS play diverse roles in neutrophil cell death. Therefore, ROS production and cellular localization may be critical factors linking the various neutrophil cell death pathways. Moreover, activation of neutrophils and the occurrence of redox crises can result in distinct forms of cell death under various pathological conditions. In such cases, effective cellular protection may only be accomplished via interventions aimed at mitigating these crises or addressing their underlying causes rather than merely treating their manifestations.
Effective investigation of neutrophil death presents a unique challenge. Although murine models have substantially advanced our understanding of neutrophil behavior, notable differences exist between human and mouse neutrophils. These discrepancies include variations in circulating neutrophil counts, lifespan, receptor expression, and intracellular signaling pathways. These differences become especially pronounced in complex conditions, such as cancer and autoimmune disorders. Therefore, caution is advised when directly translating findings from murine models to human clinical applications. The recent development of humanized mouse models has provided a promising avenue for studying human neutrophils in both health and disease, thereby broadening the scope of translational research. [196] Although isolating abundant pure human neutrophils from blood is feasible, their natural propensity to undergo apoptosis after isolation limits their genetic manipulation. Additionally, marked heterogeneity is observed between circulating and tissue-resident neutrophils. [197,198] Current endeavors are focused on establishing protocols to differentiate various stem cells into functionally active neutrophils that can aid future research, especially genetic studies.
Although the individual death modalities have been extensively investigated, some aspects remain unknown. For example, the mechanisms by which these diverse neutrophil death pathways intersect or operate synergistically during infections remain unclear. Moreover, the feasibility of modulating these pathways to curtail the excessive release of neutrophil pro-inflammatory mediators that exacerbate tissue injury and cause septic shock remains unknown.
Figure 1. Neutrophil Homeostasis and Multifaceted Death Pathways in Steady-State and Inflammatory Conditions. Neutrophils are generated in the bone marrow through granulopoiesis and subsequently enter the circulatory system. Depending on the specific microenvironment, neutrophils undergo various mechanisms of cell death. [7] These mechanisms encompass both non-lytic apoptosis and lytic death modalities, including necroptosis, pyroptosis, ferroptosis, and NETosis. These lytic processes are accompanied by the release of cytotoxic cellular proteases, cell-free DNA, and chromatin into the microenvironment. Each death pathway operates through distinct molecular mechanisms and regulatory networks, ultimately resulting in either immunosuppressive or pro-inflammatory outcomes. [10] Defects in the clearance of apoptotic neutrophils and the accumulation of cellular remnants contribute to the onset of inflammatory diseases and autoimmune disorders. Figure created with BioRender.com.
Figure 2. Molecular Mechanisms Underlying Neutrophil Apoptosis. a) Extrinsic Apoptosis: Initiated by cell surface death receptors, such as FAS, TNFR1, and TRAIL receptors, the extrinsic apoptotic pathway begins with the activation of Caspase-8, which also promotes Mitochondrial Outer Membrane Permeabilization (MOMP), ultimately leading to Caspase-3 activation, responsible for the execution phase. [18] Concurrently, the generation of reactive oxygen species (ROS) by NADPH oxidase serves as a complementary factor in this pathway. [19] b) Intrinsic Apoptosis: Within the intrinsic apoptosis pathway, pro-apoptotic dimers from the Bcl-2 family, Bax and Bak, embed themselves in the mitochondrial outer membrane, inducing MOMP. The release of Cytochrome c into the cytoplasm then initiates the activation of Caspase-9, eventually leading to Caspase-3-mediated apoptosis. [16] c) Pathogen-Induced Cell Death (PICD): This specialized process integrates the phagocytic elimination of microbial pathogens with the initiation of apoptosis. Following their antimicrobial actions, neutrophils are targeted for clearance via efferocytosis, typically carried out by macrophages. Effective efferocytosis ensures the timely removal of apoptotic neutrophils, preventing their progression to secondary necrosis. Inefficient clearance of apoptotic neutrophils may lead to the extracellular release of toxic granules and damage-associated molecular patterns (DAMPs), thus amplifying local inflammatory responses and perpetuating tissue injury. [29] Figure created with BioRender.com.
Figure 3. Pathways of Neutrophil Extracellular Trap (NET) Formation. a) Lytic NETosis: This form of NET formation is characterized as a cell death pathway. It commences with the disassembly of actin cytoskeletal structures within the neutrophil, followed by nuclear delobulation, which involves the reorganization of nuclear components. [47] Subsequent histone citrullination facilitates chromatin decondensation, allowing deagglutinated chromatin to mix with cytoplasmic granular components. The process culminates in plasma membrane rupture, releasing NETs into the extracellular milieu. [50] b) Vital NETosis (Non-Lytic NETosis): In contrast, vital NETosis enables neutrophils to form NETs without accompanying cell death. [48] During this process, NETs are extruded from neutrophils while maintaining membrane integrity. This non-lytic mode of NET formation maintains the neutrophil's functional capacity for tasks such as phagocytosis, allowing them to engage in microorganism engulfment and contribute to host defense while simultaneously releasing NETs. [49] Figure created with BioRender.com.
Figure 4. Regulatory Mechanism of NET Formation and Release. The generation of ROS serves as a cornerstone in modulating NET release. Key kinases such as AKT, PI3K, and PKC are integral to the dynamic regulation of ROS levels. [63,64] Elevated intracellular calcium ions are indispensable for NETosis and act as activators for PAD4, which in turn catalyzes histone citrullination and chromatin decondensation. [52] In synergy with PKC and CDK4/6, PAD4 facilitates the disassembly of nuclear architecture during NETosis. [70] The azurosome, a specialized organelle, harbors key enzymes like Myeloperoxidase (MPO), Neutrophil Elastase (NE), and Cathepsin G (CG), essential for NET biogenesis. Upon neutrophil activation, these enzymes translocate to the nucleus, furthering chromatin relaxation. The release of NE into the cytosol is ROS- and MPO-mediated and precedes its nuclear translocation, wherein it targets and degrades F-actin. Both pathogens and Pathogen-Associated Molecular Patterns are potent NET inducers. [69] Dysregulated NET formation or defective clearance can result in pathological NET accumulation, exacerbating inflammation and autoimmune diseases. Furthermore, NETs can interact with platelets, presenting potential complications such as vascular or catheter obstructions. [111] This figure serves as a comprehensive synopsis of the multifaceted processes underpinning NET formation, underscoring its relevance in host defense, inflammatory regulation, and disease etiology. ACPAs, autoantibodies to citrullinated protein antigens; I/R injury, ischemia-reperfusion injury; RA, rheumatoid arthritis; SLE, Systemic Lupus Erythematosus. Figure created with BioRender.com.
Figure 6. Molecular Pathways Regulating Neutrophil Necroptosis. Community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) is phagocytosed by neutrophils yet manages to persist intracellularly. This intracellular survival of CA-MRSA instigates RIPK3-mediated necroptosis, a form of programmed cell death, independently of MLKL. [150,155] In the absence of XIAP, a crucial member of the IAP family, the inhibition of Caspase-8 triggers a shift in TNF-induced neutrophil cell death from apoptosis to RIPK1-RIPK3-MLKL-dependent necroptosis. [158] This transition represents a critical regulatory juncture in determining cell fate. Additionally, CA-MRSA stimulates autocrine production of TNF in neutrophils, further amplifying the necroptotic cascade. In the presence of granulocyte-macrophage colony-stimulating factor (GM-CSF), ligation of adhesion receptors activates the RIPK1-RIPK3-MLKL-p38 MAPK-PI3K axis, culminating in ROS production via NADPH oxidase. [147] This represents another pathway of neutrophil necroptosis. Figure created with BioRender.com.
Figure 7. Implications of Neutrophil Ferroptosis in the Pathogenesis of Systemic Lupus Erythematosus (SLE) and Tumor Progression. In patients with SLE, autoantibodies and IFN-α augment ferroptosis in neutrophils by intensifying the binding of the transcription suppressor cAMP response element modulator alpha (CREM) to the GPX4 promoter through activation of calcium/calmodulin kinase IV (CaMKIV), thereby reducing GPX4 expression and subsequently amplifying phospholipid-containing polyunsaturated fatty acid hydroperoxides (PL-PUFA-OOH). [178] Neutrophils significantly contribute to the synthesis of PUFA via the fatty acid transport protein 2 (FATP2)-mediated uptake of arachidonic acid. [186] Key enzymatic players, including acyl-CoA synthetase long-chain family member 4 (ACSL4), lysophosphatidylcholine acyltransferase 3 (LPCAT3), and arachidonate lipoxygenases (ALOXs), amplify the generation of PL-PUFA-OOH, serving as potent inducers of neutrophil ferroptosis. Neutrophil ferroptosis within the tumor microenvironment fosters tumor growth by establishing an immunosuppressive milieu, thereby impeding T-cell-mediated antitumor responses. [187] Neutrophil ferroptosis can lead to the release of Fe2+ and PL-PUFA-OOH, thereby contributing to a range of immune regulatory mechanisms. Figure created with BioRender.com.
Haiyue Tu earned his Master's degree in neurobiology from Soochow University in 2021 and is currently pursuing a doctorate at the Institutes for Translational Medicine, Suzhou Medical College of Soochow University. His research primarily focuses on the functional regulation of neutrophils by the pulmonary microenvironment, the regulatory mechanisms of neutrophil death pathways, and their contributions to pulmonary fibrosis. Yufang Shi, a Member of Academia Europaea since 2020, serves as the Director and Distinguished Professor of the Institutes for Translational Medicine at Soochow University. He earned his Ph.D. in immunology from the University of Alberta and completed postdoctoral training at the University of Toronto. His groundbreaking work defined activation-induced cell death in proliferative T cells, establishing a link between the "life" and "death" of immune cells and the regulation of immune homeostasis. Furthermore, his research demonstrated the potent regulatory role of tissue stem cells in immune responses and proposed a novel therapeutic theory of stem cell-mediated tissue repair. Peishan Li is a professor at the Institutes for Translational Medicine, Suzhou Medical College of Soochow University. He obtained his Ph.D. in biochemistry and molecular biology from the Chinese Academy of Sciences in 2014 and completed postdoctoral work at The Jackson Laboratory from 2017 to 2020. His main research interests include immune metabolism and the regulation of tissue microenvironment homeostasis. He explores the mechanisms of lipid metabolism in shaping neutrophil function, thereby impacting host immune defense and pathogen immune escape.
|
v3-fos-license
|
2017-04-19T08:35:22.255Z
|
2016-11-08T00:00:00.000
|
15293432
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4441/8/11/519/pdf?version=1478591232",
"pdf_hash": "15c0df2a56ba444c5dcc7c40b6b4a894aecbee70",
"pdf_src": "Crawler",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3041",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "15c0df2a56ba444c5dcc7c40b6b4a894aecbee70",
"year": 2016
}
|
pes2o/s2orc
|
Global Sea Surface Temperature and Sea Level Rise Estimation with Optimal Historical Time Lag Data
Prediction of global temperatures and sea level rise (SLR) is important for sustainable development planning of coastal regions of the world and for the health and safety of communities living in these regions. In this study, climate change effects on sea level rise are investigated using a dynamic system model (DSM) with time lag on historical input data. Time-invariant (TI-DSM) and time-variant (TV-DSM) dynamic system models with time lag are developed to predict global temperatures and SLR in the 21st century. The proposed model is an extension of the DSM developed by the authors. The proposed model includes the effect of temperature and sea level states of several previous years on the current temperature and sea level over stationary and also moving-scale time periods. The optimal time lag period used in the model is determined by minimizing a synthetic performance index comprised of the root mean square error and the coefficient of determination, which is a measure of the reliability of the predictions. Historical records of global temperature and sea level from 1880 to 2001 are used to calibrate the model. The optimal time lag is determined to be eight years, based on the performance measures. The calibrated model was then used to predict the global temperature and sea levels in the 21st century using a fixed time lag period and moving-scale time lag periods. To evaluate the adverse effect of greenhouse gas emissions on SLR, the proposed model was also uncoupled to project the SLR based on global temperatures that are obtained from the Intergovernmental Panel on Climate Change (IPCC) emission scenarios. The projected SLR estimates for the 21st century are presented comparatively with the predictions made in previous studies.
Introduction
Analysis of the environmental and human impacts of global warming is an important challenge that has been studied in the literature from several different perspectives and with several different approaches. One of the potential consequences of global warming is sea level rise (SLR), which may threaten the coastal regions of the world and impact human life. Based on historical records, global sea surface temperatures (SST) have increased by about 0.8 °C over the past 100 years, with seventy-five percent of the increase occurring since 1980 [1]. According to the projection of IPCC greenhouse emission scenarios, the global SST is likely to rise further within the range 1.1-6.4 °C depending on the emission scenarios hypothesized to occur during the 21st century [2][3][4]. Due to global warming, the global mean sea level has been observed to rise about 15-20 cm in the past century [5,6], and it is predicted that this rise will continue in the 21st century [4,7]. Thus, it is important to predict the global SST change and SLR in the future for sustainable development planning of coastal regions of the world.
The inherent relationship between global SST and SLR is complicated. In recent studies, in an effort to simplify this relationship, empirical and semi-empirical models were proposed to predict global SLR based on temperature change data [8][9][10][11][12][13][14][15]. These are primarily unidirectional models that use the temperature series data simulated from IPCC greenhouse gas emission scenarios as known inputs to project SLR, although some variations to this approach are reported in the literature [9,12,15]. Based on the predictions of these models, the SLR in the 21st century is estimated to be in the range 0.5-2.0 m relative to 1990 levels, which is much higher than the IPCC estimates [4].
As an extension of these approaches, a dynamic system model (DSM) was proposed to predict future SST change and SLR simultaneously [16]. In the DSM, the behavior of SST and SLR is described by a pair of coupled ordinary differential equations with two state variables: the sea surface temperature and the sea level. Thus, in the dynamic approach, the sea level change and sea surface temperatures are not independent but correlated. This underlying hypothesis is distinctly different from all other empirical models used in the literature and is explained in more detail in [16]. Later, a vector-autoregressive (VAR) model that employs the same mathematical form as the discrete DSM was developed in [17], using a stochastic cointegration method to describe the relationship between SST and SLR. This model has the same structure as the DSM used in [16], and its results also confirm the hypothesis underlying the DSM approach [16]. As stated in [17], sea surface air temperatures will adjust to the average temperatures of the upper ocean due to the larger heat capacity of oceans relative to the atmosphere. As a result of this difference in heat capacities, SLR will directly affect the SST. Further, it is also well known that temperature change will affect SLR due to ice sheet melting, steric effects and other hydrologic phenomena. The information on both state variables is already embedded in the historical data on SST and SLR, which can be used to calibrate the DSM. In both of these studies, the earth is considered to be a system and the behavior of the system is developed in terms of its two state variables, SST and sea level. In this approach, the evolution of both states depends on the current value of both states and also on the behavior of the evolution of the system states over time. The resulting model, when calibrated using the historical data, showed that the rate of SLR is proportional to SST and is also a function of the temporal state of the sea level. Similarly, the rate of SST change is a function of the temporal state of the SST and is also affected by the SLR. Unlike previous models, after calibration using the historical data, this model can be used to simultaneously predict both SST and SLR for the next century, which is one of the distinct differences between the DSM and the previous models used in the literature. Applying this model [16], the authors predicted the global SST and SLR in the 21st century. Relative to 1990, the results show an increase in SST of about 1.3 °C with a 90% confidence interval of [1.1, 1.5] °C and an SLR of about 42.4 cm with a 90% confidence interval of [40.0, 44.8] cm. These predictions are based on the assumption that the system operates under the historical pattern drawn from historical records [16]. The DSM was later extended to include greenhouse emissions as an external forcing function [18], and a spatial SLR analysis version of the DSM was introduced in subsequent studies [19,20]. In these studies the authors argued that this concept is more meaningful than the earlier semi-empirical studies since it incorporates the inherent two-way interaction that exists between SST and SLR into the model and its predictions.
These empirical models may provide an alternative approach to predicting global warming and SLR in the 21st century over physically based models [4,20]. However, the earlier DSM models do not include the time lag impacts of temperature and sea level status on future SLR, which are important. In the earlier DSM and its various applications [16,18-20], a one-year time lag was used, whereas environmental systems may respond to changes imposed on them over a longer time lag period. To address the time lag effect between temperature increase and SLR, a time lag coefficient was introduced in the earlier semi-empirical models as well [9,12]. That approach overlooks the impact of the temperature and sea level status over a series of previous years on the current temperature and sea level. In this study, we observe that if the impact of the SST and sea level status of a series of previous years is considered, the results obtained are more accurate based on the evaluation of the performance measures discussed in this paper. Using this approach, the warming trend and SLR predicted with the DSM analysis will differ. In this study, time-invariant (TI-DSM) and time-variant (TV-DSM) optimal DSM approaches are proposed based on the earlier model proposed in [16]. The model includes the impact of SST and sea levels in previous years on the current temperature and SLR. The optimal time lag is determined using the root mean square errors and the coefficients of determination for both state variables. The optimal model is used to predict the temperature change and SLR in the 21st century. In order to evaluate the adverse effect of greenhouse gas emissions, a decoupled model is also used, in which the temperature series data from the six IPCC emission scenarios are used as the model input for temperature and the SLR is predicted based on these scenarios for the 21st century. The results obtained for this case are presented comparatively with earlier results, which can be used in inundation studies of coastal regions [19].
Dynamic Systems Model with Time Lag
In an earlier study, the discrete solution of the DSM was presented using only the previous-year values of SST and sea level [16]. In this study, we extend this approach by assuming that global SST change and SLR in the future depend on the status of global SST and sea level over several of the previous years. Introducing the time lag concept, the DSM can be written as a pair of coupled equations (Equation (1)), where t is time, T(t) is the SST at t, H(t) is the sea level at t, n is the number of time lags, τ_i is the ith time lag where τ_i = (i − 1)∆t, ∆t is the time interval and is set to one year for the prediction of yearly SST and sea levels, and a_{j,t}, b_{j,t} and c_j are coefficients that depend on the time lag period. In this model, SST and sea level are the state variables of the system. The discrete solution of this model (Equation (2)) is written in terms of k, the index of years, n, the specified maximum time lag, T(k), the global SST in year k, and H(k), the global sea level in year k, where a_{j,k−i} is the coefficient associated with the global SST in previous year i and reflects the impact of the global SST in previous year i on the current global SST and sea level, b_{j,k−i} is the coefficient associated with the global sea level in previous year i and reflects the impact of the global sea level in previous year i on the current global SST and sea level, c_j is a constant, and j = 1, 2. In Equation (2), n prior years are considered to give the equation in a general form. In the application, fifteen years are considered and the optimum value of eight years is found to give the best results in an optimal sense. Defining the state vector X(k) = [T(k), H(k)]^T, the matrix form of Equation (2) can be expressed as Equation (4), where X(k) is the state vector in year k, A_{k−i} is a (2 × 2) matrix for i = 1, ..., n, and C is a (2 × 1) vector. In this model, the system behavior depends on the interaction of global SST and sea level over the previous n years. This model is an extension of the DSM proposed earlier; the original DSM is a special case of Equation (4) with n = 1 [16].
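Because the typeset Equations (1)-(4) were lost in extraction, the following LaTeX sketch reconstructs a plausible form of the lagged model from the definitions above; the exact published notation may differ.

\frac{dT(t)}{dt} = \sum_{i=1}^{n}\left[a_{1,t-\tau_i}\,T(t-\tau_i) + b_{1,t-\tau_i}\,H(t-\tau_i)\right] + c_1, \qquad
\frac{dH(t)}{dt} = \sum_{i=1}^{n}\left[a_{2,t-\tau_i}\,T(t-\tau_i) + b_{2,t-\tau_i}\,H(t-\tau_i)\right] + c_2

with the discrete counterpart

T(k) = \sum_{i=1}^{n}\left[a_{1,k-i}\,T(k-i) + b_{1,k-i}\,H(k-i)\right] + c_1, \qquad
H(k) = \sum_{i=1}^{n}\left[a_{2,k-i}\,T(k-i) + b_{2,k-i}\,H(k-i)\right] + c_2

and the matrix form

X(k) = \sum_{i=1}^{n} A_{k-i}\,X(k-i) + C, \qquad X(k) = \begin{bmatrix} T(k)\\ H(k) \end{bmatrix}.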
Calibration of DSM with Time Lag
We assume that a time series of yearly global SST and sea level observations is available, denoted as {(T(1), H(1)), (T(2), H(2)), ..., (T(N), H(N))}, where N is the total number of years. In the DSM analysis [16], the available N-year time series data were used to identify the system coefficients. In that case, the coefficients identified reflect the dynamic properties of the system on average over the complete dataset. However, it is expected that the SST and sea level predicted for the next year should be associated with the values of the state variables of several previous years. Assuming that the data of the previous n years have the most impact on the SST and sea level during the next year, we can use the N-year data to identify the system coefficients for a fixed-period time lag analysis, resulting in a time-invariant dynamic system (TI-DSM). Alternatively, when new data are added to the historical dataset, we may move the time lag data window forward a year, dropping the data for the first year of the historical dataset used earlier, and use the new sub-dataset, which still contains N years of data, to identify new system coefficients for the period of n time lags. In this approach, the time lag period also moves forward, so as time advances we can dynamically identify new system coefficients, resulting in a time-variant dynamic system (TV-DSM). For generality, assuming that we need to predict the SST and sea level in year k + 1 based on the data from the previous n years, we collect the lagged states and unknown coefficients into regression matrices, where P is a 2 × (2n + 1) matrix of unknown coefficients. Applying the least squares method, we can identify the parameters of the n time lag matrix coefficients, where the symbol ^ indicates the estimates of the corresponding coefficients. Once the system matrices are recognized, we can use Equation (4) to predict the global SST change and SLR using the previous n years of data as the initial condition (TI-DSM).
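To make the identification step concrete, the following Python sketch (a minimal illustration, not the authors' code; the helper names build_regressors and fit_ti_dsm and the use of numpy.linalg.lstsq are assumptions) stacks the lagged states into a regression matrix and estimates the 2 × (2n + 1) coefficient matrix by least squares:

import numpy as np

def build_regressors(T, H, n):
    # Each row: [T(k-1), H(k-1), ..., T(k-n), H(k-n), 1]; each target row: [T(k), H(k)].
    rows, targets = [], []
    for k in range(n, len(T)):
        lagged = []
        for i in range(1, n + 1):
            lagged += [T[k - i], H[k - i]]
        rows.append(lagged + [1.0])            # 2n + 1 regressors per year
        targets.append([T[k], H[k]])
    return np.asarray(rows), np.asarray(targets)

def fit_ti_dsm(T, H, n):
    # Least squares estimate of the 2 x (2n + 1) coefficient matrix
    # (lagged SST and sea level coefficients plus the constant term).
    Z, Y = build_regressors(T, H, n)
    P, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # shape (2n + 1, 2)
    return P.T                                 # shape (2, 2n + 1)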
As time elapses, new data obtained from measurements will accumulate, and these new data can be used to improve the system matrices. Assuming that the length of the data series used to recognize the system matrices is initially N, we define the length of the complete data window as N. When a new pair of yearly data is collected, we add the new data to the historical series, move the N-year data window forward a year, and then repeat the least squares method described above to recognize new system matrices. The system coefficients recognized using this procedure are time dependent, and we identify this system as the time-variant dynamic system (TV-DSM), expressed with time-dependent matrices A_{k−i}(k) and C(k) for i = 1, ..., n and year k. It is important to note that, as we conduct this analysis in the year 2013, no new data have yet been added to the dataset for 2014 and beyond. Thus, to demonstrate this approach, we use a synthetic data generation process and add the data predicted for year (k + 1) to our dataset to form the new dataset, and then continue with the computation sequence described above. We recognize that, in this case, the predicted data added to the dataset are not measured data and will include modeling errors. Thus, the reader needs to recognize that the results obtained for this case will include the propagation of modeling errors over time, which renders the one-hundred-year predictions less reliable. The synthetic data generation process is included here to demonstrate the time-variant modeling approach described in this study.
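A minimal sketch of the moving-window recalibration and the synthetic data generation described above, reusing the fit_ti_dsm helper from the previous sketch; the window length and horizon values are illustrative assumptions:

import numpy as np

def predict_next(P, T, H, n):
    # One-step prediction from the last n years of the (possibly extended) series.
    z = []
    for i in range(1, n + 1):
        z += [T[-i], H[-i]]
    z.append(1.0)
    T_next, H_next = P @ np.asarray(z)
    return float(T_next), float(H_next)

def tv_dsm_forecast(T_hist, H_hist, n=8, window=122, horizon=99):
    # Moving-window recalibration: each predicted year is appended to the series
    # (synthetic data) before the window is moved forward and the fit is repeated.
    T, H = list(T_hist), list(H_hist)
    for _ in range(horizon):
        P = fit_ti_dsm(T[-window:], H[-window:], n)
        T_next, H_next = predict_next(P, T, H, n)
        T.append(T_next)
        H.append(H_next)
    return T, H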
In order to incorporate the modeling and measurement errors, confidence interval theory can be used for reliability analysis [21]. For a given confidence level α, the 100(1 − α)% confidence intervals for global SST and SLR in the dynamic prediction can be estimated from Equation (11), where T̂_CI(k) and Ĥ_CI(k) represent the 100(1 − α)% confidence intervals of global SST and SLR in the kth year, t denotes the t-distribution, t_{α,m} is the value of t for m degrees of freedom and confidence level α, N is the number of historical data points used for system recognition, n is the time lag, σ̂_T and σ̂_H are the standard deviations for global SST and SLR estimated in the model calibration, e_p is a term accounting for the error propagation in the dynamic prediction, and X_p(i) is a (2n + 1) vector used in the prediction.
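The typeset expressions for these intervals were lost in extraction. As a hedged sketch only, a prediction interval of the following form is consistent with the quantities defined above; the exact error-propagation term e_p(k) in the published Equations (10)-(13) is not recoverable from the text and is left symbolic here:

\hat{T}_{CI}(k) = \hat{T}(k) \pm t_{\alpha,m}\, \hat{\sigma}_T \sqrt{1 + e_p(k)}, \qquad
\hat{H}_{CI}(k) = \hat{H}(k) \pm t_{\alpha,m}\, \hat{\sigma}_H \sqrt{1 + e_p(k)}.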
Determination of Optimal Time Lag
In Equation (2), the maximum time lag n is a variable parameter that needs to be determined by optimization. In building the model, the performance of the proposed model is evaluated using two indexes: the root mean square error (RMSE) and the coefficient of determination (R²). We combine both performance indexes to determine the optimal maximum time lag. The RMSE is a measure of the differences between the values predicted by the model and the actual observations [22]; RMSE_T and RMSE_H denote the root mean square errors for temperature and SLR, where T(k) and H(k) are the historical measurements of temperature and SLR at time step k. The coefficient of determination R² is a measure of how well future outcomes are likely to be predicted by the model [23]; R²_T and R²_H denote the coefficients of determination for SST and SLR, where T̄ and H̄ are the average values of the observed temperature and SLR. By these statistical definitions, the smaller the RMSE, the better the goodness of fit, while the closer the value of R² is to one, the better the regression fits the data in comparison to the simple average. In model predictions, one therefore wants the proposed model to produce a smaller RMSE and a larger R².
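The typeset definitions were lost in extraction; the standard forms consistent with the description above, written here for temperature with analogous expressions for the sea level, are:

\mathrm{RMSE}_T = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\hat{T}(k) - T(k)\right)^2}, \qquad
R^2_T = 1 - \frac{\sum_{k=1}^{N}\left(T(k) - \hat{T}(k)\right)^2}{\sum_{k=1}^{N}\left(T(k) - \bar{T}\right)^2}.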
The optimal maximum time lag n should minimize the root mean square errors and maximize the coefficients of determination simultaneously. Considering that the magnitudes of SST and SLR are not on the same scale, the following procedure is proposed to determine the optimal time lag n. For a given maximum time lag n, the system matrix is identified and used to predict the global SST and SLR, and then the RMSE and R² for SST and SLR are calculated. The RMSE and R² for each candidate time lag are divided by their maximum values over all candidates to transfer all RMSE and R² values to the interval [0, 1], and the optimal maximum time lag n* is the one that minimizes the resulting performance index, where f*(n*) is the optimal performance index for the optimal maximum time lag n*, and RMSE_Tmax, RMSE_Hmax, R²_Tmax and R²_Hmax are the maximum values of the RMSE and R² for global SST and SLR, respectively. Equation (16) is a synthetic performance index that reflects the requirements for the optimal maximum time lag. In this sense, the model with the optimal maximum time lag is identified as the optimal dynamic system time lag model.
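The lag-selection procedure can be illustrated with the following Python sketch; the exact combination of normalized terms in the published Equation (16) is not recoverable from the text, so the form used here (normalized RMSE terms minus normalized R² terms, minimized over n) is an assumption consistent with the description above, and it reuses the helpers defined earlier:

import numpy as np

def rmse(y_hat, y):
    y_hat, y = np.asarray(y_hat), np.asarray(y)
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))

def r2(y_hat, y):
    y_hat, y = np.asarray(y_hat), np.asarray(y)
    return float(1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))

def optimal_lag(T, H, max_lag=15):
    # In-sample fit for each candidate lag; combine normalized RMSE and R2 terms.
    scores = {}
    for n in range(1, max_lag + 1):
        P = fit_ti_dsm(T, H, n)
        Z, Y = build_regressors(T, H, n)
        Y_hat = Z @ P.T
        scores[n] = (rmse(Y_hat[:, 0], Y[:, 0]), rmse(Y_hat[:, 1], Y[:, 1]),
                     r2(Y_hat[:, 0], Y[:, 0]), r2(Y_hat[:, 1], Y[:, 1]))
    mx = [max(s[i] for s in scores.values()) for i in range(4)]
    def f(n):
        rT, rH, qT, qH = scores[n]
        return rT / mx[0] + rH / mx[1] - qT / mx[2] - qH / mx[3]
    return min(scores, key=f)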
Numerical Results and Discussion
In this study, we apply the historical observations on SST and sea level that were used in [16] to demonstrate the application of the model proposed above. The dataset includes yearly time series data of global mean SST and sea levels from 1880 to 2001. We use this dataset to calibrate the system coefficients for the fixed-period time lag (TI-DSM) and moving-period time lag (TV-DSM) approaches introduced above. The calibrated model for both cases is then used to predict global SST and SLR in the 21st century, where the synthetic data generation process is used for the TV-DSM approach as discussed above.
Time-Invariant DSM (TI-DSM) Application
The system calibration is based on the historical data on global SST and sea levels relative to 1990. For a given maximum time lag of 15 years, the system matrices for each candidate time lag (1-15) are identified and the resulting models are used to reconstruct the global SST and SLR. Based on the reconstructed results, the RMSE and R² are calculated and the optimal maximum time lag is determined using Equation (16). This analysis resulted in an optimal time lag of eight years, n* = 8. The coefficients of the corresponding system are given in Table 1 for this case. Since none of these coefficients is zero, the results show that future global SST and sea levels not only depend on their states in the last year, but are also affected by the SST and sea level states of the previous eight years.
After calibrating the proposed model, we applied Equation (4) with the recognized matrices to reconstruct the global SST and sea levels from 1880 to 2001. In the reconstruction, the eight pairs of historical data before 1888 were used as the initial condition, and the reconstructed results are shown in Figure 1. The figure shows that the historical data are fit well. The 90% confidence intervals for global SST and SLR were estimated from Equation (11) for α = 0.1, N = 122, n = 8, t_{α/2,90} = 1.645, σ̂_T = 0.11, and σ̂_H = 0.80, and these results are shown as dashed lines in Figure 1. It can be seen that most of the historical data fall within the 90% confidence interval bands. In this calibration, the RMSEs are 0.11 and 0.80, and the R² values are 0.71 and 0.98, for global SST and SLR respectively. In comparison to the results obtained in [16], the reconstruction accuracy of this model is improved. The R² values obtained for the reconstruction period, in particular, show a significant increase. This implies that the future predictions for global SST and SLR obtained from this model are likely to have higher accuracy.
Table 1. TI-DSM system matrices and vector recognized for the optimal time lag n* = 8.
In order to validate the performance of the model, a 10-fold cross-validation technique was applied to the measured dataset of SST and SLR from 1880 to 2001 [24]. In this cross-validation technique, the dataset is randomly divided into 10 equal subgroups. Of the 10 subgroups, a single subgroup is retained as the validation data for testing the model while the remaining nine subgroups are used to identify the model. This process is repeated 10 times and the resulting parameters of the model are averaged to produce a single estimate. The resulting matrices are given in Table 2. The values of the corresponding coefficients are very close to those presented in Table 1. When we used these matrices to reconstruct the SST and sea levels from 1880 to 2001 relative to 1990 levels, the resulting RMSEs were 0.1118 and 0.7828, and the R² values were 0.7148 and 0.9773, for SST and SLR respectively. These values are almost identical to the results obtained from the model identified using the complete dataset from 1880 to 2001. The system identified (Table 1) is used to predict the global SST change and SLR in the 21st century relative to 1990 levels. In this prediction, the eight pairs of SST and sea levels from the years preceding 1990 are taken as the fixed initial condition, and the predicted results for the 21st century are shown in Figure 2. The SST and sea levels obtained are monotonically rising if the greenhouse gas emissions to the atmosphere follow the pattern observed in the 20th century. According to this scenario, by the end of the 21st century the SST will reach 1.9 °C with a 90% confidence interval of [0.6, 3.2] °C, while the sea level will rise to 56.1 cm with a 90% confidence interval of [46.9, 65.2] cm. According to IPCC reports, the global SST rise in the year 2100 spans from 1.23 to 5.55 °C relative to 1990 levels [2,3]. The SST change by the end of 2100 predicted using this model is close to the best estimate in Scenario B1, in which SST increases about 2 °C with the interval [1.1, 2.9] °C [2,3]. According to the IPCC Third Assessment Report, over the course of this century the SLR is predicted to be in the range of 90 mm to 880 mm with a central value of 480 mm [3]. The predicted SLR is within this range and very close to this central value. The 90% confidence intervals estimated from Equation (11) are shown as dashed lines in Figure 2. It can be seen that the confidence intervals gradually widen as the prediction time moves forward. This reflects the dynamic error propagation in the system, and there is more uncertainty in the temperature predictions.
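For illustration, the calibrated TI-DSM can be iterated forward from the eight-year initial condition with a short loop such as the following sketch, reusing predict_next from the earlier sketch; it is illustrative only and does not reproduce the published confidence bands:

def ti_dsm_forecast(P, T_init, H_init, horizon=99):
    # Iterate the fixed (time-invariant) system forward from the last n observed
    # years of SST and sea level, returning only the predicted years.
    T, H = list(T_init), list(H_init)
    n = (P.shape[1] - 1) // 2
    for _ in range(horizon):
        T_next, H_next = predict_next(P, T, H, n)
        T.append(T_next)
        H.append(H_next)
    return T[len(T_init):], H[len(H_init):]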
Time-Variant DSM (TV-DSM) Application
Using the historical data and the predicted results as synthetic data, a time-variant dynamic system model with the optimal eight time lag periods (TV-DSM) was also developed. In this approach, during the system calibration, the length of the original data series selected is 122; thus, the length of the fixed data window is chosen as 122. We note here that this is not a necessary condition for the proposed approach. The data window used in the calibration of the system may also be increased as new data arrive over the years. The fixed data window approach is used here for computational convenience. The matrices A_{k−i}(k) and C(k) for years before 2002 are the same as those in the time-invariant DSM given in Table 1. These matrices are then recalculated year by year from 2002 forward using the moving data window described above to form a time-variant dynamic system (TV-DSM). The global SST change and SLR in the 21st century obtained using the TV-DSM are shown in Figure 3a,b. The results show that by the end of the 21st century, the SST will reach 2.0 °C with a 90% confidence interval of [1.5, 2.5] °C while the sea level will rise to 66.6 cm with a 90% confidence interval of [63.2, 69.9] cm. These predictions are much closer to the projections of Scenario B1 [3]. The 90% confidence intervals estimated by Equation (11) are shown as dashed lines in Figure 3. As in the case of the TI-DSM, the widths of the confidence intervals gradually increase as the prediction time moves forward. When comparing the results shown in Figures 2 and 3, we may observe two points: (i) the predicted results using the TV-DSM are higher than those obtained using the TI-DSM; and (ii) the confidence intervals using the TV-DSM are narrower than those obtained using the TI-DSM. These observations indicate that using the TV-DSM, and thus new information in the system recognition, may improve the prediction reliability, although the new information is synthetic in this application. We must re-emphasize here that the predictions made for this case are based on the synthetic data generation process described earlier.
Applications Using IPCC Scenarios
The SLR in the 21st century depends on natural and anthropogenic radiative forcing generated by greenhouse gases such as carbon dioxide (CO2) [2]. In order to assess the impact of greenhouse gas emissions on global warming, the IPCC has developed six emission scenarios based on different patterns of economic development, industrial development, and population growth in the future and on modeled physical processes [2,3]. These emission scenarios are labeled A1FI, A1B, A1T, A2, B1 and B2 [2]. Among these scenarios, A1FI and A2 represent the highest emissions of greenhouse gases into the atmosphere, A1T and B1 represent the lowest emissions, and A1B and B2 represent moderate emissions. The IPCC projected the global mean surface temperature from 1990 to 2100 for the six scenarios using climate models (IPCC, 2007), as shown in Figure 4. Accordingly, the temperature increase at the end of the 21st century spans from 2 to 4.5 °C depending on which scenario occurs.
The model given in Equation (2) is proposed to predict global SST change and SLR simultaneously. This model can also be decoupled and used to predict SLR if the global SST rise is known. In order to assess the adverse effect of greenhouse gas emissions on SLR, the global SST rise scenarios of the IPCC, shown in Figure 4, are chosen as known input data for the temperature, and the SLR is predicted using the second equation of Equation (2) as the decoupled model. The predicted results are shown in Figure 5. For comparison purposes, the projections from the semi-empirical model [10] are also shown as dashed lines in Figure 5. It may be seen that both sets of predictions have similar increasing patterns, but the SLR predicted using the TI-DSM is higher than the projections obtained from the semi-empirical model for all scenarios [10]. This reflects the effect of the global SST and sea levels of the previous eight years on the current SLR. The 90% confidence intervals estimated from Equation (11) are also shown in Figure 5. The confidence intervals contain the predictions from both the TI-DSM and the semi-empirical model [10].
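A hedged sketch of this decoupled use of the sea level equation, in which an externally supplied temperature series (for example, an IPCC scenario appended to the historical temperatures) drives the SLR recursion while only the second row of the identified coefficient matrix is used; the variable names and coefficient ordering follow the earlier sketches and are assumptions:

import numpy as np

def decoupled_slr(P, T_known, H_init, n=8):
    # P is the 2 x (2n + 1) matrix from fit_ti_dsm; only its second row
    # (the sea level equation) is used. T_known is the full temperature series,
    # whose first len(H_init) entries are the historical years aligned with H_init.
    p_H = P[1]
    H = list(H_init)
    for k in range(len(H_init), len(T_known)):
        z = []
        for i in range(1, n + 1):
            z += [T_known[k - i], H[k - i]]
        z.append(1.0)
        H.append(float(p_H @ np.asarray(z)))
    return H[len(H_init):]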
Table 3. SLR at the end of the 21st century predicted by the TI-DSM for IPCC scenarios.

For this case, the predicted SLR at the end of 2100 and the 90% confidence intervals are given in Table 3. Among the IPCC scenarios, A1FI and A2 show the highest amounts of greenhouse gas emissions, resulting in sea levels of 110.2 cm and 92.6 cm on average with 90% confidence intervals of [88.7, 131.7] cm and [75.9, 109.3] cm. B1 is the most environment-friendly scenario, in which greenhouse gas emissions are close to 1990 levels. Even if this scenario occurs in the 21st century, the temperature would increase by 1.8 °C [2]. As a consequence, the sea level would rise by 60.6 to 85.8 cm at the end of 2100 relative to 1990 as predicted in this study. These results further illustrate that reducing greenhouse gas emissions is an extremely urgent task for the world. Table 3 also lists the results from the semi-empirical model [10]. The relative differences in the best estimates between the current model and the semi-empirical model are 7.9%, 6.5%, 6.4%, 6.2%, 4.6% and 5.3% for Scenarios A1FI, A1B, A1T, A2, B1 and B2, respectively. The SLR predictions by 2100 using the TI-DSM range between 60 and 132 cm, while the projection of the IPCC is between 18 and 59 cm by 2100 [3]. The projections of the IPCC are widely considered to be an underestimation in the literature [10]. Subsequent studies show that the SLR by 2100 ranges between 50 and 140 cm [10], 75 and 190 cm [12], 80 and 200 cm [25], 80 and 130 cm [13] and 59 and 180 cm [15]. In comparison with these results, the predictions obtained from the TI-DSM are consistent with but higher than the results projected in the earlier semi-empirical model studies. A similar analysis may also be performed using the TV-DSM model. For this case, the predicted SLR at the end of 2100 and the 90% confidence intervals are given in Table 4, and the predicted results are shown in Figure 6. In this case, when the predicted results are used as the synthetic future data, the predictions for SLR are higher than the TI-DSM results.
Conclusions
In this study, a DSM approach with time lag is developed to predict global SST change and SLR. In comparison with the empirical or semi-empirical models used in previous studies, this model accounts for the impact of global SST and sea level status from several previous years on the current SST and sea level. The optimal time lag to be used in this analysis is determined using an optimization analysis based on the accuracy of predictions among several time lags. The analysis is presented for constant period time lag and variable period time lag cases. The resulting DSM reveals the inherent historical relationship between global warming and SLR. This analysis is an extension of the DSM proposed by [16] and provides a simpler but effective alternative to predict global warming and SLR in the 21st century simultaneously. This approach also represents a generalized model for previous semi-empirical models. If one of the two state variables is known, the model can be uncoupled to predict the dynamic process of the other state variable, with the known state variable chosen as the input. In this manner, the second equation in the DSM, Equations (1) and (2), can be used to predict SLR when the global temperature is obtained from other approaches, such as the AOGCM models of IPCC studies [3].
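Equations (1) and (2) are not reproduced in this excerpt, so the following is only a minimal sketch of the kind of time-lagged, two-variable linear DSM described above, assuming the state x_t = [T_t, H_t] (global SST and sea level) depends linearly on the previous n lagged states plus an intercept. The function names, the ordinary least-squares fitting, and the data layout are illustrative and are not the paper's actual formulation.

```python
import numpy as np

def fit_lagged_dsm(T, H, n_lag=8):
    """Least-squares fit of x_t = sum_i A_i x_{t-i} + b, with x_t = [T_t, H_t].
    Returns a (2*n_lag + 1, 2) coefficient matrix whose last row is the intercept."""
    X = np.column_stack([T, H])                       # shape (years, 2)
    feats, targets = [], []
    for t in range(n_lag, len(X)):
        lagged = [X[t - i] for i in range(1, n_lag + 1)]
        feats.append(np.concatenate(lagged + [np.array([1.0])]))
        targets.append(X[t])
    coef, *_ = np.linalg.lstsq(np.array(feats), np.array(targets), rcond=None)
    return coef

def predict_coupled(coef, history, n_steps, n_lag=8):
    """Propagate both state variables forward from the last n_lag observed states."""
    states = [np.asarray(x, dtype=float) for x in history[-n_lag:]]
    preds = []
    for _ in range(n_steps):
        lagged = [states[-i] for i in range(1, n_lag + 1)]
        f = np.concatenate(lagged + [np.array([1.0])])
        x_next = f @ coef                             # shape (2,): [T, H]
        preds.append(x_next)
        states.append(x_next)
    return np.array(preds)
```

With the historical series loaded as arrays T_hist and H_hist (1880-2001 in the paper's calibration), fit_lagged_dsm(T_hist, H_hist, n_lag=8) would return the lag-8 coefficients and predict_coupled would propagate both variables forward; the "uncoupled" use mentioned above would correspond to replacing the predicted temperature at each step with a prescribed scenario value.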
The optimal DSM was calibrated using the historical dataset on SST and sea level from 1880 to 2001, as used in [16]. Considering that the model includes the effect of SST and sea levels in previous years, in model calibration we used the original historical records instead of the 2-year moving averaged data which were used in [16]. The results show that the optimal DSM is the one with the maximum time lag n* = 8. The model predictions yielded RMSEs of 0.11 and 0.80, and R²s of 0.71 and 0.98, for global SST and SLR, respectively, in calibration, which shows an improvement when compared with [16]. The optimal DSM was applied to predict global SST and SLR in the 21st century, and the results show that the global SST will reach 1.9 °C with a 90% confidence interval of [0.6, 3.2] °C and the sea level will rise to 56.1 cm with a 90% confidence interval of [46.9, 65.2] cm by the end of the 21st century relative to 1990. In order to assess the impact of greenhouse gas emissions on SLR, the second equation of the optimal model was used to project the SLR in the 21st century while the temperature rise of the six emission scenarios, simulated by IPCC (2001), is used as the model input. For these cases, the resulting SLR at the end of the 21st century ranges from 61 to 132 cm. The prediction is consistent but higher than the predictions made in previous studies [10,12,13,15,25,26].
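As a small companion to the calibration statistics quoted above, RMSE and R² can be computed from the observed and reconstructed series with their standard definitions; this is a generic sketch, not the paper's own evaluation code.

```python
import numpy as np

def rmse(obs, pred):
    obs, pred = np.asarray(obs), np.asarray(pred)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r_squared(obs, pred):
    obs, pred = np.asarray(obs), np.asarray(pred)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```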
Figure 1 .
Figure 1.Reconstructed numerical results using the TI-DSM with the optimal time lag of 8 years.(a) Global temperature; (b) Sea level rise.
Figure 2 .
Figure 2. Predicted numerical results for the 21st century using the TI-DSM with the optimal time lag of 8 years.(a) Global temperature; (b) Sea level rise.
Figure 3 .
Figure 3. Predicted numerical results for the 21st century using the TV-DSM with the optimal time lag of 8 years.(a) Global temperature; (b) Sea level rise.
Figure 4 .
Figure 4. Temperature change processes simulated by IPCC, based on six greenhouse gas emission scenarios [2].
Figure 5 .
Figure 5. SLR obtained from the TI-DSM with an 8 year time lag based on IPCC scenarios.
Table 2 .
System matrices and vector obtained from 10-fold validation for the optimal time lag n * = 8.
Table 3 .
SLR at the end of the 21st century predicted by the TI-DSM for IPCC scenarios.
Table 4 .
SLR at the end of the 21st century predicted by the TV-DSM for IPCC scenarios.
|
v3-fos-license
|
2022-10-30T15:14:36.916Z
|
2022-10-27T00:00:00.000
|
253221315
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/22/21/8238/pdf?version=1666869511",
"pdf_hash": "a9bd45a2728e752b7cb6663187b13224e9c7ca6b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3042",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "3827357b22d05641113e5240c0ef3fb77db69f87",
"year": 2022
}
|
pes2o/s2orc
|
Muscle Oxygenation Measured with Near-Infrared Spectroscopy Following Different Intermittent Training Protocols in a World-Class Kayaker—A Case Study
Training elite kayakers at a distance of 1000 m is associated with aerobic and anaerobic metabolism, while intermittent training, in a variety of forms, is one of the effective ways to improve cardiorespiratory and metabolic function. Thus, this study aimed to investigate muscle oxygenation responses during repetition training (RT), interval training (IT), and sprint interval training (SIT). Near-infrared spectroscopy (NIRS) monitors were placed on the latissimus dorsi (LD), pectoralis major (PM), and vastus lateralis (VL) of a world-class kayaker during their preparatory period. The intensity of work, relief, and recovery intervals were the independent variables that were manipulated using three different training protocols. The inferential analysis between intermittent training protocols showed significant differences for all variables except the total hemoglobin (tHb) index in LD during bout 2 (F = 2.83, p = 0.1, ηp2 = 0.205); bout 3 (F = 2.7, p = 0.125, ηp2 = 0.193); bout 4 (F = 1.8, p = 0.202, ηp2 = 0.141); and bout 6 (F = 1.1, p = 0.327, ηp2 = 0.092). During the rest bouts, all training protocols showed significant differences for all variables except muscle oxygen saturation (SmO2) in the VL during bout 5 (F = 4.4, p = 0.053, ηp2 = 0.286) and tHb in VL during bout 1 (F = 2.28, p = 0.132, ηp2 = 0.172); bout 2 (F = 0.564, p = 0.561, ηp2 = 0.049); bout 3 (F = 1.752, p = 0.205, ηp2 = 0.137); bout 4 (F = 1.216, p = 0.301, ηp2 = 0.1); and bout 6 (F = 4.146, p = 0.053, ηp2 = 0.274). The comparison between the intermittent training protocols RT and SIT presented similar results. All variables presented higher values during SIT, except HR results. Finally, the comparison between IT and SIT showed significant differences in several variables, and a clear trend was identified. The results of this study suggest that the application of different intermittent exercise protocols promotes distinct and significant changes in the peripheral effect of muscle oxygenation in response to training stimuli and may be internal predictors of hemodynamic and metabolic changes.
Introduction
To achieve high-standard-specific goals, athletes are constantly looking for ways to optimize skeletal muscle function and its monitoring process through different types of physical training methodologies and evaluation procedures. Based on oxygen-dependent characteristics, NIRS is one of the non-invasive methods that can provide information about the changes in the oxygen saturation of muscle tissue during various sports exercises [1]. For that reason, the popularity of the NIRS method in sports research and real-world scenarios has been growing in recent years [2]. Wearable and wireless devices are fixed (AD), triceps brachii (TB), LD, and VL [30]. Thus far, there is only one study in junior male athletes that displayed a moderate correlation between the maximal O 2 extraction in the LD during an incremental test on a kayak ergometer and both 200 and 1000 m performances [15]. It is understood that some muscle oxygenation studies are limited by technical conditions that cause inconvenience in attaching sensors and performing paddling movements. However, there is a need to determine the contribution of other muscles, such as the PM, that have not been previously studied in any kayak training protocol. Depending on the demands of the training, several approaches exist to control and individualize intermittent exercise intensities [31,32]. Individual incremental test parameters are much more objective, practical, and likely the more accurate and effective criteria for achieving the desired performance results [33][34][35]. Paquette and Bieuzen [36] studied thirteen canoe kayak athletes of different genders and different levels to determine their muscle oxygenation and cardiac output responses to various HIIT sessions with the intensity ranging from 110% peak power output to all-out, suggesting that the muscle demand for O 2 is high, especially with the increase in the number and targeting intensity. However, it is unknown whether moderate-intensity repetition work can cause muscle oxygenation and cardiac output in world-class kayakers, compared with moderate interval and sprint interval training. Thus, the purpose of this study is to assess muscle oxygen responses during the RT, IT, and SIT in a world-class kayaker and to determine oxygenation parameters in the VL, PM, and LD muscles in each training workout. We hypothesized that the SIT training protocol activates the muscle's O 2 dynamics and oxidative energy metabolism more than the IT and RT, and this response would be detected using NIRS.
Subject
A male world-class kayaker (World Championship silver and bronze medal winner and European Championship bronze medal winner in 1000 m kayak flat water race event), during the preparatory period, participated in this study. At the start of the data collection, the participant's age was 32 years, with a height of 184.5 cm, body mass of 89 kg, and training volume of 18 h·week −1 . Physical characteristics are presented in Table 1. Note: VO 2 max-maximal oxygen uptake; CIL-critical intensity limit; VT2-2nd ventilatory threshold; HR-heart rate; W-watts.
Design
During the study, the athlete was encouraged to undertake their normal training but not to train on the day before each test. The athlete was acquainted with the experimental procedures prior to testing and gave written informed consent to participate in the study. All the experimental procedures involved in this study were approved by the Bioethics Research Committee of Vilnius Region (#158200-18 /11-1040-573). This study adhered to ethical principles under the Declaration of Helsinki.
The participant performed three randomized separated training sessions, RT, IT, and SIT, upon a Dansprint PRO, KE001 ergo, Denmark kayak ergometer at air brake resistance level 7. The ergometer was calibrated before each test according to the manufacturer's recommendations, and the tension in the ergometer's ropes was verified regularly [37]. All sessions were performed under similar environment conditions (relative humidity 60%) and circumstances (from 11.00 to 12.30 h).
The study protocol practice sessions started with a 15 min standard warm-up comprising rowing exercises and 5 min of recovery (Table 2). Table 2. Experimental conditions of the protocol depicting the three IT intensity modes.
For all tests, the athlete started in the kayak ergometer, waited 3 min for the starting signal, and established the baseline responses for NIRS and HR monitor. The participant was asked to perform the same volume of 6 bouts of activity interspersed with 6 min of passive recovery (Table 2). To program the RT, IT, and SIT intensities, the subject's individual power output characteristics were used at the critical intensity limit (CIL) and at the second ventilatory threshold (VT2) associated with physiological markers such as VO2max, VO2, and HR (Table 1) [38]. The participant received stroke power output and heart rate (HR) feedback during the test and was asked to maintain the right intensity (W) in each stage of the exercises. Blood lactate (Bla) concentration (mmol·L−1), as a proxy for metabolic anaerobic demand, was determined 3 min after the end of each intermittent training.
RT
The RT protocol comprised 6 bouts of 6 min of ergometer paddling at 200 watts intensity interspersed with 6 min of passive recovery. The participant was instructed to assume the ready position, and after the starting signal, the activity lasted for 72 min.
IT
The IT protocol comprised 6 bouts of 6 min that consisted of interspersing periods of 1-min ergometer paddling at the intensity of 200 watts and 1 min relief at 40 watts paddling intensity. The 6 min of activity were interspersed with 6 min of passive recovery. The participant was instructed to assume the ready position, and after the starting signal, the activity lasted for 72 min.
SIT
The SIT protocol comprised 6 bouts of 6 min that consisted of interspersing periods of 10 s of ergometer paddling at 300 watts intensity with 30 s of relief paddling at 40 watts intensity. The 6 min of activity were interspersed with 6 min of passive recovery. The participant was instructed to assume the ready position, and after the starting signal, the activity lasted for 72 min.
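To make the three bout structures easier to compare, the sketch below encodes the segment durations and powers stated above and computes the duration, mechanical work, and mean power of one 6-min bout per protocol; the dictionary layout and names are illustrative, while the protocol parameters are those reported in the text.

```python
# Work and relief segments within one 6-min bout: (duration in s, power in W).
protocols = {
    "RT":  [(360, 200)],               # 6 min continuous at 200 W
    "IT":  [(60, 200), (60, 40)] * 3,  # 1 min at 200 W / 1 min at 40 W relief
    "SIT": [(10, 300), (30, 40)] * 9,  # 10 s at 300 W / 30 s at 40 W relief
}

for name, segments in protocols.items():
    total_time = sum(t for t, _ in segments)            # 360 s for each protocol
    work_kj = sum(t * w for t, w in segments) / 1000.0  # mechanical work per bout
    mean_power = 1000.0 * work_kj / total_time
    print(f"{name}: {total_time} s, {work_kj:.1f} kJ, mean {mean_power:.0f} W per bout")
```

Under these parameters the three bouts are matched for duration (360 s) but not for mechanical work (roughly 72 kJ for RT, 43 kJ for IT, and 38 kJ for SIT per bout), which may be worth keeping in mind when interpreting the between-protocol differences reported below.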
NIRS Values
The oxygenation level of exercising muscles (oxygenated hemoglobin), SmO2 (%), and deoxygenated total hemoglobin, tHb (arbitrary units AU), were assessed with a NIRS device (Moxy Oxygen Monitor, Hutchinson, MN, USA) (Figure 1). Three NIRS monitors were placed and affixed using double-sided adhesive tape over the left (dominant) VL, PM, and LD muscles: for the VL, on the distal part of the VL muscle belly (10 cm above the proximal border of the patella); for the PM, on the center of the muscle belly along the principal direction of the muscle fibers of the sternocostal head; and for the LD, on the midpoint between the inferior border of the scapula and the posterior axillary fold. The skinfold thickness at each site was measured using a skinfold caliper (Harpenden, C-136) to ensure that the skinfold thickness was less than half the distance between the emitter and the detector (25 mm). The raw muscle O2 saturation (SmO2) and total hemoglobin concentration (tHb) signals were captured at 10 Hz, and the data were smoothed using a 10th order low pass-zero phase Butterworth filter (cut-off frequency 0.1 Hz) provided by the recording Artinis software (Oxysoft, Artinis Medical System, Elst, The Netherlands) [39]. Black elastic bandages were used to shield the probes from ambient light and minimize movement during exercise. The values of muscle oxygenation at the baseline (averaging 30 s before exercise), during exercise (sample size of each bout n = 180), and during exercise recovery periods (sample size of each bout n = 180) were recorded in the Moxy PC software (Fortiori Design LLC, Minneapolis, MN, USA), which allowed for the calculation of the average of the recorded values and the lowest point of the SmO2 in each training.
The variation between recovery and exercise in SmO 2 (∆ SmO 2 ) was calculated by evaluating the difference between the minimum SmO 2 and baseline SmO 2 , and the tHb (∆ tHb) variation was also assessed by calculating the variation between the maximum tHb and baseline tHb [40].
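A minimal sketch of the signal treatment described above (10 Hz NIRS signals, a 10th-order zero-phase low-pass Butterworth filter at 0.1 Hz, and the ∆SmO2/∆tHb summary metrics), assuming the data are available as NumPy arrays. The actual processing was performed in the Artinis and Moxy software, so this is only an illustration; the second-order-sections form is used here purely for numerical stability.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 10.0  # NIRS sampling rate reported above (Hz)

def smooth_nirs(signal, cutoff_hz=0.1, order=10, fs=FS):
    """Zero-phase low-pass Butterworth smoothing (forward-backward filtering)."""
    sos = butter(order, cutoff_hz / (fs / 2.0), btype="low", output="sos")
    return sosfiltfilt(sos, signal)

def delta_metrics(smo2, thb, baseline_s=30.0, fs=FS):
    """Delta SmO2 = minimum SmO2 minus the 30 s pre-exercise baseline mean;
    delta tHb = maximum tHb minus the baseline mean, as defined in the text."""
    n_base = int(baseline_s * fs)
    d_smo2 = np.min(smo2) - np.mean(smo2[:n_base])
    d_thb = np.max(thb) - np.mean(thb[:n_base])
    return d_smo2, d_thb
```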
Heart Rate Responses
HR responses were assessed with a telemetric HR monitor (Polar RS800 CX, Polar Electro Oy, Kempele, Finland). The HR (sample size of each bout n = 180) was measured during all the interval bouts, including during the rest. The HR signals were treated using a moderate filter, cleaning and replacing all irregular heartbeats with interpolated, adjacent R-R interval values using the Polar software (Pro Trainer 5, Polar Electro, Finland).
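The R-R cleaning itself was performed in the Polar software, so the following is only a generic illustration of the idea of flagging implausible R-R intervals and replacing them by interpolation from neighbouring beats; the 25% deviation threshold is an assumption for the example, not the software's actual setting.

```python
import numpy as np

def correct_rr(rr_ms, max_rel_change=0.25):
    """Replace R-R intervals deviating from the series median by more than
    max_rel_change with values linearly interpolated from the remaining beats.
    The threshold is illustrative only."""
    rr = np.asarray(rr_ms, dtype=float).copy()
    med = np.median(rr)
    bad = np.abs(rr - med) / med > max_rel_change
    good_idx = np.flatnonzero(~bad)
    rr[bad] = np.interp(np.flatnonzero(bad), good_idx, rr[good_idx])
    return rr
```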
Blood Lactate Concentration
Blood lactate (Bla) concentration (mmol·L −1 ) was calculated 3 min after the end of the protocols. The blood lactate samples were taken from the participant's fingertip and immediately analyzed with a validated lactate analyzer (Lactate Pro; Arkray, Tokyo, Japan).
Statistical Analysis
Descriptive analysis is presented in Tables 3 and 4, and data are presented as means (M) ± standard deviations (SD). Before using the parametrical statistical procedures, the assumptions of normality and sphericity were verified. A one-way repeated-measure ANOVA was performed to identify the differences in muscle oxygen saturation and the total hemoglobin in the VL, PM, and LD muscles, and the heart rate between the interval training modes. Bonferroni's corrections were used for the comparisons of more than two groups, and Cohen's d was calculated as the effect-size measure. The alpha level for all statistical tests was set a priori at α = 0.05, and the calculations were carried out using the SPSS software V24.0 (IBM SPSS Statistics for Windows, Armonk, NY, USA: IBM Corp.). The thresholds for effect-size statistics were <0.2, trivial; <0.6, small; <1.20, moderate; <2.0, large; and >2.0, very large. These statistical computations were processed with a specific post-only crossover spreadsheet for each age group [41].
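A minimal sketch of the analysis described above, assuming each protocol contributes six bout-level observations arranged as the columns of an (n bouts × 3 protocols) array. It reproduces the one-way repeated-measures ANOVA with partial eta squared, Bonferroni-adjusted paired comparisons, and Cohen's d (computed here against the pooled SD of the two conditions) in NumPy/SciPy rather than SPSS; names and data layout are illustrative.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def rm_anova_1way(X):
    """One-way repeated-measures ANOVA.
    X: (n, k) array, rows = repeated observations (e.g., bouts),
    columns = conditions (RT, IT, SIT). Returns F, p, partial eta squared."""
    n, k = X.shape
    grand = X.mean()
    ss_cond = n * np.sum((X.mean(axis=0) - grand) ** 2)
    ss_subj = k * np.sum((X.mean(axis=1) - grand) ** 2)
    ss_total = np.sum((X - grand) ** 2)
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    p = stats.f.sf(F, df_cond, df_err)
    eta_p2 = ss_cond / (ss_cond + ss_err)
    return F, p, eta_p2

def pairwise_bonferroni(X, names=("RT", "IT", "SIT")):
    """Paired t-tests with Bonferroni correction and Cohen's d."""
    pairs = list(combinations(range(X.shape[1]), 2))
    results = []
    for i, j in pairs:
        t, p = stats.ttest_rel(X[:, i], X[:, j])
        pooled_sd = np.sqrt((X[:, i].var(ddof=1) + X[:, j].var(ddof=1)) / 2.0)
        d = (X[:, i].mean() - X[:, j].mean()) / pooled_sd
        results.append((names[i], names[j], t, min(p * len(pairs), 1.0), d))
    return results
```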
Results
The results of the inferential analysis between the intermittent training protocols during the exercise bouts and rest bouts are presented in Tables 3 and 4. Figures 2 and 3 show the standardized (Cohen) differences for the pairwise comparisons. The comparison between the protocols showed that the RT protocol presented higher deoxygenation levels than the IT protocol. On the other hand, the SIT protocol presented higher deoxygenation levels than the RT and IT protocols, but only in the LD muscle (Figure 2). However, the SIT and IT protocols presented higher mean O2 saturation levels during the passive recovery than the RT protocol.
Discussion
Our research aimed to assess muscle oxygenation responses during the RT, IT, and SIT in a world-class kayaker and to determine their parameters in the VL, PM, and LD muscles. The findings of this case study only partially confirm our hypothesis: (1) the RT was characterized by a greater mean deoxygenation rate than the IT protocol, and the SIT mean deoxygenation was greater than the RT and IT workouts only in the LD muscle; however, (2) the mean O 2 saturation level during the passive rest period was higher in the SIT and IT protocols than in the RT; (3) oxygenation responses in the three active muscles suggest higher PM muscle recruitment than those of the LD and VL muscles as well as changes in the level of muscle contribution during the exercises of different intensities. This study shows the possibilities of using NIRS devices in the monitoring of elite kayak paddling performance and may provide complementary information to the HR and Bla concentration on a local muscle metabolism level.
O 2 Dynamics during Different Training
In this study, we identified oxygen muscle changes during the RT, IT, and SIT protocols. The RT induced greater mean oxygenation in the PM and VL and caused a greater HR response than the other protocols we applied. In our design, the RT involved a constant intensity of around ~65% of the critical intensity limit (CIL) when performing a 200 W workload. The increase in mean HR during the RT may reflect an increase in the cardiac output associated with a central cardiocirculatory component of the training [42]. IT intensities (200 W) were similar to RT intensities; however, relief intervals reduced the mean oxygenation level of the IT, which had the lowest ∆ SmO2 (%) and the highest ∆ tHb during all exercise bouts when compared with the RT and SIT protocols (Table 5). The RT was distinguished by the duration of continuous work, while the SIT featured the increased intensity of short intervals. The reviewers of this type of RT response categorize them as metabolic, eliciting large requirements from the O2 transport and utilization systems [43], and responses to protocols such as the SIT are considered metabolic but with a certain degree of neuromuscular strain [21]. Previous reports state that during moderate-intensity IT, the systolic volume of the heart increases during recovery intervals, causing myocardial metabolic load [21]. Therefore, hypothetically, peripheral metabolic changes were not expected in our study. Paquette and Bieuzen [40] considered that ∆ SmO2 is a good performance predictor since SmO2 represents the balance between O2 delivery and extraction at the muscle level [4]. Thus, a decrease in SmO2 may originate from both reduced delivery and/or increased extraction. However, it is difficult to draw a conclusion indicating the importance of the exercise mode to elicit the cardiovascular component. The control or adjustment of the intensity of the training sessions related to HR may be limited due to the well-known HR delay at exercise onset [20], which showed a slower response than the SmO2 response during the IT protocol (Figure 4). As the oxygen demand in the working muscle is the driving force for oxygen delivery by the cardiovascular system [42], muscle deoxygenation responded even faster than the oxygen uptake to the onset of a time trial [44].
Judging by the NIRS indicators, in the LD, the peripheral effects on oxygen extraction during the SIT protocol were higher than during the RT. It has been previously established that increasing exercise intensity improves aerobic energy metabolism, which is primarily linked to increased skeletal muscle mitochondrial content and capillary density [45].
Another feature of our study was to monitor the mean muscle oxygenation of each training session in 6 min duration rest bouts (Figures 4-6). It was possible to observe after which training protocol O2 returned faster to the pre-exercise conditions, since recovery is an important component to improve physical training adaptations [46]. In the RT, oxygenation during the rest bouts in the PM ranged from 54.4% to 65.9%, in the IT from 72% to 79.5%, and in the SIT from 63.5% to 78.6% of SmO2 (Table 3). Our findings suggest that O2 uptake recovery might be related to the exercise intensity and the nature of repeated sequences in the IT and SIT. The ability to resist fatigue (SmO2 % decrement) and replenish the energy substrates (ATP and PCr) are oxygen-dependent processes [47]. In the present study, during the rest bouts, muscle oxygenation in the different training protocols returned at a different pace to pre-exercise (~80%) levels (Figures 4-6), indicating a possible recovery of muscle PCr [48]. The VL muscle was the least affected, presenting oxygenation at its highest level, and in the RT, the mean values ranged from 71.1% to 76.5%, while in the IT, it ranged from 77% to 81%, and in the SIT, from 76.3% to 80.8% of SmO2 (Table 4). Different levels of muscle recovery may be related to factors such as impairments in neural drive and motor unit activation or metabolite accumulation [49]. The oxygenation in the muscles quickly adjusted post-exercise, indicating that the use of NIRS technology showed high sensitivity and may lead to discussion and further investigations as to whether oximetry and HR monitoring are more sensitive methods, especially in the IT (Figure 4) and SIT (Figure 6). The three training protocols elicited different increases in blood lactate concentrations during the exercise, showing the contribution of the anaerobic glycolytic system, inferred by blood lactate accumulation, to be numerically greater in the RT (3.5 mmol·L−1) (Figure 5) than in the IT (1.4 mmol·L−1) (Figure 4) and SIT (1.8 mmol·L−1) (Figure 6). The benefit of the relief intensity has often been discussed via changes in blood lactate concentration [50]; however, neither blood [51] nor muscle lactate has a direct (nor linear) relationship with performance capacity [50]. It has also been shown that substantially different intermittent training modalities (as assessed by accumulated Bla-1 levels and the HR) may have relatively similar muscle mean peripheral O2 responses.

Figure 5. Muscle oxygen saturation (SmO2) kinetics in latissimus dorsi, pectoralis major, and vastus lateralis and heart rate (HR) response during RT protocol; blood lactate (Bla) concentration 3 min after the end of the protocol.

Figure 6. Muscle oxygen saturation (SmO2) kinetics in latissimus dorsi, pectoralis major, and vastus lateralis and heart rate (HR) response during SIT protocol; blood lactate (Bla) concentration 3 min after the end of the protocol.
O 2 Responses in Different Muscles
Information about simultaneous oxygenation in different muscles provides a potential understanding of internal load. Paquette and Bieuzen [30] aimed to understand muscle oxygenation in more than one active muscle and suggested that the maximum O2 extraction is an independent and better performance predictor than the VO2max in sprint canoeing and kayaking. Thus, our main results on muscle oxygenation during the RT show differences between SmO2 in the LD, PM, and VL and between the tHb in the VL, PM, and LD during all workout intervals (Figure 2). This could suggest higher recruitment of the PM during ergometer paddling than of the LD, and especially than of the VL, in the applied intermittent training sessions. The ∆ tHb was lower in the VL than in the PM and LD across all the training protocols, suggesting a decrease in the leg muscle's blood volume (Table 4). This is in line with previous studies that showed a higher energy requirement of the fatigued muscle per unit of external work performed than the non-fatigued muscle [52]. The deoxygenation of the LD in the SIT was higher than during the other protocols, which was confirmed by a previous electromyography study on different muscle activation levels during kayak paddling, which showed that the LD muscle is highly active during the draw phase of kayaking [52]. However, an increase in different muscle activation levels during different training protocols, which will likely produce an increase in O2 extraction, may be associated with the technique required to cover the distances of different intensities. The tHb was lower in the VL (12.4 ± 0.1) than in the PM (13.2 ± 0.1) and LD (13.3 ± 0.1), suggesting a decrease in the muscle blood volume in lower body muscles. The drop in O2 saturation in the less active muscles is explained by the sympathetic outflow induced by exercise, promoting vasoconstriction in this tissue and, consequently, a redirection of the blood flow to the more active muscles [53]. A similar explanation for decreased muscle oxygenation in the non-exercising limb has previously been applied to graded leg cycling exercise, using ultrasound and NIRS methods [54]. At the same time, we did not observe any differences in the tHb between the RT, IT, and SIT in the LD during the exercise bouts or in the VL during the rest bouts, which should be considered in future studies.
Despite some limitations of the NIRS technique and its technology [55], this study was conducted in a real training scenario during the preparatory period for world-class kayak competition. Our study was limited to one participant in order to examine the individual response to single kayak training sessions. Intermittent training is associated with aerobic and anaerobic metabolism; therefore, for practical reasons, it was not possible to invasively determine the accumulation of Bla after each exercise bout, and this level was instead measured only at the end of the training. However, previous studies of elite kayakers have shown [56] that the mean lactate threshold occurred at a blood lactate concentration of 2.7 mmol·L−1, an HR of 170 beats·min−1, and a VO2 of 44.2 mL·kg−1·min−1. The lactate threshold presented corresponded to 89.6% of the maximum heart rate and 82.4% of the VO2 peak. This shows that the characteristics of our subject are close to these indicators. Therefore, the relationships between oxygen kinetics and anaerobic metabolism should be further examined in experimental training studies.
Conclusions
The current results suggest that observations of intermittent exercise performance and the significant changes in the peripheral effect of muscle oxygenation in response to training stimuli are internal predictors of the aerobic metabolism intensity related to work, relief, and recovery intensity. Differences in muscle oxygenation suggest differences in muscle recruitment between the PM, LD, and VL during different exercises; however, this area is still poorly understood and requires further research. To our knowledge, this is the first study that shows the significant contribution of the PM muscle to individual performance in world-class kayakers following different intermittent kayak training modalities. In addition to HR, blood lactate, and VO2 measurements, wearable NIRS technology is, therefore, a significant tool for monitoring muscle oxidative metabolism during different training modalities.
|
v3-fos-license
|
2019-06-01T13:16:02.468Z
|
2018-07-01T00:00:00.000
|
171903950
|
{
"extfieldsofstudy": [
"Philosophy"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/trans/v41n3/0101-3173-trans-41-03-0039.pdf",
"pdf_hash": "16f192ce3dc4024ebfdddad8fc50ea1e6b4b6dbe",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3043",
"s2fieldsofstudy": [
"Philosophy"
],
"sha1": "66359330f151b254f0d4da7f71cc45d0983cbd4f",
"year": 2018
}
|
pes2o/s2orc
|
Brentano's theory of judgment (1889): A critique of Aristotle's correspondence theory of truth
The purpose of this paper is to discuss the concept of truth formulated by Franz Brentano in 1889. As a textual basis, I take Brentano’s communication, presented to the philosophical community of Vienna in March 1889, entitled “On the concept of truth” (Über den Begriff der Wahrheit), and I provide a systematic exposition of Brentano’s analysis of the problems surrounding the interpretation of the Aristotelian concept of truth as correspondence. My analysis explains how Brentano reinterpreted the Aristotelian concept of truth as correspondence within the conceptual framework of his descriptive psychology.
the results of some analyses that I have recently published in two papers, which presented what Brentano defined as a conceptual framework (Gedankenkreise) of his descriptive psychology. In one of these papers (BRITO, 2012a), I analyzed some of Chisholm's considerations about changes present in the Brentanian definition of psychical phenomena, as presented in his works from 1889-1891, which comprise the book Descriptive psychology. In addition, I introduced Chisholm's thesis, which points out changes in the 'doctrine of intentional in-existence of an object', as presented in the book Psychology from an empirical standpoint. More specifically, I described how Chisholm found these changes in the Brentanian description of psychical phenomena and attributed them to the abandonment of the Thomistic-Aristotelian ontology that underlies the doctrine of intentional in-existence of an object. Chisholm's analysis also presents a new definition of psychical phenomena, as formulated by Brentano in the work Descriptive psychology. In order to demonstrate this new definition, I highlighted how the new epistemological foundation pointed out by Chisholm's thesis describes the intentional relation of all psychical acts. Finally, I highlighted the virtue of Chisholm's thesis in its interpretation of the reception of Descartes in the work Descriptive psychology and its new classification of psychical phenomena.
My second paper (BRITO, 2012b) analyzed the manuscript Psychognosy (Psychognosie); this is the title of the readings presented by Franz Brentano at the University of Vienna in 1890. Specifically, I addressed the new description of intentional psychical acts, as explained by Brentano, based on the analysis of consciousness developed in this manuscript. Thus, my research showed how Brentano described the unity of consciousness as the whole, distinguished in its parts, which was the object of Psychognosy. In addition, the fundamental point of my research demonstrated that, by assuming the interpretation introduced by Chisholm (1969), the new description of psychical acts (as a relation between the parts of consciousness) introduces some fundamental changes to the Brentanian theory of intentionality. It describes two distinguishable types of intentional relations which characterize, on the one hand, the intentional acts of presentation and, on the other hand, the intentional acts of judgment.
Specifically, my analysis showed that Brentano (1995, p. 27) describes presentation as the class of "the parts of intentional pairs of correlates" and, based on this description, the presentation is an intentional act directed toward its respective correlate, characterized as an immanent object. Moreover, I showed that judgment is described as the class of "merely distinctional parts of the psychical Diploseenergie [primary and secondary psychical relation]" (BRENTANO, 1995, p. 27) and, based on this description, the judgment is an intentional act directed toward a presentation. Thus, I highlighted the fact that judgment is directed toward presentation as a whole and presentation is directed toward the immanent object. Therefore, as psychical Diploseenergie, presentation (as a whole-and not the immanent object of presentation) is the primary object of judgment, and judgment itself is the secondary object.
Therefore, the description of intentional activity of the act of judging as the class of merely distinctional parts of the primary and secondary psychical relation (Diploseenergie) is the Brentanian fundamental assumption for the analysis of the concept of truth as correspondence. Thus, I move onto the textual analysis without losing sight of this assumption.
Franz Brentano's concept oF truth and related proBlems
In the work On the concept of truth (Über den Begriff der Wahrheit), Brentano's analysis returns to the classic Aristotelian definition of truth understood as adaequatio rei et intellectus. However, opposed to the traditional interpretation of the Aristotelian concept of truth as correspondence, this analysis is guided by definitions, distinctions, and classifications made by Brentano himself in his doctoral thesis, in which he developed a theory about the multiple meanings of being, according to Aristotle. Thus, as he had done in 1874 based on Aristotelian assumptions, Brentano consistently sustained, in 1889, that true and false (in themselves) could be found in all judgments, and could be deemed either affirmative or negative. Brentano's analysis (1975, p. 22-23) was timely and he directly pointed to the main Aristotelian definition of truth by mentioning and interpreting the following passage of Metaphysics: The preceding investigation has shown that Aristotle uses the words "true" and "false" in several senses; hence it will now be important to determine in which of these meanings it is employed when he deals with being in the sense of being true and non-being in the sense of being false. It does not seem difficult to decide this question since, in Met. VI. 428 Aristotle explains himself with a clarity that leaves nothing to be desired by saying that the on hos alethes and the me on hos pseudos occur only in judgments, either affirmative or negative. "Being as the true and non-being as the false are found in combination and separation, and both together in the division into contradictories, since the truth has affirmation when there is combination, and negation when there is separation, while the false in each case has the contradictory opposite.... For the true and the false are not in things...but in the understanding, and not even in the understanding where simple concepts are concerned." Obviously, it is the judgment which is here called true and false, hence to be or not to be.
Formulating his own meaning as he interpreted Aristotle's theory, Brentano used the delimitation of judgment as a place of truth and falsity as a touchstone for his description of judgment as a psychical act of attribution of true and false. As we shall see, this description was characterized by the fact that judgment, analyzed from the perspective of Brentano's Descriptive psychology from 1889, had a specific structure. In addition, this structure consisted of existential predication of a presentation (act of presenting anything), be it simple [(A) is] or compound [(A and B) is]. Thus, Brentano (1975, p. 23) explained and exemplified the act of judgment as follows: Judgment itself is the subject to which being belongs as a predicate. Hence the being of which he here speaks is not the copula which connects subject and predicate in the sentence itself, especially since a negative judgment, too, is said to have being, and an affirmative one non-being. Rather we are concerned with a being which is predicated of the entire, fully articulated judgment. This may be clarified through an example. Let us suppose somebody wanted to demonstrate to someone else that the sum of the angles in a triangle is equal to two right angles, and that he requires as a starting point of the proof the assumption that the exterior angles are equal to the opposing interior angles. The question now is whether this is or is not [the case], i.e., is it true or is it false? It is! i.e., it is true.
It is worth mentioning that Brentano developed his analysis of the Aristotelian concept of truth by taking as a basis the foundations of his description of psychical phenomena to judge, defined in 1889 as merely distinctional parts of the psychical Diploseenergie [primary and secondary psychical relation]. In this context, judgment was described as a psychical act intentionally directed toward presentation and, in turn, presentation was described as an intentional psychical act directed toward an immanent object. Because of these theoretical orientations, Brentano held that the correct definition of truth could be established by analyzing the Aristotelian answer to the question: "When is a judgement false and when is it true?" As described in the following quote, Aristotle's answer to that question demarcates the starting point of the Brentanian analysis: His answer is this: a judgment is true if the one who makes the judgment is related to things in a way which corresponds to them, and a judgment is false if the one who makes the judgment is related to things in a way which is contrary to them. "He who thinks the separated to be separated and the combined to be combined has the truth, while he whose thought is in a state contrary to that of the objects is in error." (Met. IX, 10, 1051b3). And so it was that truth was explicated as being a kind of agreement or correspondence obtaining between things and judgment. (BRENTANO, 2009, p. 4) Having thusly explained the Aristotelian definition, Brentano recognized the need to dispel the problems involved in the notion of "correspondence". Let us consider the first such problem.
Brentano agreed that it was necessary to clarify the Aristotelian way of understanding the connection of what is combined and the separation of what is separated. In other words, he said that this was explicit in the following statements by Aristotle: Aristotle states in the De Interpretatione that judgment is a combination of thoughts or concepts (συμπλοκή τῶν νόηματων); that it is a synthesis (σύνϑεσις). He says that the synthesis consists either in taking one thing to be combined with another, thus forming a unity, or else in taking one thing to be separated or cut off from another. One judges truly when one takes as combined things that really are combined, or when one takes as separated things that really are cut off from one another. One judges falsely, on the other hand, if one judges in a way that is contrary to the way in which the things are related. (BRENTANO, 2009, p. 11) According to the Brentanian analysis, this definition of truth incurred ambiguity because it supported the interpretation proposed by "[...] those who think there is a relation of identity, or of sameness, or of similarity, between a true thought and a thing" (BRENTANO, 2009, p. 14).
Guided by the theory of judgment based on the philosophy of mind, developed by 1889, the Brentanian analysis deemed as completely false the proposition that truth is the correspondence of judgment and things. Brentano (2009, p. 13-14) asserted that "it must be given an interpretation quite different from the one offered by those who think there is a relation of identity, or of sameness, or of similarity, between a true thought and a thing." Indeed, he intended to support the thesis that the truth is the correlation between judgment and anything, but that anything is not a real thing. Thus, by means of a disjunctive syllogism, the classical contradictions were shown to involve the possibility of agreement among judgments and real things. Such contradictions involved both cases of negative judgments and cases of affirmative judgments. In order to demonstrate the problem, I will discuss, separately, both of these classes of impossibilities which Brentano sought to eliminate from his theory of truth. I will start with the former.
As described in the following quote, Brentano exemplified the impossibility of correspondence between negative judgments and things based on the fact that what should be the correspondent of true negative judgment does not exist as a real thing-or, in his terms, is an existing non-real.
The difficulty is especially clear in the case of simple denial. If the truth of "There is no dragon" were to reside in a correspondence between my judgment and an object, what would the object be? Certainly not the dragon, since there isn't any dragon. Nor is there any other real thing which could count as the corresponding reality. (BRENTANO, 2009, p. 13) As Brentano argues, the foregoing argument proposes a refutation of the traditional interpretation of the notion of correspondence of a judgment to something real, based on the fact that true negative judgments are such, regardless of the reality of what is being affirmed. This means that Brentano is already analyzing the correspondence of negative judgments, based on the criteria of his descriptive psychology. In other words, he is simply assuming the reality of the act of presenting anything, rather than the reality of what is presented (whose existence can be real or non-real). As highlighted in the citation that follows, the same is true for the negative judgment of existence.
A similar situation holds when, instead of denying a thing simply, the judgment denies it only as being a real determination of some other object. Suppose I say, "Some man is not black". What is required for the truth of the statement is, not that there is black separated from the man, but rather that on the man there is an absence or privation of black. This absence, this non-black, is clearly not an object; thus again there is no object given in reality which corresponds to my judgment. It is quite obvious, therefore, that in the case of every true negative judgment-and unmistakably so when the judgment is simple-the correspondence which is supposed to hold between true judgments and reality is not to be found. (BRENTANO, 2009, p. 13) Having discussed the first impossibility of correspondence, I will move on to the second class of contradiction which involves the impossibility of correspondence between judgments and real things. Also, according to Brentano, in addition to true negative judgments, true affirmative judgments also explain the impossibility of correspondence between a judgment and a real thing. As described in the following quotation, this impossibility becomes evident when anything that should be the correspondent of this true judgment does not necessarily exist as a real thing; specifically, when it is a collection of things, parts of it, limits of such a thing, anything in the future or in the past, etc. Therefore, due to this non-existence of things, which necessarily correspond to the affirmative true judgments, Brentano (2009, p. 13) concluded that the traditional interpretation of the concept of adaequatio rei et intellectus was completely annihilated. Let us consider his argument.
The other case, which seems to lead to a similar result, may be seen by noting the area in which the affirmative function is exercised. We find, of course, that the affirmative judgment often does apply to things; but we also find-I shall make the point clear with examples-that it often applies to objects to which the word "thing" should not be applied at all. Now whenever a true affirmative judgment does apply to a thing-whether the judgment be one which simply accepts or affirms the thing or one which attributes to it some further determination-we can indicate a correspondence between the judgment and the thing. But how are we to do this when the judgment does not apply to a thing? A true affirmative judgment may, of course, apply to a single thing. But it may also apply to a collection of things, or to a part of a thing, or to the limit or boundary of a thing, and the like-all these latter being objects which are not themselves things. Or, if there were someone who ventured to say of such objects that they are really things, would he want to say the same of an object that I know to have perished a long time ago, or to exist in the distant future?
Here we are not dealing with any thing that exists external to me. And still more! What if I affirm the absence or the lack of a thing? Will it then be said that this absence, this lack of a thing, is itself a thing? Or if I were to say that there is a certain impossibility, or that there are certain eternal truths (the laws of mathematics, for example), would it then be supposed that there are eternal things, perhaps similar to Platonic ideas, which exist in, or outside of, the world? Certainly not! The whole idea of the adaequatio rei et intellectus seems to go completely to pieces.
Based on the exposition of such impossibilities, the Brentanian analysis inferred the need for a new interpretation of what would become known as the Aristotelian conception of concordance between judgment and any thing. Having presented the Brentanian point of departure, I will now return to the exposition of the philosophical problem of truth, to which Brentano presents a solution within his philosophy of mind.
The Cartesian orientation toward dissolving misconceptions concerning the concept of truth
As explained by the Brentanian analysis, the guiding philosophical problem of the concept of truth was introduced into the history of philosophy by the sophist Gorgias. Taking this historical-philosophical fact into account, Brentano's argument considered the classic problematization, as presented by Gorgias, to be a straw man. In his analysis of the pre-Socratic and sophistic arguments, Gorgias was the first to point out the problematic consequences of the notion of truth as correspondence. Brentano (2009, p. 5) summarizes the criticism of Gorgias of the notion of truth as correspondence as follows: We have Gorgias denying that anything real can be known, and then adding that, even if there were anything that could be known, it would be impossible to communicate the knowledge from one person to another. There is nothing which corresponds completely to anything other than itself. What is external to me is not in me, and what is and remains in me does not pass over into anyone else. Thus truth, as well as the communication of truth, is impossible. If any of our thoughts can be said to be true, then, according to Gorgias, everything else can be said to be true. For every thought is identical with itself as well as different from all other thoughts. But that every thought should be called true, even when I think of a chariot race on the sea, is an absurdity.
There is no doubt that the thesis of Gorgias needs a more rigorous analysis, because it characterizes the fundamental problem of the theory of knowledge. However, Brentano's purpose was to describe how the problem of truth posed by Gorgias could be dissolved by the philosophy of mind. Thus, Brentano's question is: how? The answer is by the application of the criteria of analysis which were found in the description of the relation between the parts and the whole of consciousness; in other words, through the relations that support the theoretical foundations of his descriptive psychology.
The Brentanian analysis explains that the problem raised by the sophist Gorgias "[...] stems from a failure to recognize the distinction, which Descartes had described as the distinction between formal and objective reality" (BRENTANO, 2009, p. 10). However, with regard to this Cartesian solution, Brentano (2009, p. 10) said that "[it] had been brought fully to light long before by Aristotle, who used it in overcoming the absurdities and sophistries of Parmenides, Gorgias, Protagoras, and others". As mentioned in the following quote, the solution to the problem is to distinguish, in a psychical act of belief, the constituents of formal reality and the elements of objective reality: If I believe something, then this belief is "formally" in me. When I later recall the belief, then, according to Descartes' way of speaking, the belief is "objectively" in me. In each case the same particular act of belief is involved; but in the one case it is my act itself and in the other it is only the immanent object of my remembering. (BRENTANO, 2009, p. 10) Brentano's view expressed in the quote above assumes the following interpretations in the contexts of formal reality and objective reality: in the context of formal reality, the psychical act of believing formally consists of belief. In this case, the belief would be the psychical act in its full operation (or, in Brentano's terms, a judgment). Thus characterized as judgment, the psychical act of believing presupposes a psychical act of presenting anything. The following propositions illustrate Brentano's analysis. The proposition "I think I hear that sound" should be reduced to the psychical form, "It is true, I hear that sound." Also, these two modes are described as existential predications of the type [(A) is] because "A" describes "I hear that sound" and "is" describes the act of judgment that assigns the affirmative mode of existence to proposition "A". Similarly, the proposition "I do not think I hear that sound" should be reduced to the psychical form, "It is not true, I hear that sound." These two modes are also described as existential predications of the type [(A) is not] because "A" describes "I hear that sound" and "is not" describes the act of judgment that assigns the negative mode of existence to proposition "A".
In the context of objective reality, the psychic act of remembering objectively consists of belief. The question is, how? In this case, the belief is the immanent object to the psychical act of remembering, characterized as a presentation. I will draw on previous proposals to illustrate the point analyzed by Brentano. Taken as basic psychical acts, the act of hearing anything and the act of remembering anything are presentations; that is, they are acts of presenting immanent object correlates. In the examples presented, the correlate of the act of listening is "the sound heard" and the correlate of the act of remembering is "the act of having believed I heard something". Therefore, to Brentano, an act of the second type (judgment as belief ) came to be conceived as a distinguishable part of an act of type one (memory, as presenting "a belief in anything"). Consider the details of that distinction when applied to the criteria of analysis found in the description of relation between the parties and the whole of consciousness; in other words, when applied to the relations that support the theoretical foundations of his descriptive psychology.
In the case of the formal reality of the belief, the act of judging positively (to believe) includes a distinction between the primary object and secondary object. On the one hand, the act of presenting as the primary object is directed to the immanent object; the latter constitutes the presented correlate. On the other hand, the secondary object involves the affirmative act of judging (to believe) by assigning reality (the being-real) to the primary object (act of presenting the immanent object). However, in the case of objective reality of belief, the act of remembering anything is simply an act of presenting and, therefore, is directed toward a presented object. Belief is, thus, this immanent presented object.
If I apply the distinctions proposed by Brentano's criteria of analysis, I can say that, as formal reality, belief is the secondary objective activity of the act of judging; as objective reality, belief is the correlate in the act of remembering, that is, simply the immanent object given in that intentional relation.
Brentano supports his thesis about the truth as correspondence in two ways: a) his criteria for description of the relation between parts and the whole of consciousness; b) a distinction between the two Cartesian modes of psychical activity (formal reality and objective reality). Brentano's thesis claims that only the description of formal reality could explain, in an evident way, correspondence which characterizes the knowledge of activity. Judgments, structured by primary and secondary psychical relations (Diploseenergie), rather than presentations constituted by objective realities, explain the notion of truth as correspondence in the Brentanian theory of knowledge. According to Brentano, that is the main argument for rejecting the notion of truth in the class of presentations. Thus, there is no way to conceive presentations as knowledge.
It should be noted that this distinction, structured by primary and secondary psychical relations (Diploseenergie), refers not only to the psychical activity of the second class (judgments), but also to the psychical activities of the third class (feelings of love or hate). As Brentano (2009, p. 10) points out in the following quote, this should be the locus of the whole theory of knowledge and even the theory of moral knowledge: …similarly for every other mental function-volition, desire, aversion, and the like. Every mental act, in itself given formally, has its immanent object which, in Descartes' terms, is given objectively. To avoid misunderstandings, we might express this better by saying that the immanent object is given intentionally. It is obvious that no contradiction is involved in saying that something is in me intentionally but not formally, or vice versa, a fact that can be illustrated by the example of remembering, and by thousands of others. A mistake on this point would be a relapse into the crudest stages of the development of the theory of knowledge.
The foregoing arguments suffice to allow me to move on to present the psychical structure used by Brentano to interpret the Aristotelian notion of truth as correspondence.
Brentano's interpretation: the evidence of judgment and the concept of truth
Let us move on, then, to Brentano's interpretation, taken in light of the criteria of analysis of the work Descriptive psychology. Here is what he said: Following Aristotle's statement that a judgment is true if it takes as combined what is combined, and so on, we can say: a judgment is true if it asserts of some object that is, that the object is, or if it asserts of some object that is not, that the object is not-and a judgment is false if it contradicts that which is, or that which is not. (BRENTANO, 2009, p. 14) The above quote explains the relation between truth and being. Before I address this point, it is interesting to note the assumptions of the Brentanian analysis. Brentano supports the Aristotelian theory that the true and the false, taken in the proper sense, are in judgment; however, according to his analysis, this means that the judgment is the existential predication of presentation, be it simple [(A) is] or compound [(A and B) is]. Therefore, the Brentanian analysis needs to define the epistemological status of this mode of being characterized by existential predication.
Brentano explored the limits of his own doctoral thesis and took advantage of the theory of multiple meanings, which refers to being able to distinguish between reality and existence. Thus, in 1889, the constituent parts of a psychical act of judgment were also distinguished on the basis of multiple senses of being, which characterize the most basic distinguishable components. On the one hand, the correlate of the act in the primary psychical relation exists or does not exist. In this case, as a basic intentional relation, the presentation is referred to as the correlate which is anything existing but also non-existing. On the other hand, the constituent act of secondary psychical relation is the mode of reality assignment or reality denial. In this case, as a psychical activity, judgment is real in the positive and negative senses. This reality (Wirklichkeit) is the activity of consciousness itself. As a complex intentional relation (Diploseenergie), judgment addresses (and value) positively or negatively the correlate presentation of anything existing or non-existing. Although it is a relatively simple shift, Brentano (2009, p. 16) himself acknowledged that the philosophical tradition did not realize the advantages of this distinction: And finally we shall not be tempted, as so many have been, to confuse the concept of a thing with the concept of an existent. It is a few thousand years since Aristotle investigated the manifold senses of being; it is regrettable that even today there are so many who have not learned from his investigation. This epistemological reorientation assumed by Brentano inside Aristotelian and Cartesian thought has another implication. It establishes the criteria that support the four fundamental consequences of the definition of truth as correspondence. Let us see how this happened.
First, as stated in the following quotation, that criterion allowed Brentano to recognize the field covered by judgment as unlimited. In other words, every judgment is based on a presentation which is characterized as a primary psychical relation, because all judgment is characterized as a secondary psychical relation. Thus, any presentation could be part of the judgment field of activity. Now, this meant that one could judge as true or false a presentation which assumed anything (existing) to be a correlate of the act of presenting, but one could also judge as true or false a presentation that assumed "nonexistence" as a correlate of the act of presenting. Why was this possible in 1889? It was possible because the psychical phenomenon of judgment was described as an assignment of reality (being real or being non-real) to the psychical activity of presenting. Thus, Brentano (2009, p. 14) said: The area to which our judgments may be applied is unlimited, and the content of judgment may be as we like. But our judgment always pertains to some entity or other. And what does "entity" signify? It is a term that can be applied to God or to the world, to anything whatever, and to any non-thing.
Secondly, the mode of intentional reference of judgment, while directed toward presentation, explains one bipolarized structure: affirmative and negative. This meant, as Brentano (2009, p. 14) expressed, that "[...] this limitless area can be divided into two parts": affirmative judgment and negative judgment. Moreover, this " [...] opposition between the affirmative and the negative judgment implies, as we know, that in any given case one, and only one, of the two modes of judging is appropriate and that the other is inappropriate" (BRENTANO, 2009, p. 14). Also, according to Brentano (2009, p.14), " [...] this fact is ordinarily expressed by saying that, of two contradictory judgments, one and only one is true and the other false".
The third factor concerns the descriptions of the affirmative domain of judgment, while assigning (being) real to the presentation, and the descriptions of the negative domain, while assigning (being) non-real to the presentation. As part of the psychical whole, the (being) real constituent of the activity of judging is related in some way to the existing; i.e., the correlate of the constituent act of presenting. This relation, as emphasized by Brentano (2009, p. 14) below, is the first indication of harmony or correspondence between the parts of the psychical act: Let us say that the area to which affirmative judgement is appropriate is the area of the existent, a concept to be sharply distinguished from that of thing; and that the area to which the negative judgement is appropriate is the area of the non-existent.
Brentano then took what was explained in these three points to define the notion of truth as correspondence or agreement. His analysis found that the classic Aristotelian definition should recognize that the truth is the correct assignment of reality (being real) and unreality (being non-real). In this case, the contribution of the philosophy of mind is the possibility of describing the correctness of this act of judging.
The truth as correspondence
The truth is a judgment, as Brentano reminds us constantly in his references to Aristotelian arguments. This argument must be considered, however, in the following sense. The psychical activity of judgment is the intentional relation structured as primary and secondary psychical relations (Diploseenergie). Thus, the psychical act of judging is the secondary activity which ascribes reality (being real) or unreality (being non-real) to the primary psychical relation, called the act of presenting an immanent object (existing or non-existent); i.e., the presentation. Therefore, this act makes the fundamental point of the theory of knowledge explicit; specifically, the fact that the correspondence and agreement could not be identical or similar, but should be conceived as harmony, relevance, or correspondence. This harmony would occur between the activity which assigns reality or unreality (i.e., secondary psychical relation directed toward the primary psychical relation) and the activity of presenting the existing or the non-existing (primary psychical relation, which is called the act of presenting an immanent object, be it anything or "any non-thing"). Let us consider these two fundamental characteristics which make the notion of truth explicit.
First, harmony or correspondence of a true judgment, described by Brentano as fairness, can be explained from both affirmative and negative judgment domains. In other words, every description of judgment in the affirmative domain (where the valuation of truth occurs) shows that the truth is explicit in two areas. On the one hand, the truth is explicit in the correct assignment of reality (being real) to the presentation (when it is a reference to anything-the existent). On the other hand, the truth is explicit in the correct assignment of unreality (being non-real) to the presentation (when it is a reference to "any non-thing"-the non-existent). In addition, and in a contrary way, any description of the negative domain of judgment (which occurs with an evaluation of false) shows that the false is also explicit in two areas. On the one hand, the false is explicit in the incorrect attribution of reality (being real) to the presentation (when it is a reference to "any non-thing"-the non-existent). On the other hand, the false is explicit in the incorrect assignment of unreality (being non-real) to the presentation (when it is a reference to anything-the existent). Thus, the theoretical efforts of Brentano (2009, p. 14) became understandable, as described in the next quote, when he redefined the notion of correspondence and harmony or agreement: And this is all there is to the correspondence of true judgment and object about which we have heard so much. To correspond does not mean to be the same or to be similar; but it does mean to be adequate, to fit, to be in agreement with, to be in harmony with, or whatever equivalent expressions one may choose to apply. In these terms, therefore, Brentano has defined the notion of truth as the notion originating in the correct psychical attribution of reality (being real) and unreality (being non-real). This can be described as the correctness between two real elements (zwei wirklichen Elemente): a psychical element (the judgment activity) and also another psychical element (the presentation activity).
Conclusion
As part of an unfinished project in 1889, Brentano's definition of truth has taken a central position in his philosophy of mind. Alongside the problems arising from the basis of the notion of intentional relations, problems related to the truth of evident judgments and to evidence of correct preference assumed central roles in his analysis around 1889.
From 1905 onwards, Brentano proposed a radical overhaul of his project of the philosophy of mind. This radical turn, called reism, is characterized by the phase in which Brentano described the concrete or individual mode as the exclusive mode of the existence of things. However, this was not a pure and simple abandonment of his theory of intentional relation, with its implications for the concept of evident truth and the concept of correct moral preference. On the contrary, such a turn underscored the radicalization of his theory of intentional relation in order to ensure evidence of true knowledge and correct moral preference.
Thus, the analysis set out here is a fundamental step in understanding the final phase of Brentanian thought. For this reason, it is also relevant to analyze Brentano's interpretation of the Aristotelian concept of truth as correspondence and its reworking based on the description of psychical activity of judgment, as an original source of knowledge of what is true and false.
|
v3-fos-license
|
2018-04-03T04:43:55.079Z
|
2013-07-15T00:00:00.000
|
36039118
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jidc.org/index.php/journal/article/download/23857389/895",
"pdf_hash": "ac89ff035803014ae99e0ab65c8cafb9cf053b58",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3045",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3bbc9384ba2c26987d80f27a93894f290b43d23f",
"year": 2013
}
|
pes2o/s2orc
|
Chronic hepatitis B prognostic markers other than pre-treatment viral load predicted composite treatment outcome
Introduction: Chronic hepatitis B (CHB) is a globally common infectious disease. Its clinical course is complicated. In Southeast Asia, nucleos(t)ide analogues (NA) are commonly used drugs for CHB treatment. Composite treatment outcome has often been used in CHB clinical practice, but rarely predicted epidemiologically. This study aimed to compare the composite treatment outcome between CHB patients with low and high treatment-naïve viral load, and to identify its predictors. Methodology: This retrospective cohort study followed up 95 CHB patients on NA treatment for a year. Composite treatment outcome was defined as undetectable HBV DNA level, ALT normalization, and HBeAg clearance in the case of HBeAg-positive patients. Multinomial logistic regression analysis was applied to analyze the significant treatment response predictors. Results: Complete composite treatment outcome was achieved by 52% of CHB patients with an initial viral load < 6.5 log10 copies/ml, but by 31% of those with an initial viral load ≥ 6.5 log10 copies/ml. Outcome was predicted by HBeAg negativity (adjusted relative risk ratio, aRRR = 11.1, 95% confidence interval, CI 3-41.3) and ALT normalization within the sixth month of therapy (aRRR = 6.7, CI 1.8-24.9). An elevation of ALT to more than 1.5 times the normal value (40 IU/ml) can lead to an incomplete response on NA therapy (aRRR = 6.2, CI 1.5-26.6). Conclusion: Routine clinical markers other than pre-treatment viral load predicted composite CHB outcome on NA therapy.
Introduction
Hepatitis B is a common infection, with a global prevalence of 350 million chronic hepatitis B (CHB) cases [1,2]. An estimated 100 million hepatitis B carriers live in countries of the WHO-designated South-East Asia Region, where approximately 300,000 deaths are attributed to hepatitis B annually [3].
Southeast Asian countries, such as Thailand and Myanmar, have a high prevalence of chronic hepatitis B infection, with 5% to 7% of the adult population being long-term carriers or CHB patients [2,4].
Compared to the natural evolution of CHB in western countries, the evolution of CHB presents some peculiarities in Asian countries where infection in childhood, and a resultant prolonged immune tolerance phase in infected persons, is common [5]. Unsuppressed, persistent viral replication contributes to a higher incidence of consequent hepatocellular carcinoma and CHB-related deaths [6,7].
There are two main options for anti-hepatitis B treatment: interferon therapy and nucleos(t)ide analogue (NA) therapy [8,9,10,11,12]. The latter is preferred by clinicians because of its easy administration, minimal adverse effects, and the prevalence of the genotype C virus in Asia [10,13]. The hepatitis B clinical course is complicated and the treatment outcome is difficult to predict [14,15,16]. Laboratory tests to monitor hepatitis B prognosis are costly and as yet are not covered by the health insurance system in many Asian developing countries [17]. It would be beneficial to determine the practically useful predictors of CHB treatment outcome for such settings.
The baseline HBV DNA level determines the long-term clinical outcome and CHB-related mortality [18].
It also influences the virological response to antivirals [19]. How treatment-naïve HBV DNA viral load level influences the one-year NA treatment outcome has yet to be studied in a Southeast Asian setting. We hypothesized that the CHB treatment outcome would be different, depending on the initial HBV DNA viral load of treatment-naïve patients. We therefore compared the composite hepatitis B treatment outcome between NA-treated patients with high and low pretreatment viral loads. Moreover, we aimed to determine the predictors of complete CHB treatment success after treating with NA therapy for one year.
Study site
The study was conducted at the Hospital for Tropical Diseases, a tertiary care setting in Bangkok, Thailand.
Ethics approval
This study was approved by the ethics committee of the Faculty of Tropical Medicine, Mahidol University, Thailand (MUTM 2010-044-01). For the purposes of data collection, the director of the hospital permitted the review of the medical records of chronic hepatitis B patients attending and being treated at the Hospital for Tropical Diseases, Bangkok.
Study population
Eligible CHB patients were selected from 2005 to October 2010.
Inclusion and exclusion criteria
Patients who met the following criteria were included in the study:
1. Chronic hepatitis B (CHB) patients diagnosed HBsAg positive for more than six months, or the presence of HBsAg and anti-HBc IgG in the serum, or the presence of HBV DNA in the serum
2. Patients with HBV DNA level ≥ 5 log10 copies/ml or 20,000 IU/ml in HBeAg-positive patients [11]
3. Patients with HBV DNA level ≥ 4 log10 copies/ml or 2,000 IU/ml in HBeAg-negative patients [11]
4. Patients who had a serum alanine aminotransferase (ALT) level 1.5 times or higher than the upper limit of normal (ULN)
5. Treatment-naïve CHB patients receiving nucleos(t)ide analogue therapy for the first time
6. Men and women between 18 and 70 years of age
The following patients were excluded from the study:
1. Patients co-infected with the hepatitis C virus
2. CHB patients who already had hepatocellular carcinoma and acute liver failure before the treatment
3. Patients previously treated with interferon
4. Pregnant women
The study cohort was formed into two groups depending on whether the initial HBV DNA viral load level of the treatment-naïve patients was < or ≥ 6.5 log10 copies/ml. A high initial viral load meant a pretreatment HBV DNA ≥ 6.5 log10 copies/ml, and a low initial viral load meant < 6.5 log10 copies/ml. The follow-up period was one year (12 months).
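For illustration, the short Python sketch below shows how a treatment-naïve patient could be assigned to the low or high initial viral load arm; the function and variable names are hypothetical, and only the 6.5 log10 copies/ml cut-off is taken from the study.

```python
import math

HIGH_VIRAL_LOAD_LOG10 = 6.5  # study cut-off separating the two arms

def initial_viral_load_group(hbv_dna_copies_per_ml: float) -> str:
    """Assign a treatment-naive patient to the low or high pretreatment viral load group."""
    log10_viral_load = math.log10(hbv_dna_copies_per_ml)
    return "high" if log10_viral_load >= HIGH_VIRAL_LOAD_LOG10 else "low"

# Example: 2.0e7 copies/ml is about 7.3 log10 copies/ml, so it falls in the high group
print(initial_viral_load_group(2.0e7))  # -> high
print(initial_viral_load_group(5.0e5))  # -> low (about 5.7 log10 copies/ml)
```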
Study outcome
We defined the composite CHB treatment outcome as including all of the most interesting clinical elements:
1. Undetectable HBV DNA, indicating a successful and sustained suppression of HBV viral replication
2. ALT normalization, indicating minimal liver cell injury
3. HBeAg seroconversion, indicating immunological recovery and less viral replication [8,9,10,11,12]
Data management and recoding of treatment outcome
Complete treatment outcome was defined as an undetectable HBV DNA viral load, ALT normalization, and HBeAg negativity in the case of HBeAg-positive cases. Those cases who showed only an undetectable viral load were classified as "incomplete treatment outcome". Those who did not achieve the undetectable HBV DNA were classified as "lack of treatment outcome". Treatment outcome data was recorded in three categories as shown in Table 1.
Laboratory tests and normal values
In this study, two methods of assessing quantitative HBV DNA viral load measurement were used. The first was the COBAS Amplicor Monitor assay (Roche Diagnostics, Basel, Switzerland), with a detection range of 3x10^2-2x10^5 copies per ml, in the majority of the cases. The second was the Abbott Real Time HBV assay (Abbott Laboratories, Abbott Park, IL, USA), with a detection range of 10-110x10^6 IU/ml (1 IU = 3.41 copies/ml), in 28% of cases. The median value of the minimal HBV DNA detection limit in this study was 317 copies/ml. Thus the undetectable HBV DNA viral load in this study was taken to be an HBV DNA level < 317 copies/ml. This is an acceptable virological treatment outcome measurement according to current guidelines, and previous reports have indicated that patients achieving this level have been free of long-term complications [9,18].
Serum alanine aminotransferase (ALT) level was assessed using a Cobas C501 analyzer (Roche Diagnostics, Basel, Switzerland), and HBsAg, HBeAg, anti HBe antibody and serum alpha fetoprotein (AFP) assays were conducted by using an Elecsys 2010 analyser (Roche Diagnostics, Basel, Switzerland) at the Hospital for Tropical Disease clinical laboratory.
ALT normalization was defined as an ALT level less than 40 IU/ml. Disappearance of HBeAg after treatment was defined as HBeAg negativity.
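The three outcome categories described above (and listed in Table 1) can be recoded from these laboratory definitions as in the following Python sketch; the function signature and argument names are assumptions made for illustration, while the 317 copies/ml detection limit and the 40 IU/ml ALT threshold are the values reported in this study.

```python
UNDETECTABLE_LIMIT_COPIES_ML = 317  # median minimal HBV DNA detection limit in this study
ALT_UPPER_LIMIT_NORMAL = 40         # IU/ml

def composite_outcome(hbv_dna_copies_per_ml: float,
                      alt_iu_per_ml: float,
                      hbeag_positive_at_baseline: bool,
                      hbeag_positive_at_month_12: bool) -> str:
    """Recode the one-year response into the three study categories."""
    if hbv_dna_copies_per_ml >= UNDETECTABLE_LIMIT_COPIES_ML:
        return "lack of treatment outcome"          # HBV DNA still detectable
    alt_normalized = alt_iu_per_ml < ALT_UPPER_LIMIT_NORMAL
    # HBeAg clearance is only required for patients who were HBeAg positive at baseline
    hbeag_requirement_met = (not hbeag_positive_at_baseline) or (not hbeag_positive_at_month_12)
    if alt_normalized and hbeag_requirement_met:
        return "complete treatment outcome"
    return "incomplete treatment outcome"           # undetectable DNA only

# Example: undetectable HBV DNA but ALT still 55 IU/ml in an HBeAg-negative patient
print(composite_outcome(100, 55, False, False))  # -> incomplete treatment outcome
```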
Statistical analysis
Stata version 11 (StataCorp LP, College Station, TX, USA) was used to analyze the data. The continuous variables were summarized as mean and standard deviation (SD), or median and interquartile range (IQR), based on the distribution of data. Categorical data was summarized as percentages. At the 12-month follow-up, the composite treatment outcome was compared between the two groups in the study. The non-parametric test for trend across ordered groups in Stata was used to compare the CHB treatment outcome, divided into three categories.
Predictor variables such as pre-treatment viral load, pretreatment ALT, HBeAg positivity, serum alpha fetoprotein, age, gender, and drugs used were analyzed using univariate multinomial regression, to determine the significant predictors of composite clinical outcomes.
A P-value less than 0.2 and clinical logic were used to discriminate the variables to be included in the final multivariate model. Crude relative risk ratio (RRR) and adjusted relative risk ratio (aRRR) with a 95% confidence interval (CI) were produced by multinomial regression analysis using Stata version 11. Significance was decided by a P-value less than 0.05 and a 95% confidence interval.
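The same type of model can be sketched in open-source tools. The snippet below is only an illustration with simulated data and hypothetical column names (the authors used Stata 11); it shows how exponentiated multinomial logistic regression coefficients yield relative risk ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 95  # cohort size in this study

# Simulated analysis dataset; outcome coded 0 = lack, 1 = incomplete, 2 = complete
df = pd.DataFrame({
    "outcome": rng.integers(0, 3, n),
    "hbeag_negative": rng.integers(0, 2, n),
    "alt_over_1_5_uln": rng.integers(0, 2, n),
    "age_45_or_older": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["hbeag_negative", "alt_over_1_5_uln", "age_45_or_older", "male"]])
fit = sm.MNLogit(df["outcome"], X).fit(disp=False)

# Exponentiated coefficients are relative risk ratios (one column per non-reference outcome);
# confidence intervals can be exponentiated from fit.conf_int() in the same way.
print(np.exp(fit.params))
```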
Results
This cohort study was composed entirely of Thai patients. The average age of the patients was 44.5 years, with male patients making up 72.63% of the cohort. The average pretreatment HBV DNA viral load was 5.22 (6.38-7.52) log10 copies/ml and the average ALT was 36 (64-158) IU/ml (Table 2). All patients were treatment-naïve CHB patients commencing NA treatment. Almost half of the patients (41.05%) received lamivudine.
We constructed the cohort based on whether the treatment-naïve HBV DNA viral load was < or ≥ 6.5 log10 copies/ml (Table 2). Out of 95 patients, 47.37% had an HBV DNA viral load ≥ 6.5 log10 copies/ml before starting nucleos(t)ide analogue therapy. Table 3 shows the pretreatment characteristics of the two groups of patients. Age distribution, gender, and proportion of patients with high ALT (above 1.5 times ULN) were similar between the two groups; however, the proportion of HBeAg-negative CHB was higher in the low initial viral load group (Table 3). Other prognostic factors were well-balanced with no significant differences between the two groups. The study cohort was followed up until 12 months of nucleos(t)ide analogue therapy.
Treatment outcome after one year of nucleos(t)ide analogues in patients with low (<6.5 log10 copies/ml) and high (≥6.5 log10 copies/ml) pretreatment viral load
After one year of NA treatment, the low initial viral load group obtained 52% complete treatment response, compared to only 31.11% in the high initial viral load group. In addition, 28% in the low initial viral load group did not achieve undetectable viral load, compared to 51.11% in the high initial viral load group (Figure). Thus the CHB treatment outcome was significantly different between the two groups (P for trend = 0.018) (Table 4).
Out of the total study population, 38.95% did not obtain an undetectable HBV DNA level, while 18.95% obtained undetectable HBV DNA but not complete treatment outcome (Table 4). Complete treatment outcome was obtained by 42.11% (40 out of 95 participants) (Table 4). Moreover, significantly better treatment outcome was observed in patients aged 45 years or older, HBeAg-negative CHB patients, and those who obtained ALT normalization after six months of NA therapy.
CHB treatment response predictors by multivariate multinomial logistic regression
Univariate and multivariate multinomial logistic regression analyses were used to determine the significant prognostic indicators for successful CHB treatment outcome.
The final model (Table 5) included pre-treatment viral load, pre-treatment ALT, HBeAg positivity, age and gender.
Multivariate multinomial logistic regression analysis produced two significant predictors for complete treatment outcome: HBeAg-negative CHB (aRRR 11.13, CI 2.99-41.37) and ALT normalization in the sixth month of therapy (aRRR 6.67, CI 1.79-24.87). These are significant predictors for a complete and successful CHB treatment outcome after one year of NA. In addition, a pre-treatment ALT level higher than 1.5 times ULN is significantly associated with an incomplete treatment response, aRRR 6.21, CI 1.45-26.61.
Uniqueness of the study
Hepatitis B clinical research is diverse, with different study outcomes being used as the single endpoint in many studies [20]. In this study, we used a clinically useful, realistic composite treatment outcome as the study endpoint measure. We monitored a cohort of CHB patients with low and high pretreatment viral loads to assess the one-year NA treatment outcome.
Current hepatitis B guidelines concurrently emphasize undetectable HBV DNA as the main treatment outcome [8,9,11,12], as it indicates the sustained elimination or suppression of HBV viral replication. In addition, surrogate outcomes, such as ALT normalization and HBeAg conversion in the case of HBeAg-positive CHB patients, are of practical clinical interest. ALT normalization is a less specific marker for viral replication, but it is important when taking into account minimal liver cell damage and quality of the patient's life [9]. HBeAg clearance and HBeAg conversion are also meaningful turning points after starting treatment of HBeAg-positive CHB patients [9,21]. Moreover, there have been reports of CHB-related complications despite the attainment of undetectable HBV DNA [14,22]. Therefore, it is preferable to interpret the HBV DNA viral load in the context of the ALT level and HBeAg status [20].
Better treatment outcome in low viral load and HBeAg negative hepatitis as predictors of composite outcome
The treatment outcome following a low pretreatment viral load was better than a high pretreatment viral load (Figure). The difference was significant both clinically and statistically. In the second part of our analysis by multivariate multinomial logistic regression, HBeAg-negative hepatitis was the significant predictor for complete treatment outcome (Table 5). These clinical research findings are supported by evidence from a recent viral kinetic study and meta-analysis of viral kinetic studies by Ribeiro and colleagues [23]. The viral clearance rate is significantly faster, and the viral half-life is shorter (13 h vs 25 h), for HBeAg-negative infection as compared to HBeAg-positive infection [23]. In addition, viral productivity is lower in most HBeAg-negative hepatitis patients [24].
In the current cohort, assembled on the basis of the pretreatment viral load, 70% of the low pretreatment viral load patients were HBeAg negative (Table 3). The proportion of composite treatment outcome was shown to be higher among the low initial viral load group using trend analysis. After adjustment by all the covariate prognostic factors, HBeAg-negative hepatitis was shown to be the significant predictor of one-year NA response, aRRR 11.13, CI 2.99-41.37 (Table 5). Therefore, these findings were consistent with each other and supported by viral kinetic reports.
Pretreatment ALT 1.5 times higher than ULN as predictor for incomplete treatment outcome
The ALT level at presentation is of predictive value concerning long-term complications and mortality in the natural history of hepatitis B [25]. In the current study, we found that a pre-treatment ALT level higher than 1.5 times ULN is a significant predictor of incomplete treatment response in NA-treated patients (Table 4). Incomplete treatment outcome in this study refers to those patients who obtained an undetectable HBV DNA level without concomitant ALT normalization, or HBeAg conversion in the case of HBeAg-positive CHB. Rising ALT or flare of ALT in a patient indicates a loss of tolerance in HBeAg-negative or flare in HBeAg-positive CHB [26]. Such patients are likely to achieve undetectable HBV DNA but they might have a sustained ALT elevation or an ongoing immunological response. Similar findings in telbivudine-treated HBeAg-positive patients were reported by Guo-Cai and colleagues in China [27]. In our study, 22% of the patients received telbivudine.
Six-month ALT normalization predicts composite outcome after one year of NA
Assessment of ALT is an affordable laboratory test that can be monitored frequently in a resource-limited setting, and it is less costly than virological markers. Previous literature has pointed out the value of baseline ALT as a prognostic factor [28]. High ALT (> 5 times ULN) is predictive of a combined treatment response in interferon therapy [29]; however, evidence is not yet conclusive for NA therapy [29]. The immunological response and the presumptive phase of the disease are different between interferon candidates and NA candidates. ALT level changes are dynamic and fluctuating throughout the CHB course. ALT changes after starting treatment might be useful for clinical prognosis. We analysed whether a normalized ALT level at the sixth month would predict the composite outcome at one year of NA treatment. A significant association was revealed after adjustment by multivariate multinomial logistic regression (Table 5). Similarly, a previous study reported a significant correlation between ALT normalization at the sixth month and successful virological outcome at one year among NA-treated Thai CHB patients infected with genotype B and C HBV viruses [30]. The current study result confirmed that ALT normalization at six months after commencing NA treatment significantly predicts undetectable HBV DNA and concomitant biochemical remission and immunological recovery (aRRR 6.67, CI 1.79-24.87). This would be a useful prediction guide for NA therapy treatment outcome in practice.
Overall, the one-year CHB treatment outcome on NA can be predicted by the pretreatment ALT level, the HBeAg status at the start of NA therapy, and ALT normalization at six months, while viral load influences the long-term clinical outcome.
Conclusion
Viral load suppression is an important CHB treatment outcome suggested by the guidelines but practically, a composite clinical outcome defines the quality of the patient's life.This investigation highlights the use of routine prognostic markers to predict the composite clinical treatment outcome in a developing setting where NA is widely used to treat CHB.
Figure. CHB treatment outcome after 12 months of NA
Table 1. Study outcomes and categorization of treatment outcome data
Table 2. Baseline characteristics of chronic hepatitis B patients in the study. Abbreviations: CHB, chronic hepatitis B; IQR, interquartile range; ALT, serum alanine aminotransferase; AFP, serum alpha fetoprotein
Table 3. Clinical characteristics, prognostic factors, and type of NA received by chronic hepatitis B patients in the groups formed by low and high pretreatment viral load. Abbreviations: NA, nucleos(t)ide analogues; ALT, alanine aminotransferase; AFP, alpha fetoprotein; ULN, upper limit of normal. Note: # P-value for chi-square test. ALT normal value is <40 IU/ml.
Table 4. Chronic hepatitis B treatment outcome after one year of NA treatment by different prognostic factors
Table 5. Clinical predictors of composite treatment outcomes in chronic hepatitis B after one year of nucleos(t)ide analogue therapy by multivariate multinomial logistic regression analysis. Abbreviations: N, number; Crude RRR, unadjusted relative risk ratio by univariate multinomial logistic regression; aRRR, relative risk ratio produced by the adjusted multinomial logistic regression model; CI, 95% confidence interval; VL (pre), pre-treatment HBV DNA viral load level; HBeAg negativity (pre), HBeAg-negative CHB before treatment; ALT (pre), pre-treatment ALT level; ALT, alanine aminotransferase; AFP, alpha fetoprotein; ULN, upper limit of normal. Note: *P-value for AFP is that of univariate analysis. ALT normal value is <40 IU/ml.
|
v3-fos-license
|
2024-07-10T06:17:14.565Z
|
2024-07-08T00:00:00.000
|
271062642
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "067da1c83e9a1fb277a00e994a14ac287b1b36be",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3046",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "5c610543030f02e05f9f96c944dbacce0d86c4d7",
"year": 2024
}
|
pes2o/s2orc
|
Value-based preoperative assessment in a large academic hospital
Background Value-based healthcare (VBHC) is an approach that focuses on delivering the highest possible value for patients while driving cost efficiency in health services. It emphasizes improving patient outcomes and experiences while optimizing the use of resources, shifting the healthcare system’s focus from the volume of services to the value delivered. Our study assessed the effectiveness of implementing a VBHC-principled, tailored preoperative evaluation in enhancing patient care and outcomes, as well as reducing healthcare costs. Methods We employed a quality improvement, before-and-after approach to assessing the effects of implementing VBHC strategies on the restructuring of the preoperative evaluation clinics at Humanitas Research Hospital. The intervention introduced a VBHC-tailored risk matrix during the postintervention phase (year 2021), and the results were compared with those of the preintervention phase (2019). The primary study outcome was the difference in the number of preoperative tests and visits at baseline and after the VBHC approach. Secondary outcomes were patient outcomes and costs. Results A total of 9722 patients were included: 5242 during 2019 (baseline) and 4,480 during 2021 (VBHC approach). The median age of the population was 63 (IQR 51–72), 23% of patients were classified as ASA 3 and 4, and 26.8% (2,955 cases) were day surgery cases. We found a considerable decrease in the number of preoperative tests ordered for each patient [6.2 (2.5) vs 5.3 (2.6) tests, p < 0.001]. The number of preoperative chest X-ray, electrocardiogram, and cardiac exams decreased significantly with VBHC. The length of the preoperative evaluation was significantly shorter with VBHC [373 (136) vs 290 (157) min, p < 0.001]. Cost analysis demonstrated a significant reduction in costs, while there was no difference in clinical outcomes. Conclusions We demonstrated the feasibility, safety, and cost-effectiveness of a tailored approach for preoperative evaluation. The implementation of VBHC enhanced value, as evidenced by decreased patient time in preoperative evaluation and by a reduction in unnecessary preoperative tests. Supplementary Information The online version contains supplementary material available at 10.1186/s44158-024-00161-7.
Background
Preoperative evaluation is essential for patient safety and involves several members of the multidisciplinary team, including surgeons, anesthesiologists, and other healthcare providers, to identify organ dysfunction and stratify patient risk [1]. It is recognized as a central aspect of perioperative care, even if its effectiveness in terms of measurable healthcare outcomes is debated [2]. In recent years, the healthcare system has faced escalating pressures and expectations, driven by a growing population of frail and elderly surgical patients requiring more intensive perioperative care. This challenge is compounded by financial constraints and rising healthcare costs, which constrain the range of available healthcare options [3]. As a result, even if an effective preoperative assessment is necessary to reduce preventable outcomes during surgical procedures, several concerns are frequently raised about its efficiency, including (i) the overlap and redundancy between surgical staging visits and preoperative assessment visits, (ii) waste and inefficiencies for patients and providers (i.e., multiple hospital accesses for different visits and tests, repeated testing over time) [4], and (iii) the increased workload for healthcare professionals due to inefficient processes [5].
Diverse approaches have been used by healthcare organizations [6]. Value-based healthcare (VBHC) has been proposed as an innovative framework for transforming healthcare systems [7], along with a strategic agenda for moving to a high-value healthcare delivery system [8]. The objective and innovation of VBHC is to establish the value that matters to patients as opposed to expanding service volume [9][10][11][12]. In this view, better health outcomes are reflected by increasing both the health status and patients' and other stakeholders' values.
While there is emerging evidence of the benefits of VBHC [13], few studies have considered VBHC deployment as a means of enhancing preoperative clinics [14]. Our group recently published a report on the application of the VBHC system to a single surgical unit [15], but no studies have examined the effects of preoperative VBHC interventions at the hospital level.
Consequently, the aim of this study was to assess the clinical and organizational impacts of revising the preoperative pathway across multiple units of a large surgical hospital, aligning it with the principles of VBHC.
Methods
We conducted a before-and-after study on quality improvement in Humanitas Research Hospital, an academic hospital in Milan with a large surgical volume. The study received IRB approval No. 44/21, September 2021, from the Humanitas Research Hospital Independent Ethical Committee. Patient informed consent was waived due to the anonymization of the data, gathered from the Electronic Health Record (EHR) system. Inclusion criteria were access to preoperative evaluation clinics in 2019 and 2021, for the units involved in the VBHC initiative. We defined the primary outcome as the absolute variation in the number of preoperative tests performed per patient before the VBHC intervention (year 2019, baseline data) and after the VBHC intervention (year 2021). The year 2020 was used to train the personnel. Therefore, the study lasted 3 years in total.
Secondary study endpoints were identified by observing the standard VBHC 3-tiered hierarchy of outcomes [7]:
i) Clinical outcome: postoperative mortality
ii) Process impacts: ICU admission, request for chest X-rays and cardiac visits, lead time (pathway's total time), and value time (i.e., the time spent in value-added activities); an illustrative calculation sketch is given after the lists below
iii) Sustainability impact: tests/consults requested after preoperative assessment, and surgical cancellation due to inadequate preoperative assessment
Finally, an analysis of the economic resources saved during the initiative was also planned for each VBHC initiative, and the results were divided into a three-level evaluation for analysis:
i) Revenues improvements
ii) Cost reduction
iii) Released resources
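As an illustration of how the process measures in the hierarchy above could be computed, the sketch below derives lead time and value time from per-activity timestamps; the activities, timestamps, and the flag marking an activity as value-added are invented for the example and are not data from the study.

```python
from datetime import datetime

# One hypothetical patient's preoperative pathway: (activity, start, end, value_added)
pathway = [
    ("check-in",             datetime(2021, 3, 1, 8, 0),  datetime(2021, 3, 1, 8, 10),  False),
    ("blood tests and ECG",  datetime(2021, 3, 1, 8, 30), datetime(2021, 3, 1, 8, 50),  True),
    ("waiting",              datetime(2021, 3, 1, 8, 50), datetime(2021, 3, 1, 10, 0),  False),
    ("anesthesiology visit", datetime(2021, 3, 1, 10, 0), datetime(2021, 3, 1, 10, 30), True),
]

# Lead time: total elapsed time from the start of the first to the end of the last activity
lead_time_min = (pathway[-1][2] - pathway[0][1]).total_seconds() / 60

# Value time: minutes spent only in activities flagged as value-added
value_time_min = sum(
    (end - start).total_seconds() / 60
    for _, start, end, value_added in pathway
    if value_added
)

print(f"lead time: {lead_time_min:.0f} min, value time: {value_time_min:.0f} min")
```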
This study was performed in subsequent phases:
1. Baseline phase: prior to the VBHC initiative, the hospital protocol for preoperative risk assessment was based on administering a uniform set of baseline tests across all surgical units, with additional tests according to patient age. Typically, patients would undergo these standardized tests at the beginning of the process before their medical evaluation, after which anesthesiologists or surgeons might request further assessments based on their professional judgment. This often led to redundant evaluations, inefficiencies, and a waste of both time and resources.
2. Development of VBHC Initiative: to address the discrepancy between the need for expert final evaluation and the aim to minimize unnecessary testing, we designed an intervention based on a tailored preoperative approach, and evaluated its efficacy according to VBHC principles.
Guided by a core team, we established 11 multidisciplinary teams, or Integrated Practice Units, composed of anaesthesiologists, surgeons, cardiologists, hospital operations managers, and healthcare quality experts. We established dedicated Integrated Practice Units (IPU) for each surgical specialty, inspired by the framework proposed in the 2016 National Institute of Clinical Excellence (NICE) guidelines [16]. Each IPU created a specific risk matrix based on two variables: (i) the patient's medical risk, and (ii) the severity or invasiveness of the surgery.
For optimal effectiveness, the risk matrix needs to be straightforward and rapid to implement. It should replace the uniform, standard protocol of initial testing conducted at the outset of the preoperative evaluation process.
For the medical risk, we developed a standardized anamnestic questionnaire (Fig. 1) [17]. The medical risk questionnaire was designed to be straightforward to complete during the initial surgical consultation.
The proposing surgeon should also indicate the surgical intervention risk. To standardize this process, each IPU categorized all the surgical procedures they performed into categories of low, medium, and high surgical risk according to surgical invasiveness. Following the assignment of medical and surgical risks, these were aligned within a matrix comprising nine distinct strata, created by cross-referencing three levels of medical risk with three levels of surgical risk (Fig. 2). Each IPU decided the most appropriate preoperative tests for each stratum of the matrix, to create a specific preoperative pathway for each stratum and each IPU.
In total, we developed eleven matrices (one for each IPU), for a total of 99 strata (9 for each of the 11 matrices). These 99 risk strata constituted the core of the personalized approach to perioperative risk, as each patient was assigned to one of the strata immediately on the day of the surgical proposal, well before the start of the preoperative evaluation process. Given the similarities between different strata, to simplify the process we ultimately condensed the risk stratification into 40 different strata across the 11 IPUs (low, medium, and high complexity procedures).
The VBHC intervention was designed to encompass the whole cycle of patient care and to reorganize the process from the first surgical-proposal visit to postoperative recovery. Within the developed VBHC approach, the surgeon identifies the patient's risk through the medical risk questionnaire and the surgery's severity, and derives a final risk value from the matrix. The visits and medical tests for the preoperative assessment are identified according to the matrix's final risk value. For simplicity and synthesis, each IPU's matrix has three levels of complexity, although some have more than three (a minimal lookup sketch of this matrix logic is given after the phase descriptions below):
▪ Low complexity includes surgeries classified as "minor severity" by surgeons and with medical risk class 1 and 2 (no test or a simplified test routine)
▪ Medium complexity includes surgeries classified as "intermediate severity" or "major severity" with medical risk classes 1 and 2, or surgeries classified as "minor severity" or "intermediate severity" with medical risk class 3 (blood tests and ECG in addition to vital signs detection). For more invasive surgeries ("major severity") classified as medical risk class 2, as well as intermediate severity surgeries with medical risk class 3, we incorporated an additional preoperative cardiology consultation.
▪ High complexity includes "major severity" surgeries with medical risk class 3. These require a complete preoperative assessment with blood tests, an ECG, vital signs detection, a cardiology visit, and a multidisciplinary clinic dedicated to high-complexity patients.
3. VBHC application phase (post-intervention phase): the VBHC approach was applied in 2021. Outcomes were collected throughout and analyzed at the end of the intervention period. In terms of surgical standards, there were no major differences between this phase and the pre-intervention phase. This study followed the SQUIRE 2.0 guidelines [18]. An in-depth description of the development of the VBHC initiative and of staff training is reported in Supplementary material 1.
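To make the risk-matrix logic concrete, a minimal lookup sketch is given below. It is an illustration only: the tier rules follow the three complexity levels described above, but the function, the exact test bundles (particularly for the low tier), and the printed example are hypothetical placeholders rather than the hospital's actual IPU matrices.

```python
# Illustrative sketch only: the test bundles and helper names are hypothetical
# placeholders, not the hospital's actual IPU matrices.
from typing import NamedTuple, Tuple

class PreopPathway(NamedTuple):
    complexity: str
    tests: Tuple[str, ...]

def assign_pathway(medical_risk: int, surgical_severity: str) -> PreopPathway:
    """medical_risk: class 1-3 from the anamnestic questionnaire;
    surgical_severity: 'minor' | 'intermediate' | 'major' from the IPU procedure list."""
    if surgical_severity == "minor" and medical_risk in (1, 2):
        # Low complexity: no test or a simplified test routine
        return PreopPathway("low", ("vital signs",))
    if surgical_severity == "major" and medical_risk == 3:
        # High complexity: complete assessment plus dedicated multidisciplinary clinic
        return PreopPathway("high", ("blood tests", "ECG", "vital signs",
                                     "cardiology visit", "high-complexity clinic"))
    # Medium complexity: blood tests and ECG in addition to vital signs; the most
    # invasive combinations also receive a preoperative cardiology consultation.
    tests = ["blood tests", "ECG", "vital signs"]
    if (surgical_severity == "major" and medical_risk == 2) or \
       (surgical_severity == "intermediate" and medical_risk == 3):
        tests.append("cardiology consultation")
    return PreopPathway("medium", tuple(tests))

print(assign_pathway(2, "major"))  # -> medium tier with an added cardiology consultation
```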
Data reporting and statistical analysis
Potential sources of bias in this cohort study were selection and reporting bias. Selection bias was considered to have minimal impact, given the nature of the EHR analysis (there was no change in EHR structure between 2019 and 2021), the study selection criteria, and the large population extracted. Reporting bias may have influenced the quality and completeness of the entered data. For example, some data may occasionally be under-reported; however, this type of error is unlikely to have changed over time or to show significant time trends, and it is limited by the medico-legal requirements that mandate completeness of EHR documentation.
Variables were described by frequencies (percentages) or by mean (SD) and median (IQR), as appropriate. Univariate associations were tested with the chi-square test or the Mann-Whitney U test, as appropriate, with a threshold of 0.05 for statistical significance. All statistical analyses were performed using R software, version 4.2.1.
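The analyses were run in R 4.2.1 as stated above; purely as an illustration of the kind of univariate comparisons described, a minimal Python/SciPy sketch is shown below. The ICU counts are reconstructed from the reported percentages, and the lead-time samples are synthetic, so this is a sketch of the method rather than a reproduction of the study's analysis.

```python
# Sketch of the univariate tests described in the text (the paper's analyses used R 4.2.1).
import numpy as np
from scipy import stats

# ICU admissions (yes/no), pre- vs. post-intervention; counts reconstructed from
# the reported rates (4.0% of 5242 and 3.4% of 4480) for illustration only.
icu_table = np.array([[210, 5032],
                      [152, 4328]])
chi2, p_chi, dof, expected = stats.chi2_contingency(icu_table)

# Synthetic preoperative-clinic lead times (minutes), pre vs. post
rng = np.random.default_rng(0)
lead_pre = rng.normal(373, 60, 500)
lead_post = rng.normal(290, 60, 500)
u_stat, p_mw = stats.mannwhitneyu(lead_pre, lead_post, alternative="two-sided")

print(f"chi-square p = {p_chi:.2f}; Mann-Whitney p = {p_mw:.1e}")
```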
Results
We included data from 11,013 hospital admissions, with 5242 patients admitted during the preintervention phase and 4480 patients admitted during the post-intervention phase.
Table 1 presents a comparison of patient demographics between those undergoing day surgery and those admitted for in-hospital stays. Day surgery accounted for 26.8% (2955 cases) of all surgical procedures. The patients' mean age was slightly over 60 years and was generally lower in the day surgery cohort. In both 2019 and 2021, ASA class II was the most common risk class. Higher-risk patients (ASA III and IV) accounted for 23% of cases and were less frequent in the day surgery cohort (10.5%). BMI was comparable between the subgroups and was within the normal range for most patients.
Clinical results
The overall mortality was 0.2%, with no statistically significant difference between preintervention and postintervention cohorts. The ICU admission rate was similar (4.0% preintervention vs. 3.4% in the post-intervention cohort, p = 0.11), while the ICU length of stay did not differ, with a median of 1 day in both cohorts (p > 0.99).
In-hospital preoperative consultations, considered a proxy of inadequate evaluation during the preoperative clinical assessment, were similar before and after the intervention (2.8% vs. 2.5%, p = 0.39), with a nonsignificant decrease from the preintervention to the postintervention group.
There was a large decrease in the number of electrocardiograms, chest X-rays, and cardiology visits after the intervention compared to before the intervention, as reported in Table 2. Figure 3 shows how surgical units were affected by the reduction in preoperative chest X-rays after the introduction of the risk matrix.
The mean number of preoperative tests per hospital admission was significantly reduced, from 6.15 exams per patient before the intervention to 5.32 exams per patient after the intervention (p < 0.001). Supplementary materials 3 and 4 report the percentage variation in preoperative testing after the intervention, between day surgery and hospitalized patients.
Fig. 3 Thoracic X-rays before and after the intervention, broken down by year and surgical unit
Cost results and other VBHC outcomes
The reduction in chest X-rays, cardiac consultations, and other preoperative tests as part of the intervention led to a decrease in overall costs. Table 2 shows that the reduction in the number of preoperative tests resulted in annual cost savings of nearly 90,000 €.
The decrease in preoperative test utilization not only cut costs but also created an opportunity to repurpose the freed diagnostic and human resources within the hospital. Through an internal analysis, we evaluated the potential cost savings from these released resources, taking into account the overall time saved by staff. Our analysis indicates that the value of these liberated resources is roughly equivalent to 95,000 €. Consequently, the total economic benefit can be estimated at approximately 190,000 €.
The length of time spent by patients in the preoperative evaluation clinic was significantly lower after the intervention (373 vs. 290 min, p < 0.001). The value time, which represents the share of time patients spend in preoperative clinics actually dedicated to medical activities and evaluations (excluding idle time), increased from 33% to 40%. All outcomes according to VBHC principles are reported in Table 3.
Discussion
In this pre-post quality improvement study, we demonstrated that an early personalized intervention to optimize preoperative assessment is safe, clinically feasible, and improves patient- and cost-relevant outcomes according to the VBHC value equation. A new tailored risk matrix, comprising 40 risk strata, was introduced at the first visit, while remaining simple and rapid to complete thanks to an automated EHR procedure. The intervention reduced costs (a lower number of preoperative tests and consultations) and the time spent by patients in preoperative evaluation, without worsening clinical outcomes.
Before this VBHC initiative, the anaesthesiologist evaluated the patient at the end of an evaluation process that was standardized mainly by patient age. This caused a waste of resources and potential delays in care, due to additional tests and consultations requested by the visiting anaesthesiologist at the end of the process. To limit this phenomenon, the hospital protocol before the VBHC initiative required several baseline tests for all patients, leading to unnecessary testing and waste of resources [19,20].
To avoid these limitations, we created a new risk matrix designed to route the patient at the beginning of the process. Anaesthesiologists and surgeons remain in charge of decision-making on preoperative evaluation (clearing patients for surgery), while delays, rescheduling, and unnecessary evaluations are reduced without an increase in costs, enhancing patient care and comfort and increasing value from a multi-stakeholder perspective. A few cases were reported by Mahajan et al. [21]: through the establishment of a Centre for Perioperative Management and Medical Informatics, the Brigham and Women's Hospital analyzed short- and long-term outcomes to improve perioperative patient safety, evaluating strategies to promote quality in perioperative care. In a study on patients undergoing joint replacement at Duke University (Durham, NC, USA), the authors found that the traditional pre-anesthesia evaluation model has minimal potential to influence the management of chronic comorbid medical conditions. They proposed screening patients early for modifiable risk factors such as malnutrition, anemia, and diabetes, and enrolling them in dedicated preoperative clinics to improve outcomes and increase value in the process [22]. Our approach to preoperative management was broader than in the previous literature, encompassing the majority of surgical specialties. Patients were rerouted from the surgical visit onward to different preoperative pathways based on tailored risk strata, a process simplified by the hospital's electronic medical record. This strategy allows for increased value while preserving surgical safety and clinical outcomes. In fact, our strategy did not increase adverse events such as death, ICU admission, or hospital readmission, or proxies of detrimental outcomes such as length of hospital or ICU stay. We also did not increase adverse organizational metrics, such as last-minute surgical cancellations. In contrast, the reduced number of tests and services required during the preoperative period reduced hospital expenditures [23].
In particular, the number of preoperative tests and visits was reduced by 36.5%, which, considering the cost of each test and procedure, resulted in a direct economic impact of approximately 90,000 € in savings in a single year. Here, we refer to the hospital cost of these procedures; the costs borne by the national health system and by personal medical insurance are unquestionably greater. Considering that the new preadmission process did not increase clinical complications, and that unnecessary testing can lead to false-positive findings requiring further testing and expenses (also in terms of patients' physical and psychological suffering), any strategy able to preserve outcomes while reducing evaluation and testing should be promoted. Moreover, the dramatic reduction in volumes for specific tests such as cardiology consultations, X-rays, cardiac ultrasonography, and other visits frees both human and equipment resources, which can then be reinvested in new medical tasks to generate additional value for patients and for the national healthcare system. The personalized preoperative risk matrix reduced the overall number of preoperative tests but increased the number of visits and tests for high-risk patients, catering to their specific needs. The VBHC intervention aims to improve the value equation, not just to reduce costs, by providing proper medical care to the groups that benefit most, for example by doubling preoperative nutritional assessments in the postintervention phase compared to baseline.
To increase the value of care in a VBHC framework, patient participation is essential [24]. To increase the benefit for patients and caregivers, we incorporated patients' perspectives into our intervention and used a patient-centered approach. In our study, we were able to reduce the length of time patients spent in the preoperative evaluation clinic: patients' total time in the clinic decreased by approximately 90 min after the intervention. Moreover, the share of time used for medical assessment increased from 33% to 40% of the total, indicating that the intervention reduced the downtime between visits and tests, through a reduction in their number and a reorganization of the process.
Aligning the interests of patients, healthcare professionals, and institutions is mutually beneficial. The hospital's cost savings could be reinvested in other areas, leading to a more evenly distributed improvement in patient care. Similarly, the time saved by healthcare providers may be invested in other clinical activities, additional quality improvement, or improved quality of life at no additional cost. To implement VBHC measures, reliable information technology and a dedicated quality improvement unit are paramount. In this view, operational initiatives such as tele-preoperative assessment and technologies for patient education (e.g., videos, websites, and apps) should be promoted to improve patient engagement [25,26] and reduce delays. In our experience, these technologies help increase patient education, improve patient safety and satisfaction, and may be further used to improve value in healthcare at minimal additional cost [27].
Limitations
Due to the high surgical volume and specific organizational characteristics, the external validity of this single-center study may be somewhat limited. However, most of the limitations that are common in preoperative evaluation clinics are widespread across hospitals, and we believe that our strategy of early personalized preoperative evaluation pathways can be replicated by other hospitals to improve VBHC outcomes. Accordingly, we describe the initial experience of applying this VBHC approach at another hospital within our organization in Supplementary material 5.
This was an observational before-after study designed to measure the effect of a large organizational intervention. While the authors took all possible precautions to minimize the risk of confounding, observational studies remain susceptible to confounders, and the influence of some confounding variables may be difficult to quantify. We also acknowledge that this study may be underpowered to detect differences in major clinical outcomes, such as mortality and severe complications, which are rare in elective surgical populations. Moreover, due to the nature of the extracted data, we did not conduct propensity score matching. Nonetheless, the comparison of characteristics between the groups before and after the intervention revealed minor and clinically negligible variations in baseline characteristics, supporting the validity of our approach.
Another limitation concerns the measurement of compliance with the intervention. During the initial implementation of the preoperative pathway, we did not specifically collect data on staff compliance or attendance at educational interventions. After the VBHC initiative, while not all possible deviations from the new standard could be collected, we collected specific outcomes such as new requests for preoperative tests by physicians after patient evaluation in the preoperative clinic. These deviations, requested upon clinical examination by the physician, accounted for less than 1% of the total number of chest X-rays and 8% of the total number of preoperative cardiology consultations, and are presented by surgery in Supplementary material 6.
The VBHC initiative began before the pandemic, but its deployment was delayed during the initial stages of the SARS-CoV-2 pandemic, when hospital accessibility was limited for caregivers and families. Nonetheless, the pandemic was a catalyst for reorganization, as it increased healthcare providers' education and the rate of deployment of the risk matrix. The COVID-19 pandemic highlighted the importance of preserving continuity of care, and this project helped to sustain that continuity during the third and fourth waves in the Lombardy region in 2021, as spending less time in the preoperative clinic reduced the risk of airborne COVID-19 exposure.
Conclusions
In this before-and-after quality improvement study, we demonstrated that the application of VBHC principles to preoperative care through a structured organizational intervention is safe, clinically feasible, and can improve value in healthcare. With a general trend toward increased healthcare costs and decreased economic resources, anaesthesiologists and hospital administrators should assume even greater responsibility for the perioperative process and promote a VBHC approach to enhance value in healthcare by aligning hospital interests with those of patients and society.
Table 2
Variation in preoperative tests and preoperative consultations before and after the intervention, and related cost analysis
EKG percentages sum to more than 100%, as EKGs could be repeated more than once (e.g., if an unplanned cardiology consultation is requested)
Table 3
Value-based outcomes in the whole cohort
a Mean (SD)
b Median (IQR). Percentages are calculated on the whole surgical population (day hospital and inpatients)
c Chi-square test or Mann-Whitney U test
Effect of Stiffening the Printed Circuit Board in the Fatigue Life of the Solder Joint
Predictive analysis of the life of an electronic package requires a sequence of processes involving: (i) development of a finite element (FE) model, (ii) correlation of the FE model using experimental data, and (iii) development of a local model using the correlated FE model. The life of the critical components is obtained from the local model and is usually compared to the experimental results. Although the specifics of such analyses are available in the literature, a comparison among them and against the same electronic package with different user printed circuit board (PCB) thicknesses does not exist. This study addresses the issues raised during the design phase/life analysis, by considering a particular package with a variable geometric thickness of the user PCB. In this paper, the effect of stiffening the user PCB on the fatigue life of a ball grid array (BGA), SAC305 solder joint is studied. The board stiffness was varied by changing the thickness of the PCB, while the size of the substrate, chips, and solder balls were kept constant. The test vehicle consisted of BGA chips soldered to a user PCB. The thickness of the user PCB was varied, but the surface area of the BGA chip remained identical. The test vehicle was then modeled using a finite element analysis tool (ANSYS). Using a global/local modeling approach, the modal parameters in the simulations were correlated with experimental data. The first resonance frequency dwell test was carried out in ANSYS, and the high-cycle fatigue life was estimated using the stress-life approach. Following the simulation, the test vehicle was subjected to resonance fatigue testing by exciting at the first mode resonance frequency, the mode with the most severe solder joint failure. The resistance of the solder joint during the experiment was monitored using a daisy-chain circuit, and the point of failure was further confirmed using the destructive evaluation technique. Both the experimental and simulation results showed that stiffening the board will significantly increase the fatigue life of the solder joint. Although the amplitude of the acceleration response of the test vehicle will be higher due to board stiffening, the increase in natural frequencies will significantly reduce the amplitude of relative displacement between the PCB and the substrate.
Introduction
Vibration and thermal-based environmental loading are commonly encountered during the service life of electronic components, subsystems, and assemblies. However, compared to thermal loading, vibration-based loading is more complex and has long been the subject of research. In electronic manufacturing industries, virtually all the equipment commercially manufactured today uses surface mount technology, as this allows for far more components to be packed into a much smaller space. In surface mount technology, solder joints are the primary mechanical, thermal, and electrical interconnects between the components and the printed circuit board. Therefore, the reliability of solder joints is crucial in electronic packaging. A solder joint is a metallurgical joining method in which two or more metals are joined by melting and flowing a filler metal, commonly known as solder, into the joint. The traditional lead-containing solder alloy was banned due to the inherent toxicity of lead, and as a result, researchers around the world have focused on the development of a lead-free alternative. Recent advancements in lead-free solder alloys can be found in the technical literature [1][2][3]. With regards to the vibration-based reliability modeling of lead-free solder alloys, most of the research has focused on developing a general methodology to predict the fatigue life of solder joints. These general methods [4][5][6][7][8][9][10][11][12][13][14][15] include: (i) modal parameter estimation of the electronic package using the experimental modal analysis technique and updating the modal parameters in a finite element (FE) based model [4][5][6][7], (ii) detailed finite element modeling of the solder joint [8][9][10], (iii) calculation of the fatigue strength coefficient and exponent by combining the experimental and finite element model [11,12,15], (iv) utilizing the finite element-based simulation approach to obtain the critical stress in the solder ball, (v) calculating the fatigue life of the solder ball based on the maximum von Mises stress in the solder ball, and (vi) comparing the theoretical finite element results with the experiments. All of these general methods utilize frequency domain techniques for the harmonic vibration excitation, and the time domain-based rainflow cycle counting algorithm for the random vibration base excitation. Although significant progress has been made in developing a general method to evaluate the fatigue life of solder joints, there is a lack of understanding of the effect that specific properties of the PCB have on fatigue life. The properties of the user PCB include: (i) stiffness, (ii) dimensions, (iii) mode of vibration, and (iv) input excitation frequency of the PCB.
In this paper, the worst-case vibration scenario (resonance-based fatigue) in a ball grid array (BGA) chip solder joint is studied using the finite element method and direct experimentation. The effect of increasing the PCB thickness on the overall life of the solder ball was investigated through experimentation and numerical simulation. The study was carried out in the first bending mode of the user PCB, the mode producing the worst stress amplitude that the test vehicle will see under vibration. The test vehicle consisted of electronic chips and a substrate mounted onto a user PCB using BGA solder. The solder balls were electrically connected through daisy-chain circuits. The dimensions of the user PCBs were (77 mm × 77 mm × 1 mm) and (77 mm × 77 mm × 1.6 mm), where 1 and 1.6 mm are the thicknesses of the PCBs used, and 77 mm is the length and the width of the PCB. Figure 1 shows the PCB with the chips, and Figure 2 gives a detailed view of the solder ball arrangement. As shown in Figure 1, the two test vehicles use PCBs with thicknesses of 1 mm and 1.6 mm. The solder balls shown in Figure 2 were made of a lead-free (SAC 305) tin/silver/copper alloy that contains 96.5% tin (Sn), 3% silver (Ag), and 0.5% copper (Cu). As BGA chips are used in different applications, including the automotive, robotaxi, locomotive, and aerospace industries, and the minimum PCB thickness requirement is different for each application, it is crucial to analyze the effect of board stiffening on the life of the solder joint.
The rest of the paper is organized as follows: Section 2 presents the methods and experiments, including the finite element analysis approach of the two test vehicles shown in Figure 1 and the experimental setup. Section 3 presents detailed information on the FE modeling results and the correlation between the experimental and the simulation results. Section 4 of the paper presents the conclusions of the work.
Methods and Experiments
In this section, the detailed finite element analysis and experimental setup for the test vehicle are presented. The finite element model for a test vehicle was developed using the global/local modeling technique. The acceleration response data, modal damping ratios, and the natural frequencies of the test vehicle obtained from the finite element model were correlated with the experimental data. The correlated model was then utilized further for local analysis.
Finite Element Model
The finite element model of the test vehicle was developed using the ANSYS structural analysis tool. The model consisted of a PCB, substrate, chip, and solder balls. There were a total of 769 solder balls in the assembly, as shown in Figure 2. It is well known that the stress induced in the inner balls is at its minimum during vibration loading [16][17][18] and can be neglected. Therefore, to reduce the complexity of the model, only the outer rows and columns were modeled. The chip (silicon die) was modeled above the substrate, and bonded contact was defined between the chip and the substrate, the substrate and the solder balls, and the solder balls and the PCB. The system was meshed with SOLID186 quadratic solid elements. Figure 3 shows the global FE model of the test vehicle. A cut model was developed by taking one of the corner solder balls to create a local finite element model. It should be noted that while creating a cut model, the boundaries for the local model needed to be verified to ensure that they were far enough from the stress concentration region. Figure 4 shows the local cut model of the test vehicle. In order to analyze the cut model, displacement boundary conditions were used. These displacements were obtained by solving the global model. The local model was developed using the node sharing technique. The material properties that were used in the finite element analysis are tabulated in Table 1.
Experimental Setup
The experimental work for the prediction of the life of a solder joint addressed in this paper consisted of: (i) modal analysis of the test vehicle, (ii) a stepped sine test to validate the amplitude of vibration around the natural frequency of the test vehicle, and (iii) excitation of the test vehicle at its first resonance frequency. In order to perform this experimental work, a suitable experimental setup needed to be designed. Figure 5 shows the details of the experimental setup. The setup consisted of: (i) an electrodynamic shaker from Modal Shop Inc., (ii) a shaker amplifier, (iii) a dual channel closed loop vibration controller (Spider 80X), (iv) EDM vibration control system software from Crystal Instruments, (v) a data logger for resistance measurement from Data Translation, and (vi) accelerometers for monitoring and controlling the signal at the desired locations. The traditional methods presented in the existing literature [8,[10][11][12]15,19,20] utilize an electrodynamic shaker with a slip table assembly or fix the test vehicle directly to the shaker head. In contrast to these traditional methods of testing, the method presented in this work uses an electromagnetic shaker with a stringer rod. This rod dynamically decouples the shaker from the test fixture and the structure under testing. The stringer rod was equipped with an accelerometer at the top (head accelerometer), which was used to measure the direct input, in the form of force/acceleration, going to the fixture plate shown in Figure 5. A close-up view of the head accelerometer is shown in Figure 6. The head accelerometer was mounted exactly at the center of the fixture plate, such that the response at the four corner screws of the test vehicle will be identical. This is possible if the resonance vibration of the fixture is far away from/below the excitation frequencies. Figure 7 shows the test vehicle with the fixture. The fixture was developed in such a way that (i) the resonance modes of the test vehicle do not interact with the resonance vibration of the fixture, and (ii) there are no resonance frequencies of the fixture in the excitation frequency range. As shown in Figure 7, the test vehicle was mounted to the fixture using four corner screws and two corner accelerometers. These accelerometers were used to measure the signal at the corner screws and to serve as the control excitation signal/input to the test vehicle. The response from the chip and the response of the head accelerometer were used as monitor responses. The equation of motion for the test vehicle shown in Figure 7 can be written as

[M]{ẍ(t)} + [C]{ẋ(t)} + [K]{x(t)} = {F(t)}    (1)

where [M], [C], and [K] are the mass, damping, and stiffness matrices, {x(t)} is the displacement vector, and {F(t)} is the external force vector.
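As a conceptual aside rather than a description of the paper's FE model, a base-excited single-degree-of-freedom approximation illustrates how the resonance frequency and the modal damping ratio govern the steady-state response measured at the chip. The damping value below is an assumed placeholder; only the 469.2 Hz resonance is taken from the measurements reported later.

```python
# Minimal sketch: steady-state transmissibility of a base-excited, viscously damped
# single-degree-of-freedom system (a conceptual stand-in for the full FE model).
import numpy as np

def transmissibility(f, fn, zeta):
    """|X/Y| = sqrt(1 + (2*zeta*r)^2) / sqrt((1 - r^2)^2 + (2*zeta*r)^2), with r = f/fn."""
    r = f / fn
    num = np.sqrt(1.0 + (2.0 * zeta * r) ** 2)
    den = np.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)
    return num / den

fn, zeta = 469.2, 0.02                  # measured resonance [Hz]; assumed damping ratio
freqs = np.linspace(300.0, 600.0, 1201)
trans = transmissibility(freqs, fn, zeta)
print(f"peak transmissibility ~ {trans.max():.1f} at {freqs[trans.argmax()]:.1f} Hz")
# For light damping the peak is roughly 1/(2*zeta), so a stiffer board raises the
# resonance frequency without changing this amplification factor.
```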
Results from Experiments and Simulations
In this section, the results from the experiments, the correlated finite element model results, and the life predicted from the experiments and simulations for the two test vehicle samples are presented. Each sample consisted of two identically manufactured and assembled units of BGA chips. As this paper mainly deals with the effect of board stiffening on the life of the solder ball, the detailed experimental and finite element study was carried out with two different PCB thicknesses, as mentioned in Section 1.
Sine Sweep and Stepped Sine Testing
The purpose of sine sweep or frequency scan testing is to roughly estimate the natural frequency of the test vehicle. The natural frequency is then further verified by stepped sine testing. In order to perform a sine sweep test, the test vehicle shown in Figures 5-7 was excited with a constant acceleration amplitude over the frequency range 300-500 Hz at a sweep rate of 0.5 octaves per minute for the 1-mm thick board, and over 650-750 Hz for the 1.6-mm thick board. The maximum amplitude of vibration at each excitation frequency was recorded using the accelerometer glued to the top of the chip, as shown in Figure 7. For each sample, two identical PCBs with BGA chips were tested. Table 2 shows the natural frequency of the test vehicle obtained with the sine sweep test. It should be noted that, for the same sample, there was a slight difference in natural frequency between board 1 and board 2. This inconsistency was possibly due to manufacturing tolerances. The fatigue testing of each sample was carried out at the first mode, the most severe and most likely bending mode for solder joint failure. The results for the higher modes of the test vehicles are not presented in this paper. The next step of the experiment was to perform a stepped sine test to validate the natural frequency results and obtain the modal damping ratio for each sample of the test vehicle. In order to perform the stepped sine test, the test vehicle was excited with a constant amplitude of excitation in a narrow frequency band around the first mode natural frequency, and at each excitation frequency the steady state amplitude of vibration was obtained. The frequency response function was created from the steady state amplitude of vibration over the excited frequency range. The damping ratio was then estimated from the generated frequency response function, and the obtained damping ratio was used to calibrate the FE based model. The same stepped sine testing procedure was carried out in the FE model, and the results obtained were compared with the experimental stepped sine test results. The half power bandwidth method was used to estimate the damping ratio of the test vehicle. Equation (2) shows the half power bandwidth equation used to estimate the modal damping ratio.
ξ = Δf / (2 f_n)    (2)

where ξ represents the modal damping ratio, Δf is the difference between the frequencies corresponding to the half power bandwidth (−3 dB), i.e., 0.707 of the peak amplitude, and f_n corresponds to the frequency at the maximum amplitude of vibration. Figure 8 shows the comparison between the experimental and FE model stepped sine test results for sample 1 (1 mm thick board) for an excitation input of 0.5 G. As shown in Figure 8, the frequency response functions (FRFs) for the experiment and the FE simulation are very close to each other. The experimental FRF is the data recorded by the accelerometer at the top surface of the chip shown in Figure 7, whereas the FE model data correspond to the nodal FRF at the top surface of the chip. As indicated by Figure 8, the first mode resonance frequency from the FE model was estimated to be 468.81 Hz, whereas the experimental value was found to be 469.2 Hz. Similarly, for sample 2 (1.6 mm thick board), the comparison of the FRF between the experiment and the FE model for an excitation input acceleration of 0.5 G is shown in Figure 9. From Figures 8 and 9, it is clear that the increase in board thickness resulted in an increase in the maximum amplitude of vibration at resonance and an increase in the resonance frequency of the test vehicle. Figure 10 shows the first bending mode vibration of the test vehicle with different PCB thicknesses extracted from the FE model.
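As a small illustration of Equation (2), the sketch below estimates the damping ratio from a sampled single-mode FRF magnitude curve. The FRF here is synthetic, and the 2% damping used to generate it is a placeholder, not the measured value of the test vehicle.

```python
# Sketch of the half-power (-3 dB) bandwidth estimate: zeta = delta_f / (2 * f_n).
import numpy as np

def half_power_damping(freqs, frf_mag):
    """Estimate f_n and the modal damping ratio from a single-mode FRF magnitude curve."""
    i_peak = int(np.argmax(frf_mag))
    f_n = freqs[i_peak]
    half_power = frf_mag[i_peak] / np.sqrt(2.0)   # 0.707 of the peak amplitude
    above = np.where(frf_mag >= half_power)[0]    # contiguous band around the peak
    delta_f = freqs[above[-1]] - freqs[above[0]]  # half-power bandwidth
    return f_n, delta_f / (2.0 * f_n)

# Synthetic single-mode FRF around 469.2 Hz with an assumed damping ratio of 2%
freqs = np.linspace(450.0, 490.0, 2001)
f_true, zeta_true = 469.2, 0.02
r = freqs / f_true
frf = 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta_true * r) ** 2)

f_n, zeta_est = half_power_damping(freqs, frf)
print(f"f_n = {f_n:.1f} Hz, estimated damping ratio = {zeta_est:.4f}")
```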
Fatigue Test (FE Model) Results
The FE model of the test vehicle consisted of two parts: (i) a global model, and (ii) a local model. The acceleration responses of the test vehicle shown in Figures 8 and 9 were verified against the experimental data using the global model of the system, and the displacement boundary conditions for the local model were obtained from the global model. In order to obtain the maximum displacement at the cut boundary, the global model was excited at the first mode resonance frequency at an excitation amplitude of 0.5 G. Since this paper deals with the effect of board thickness on the fatigue life of the solder ball, the same level of excitation (0.5 G) was used for both samples. There are several reasons for using a low input excitation amplitude, including: (i) the test vehicle may not see excessive excitation amplitude at resonance frequencies, as the field vibration is random, (ii) significant geometrical nonlinearities will be excited when the test vehicle is subjected to higher excitation, (iii) the linear FE model results will not be valid and cannot be compared with experiments if significant nonlinearities are excited during experimentation, and (iv) there is a high probability of a change in the failure mechanism between the field and laboratory-based testing if the excitation amplitude is kept large. For example, the failure may be in the form of pad cratering, as opposed to solder joint failure. Figure 11 shows the maximum amplitude of the steady state stress at the solder ball. The maximum stress was at the top surface of the solder ball, at the substrate/solder ball interface. Similarly, Figure 12 shows a comparison of the maximum steady state stress amplitude in the solder ball between sample 1 and sample 2, simulated using the FE based model. In addition, the effect of board stiffening on the stress induced in the solder joint was analyzed by adding a 2-mm board thickness. Clearly, the solder ball on the 1 mm thick PCB showed the maximum stress. Since the life of the solder joint depends on the maximum von Mises stress, and the von Mises stress is dependent upon the mesh density, the life of the solder ball was calculated using the volume average von Mises stress across a thin layer of solder elements [12,14,17,19]. The volume average von Mises stress was calculated using Equation (3).
σ_avg = Σ σ_i v_i / Σ v_i, summed over i = 1 to n    (3)

where σ_i is the maximum stress in the i-th element, v_i is the volume of the i-th element, and n is the number of elements in a thin layer across the substrate/solder interface. There is always a research question of how many thin layers of elements to take so that the stress in the solder joint can be estimated accurately [4,16,21,22]. Taking more layers will slightly change the stress value, which will affect the values of b and σ_f. Since the objective of this research was not the development of new values of b and σ_f by varying the number of elements in the solder layer, only the elements in a single thin layer of the solder joint were taken into account, following the existing literature [21,22]. The life of the solder joint was estimated using the high cycle fatigue equation shown in Equation (4).
σ = σ_f (2N_f)^b    (4)

where σ_f is the fatigue strength coefficient, b is the fatigue strength exponent, and 2N_f is the number of cycles to failure. The material constants σ_f and b can be estimated using the FE-based simulated maximum stress. Several researchers have obtained these material constants using the simulated maximum volume average von Mises stress [21][22][23]. Since our goal in this paper was to study the effect of board stiffness on the life of the solder joint (SAC 305), material constants were taken from the existing literature. The material constants used to estimate the fatigue life of the solder joint in the numerical simulation are tabulated in Table 3. The fatigue lives obtained using Equations (3) and (4) are summarized in Table 4.
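To make Equations (3) and (4) concrete, the sketch below volume-averages element stresses over a thin solder layer and converts the result into an estimated number of cycles to failure. The element stresses, element volumes, and SAC305 constants used here are placeholders for illustration, not the values of Table 3 or the paper's FE output.

```python
# Sketch of Equations (3) and (4): volume-averaged von Mises stress, then Basquin life.
import numpy as np

def volume_average_stress(sigma_i, v_i):
    """Equation (3): sum(sigma_i * v_i) / sum(v_i) over the thin layer of elements."""
    sigma_i, v_i = np.asarray(sigma_i, float), np.asarray(v_i, float)
    return float(np.sum(sigma_i * v_i) / np.sum(v_i))

def cycles_to_failure(sigma, sigma_f, b):
    """Equation (4): sigma = sigma_f * (2*N_f)**b  =>  N_f = 0.5 * (sigma/sigma_f)**(1/b)."""
    return 0.5 * (sigma / sigma_f) ** (1.0 / b)

# Hypothetical element-level data for one thin layer at the substrate/solder interface
sigma_elem = [21.5, 23.1, 19.8, 22.4]           # MPa, max von Mises stress per element
vol_elem = [1.1e-3, 0.9e-3, 1.0e-3, 1.2e-3]     # mm^3, element volumes

sigma_avg = volume_average_stress(sigma_elem, vol_elem)
n_f = cycles_to_failure(sigma_avg, sigma_f=120.0, b=-0.10)   # assumed SAC305 constants
print(f"volume-averaged stress = {sigma_avg:.1f} MPa, estimated life ~ {n_f:.2e} cycles")
```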
Figure 11. Maximum steady state stress amplitude at the solder ball (sample 1).
Experimental Observation of Fatigue Life
The fatigue life of the solder joints in both samples was observed experimentally to verify the simulation results. In order to compare the results between the simulations and the experiments, the test vehicle was subjected to 0.5 G input acceleration at the four corner screws at the first mode resonance frequency. The failure of the solder joint was detected by monitoring the resistance increase of the test vehicle through the daisy-chain assembly. The resistance of the corner balls was measured using a data logger, while the other ball resistances were monitored using a milliohm meter. In both tested samples, the corner balls failed first, which agrees with the simulation results. As per the IEEE standard, an increase in resistance above 20% during testing is considered a failure. Table 5 shows the failure time for each sample and the comparison between the time predicted by the FE simulation and the experiments. The percentage error between the simulation and the experimental results was around 20%, which is quite similar to the results reported in the existing literature [18,19,22,23]. In both the simulations and the experiments, the life of the solder ball was more than three times higher for the stiffened PCB. These results indicate that, even though the amplitude of acceleration increases as a result of board stiffening, the magnitude of the relative motion between the PCB and the package, which is directly dependent on the frequency of excitation, will decrease. The decrease in relative motion reduces the corresponding von Mises stress experienced by the solder joint.
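As a small illustration of the failure criterion just described, the sketch below flags the first time a monitored daisy-chain resistance rises more than 20% above its baseline. The resistance trace is synthetic; it is not the logged data from these tests.

```python
# Sketch of the 20% resistance-rise failure criterion applied to a logged trace.
import numpy as np

def first_failure_time(time_s, resistance_ohm, threshold=0.20, n_baseline=50):
    """Return the first time at which resistance exceeds (1 + threshold) * baseline."""
    baseline = float(np.median(resistance_ohm[:n_baseline]))   # baseline from early samples
    failed = np.where(resistance_ohm > (1.0 + threshold) * baseline)[0]
    return float(time_s[failed[0]]) if failed.size else None

# Synthetic example: slow drift, then a step increase when the joint cracks at 450 s
t = np.arange(0.0, 600.0, 0.1)                           # seconds
r = 0.50 + 1e-4 * t + np.where(t > 450.0, 0.2, 0.0)      # ohms
print(f"failure detected at t = {first_failure_time(t, r)} s")
```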
Failure Analysis
The failure of the solder joint was investigated using the cross-sectioning technique, following IPC-TM-650, method 2.1.1. Figure 13 shows a cross-sectional view of the package. The failure mode was a crack in the bulk solder joint below the IMC layer, on the package side. For each board tested, only one of the corner balls was found to fail, and the failure mechanism was consistent. An interesting observation from the failure analysis was that there were no failures at the IMC layer. Figure 14 shows optical microscope images of the corner solder ball for the 1-mm and 1.6-mm boards; images (C) and (D) are close-up views of (A) and (B). A scanning electron microscope (SEM) was used to further verify the crack in the bulk solder joint. Figure 15 shows the SEM results for the 1.6-mm board (A) and the 1-mm board (B). The SEM images further confirmed the bulk solder joint crack. The typical failure mode of a solder joint under over-stressing (very high excitation) of the board is brittle failure at the IMC layer. The thickness of the IMC layer and the roughness of the solder/IMC interface are associated with the migration of the failure mode [24,25]. With higher excitation energy, the increase in IMC layer thickness decreases the tensile strength of the IMC, and at the same time, the reduction in the roughness of the interface reduces the stress concentration in the bulk solder in the vicinity of the interface [25]. These two factors cause the failure mode to migrate from ductile fracture in the bulk solder near the interface to brittle failure in the IMC layer.
With the increase in excitation amplitude, the percentage of brittle failure in the IMC layer increases greatly. As reported in Reference [25], at a strain rate of 2 s−1 the percentage of brittle failure in the IMC layer increased to 80%, making it the dominant failure mode of the solder joint. One possible reason for this is that SAC 305 is sensitive to the strain rate: during high-amplitude excitation, deformation localization in the bulk solder is suppressed, which promotes brittle fracture in the IMC layer. At low excitation, on the other hand, the IMC layer thickness does not increase significantly, which results in fracture in the bulk solder.
In this research, the board was excited at a very low level, such that the dynamics of the test vehicle remained completely in the linear range, which resulted in purely fatigue failure of the solder balls. Similar results of bulk solder cracking are reported in Reference [26], where a BGA package was subjected to harmonic excitation at room temperature.
Conclusions
In this paper, the effect of PCB thickness on the life of a BGA solder joint was investigated using a resonance-based fatigue testing approach. The test vehicle consisted of two samples of user PCB: sample 1 with a 1-mm thick user PCB, and sample 2 with a 1.6-mm thick PCB assembled to the BGA package. A finite element model using the local/global modeling approach was used to predict the life of the solder balls. A global model was used to verify the resonance frequency, and the maximum amplitude of vibration of the test vehicle was obtained using the stepped-sine test approach. From the global model, the maximum amplitude of vibration was correlated with the direct experimental results, and the displacement results were obtained at the cut boundary. The displacement values obtained were used as boundary conditions for a local model, in which a resonance-based fatigue simulation was carried out. Similarly, first-mode resonance-based fatigue testing was conducted on the two sample test vehicles by exciting them at the first mode resonance frequency. The results from the numerical simulations and the experiments showed that the life of the solder ball increased significantly when the board was stiffened. Stiffening of the board increased the natural frequencies of the test vehicle, and the increase in natural frequency reduced the maximum amplitude of the relative motion between the PCB and the package. The reduction in the amplitude of relative motion decreases the stress in the solder joint. The following conclusions are drawn from the study conducted with the two samples: (1) the life of a solder joint is directly dependent on the excitation frequency; therefore, the aspect ratio of the user PCB plays an important role in defining the life of a solder ball; (2) FE-based simulation results agree well with experimental results when the experiment is conducted with a low excitation input, such that the excitation does not induce system nonlinearities.
Future research should focus on the effect of test-vehicle nonlinearities on the life of a solder joint, as well as the failure mode of a solder joint with respect to different amplitudes of input excitation.
Data Availability Statement:
The data presented in this study are available on request from the corresponding authors.
Safety and Efficacy of Camostat Mesylate for Covid-19: a systematic review and Meta-analysis of Randomized controlled trials
Background Camostat mesylate, an oral serine protease inhibitor, is a powerful TMPRSS2 inhibitor and has been reported as a possible antiviral treatment against COVID-19. Therefore, we aim to assess the safety and efficacy of camostat mesylate for COVID-19 treatment. Methods A systematic review and meta-analysis synthesizing randomized controlled trials from PubMed, Scopus, Embase, Cochrane, Web of Science, clinicaltrials.gov, and medRxiv until June 2023. The outcomes were pooled using mean difference (MD) for continuous outcomes and risk ratio (RR) for dichotomous outcomes. The protocol is registered in PROSPERO with ID CRD42023439633. Results Nine RCTs, including 1,623 patients, were included in this analysis. There was no difference between camostat mesylate and placebo in producing negative PCR test results at 1–7 days (RR: 0.76, 95% CI: [0.54, 1.06] P = 0.1), 8–14 days (RR: 1.02, 95% CI: [0.84, 1.23] P = 0.87), or 15–21 days (RR: 0.99, 95% CI: [0.82, 1.19] P = 0.90); clinical resolution of symptoms at 1–7 days (RR: 0.94, 95% CI: [0.58, 1.53] P = 0.81), 8–14 days (RR: 0.91, 95% CI: [0.74, 1.11] P = 0.33), or 15–21 days (RR: 0.77, 95% CI: [0.40, 1.51] P = 0.45); and time to symptom improvement (MD: -0.38 weeks, 95% CI: [-1.42, 0.66] P = 0.47, I2 = 85%). Conclusion Camostat mesylate did not improve clinical outcomes in patients with COVID-19, compared to placebo. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-024-09468-w.
Introduction
Coronavirus disease 2019 (COVID-19), caused by a novel coronavirus that originated in China's Hubei region, spread throughout the world in late 2019 [1][2][3]. On March 11th, 2020, the WHO classified COVID-19 as a pandemic. COVID-19 is extremely contagious and has put an enormous burden on healthcare systems around the world. Pharmacological treatment of infected patients is required until herd immunity is acquired through extensive viral outbreaks or an effective prophylactic vaccination, since social distancing is not an effective long-term standalone method.
Current treatment of COVID-19 is primarily hospital-based and directed at advanced disease, for example with remdesivir, which received FDA approval based on three pivotal trials [4][5][6][7], and corticosteroids such as dexamethasone [8,9]. Furthermore, monoclonal antibodies can be used in the outpatient setting, but they are expensive, logistically challenging to administer, and have variable degrees of efficacy due to viral variants [9].
Despite the recent progress of antiviral drugs, further therapeutic alternatives are still required, especially for post-exposure prophylaxis and early COVID-19 treatment in outpatient settings. New pharmaceutical targets have been suggested as viable options for antiviral drugs against COVID-19. To clarify, viral replication and disease progression can be effectively stopped by blocking viral host cell entry. Previous experimental data [10][11][12] show that the SARS-CoV-2 spike (S) protein binds to target cells via the host cell factor angiotensin-converting enzyme 2 (ACE-2) and that S protein cleavage by the host cell surface transmembrane protease serine 2 (TMPRSS2) allows entry into target cells.
Camostat mesylate has been used in clinical settings to treat pancreatitis and reflux esophagitis for over two decades [11][12][13]. Camostat mesylate inhibits TMPRSS2 priming of the S protein, a process that has been demonstrated to be both essential and sufficient for viral entry into respiratory epithelial cells [11,12]. COVID-19 infection of primary human lung epithelial cells was also demonstrated to be inhibited by camostat mesylate. Camostat mesylate is a prodrug that, upon entering the bloodstream, rapidly converts to the pharmacologically active metabolite FOY-251, which inhibits TMPRSS2. FOY-251 has an EC50 of 178 nM against SARS-CoV-2 infection in Calu-3 lung cell culture [11]. Moreover, even at high dosages, it has few, mild adverse effects and is readily produced at low cost. Hence, camostat mesylate was predicted to be a good candidate for the treatment of COVID-19. This systematic review and meta-analysis aims to synthesize evidence from randomized controlled trials (RCTs) investigating the efficacy and safety of camostat mesylate for COVID-19 treatment.
Protocol Registration
This meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [14]. Our protocol was prospectively registered in the International Prospective Register of Systematic Reviews (PROSPERO) with ID CRD42023439633.
Data source and search strategy
An electronic search of PubMed, Scopus, Embase, Cochrane, Web of Science, clinicaltrials.gov, and medRxiv was conducted from inception to June 2023 without any search restrictions. In addition, references from any retrieved trials were screened manually to identify potentially relevant articles. Further details regarding the data sources and search strategy are given in (Table S1).
Eligibility criteria
PICO criteria were used to include RCTs: population (P): patients with COVID-19 regardless of disease severity; intervention (I): camostat mesylate; control (C): placebo with or without the standard of care; and outcomes (O): the primary outcomes of this review were the efficacy outcomes: all-cause mortality, negative PCR, clinical resolution of symptoms, time to symptom improvement, hospitalization duration, and intensive care unit (ICU) admission or mechanical ventilation. The secondary outcomes were the safety outcomes: any adverse events, any serious adverse events, elevated liver enzymes, and specific safety events.
Study selection
Three reviewers (A.I., S.R., & M.M.) independently screened the studies using Covidence [15] after duplicates were screened and removed automatically. The remaining studies were carefully assessed in accordance with the eligibility criteria. All studies were initially short-listed based on title and abstract, and subsequently, full-length articles were reviewed. Any discrepancies and conflicts between the selected studies were resolved by a fourth reviewer (U.K.).
Data extraction
Four reviewers (A.I., S.R., M.M., & M.M.N.) extracted data independently, including baseline, efficacy, and safety data. Baseline data included the number of participants in each group, mean age, gender, mean body mass index (BMI), mean duration of symptoms, ordinal severity score, and comorbidity data. Efficacy data were recorded in terms of the number of patients with negative PCR (at 1-7 days, 8-14 days, and 15-21 days or more), clinical resolution of symptoms (at 1-7 days, 8-14 days, and 15-21 days or more), time to improvement in symptoms, viral load at the end of follow-up, duration of hospitalization, all-cause mortality, and ICU admission or mechanical ventilation. Safety data included the incidence of any adverse event, any serious adverse event, and specific adverse events. Conflicts were resolved by mutual discussion between reviewers.
Risk of Bias and Certainty of evidence
Four reviewers (A.I., S.R., M.M., & F.S.) independently assessed the quality of the included studies using the modified Cochrane Collaboration risk of bias tool for randomized controlled trials [16]. Conflicts were resolved by mutual discussion between reviewers.
To appraise the quality of evidence, two reviewers (M.A. and U.K.) utilized the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines [17,18]. We considered inconsistency, imprecision, indirectness, publication bias, and risk of bias. The evaluation was carried out for each outcome, and the decisions made were justified and documented. Any discrepancies were settled through discussion.
We followed the confidence interval cutoffs provided by the Cochrane Consumers and Communication "how to GRADE" guidelines [19].
Statistical analysis
RevMan (version 5.3; Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration, 2014) was used for all statistical analyses [20]. The results from trials were presented as risk ratios (RR) for dichotomous outcomes and mean differences (MD) for continuous outcomes with 95% confidence intervals (CI), and were pooled using a fixed-effects model in the case of homogeneous data and a random-effects model in the case of heterogeneous data. According to the Cochrane Handbook (chapter nine) [21], heterogeneity was considered significant if the P value of the Chi-square test was below 0.1, while the I-square statistic was interpreted as follows: (0-40%) not important, (30-60%) moderate heterogeneity, (50-90%) substantial heterogeneity, and (75-100%) considerable heterogeneity.
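The pooling approach described above (risk ratios combined on the log scale, with a random-effects model when heterogeneity is present and I² used to judge its extent) can be illustrated with a minimal DerSimonian-Laird sketch. The analysis in the paper itself was run in RevMan; the study counts below are hypothetical and the function is only an illustrative approximation of what such software computes.

```python
import numpy as np

def pooled_risk_ratio(events_t, total_t, events_c, total_c):
    """DerSimonian-Laird random-effects pooling of risk ratios (log scale)."""
    a, n1, c, n2 = map(np.asarray, (events_t, total_t, events_c, total_c))
    log_rr = np.log((a / n1) / (c / n2))
    var = 1/a - 1/n1 + 1/c - 1/n2                 # variance of log RR per study
    w = 1 / var                                    # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)          # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (var + tau2)                        # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci = (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
    return np.exp(pooled), ci, i2

# Hypothetical events/totals in the camostat and placebo arms of three trials
rr, ci, i2 = pooled_risk_ratio([10, 8, 15], [50, 40, 60], [12, 9, 14], [48, 42, 61])
print(round(rr, 2), [round(x, 2) for x in ci], round(i2, 1))
```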
Search results and study characteristics
The initial literature search yielded 816 studies after the removal of duplicates (n = 151) and irrelevant studies (n = 656), leaving nine RCTs for inclusion in the final quantitative and qualitative analysis. In total, 63 studies were excluded during full-text screening, with the reasons for exclusion given in (Table S2). Finally, nine studies were included in the final analysis. Further details can be obtained from the PRISMA flowchart in (Fig. 1).
Included studies characteristics
Nine RCTs [22][23][24][25][26][27][28][29][30] were included in the final analysis with a total of 1,623 participants (n = 912 in the camostat mesylate group and n = 711 in the placebo group), with 52.7% of the patients being male. Most of the studies were conducted in the USA (n = 4), followed by an equal number of trials from Sweden, Austria, Japan, Denmark, Belgium, and South Korea. Camostat mesylate and placebo were given as oral tablets. The mean duration of follow-up was 2.8 weeks. The definitions and criteria for serious adverse events differed between articles and are therefore summarized in Table S3. Further information on baseline study and patient characteristics is available in Tables 1 and 2, respectively.
Risk of Bias and Certainty of evidence
After careful assessment using the Cochrane RoB 2.0 tool, six RCTs were judged as having a low risk of bias [22][23][24][25][26][27], two as showing some concerns [28,30], and one as having a high risk of bias [29] (Fig. 2). The certainty of evidence is presented in detail in a GRADE evidence profile (Table 3). The details of all assessed domains are given in (Tables S4-S12).
Statistically significant heterogeneity was observed for the any-adverse-events outcome (I2 = 80%, p < 0.0001). A leave-one-out sensitivity analysis was conducted; however, no single study was found to be responsible for the heterogeneity (Table S13).
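The leave-one-out sensitivity analysis mentioned above simply re-pools the outcome with each trial dropped in turn and checks whether I² falls markedly. A sketch is shown below; it reuses the `pooled_risk_ratio` helper from the earlier pooling example, and the study data remain hypothetical.

```python
def leave_one_out(events_t, total_t, events_c, total_c):
    """Re-pool the outcome with each study removed in turn and report RR and I²."""
    k = len(events_t)
    results = []
    for drop in range(k):
        keep = [i for i in range(k) if i != drop]
        subset = [[arr[i] for i in keep] for arr in (events_t, total_t, events_c, total_c)]
        rr, ci, i2 = pooled_risk_ratio(*subset)
        results.append((drop, round(rr, 2), round(i2, 1)))
    # If no single omission lowers I² substantially, no single study drives the heterogeneity.
    return results

print(leave_one_out([10, 8, 15], [50, 40, 60], [12, 9, 14], [48, 42, 61]))
```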
Discussion
The present systematic review and meta-analysis showed that camostat mesylate is overall ineffective in improving the clinical outcomes of COVID-19 patients while increasing the risk of any serious adverse events. Camostat mesylate exhibited no superiority to placebo in reducing the risk of mortality and ICU admission or mechanical ventilation events. Similarly, it did not accelerate either clinical recovery (clinical resolution of symptoms and time to symptom improvement) or viral clearance (time to a negative PCR). Therefore, the current RCT-based evidence suggests that camostat mesylate given as antiviral monotherapy for COVID-19 patients may have no particular utility, whether in mild, moderate, or severe forms. Death in patients with acute SARS-CoV-2 infection results from several causes, including multiple organ dysfunction syndrome, nosocomial superinfection (mainly ventilator-associated pneumonia), refractory hypoxemia/pulmonary fibrosis (secondary to extensive lung damage), and fatal ischemic events affecting venous (e.g., pulmonary embolism) or arterial (e.g., stroke and myocardial infarction) circulation [31]. For an antiviral drug to reduce the risk of these events, it should prevent progression to severe COVID-19 and hospitalization by early eradication of infection, as achieved by the FDA-approved drug combination nirmatrelvir/ritonavir (Paxlovid), which is also based on anti-protease activity [32,33].
Since the use of camostat mesylate did not affect the features of disease progression (the infection's clinical evolution, viral load kinetics, ICU admission, and mechanical ventilation), a reduction in mortality rates is unlikely to be achieved. Additionally, the absence of a significant decrease in hospitalization rates signifies that camostat mesylate offers little benefit to patients at risk of severe COVID-19. Furthermore, the lack of change in time to clinical recovery among camostat-treated groups indicates that this drug may not be a useful strategy for treating COVID-19 outpatients with either moderate or mild forms.
Moreover, early control of viral replication is essential for an antiviral drug to be effective in COVID-19 patients [34]. On the one hand, this would prevent the tissue injury induced by either SARS-CoV-2 or its associated inflammation, and on the other hand, it would decrease the infectivity of patients, thereby minimizing disease transmission. The anti-SARS-CoV-2 activity of camostat mesylate was inferred from its potential to block TMPRSS2-mediated viral fusion, thus inhibiting viral replication in host cells, as shown by in vitro human cell and animal studies [35,36]. The fact that camostat mesylate did not significantly accelerate the time to a negative PCR likely reflects its failure to contribute effectively to viral clearance and to the arrest or prevention of replication.
Mechanistically, this seems to be due to two main reasons: (i) the lack of pharmacological effectiveness of camostat mesylate as a TMPRSS2 inhibitor administered as monotherapy, or (ii) the non-utility of TMPRSS2 inhibition as an exclusive strategy to prevent viral invasion (the more likely possibility). Studies on the molecular pharmacology of camostat mesylate indicate that it may not be the optimal ligand to block TMPRSS2 activity [37][38][39]. Notably, it has been shown that camostat has lower inhibitory potential than the similar TMPRSS2 blocker nafamostat, as the latter forms significantly higher amounts of stable enzyme-substrate complexes [39]. Remarkably, the pharmacological potency of camostat mesylate was shown to be 10-fold lower than that of nafamostat mesylate [2]. Further results from animal studies concluded that nafamostat is a better candidate than camostat for the prevention of SARS-CoV-2 TMPRSS2-mediated entry [40]. At the same time, it has recently been demonstrated that SARS-CoV-2 can enter target cells without the participation of ACE2 and TMPRSS2 through a "cell-to-cell fusion" mechanism. Notably, the involvement of TMPRSS2 in this mechanism was found to be dispensable, suggesting that SARS-CoV-2 exhibits TMPRSS2-independent cellular invasion strategies [41].
Moreover, even in the absence of TMPRSS2, SARS-CoV-2 has an alternative route of entry by endocytosis and transportation into endolysosomes, where it is released into the cytosol via the action of the acid-activated cathepsin L protease [42]. Therefore, SARS-CoV-2 can use these pathways to escape from camostat mesylate and other specific inhibitors of TMPRSS2. This possibility is more pronounced with the novel SARS-CoV-2 variants (e.g., Omicron), which no longer rely on TMPRSS2 as a fusogenicity factor [43]. Consequently, targeting TMPRSS2 alone is not sufficient to fully prevent penetration of SARS-CoV-2 into host cells. Another potential disadvantage of targeting TMPRSS2 is that this protein displays interindividual structural variability, with some functional variants being expressed at relatively high frequencies in many human populations [44]. There are also interindividual quantitative variations in TMPRSS2 levels secondary to genetic polymorphisms across populations [45]. Both qualitative and quantitative variations in TMPRSS2 may alter individuals' responses to camostat mesylate and similar drugs by potentially decreasing ligand potency and efficiency.
Besides the low efficacy profile, analysis of the safety profile indicated some concerns with camostat mesylate due to a higher risk of any serious adverse events in the treated groups compared to controls. The mechanisms of this molecule's toxicity are unclear; however, since TMPRSS2 is ubiquitously expressed in the human body, its inhibition may result in systemic undesirable effects. Additionally, camostat mesylate has a broad action on other proteases involved in multiple functions such as blood pressure control, renal function, inflammation, and coagulation [46], which, when inhibited in COVID-19 patients (especially those with severe forms), may lead to more harm than good. It is worth mentioning that camostat mesylate has anti-diarrheic effects, as it was shown to normalize intestinal hyperpermeability in rats, which could explain the lower susceptibility to diarrhea in camostat-treated COVID-19 patients compared to placebo [43].
Strengths and limitations
To the best of our knowledge, this is the first meta-analysis assessing the safety and efficacy of camostat mesylate in COVID-19 patients. Therefore, this paper presents gold-standard evidence on this topic, including all available RCTs that met our criteria, to reach the highest accessible quality of evidence. We analyzed data from a large number (n = 1,623) of patients and provided key findings. However, our paper is limited by the following: first, we included three non-peer-reviewed reports, including a preprint and unpublished data from two RCTs [27,30]. Second, the included studies suffered from significant heterogeneity in the camostat dosing regimen, which may affect our findings. Third, only two of the included trials [26,29] recruited hospitalized patients with moderate to severe disease; therefore, our results may not be generalizable to severe COVID-19.
Implications and future perspectives
Targeting viral entry is a well-established strategy to fight viral diseases such as HIV and influenza virus infections; however, its benefit in COVID-19 remains questionable and is not yet supported by a robust quality of evidence. Until full data become available, the results of this study do not exclude the usefulness of camostat mesylate in the context of COVID-19 infection, as co-administration with other synergistic antiviral drugs may boost its efficacy profile. Since furin, another transmembrane enzyme involved in the proteolytic processing of SARS-CoV-2, is necessary for TMPRSS2-independent fusion (i.e., cell-to-cell fusion), the combination of furin and TMPRSS2 inhibitors may enhance the overall preventive effects on viral entry and infectivity [13,38]. Nevertheless, the constant changes in SARS-CoV-2 cellular invasion pathways may not facilitate the development of the most adequate combination of viral entry inhibitors. Importantly, the presence of safety concerns with camostat mesylate use among COVID-19 patients should justify more caution and strict patient monitoring in future evaluations. Based on these concerns and the lack of proof of effectiveness, current guidelines should recommend against the use of camostat mesylate in COVID-19 patients outside the context of clinical trials.
Conclusion
The current evidence does not support the efficacy of camostat mesylate in treating COVID-19 infection. Rather, it indicates some safety concerns that should be considered before further testing this drug in large-scale trials. Nevertheless, since the available data are incomplete, more RCTs are still required to establish the therapeutic benefit of camostat mesylate in COVID-19. At the same time, it may also be worth continuing to investigate the utility of viral entry inhibitors as potential treatments for COVID-19 by focusing on other TMPRSS2 inhibitors with greater pharmacological potency, agents with TMPRSS2-independent activity, or effective synergistic combinations of both.
Fig. 2 Quality assessment of risk of bias in the included trials. The upper panel presents a schematic representation of risks (low = green, unclear = yellow, and high = red) for specific types of biases in each of the studies in the review. The lower panel presents risks (low = green, unclear = yellow, and high = red) for the subtypes of biases across the combination of studies included in this review.
Fig. 3 Forest plots of the primary efficacy outcome. RR: risk ratio, MD: mean difference, CI: confidence interval.
Table 1 Summary characteristics of the included trials. RCT: randomized controlled trial; USA: United States of America; N/A: not available.
Table 3 GRADE evidence profile. CI: confidence interval; MD: mean difference; RR: risk ratio. Explanations: a. Karolyi et al. is at high risk of overall bias and constitutes 73.7% of the outcome pooled data; b. Karolyi et al. is the only study that used lopinavir/ritonavir as a control, constituting 73.7% of the outcome pooled data; c. wide confidence interval that does not exclude the risk of appreciable harm/benefit, with a low number of events; d. low number of events; Karolyi et al. is the only study that used lopinavir/ritonavir as a control, constituting 27.3% of the outcome pooled data; h. wide confidence interval that does not exclude the risk of appreciable harm/benefit; i. Karolyi et al. is at high risk of overall bias and constitutes 44.1% of the outcome pooled data; j. I2 > 50%; k. Karolyi et al. is the only study that used lopinavir/ritonavir as a control, constituting 44.1% of the outcome pooled data.
Are Conference Posters Being Cited?
We present a small case study on citations of conference posters using poster collections from both Figshare and Zenodo. The study takes into account the years 2016–2020 according to the dates of publication on the platforms. Citation data was taken from DataCite, Crossref and Dimensions. Primarily, we want to know to what extent scientific posters are being cited and thereby what impact posters potentially have on the scholarly landscape and especially on academic publications. Our data-driven analysis reveals that posters are rarely cited. Citations could only be found for 1% of the posters in our dataset. A limitation of this study, however, is that the impact of academic posters was assessed descriptively rather than empirically.
INTRODUCTION
The output that is analyzed in scientometric analysis oftentimes consists of conventional publication types like academic articles, monographs or papers in conference proceedings. Recently, the research about the impact of research software and research data has seen some growth. It is remarkable that another traditional means of scholarly communication, the academic poster, barely gets any attention when it comes to scientometric analysis (Rowe 2017a). Hence, this study analyzes whether or not academic posters are cited. To do this, it is necessary to distinguish what a scientific poster represents in comparison to other mediums of scholarly communication.
Usually, academic posters are a visual instrument to illustrate research ideas in a poster session at an academic conference (McCann et al., 1994). In contrast to reading research articles, the purpose of poster sessions is to attract and connect with students, colleagues and professionals on a specific research topic or special field of interest through a poster (Sexton 1984; Moneyham et al., 1996). In addition, poster presentations can be seen as a method of assessment which may help students and researchers to improve their practices and techniques (Bracher 1998). Poster exhibitions are also an effective form of exchanging concepts, views and opinions at conferences (Schmidmaier 1981). Although posters are primarily a visual medium, they do require communication skills to provide information beyond the poster itself to outsiders (Rowe and Ilic, 2008). The average poster consists of four sections, namely, introduction, method, results and discussion, which shows similarities to the structures of journal abstracts and scientific experiments (Shalom 1993; Rowe 2017b). Like a journal abstract, each poster gives information on the purpose of the study carried out by the researcher, a description of the tools and procedures used, a major conclusion as well as implications for research (Sexton 1984). Normally, this information is enriched with pictures and other visually stimulating elements to attract the attention of conference attendees.
Investigations on the effects and benefits of scholarly posters have existed for many decades now. First attempts to summarize and assess the influence of posters were undertaken in the mid 1970s. Maugh (1974) wrote that poster sessions are in many respects "a better way" of communication in contrast to reading and presenting full text papers at conferences. For instance, poster sessions give speakers more time to explain their research findings in-depth in comparison to conventional presentations. Further studies supported the idea of presenting research findings in the form of a poster and came to the conclusion that poster sessions can be a substitution and alternative to the traditional approach of reading research articles (Sexton 1984). It is estimated that between 1969 and 2014 poster presentations at the Federation of European Biochemical Societies (FEBS) meetings became up to 40 times more frequent (Rowe and Ilic, 2015). Also, since the 1990s an exponential increase of posters and peer-reviewed literature can be observed (Rowe 2019), which raises the question of the importance of posters for other means of scholarly communication. Thus far, studies conducted on the prevalence and dissemination of academic posters mostly focused on the impact and leverage of posters at conferences (Salzl et al., 2008; Rowe and Ilic, 2009; Wallengren Lynch, 2017). However, not a single study has vetted the ramifications of scientific posters on academic publishing, especially through scientometric data mining.
METHODS
For this study, a collection of posters had to be identified. This collection had to match several criteria.
1. The posters must be considered "published." In order to be cited, we assume that a publicly accessible digital reproduction must be available.
2. Each poster must be assigned a Digital Object Identifier (DOI) to allow harvesting of citation counts.
3. The retrieved DOIs need to be registered in either Crossref or DataCite, since data from these DOI registration agencies are openly available and free to use.
4. A comprehensive analysis of posters requires a well curated and extensive collection.
Poster collections from the multidisciplinary open source repositories Zenodo and Figshare were found, each containing at least 5000 records (Zenodo: 7026, Figshare: 6018) as well as a small number of mislabeled documents, which were validated manually in small data samples.
Figshare is a commercial internet-based data repository focusing on academic research data management and research data dissemination. Figshare is maintained by Digital Science and was initially launched in 2011. Zenodo is an open digital archive developed by CERN and OpenAIRE.
The study takes into account the years 2016 until 2020 according to the date of publication on the platforms. We have decided to examine the years 2016-2020 because we can only query the complete range of posters in both repositories for these years due to API limitations. Also, we were not able to retrieve the actual publication date via API. Both APIs from Figshare and Zenodo only contain the date of publication on the platform. Accordingly, the date of publication on the platform may differ from the actual first publication. An important advantage of each repository is that research data can be filtered and sorted by various document types. For instance, it is possible to retrieve only posters from each repository, which is necessary for this inquiry.
In total, 13,044 records were retrieved and analyzed. Publication data were harvested via the OAI-PMH API of Figshare as well as via Zenodo's REST API. Data were harvested between 11 October and 12 October 2021. OAI-PMH is a widely used protocol for harvesting metadata descriptions. A REST API is a programming interface that describes an approach for communication between client and server in networks.
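A harvesting step of the kind described above can be sketched in a few lines of Python. The study's own script is not reproduced here; the endpoint, the `type=poster` filter, the pagination fields, and the response structure below are assumptions about the public Zenodo REST API and may need adjusting.

```python
import requests

def harvest_zenodo_posters(year, page_size=100, max_pages=5):
    """Fetch poster records published on Zenodo in a given year (illustrative sketch)."""
    records = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            "https://zenodo.org/api/records",
            params={
                "q": f"publication_date:[{year}-01-01 TO {year}-12-31]",
                "type": "poster",   # restrict to the poster document type (assumed parameter)
                "size": page_size,
                "page": page,
            },
            timeout=30,
        )
        resp.raise_for_status()
        hits = resp.json().get("hits", {}).get("hits", [])
        if not hits:
            break
        # keep only the DOI and title for later citation matching
        records.extend({"doi": h.get("doi"), "title": h.get("metadata", {}).get("title")}
                       for h in hits)
    return records

posters_2017 = harvest_zenodo_posters(2017)
print(len(posters_2017))
```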
As stated above, each record should be assigned a unique identifier (DOI). We matched those DOIs with Crossref, Dimensions and DataCite to obtain citation counts for each poster. DataCite is a non-profit organization that organizes the allocation of DOIs for academic institutions. Another similar agency is Crossref. Crossref also operates as a non-profit, run by Publishers International Linking Association Inc. (PILA). Both also provide bibliographic and citation data. Dimensions is another comprehensive citation database. Both poster collections predominantly include posters with DOIs assigned by DataCite. Only a small fraction of DOIs originate from other registration agencies than DataCite.
We used a simple Python script to retrieve and harvest posters from Zenodo and Figshare and also to match DOIs with the mentioned citation databases. We considered non-matching DOIs as zero-cited publications/posters. For posters that are available in both Figshare and Zenodo, we extracted the DOI prefix to indicate the affiliation to a repository. In addition, we eliminated duplicate DOIs with a lower citation count than their counterparts. For instance, if citation counts for a single DOI were found in Dimensions (twice) and Crossref (once), we valued the Dimensions entry higher, since it found and indexed more citations. Therefore, in this specific example, Crossref's entry would be removed from the dataset. There were no cases of equal reference counts between citation sources in our dataset (except for DOIs that were found in both repositories).
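The de-duplication rule described above (for a DOI matched in more than one citation source, keep only the entry with the higher count) amounts to a max-per-DOI pass over the matched records. A minimal sketch follows; the DOIs and counts are invented for illustration.

```python
def deduplicate_by_max_citations(records):
    """Keep, for each DOI, only the citation-source entry with the highest count."""
    best = {}
    for rec in records:  # rec: {"doi": ..., "source": ..., "citations": ...}
        doi = rec["doi"].lower()
        if doi not in best or rec["citations"] > best[doi]["citations"]:
            best[doi] = rec
    return list(best.values())

# Hypothetical matches: one poster DOI found in two citation databases, one in a single database
matched = [
    {"doi": "10.5281/zenodo.0000001", "source": "Dimensions", "citations": 2},
    {"doi": "10.5281/zenodo.0000001", "source": "Crossref", "citations": 1},
    {"doi": "10.6084/m9.figshare.0000002", "source": "DataCite", "citations": 1},
]
print(deduplicate_by_max_citations(matched))  # the Crossref entry is dropped
```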
According to Hyndman (2020) Figshare should have the same information as Dimensions, which it uses to provide citation data.
For this study, we tested both APIs and we noticed data discrepancies between the provided citation counts looked up for a given DOI. Since the citation counts we gathered via the Figshare API were much more comprehensible, we decided to forego the use of the Dimensions API in this study.
RESULTS
Overall, citations could only be found for 1% of the posters in our dataset, a total of 137 posters. The number of posters as well as the number of cited posters for each data repository we examined is shown in Table 1. Figure 1 shows the number of posters cited over the years 2016-2020 (for Zenodo and Figshare). While a considerable increase in citations can be measured between 2016 and 2017, the citation rate decreases continuously from 2017 to 2020. In 2016, the lowest number of citations was measured, whereas in 2017 most of the posters were cited. Most citations were found in DataCite (77). Only four citations are from Crossref and 56 from Dimensions. As noted above, we only considered the citation source with the highest citation count for each DOI. With that said, 55% of citations are recorded in DataCite and 40% in Dimensions. Additionally, we found a certain amount of false positives, that is, documents categorized as posters that were actually other publication types and therefore mislabeled. In total, 7% of DOIs in the dataset are incorrectly categorized as posters. We excluded these DOIs from this analysis to obtain more comprehensive outcomes.
As Figure 2 shows, most academic posters are only cited once according to data taken from our selected citation databases. However, a small fraction of posters were cited multiple times, up to five times, and in two cases six times.
DISCUSSION
For our analysis we didn't differentiate between self-citations and other citations, as we wanted to investigate whether posters are cited at all. We did not map posters to disciplines or fields of research, either. The question, how posters are cited, and if poster citation behaviour is different from one discipline to the other, is fertile ground for further research.
The method and the data we used have some clear limitations. First of all, the data: To our knowledge, there is no data source that provides a large poster collection with sufficient data in good metadata quality. The data sources we selected offer the possibility of uploading poster collections or individual posters, and there is little to no curation effort on the platforms we chose. We therefore have a non-representative sample, which nevertheless provides a good first indication for answering our question of whether posters are cited.
Our study shows that academic posters, despite their significant role in scholarly communication, are sparsely cited. This may indicate a lack of accessibility and findability as well as a low reputation of posters as a citation source among researchers and scientists. Apparently, it is generally more accepted to cite a paper related to a poster since it is a widely recognized format.
Besides, not every data repository stores posters or labels them correctly. In this study, for instance, we recognized that approximately 7% of posters in Figshare and Zenodo were actually mislabeled. Also, it seems that oftentimes related posters and papers are named similarly so that it is harder to find the original related poster. Furthermore, our data reveal no specific gap between the number of citations for posters of different scientific disciplines, although this was not investigated in depth.
For more conclusive results, we contemplate matching DOIs with Scopus, Microsoft Academic and Google Scholar to retrieve more balanced citation data due to higher recall. Nevertheless, when trying to receive data from these services one will inevitably be faced with some obstacles e.g. restricted access or no public API.
Another way to study the leverage of academic posters would be to take altmetrics into consideration. With respect to our research question, posters could for example also be cited in micro-publications like blogs, tweets and academic social platforms such as Academia.edu and Researchgate. Eventually, this study could be an impulse to encourage researchers and analysts to start further studies in this particular research domain, since posters are playing a crucial role at academic conferences.
DATA AVAILABILITY STATEMENT
The datasets generated and analyzed for this study can be found on GitHub: https://github.com/naustica/poster_citations.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
FUNDING
The publication of this article was funded by the Open Access Fund of Technische Informationsbibliothek (TIB).
Modelling the impact of migrants on the success of the HIV care and treatment program in Botswana
Introduction Botswana offers publicly financed HIV treatment to citizens, but not migrants, who comprised about 7% of the population in 2016. However, HIV incidence is not declining in proportion to Botswana’s HIV response. In 2018, Botswana had 86% of citizens living with HIV diagnosed, 95% of people diagnosed on treatment, and 95% viral suppression among those on treatment. We hypothesised that continued exclusion of migrants is hampering reduction of HIV incidence in Botswana. Hence, we modelled the impact of including migrants in Botswana’s HIV response on achieving 90-90-90 and 95-95-95 Fast-Track targets by 2020 and 2030, respectively. Methods The Optima HIV model, with demographic, epidemiological, and behavioural inputs, was applied to citizens of and migrants to Botswana. Projections of new HIV infections and HIV-related deaths were compared for three scenarios to the end of 2030: (1) continued status quo for HIV testing and treatment coverage, and maintenance of levels of linkage to care, loss to follow-up, and viral suppression among citizens and migrants (baseline); (2) with scaled-up budget, optimised to achieve 90-90-90 and 95-95-95 Fast-Track targets by 2020 and 2030, respectively, for citizens only; and (3) scaled-up optimised budget to achieve these targets for both citizens and migrants. Results A baseline of 172,000 new HIV infections and 8,400 HIV-related deaths was projected over 2020–2030. Scaling up to achieve targets among citizens only averted an estimated 48,000 infections and 1,700 deaths. Achieving targets for both citizens and migrants averted 16,000 (34%) more infections and 442 (26%) more deaths. Scaling up for both populations reduced numbers of new HIV infections and deaths by 44% and 39% respectively compared with 2010 levels. Treating migrants when scaling up in both populations was estimated to cost USD 74 million over 2020–2030. Conclusions Providing HIV services to migrants in Botswana could lead to further reductions in HIV incidence and deaths. However, even with an increased, optimised budget that achieves 95-95-95 targets for both citizens and migrants by 2030, the 90% incidence reduction target for 2020 will be missed. Further efficiencies and innovations will be needed to meet HIV targets in Botswana.
Introduction
Reaching the UNAIDS 90-90-90 Fast-Track targets by 2020 and further scaling up to 95-95-95 by 2030 is hypothesized to lead to a 90% reduction in HIV incidence and HIV-related mortality from 2010 levels [1]. Additional epidemiological transition metrics for tracking the HIV epidemic, such as the incidence:prevalence and the incidence:mortality ratios have also been proposed [2]. One of the major assumptions for achieving these reductions in the Fast-Track targets is that scale-up of effective prevention strategies, testing, treatment and viral load monitoring is applied to all population groups. However, some country programs often exclude key population groups vulnerable to HIV infection, including migrants.
With more than 258 million global international migrants in 2017, migration continues to be a key consideration in public health programs [3]. In sub-Saharan Africa, the region most affected by the HIV epidemic, migration and public health is even more pertinent. Migrants often face barriers to healthcare access and are often excluded from mainstream national programs [4,5]. The role of migration in the HIV epidemic is complex. While migration itself is not a risk factor for HIV infection, human mobility places people in situations that increase their risk of acquiring HIV and impact on them seeking timely care if infected (e.g., irregular migration status and lack of migrant-inclusive health policies, which may prevent early healthcare seeking) with the potential risks occurring both pre-and post-migration [6]. However, research findings can be mixed; some research has demonstrated that migrants are at increased risk of post-migration HIV acquisition while other studies showed no positive association between intensity of migration measured at country level and peak HIV prevalence [7][8][9][10]. The risk of post-migration HIV acquisition has not been clearly elucidated. Among migrants living with HIV, the proportion that acquired HIV post-migration has been reported to be as low as 2% and as high as 71% and can depend on the prevalence within their country/region of origin [8,11].
HIV prevalence is high in sub-Saharan Africa, including in Botswana, but unlike some countries in the region Botswana has a strong economy so attracts many migrants [12]. In 2015, it was estimated that Botswana had 161,000 documented migrants, approximately 7% of its total population [13]. Migrants to Botswana are mainly from within the region, especially Zimbabwe. Adult HIV prevalence is estimated at 20% among migrants, compared with 22.8% nationally [12,14]. Botswana has a free nationally funded HIV care and treatment program that covers all citizens; the system is considered to be highly successful, with Botswana being reported to be close to reaching the 90-90-90 targets. The government of Botswana funds at least 60% of its HIV response budget, which is expected to rise to USD339 million in 2030 [15,16]. In 2018, Botswana had 86% of citizens living with HIV diagnosed, 95% of people diagnosed on antiretroviral therapy (ART), and 95% viral suppression among those on ART (i.e. 86-95-95) [17]. However, HIV incidence remains a concern, with minimal incidence reductions observed between 2010 and 2018 [6,[18][19][20]. Importantly, migrants living with HIV are excluded from the free national HIV care and treatment program and many cannot afford to pay for their own HIV care. The implications of excluding migrants from mainstream HIV programs in Botswana is unknown, but may reduce the feasibility of achieving a 90% reduction in HIV incidence even if the Fast-Track targets are met among citizens.
We hypothesized that continued exclusion of migrants is slowing progress in reducing HIV incidence in Botswana. To test this hypothesis, we modelled the impact of including migrants in Botswana's HIV response on achieving 90-90-90 and 95-95-95 Fast-Track targets by 2020 and 2030, respectively.
Methods
We applied the Optima HIV model, a dynamic, population-based HIV model to test our hypothesis (described in detail elsewhere [21,22]). The Optima model tracks the entire population of people living with HIV (PLHIV) in a country or region between health states: from infection to diagnosis, linkage to care, ART initiation, viral suppression, and death; and across CD4+ count stages (acute HIV infection, >500, 350-500, 200-350, 50-200, and <50 cells/μL). The overall population is partitioned by population group and by HIV health state. The Optima HIV model is used for modeling HIV epidemics and the impact of interventions to address policy and program challenges. The model can project countries' progress towards the UNAIDS 90-90-90 targets, including HIV incidence and mortality.
Two populations were considered in the model: citizens of Botswana and migrants in Botswana. Migrants were defined as individuals who had migrated to Botswana. The model requires a range of demographic, epidemiological, and behavioral data inputs, including population size; prevalence values for HIV, sexually transmitted infections (STIs), and tuberculosis (TB); risk behavior data (e.g., condom use); and biological constants (e.g., disease progression). HIV transmission was determined by the number and type of risk events (either within individuals' population groups or through interaction with other population groups) and the infection probability of each event. Botswana's HIV epidemic is driven predominantly by heterosexual transmission, so the probability of people in the model becoming infected was determined by: 1) HIV prevalence (weighted by viral load) in partner populations; 2) average number of casual, regular, and commercial homosexual and heterosexual acts per person per year; 3) proportion of sexual acts in which condoms are used; 4) proportion of men who are circumcised; 5) prevalence of STIs; 6) proportion of acts that are covered by pre-exposure prophylaxis (PrEP) and post-exposure prophylaxis (PEP); 7) number of sexual partners; and 8) efficacy values for condoms, male circumcision, PEP, PrEP, and ART (suppressive and non-suppressive) in preventing HIV transmission (see supporting information, S1 and S2 Tables).
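The Optima HIV transmission machinery is considerably more elaborate than can be shown here, but the per-act logic listed above (a per-act transmission probability reduced by condom use and partner viral suppression, aggregated over acts with a partner pool weighted by its prevalence) can be illustrated with a deliberately simplified sketch. The functional form, parameter names, and values below are illustrative assumptions only, not the Optima implementation.

```python
def annual_infection_probability(acts_per_year, per_act_beta, partner_prevalence,
                                 condom_use, condom_efficacy=0.95,
                                 partner_suppressed=0.0, art_efficacy=0.96):
    """Very simplified per-partner-type risk: 1 - (1 - effective per-act risk)^acts.

    This is an illustrative approximation only, not the Optima HIV model.
    """
    # reduce the per-act probability for protected acts and virally suppressed partners
    beta_eff = per_act_beta * (1 - condom_use * condom_efficacy) \
                            * (1 - partner_suppressed * art_efficacy)
    per_act_risk = partner_prevalence * beta_eff
    return 1 - (1 - per_act_risk) ** acts_per_year

# Hypothetical inputs: 80 acts/year, per-act transmission probability 0.001,
# partner-pool HIV prevalence 20%, 40% condom use, 70% of positive partners suppressed
print(round(annual_infection_probability(80, 0.001, 0.20, 0.40, partner_suppressed=0.70), 4))
```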
Data sources
Epidemiological, behavioral, and programmatic data, and estimates to inform the model, were collated from literature reviews and country program data between 2010 and 2018, and were validated in consultation with key country stakeholders. Two populations were considered in this analysis: citizens and documented migrants aged 15-65 years, each disaggregated by sex. Model projections were generated to 2030 inclusive. Table 1 lists the key data inputs to inform this modeling analysis.
Model calibration
We initialized the model in 2010 and produced projections from 2020 to 2030 inclusive. We fitted the model to key data points: population size, HIV prevalence, PLHIV on treatment, HIV-related deaths, and new HIV diagnoses. To get the best fit, we adjusted the following parameters: force of infection (which depends on the probability of infection, diagnosis rates and mixing of population groups), inhomogeneity of the population groups, and efficacy of suppressive ART [27]. This was initially done using the auto-calibrate feature, which uses an optimisation to vary the calibration parameters simultaneously to minimise the model error from the data points. The auto-calibrated parameters can then be manually adjusted in the web-interface to fine-tune the model fit.
Analytical approach
Projections of new HIV infections and HIV-related deaths were compared for three scenarios to the end of 2030: (1) continued status quo for HIV testing and treatment coverage, and maintenance of current levels of linkage to care, loss to follow-up, and viral suppression among citizens and migrants (baseline); (2) a scaled-up budget, optimised to achieve the 90-90-90 and 95-95-95 Fast-Track targets by 2020 and 2030, respectively, for citizens only; and (3) a scaled-up optimised budget to achieve these targets for both citizens and migrants.
For each scenario, we used the model to project the cumulative number of new HIV infections and HIV-related deaths between 2020 (base year) and 2030 inclusive, and the percentage change in annual new HIV infections, HIV incidence and HIV-related deaths by 2030 inclusive compared with 2020. These outcomes were obtained for the total Botswana population and separately for citizens and migrants. We also compared the projected new infections in 2030 with the published estimates for 2010 [28]. Assuming a conservative cost of USD 300 per person per year for treating an individual with ART in Botswana, we estimated the cost required to treat migrants per year when scaling up in both populations [29]. The cost of HIV treatment includes drug costs, laboratory work, and service delivery.
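The migrant treatment cost quoted in the Results (about USD 74 million over 2020-2030) follows from multiplying the projected number of migrants on ART each year by the assumed USD 300 unit cost. A small sketch of that calculation is shown below; the annual ART numbers are invented for illustration, since the projected series itself is reported only in the paper's costing output.

```python
UNIT_COST_USD = 300  # assumed annual ART cost per person (drugs, laboratory work, service delivery)

def cumulative_treatment_cost(migrants_on_art_by_year, unit_cost=UNIT_COST_USD):
    """Sum annual ART costs for migrants over the projection period."""
    return sum(n * unit_cost for n in migrants_on_art_by_year)

# Hypothetical projected numbers of migrants on ART, 2020-2030 (11 years)
migrants_on_art = [15000, 17000, 19000, 21000, 22000, 23000,
                   24000, 24500, 25000, 25500, 26000]
print(f"USD {cumulative_treatment_cost(migrants_on_art):,}")  # on the order of tens of millions
```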
Results
For the baseline scenario, 172,000 cumulative new HIV infections were projected to occur from 2020 to 2030, with annual new HIV infections projected to increase from 13,000 in 2020 to 18,800 in 2030. Scaling up to achieve the 90-90-90 and 95-95-95 Fast-Track targets among citizens averted an estimated 48,000 infections, and achieving targets for both citizens and migrants over this period averted an additional 16,000 infections (34% more) (Fig 1). Thus, scaling up in citizens was projected to result in 124,000 cumulative new infections in 2020-30 (inclusive), while scaling up in citizens and migrants was projected to result in 108,000 new infections. Scaling up in both populations would reduce HIV incidence from 0.64 per 100 person-years (12,500 new HIV infections) in 2020 to 0.23 per 100 person-years (7,300 new HIV infections) in 2030.
Compared with 2020 levels, new HIV infections in the total Botswana population in 2030 were projected to be 31% higher (29% higher in migrants and 31% higher in citizens) in the baseline scenario, 23% lower (23% lower in both citizens and migrants) when scaling up in citizens only, and 42% lower (41% lower in migrants and 42% lower in citizens) when scaling up in both population groups (Table 2). Relative to the estimated 13,000 new HIV infections in Botswana in 2010, scaling up to Fast-Track 95-95-95 targets by 2030 in both population groups was projected to result in a 44% reduction in new HIV infections in 2030 (7,300 projected in 2030), well short of the 90% reduction in HIV incidence target [28]. Fig 2 presents new HIV infection trends over time for each scenario and population group.
The baseline scenario was projected to result in 8,400 HIV-related deaths between 2020 and 2030, compared with 6,700 and 6,200 when scaling up in citizens only or in both population groups respectively (1,700 and 2,200 deaths averted, respectively) (Table 2). This translates to 26% more deaths being averted when scaling up in both citizens and migrants. Compared with 2020 levels, HIV-related mortality in the total Botswana population in 2030 was projected to be 15% higher in the baseline scenario (13% higher in migrants and 15% higher in citizens), 28% lower (no change for migrants and 31% lower in citizens) when scaling up in citizens only, and 39% lower (58% lower in migrants and 37% lower in citizens) when scaling up in both population groups (Fig 3).
Treating migrants living with HIV between 2020 and 2030 was estimated to cost USD74 million, based on the projected annual treatment costs.
Epidemiological transition metrics
In our model, the IPR in 2030 would be 0.02 when scaling up to 95-95-95 in all population groups, compared with 0.04 for the baseline scenario; the IMR would be >1 in both population groups.
Discussion
Our modeling demonstrated that without scaling up diagnosis and treatment rates, new HIV infections are likely to increase in Botswana over the next 10 years. Including migrants (around 7% of the total population, mainly from Zimbabwe) in the scale-up of services to reach 95-95-95 almost doubled the reduction in 2030 incidence. However, even if 95-95-95 were achieved in citizens and migrants, reductions in new infections would still fall well short of the 90% target outlined in the Fast-Track targets [30]. This implies that without a significant increase in the coverage of additional prevention measures, scaling up testing and treatment alone to 95-95-95 in all population groups will not meet HIV incidence reduction targets. While including migrants had the greatest benefits for the migrant population, the model demonstrated that meaningful benefits were also accrued among citizens in Botswana.
Despite some gains in reducing new HIV infections, our models suggest more is needed beyond meeting the 90-90-90 and 95-95-95 targets if the HIV incidence reduction targets are to be met in Botswana. While our results for the projected care cascade are similar to those recently reported by Kibona and Yang (2018), unlike us, they estimated that Botswana would meet the HIV incidence and HIV-related mortality targets by 2030 [31]. An important difference between the two models is that Kibona and Yang did not consider international migration, whether documented or undocumented. Additionally, Kibona and Yang calibrated their model with epidemiological data, including HIV incidence from 2010 to 2016, but Botswana recorded an increase in new infections in 2017 (compared with drops in incidence in previous years). We used HIV incidence from 2017 in our model.
Botswana funds the majority (over 60%) of its HIV program domestically, and there is concern that expanding the program to migrants would be an added cost to the country. However, expanding the program to include migrants would be cost-effective in the long run due to the reduction in new HIV infections. A previous modeling analysis by Gaolathe and colleagues (2016) considered expanding ART coverage, male circumcision, and PrEP, but did not consider migrants [20]; that model and its costing estimates did not call out migrants, similar to previous cost estimates for Botswana [32]. We estimated that treating migrants would cost USD74 million between 2020 and 2030. Further research is needed to examine the cost-effectiveness of treating migrants and the budgetary implications for Botswana. By underscoring the considerable challenge associated with achieving global HIV incidence reduction targets, our findings are consistent with those of Scott et al. (2018), who modeled the Australian HIV epidemic. Modeling a best-case HIV treatment and care scenario among men who have sex with men (MSM) in Australia, they found that even if high-risk MSM had four HIV tests per year, ART, viral suppression, and PrEP, and condom use coverage was 100%, only an 80% reduction in HIV incidence could be achieved by 2030 [33]. While these findings apply to a highly concentrated HIV epidemic in Australia, Williams et al. (2017) obtained similar results when modeling a generalized epidemic [34]. Williams et al. found that only through a universally expanded program that included both prevention and treatment in South Africa can HIV incidence and mortality targets be met by 2030. Other models focusing on India and the USA have also failed to demonstrate a 90% HIV incidence reduction by 2030 if Fast-Track targets are met [35,36]. Our findings are also consistent with an analysis of population-based surveys, surveillance, and routine program data in Ethiopia, which demonstrated that, based on progress made between 2011 and 2016, the Fast-Track targets for HIV testing, ART, viral suppression, and AIDS-related deaths will be reached, but not the targets for reductions in new HIV infections [37].
The Fast-Track targets are focused on the need to scale up HIV testing, linkage to care, ART initiation, adherence to ART, retention in care, and viral suppression. However, given the difficulty in achieving the 90% reduction in incidence even if the 95-95-95 targets are met, two key points emerge. First, important subgroups in the population at risk of HIV infection, which include migrants, must not be excluded from testing, treatment, and care. Second, HIV prevention plays a central role in reducing HIV incidence. Various other factors influence this outcome, including a lack of focus on reducing transmission among MSM and sex workers, and low uptake of HIV testing and prevention among young people [38].
It is important to monitor social and structural barriers to the HIV response in country programs in order to maintain focused strategies for HIV prevention and treatment. In Botswana, multiple concurrent partners, alcohol consumption, drug dependence, transactional sex, intergenerational sex, inconsistent condom use, high population mobility, and stigma and discrimination have been identified as barriers to HIV prevention [38]. Targeting these elements of the HIV epidemic in Botswana, including universal access to prevention and treatment for all population groups, will be required for Botswana to reach Fast-Track targets by 2030. In particular, targeting mobile men has been shown to significantly reduce HIV transmission in home communities, providing additional support for extending HIV treatment and care to all populations [39].
The use of additional epidemiological transition metrics for tracking the HIV epidemic, such as the incidence:prevalence ratio (IPR) and incidence:mortality ratio (IMR), has been proposed and applied in UNAIDS reports [2,6,40]. The IPR is the ratio of the number of new infections to the number of PLHIV; an IPR at or below the benchmark of 0.03 signals that progress is on track to end AIDS as a public health threat. In our model, the projected IPR of 0.02 in 2030 when scaling up to 95-95-95 in all population groups would signify significant progress. The IPR typically falls when incidence is lower than all-cause mortality among PLHIV. The IMR is the ratio of the number of new HIV infections to all-cause mortality among PLHIV. In our model, the IMR remained >1, in line with a growing population of PLHIV.
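Both metrics are simple ratios; a minimal helper (illustrative only) makes the definitions above concrete:

def incidence_prevalence_ratio(new_infections, plhiv):
    # IPR: new HIV infections divided by people living with HIV;
    # at or below ~0.03 is the benchmark cited above
    return new_infections / plhiv

def incidence_mortality_ratio(new_infections, deaths_among_plhiv):
    # IMR: new HIV infections divided by all-cause deaths among PLHIV;
    # a value > 1 implies the PLHIV population is still growing
    return new_infections / deaths_among_plhiv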
Our study has several limitations. The model required a burn-in period of 10 years prior to 2020 to stabilize and replicate Botswana's epidemic trends. The reliability of estimated changes in HIV incidence and mortality under various scenarios may have been affected by unknown data and associated assumptions. Age transitions were not included in the model, meaning that we did not factor in young people as they grew into the modelled age group (16-65 years). The model also made simplifying assumptions about the migrant population due to the absence of data: we did not account for dynamic migration and imported infections among migrants (e.g. the migrant population only accounted for an initial prevalence, a net change in population size over time, and new infections occurring within Botswana), and migrants were assumed to mix randomly with non-migrants (if assortativity among migrants was higher, this would reduce the impact that treating migrants has for non-migrants compared to what the model has projected). As with other models, our analysis was dependent on the availability, reliability, and quality of data inputs. Trends in new infections have varied in Botswana; in particular, the number of new infections increased in 2017. Large fluctuations reduce the model's reliability. More generally, mathematical models are simplifications of reality and cannot capture all aspects of the real world, hence it is important to understand the assumptions and parameters used in the model. We believe that while there is uncertainty in the model inputs where assumptions were made due to missing data, the projected trends and relative differences are reasonable estimates that reflect the best available data at the time of modeling.
Conclusion
The inclusion of migrants in Botswana's national HIV care and treatment program would result in important prevention and health benefits for both citizens and migrants. Including migrants in Botswana's mainstream HIV care and treatment program was estimated to avert an additional 16,000 new HIV infections, 34% more than scaling up among citizens alone. However, while the inclusion of migrants will accelerate the reduction of new HIV infections by 2030, even achieving 95-95-95 in both populations will not be sufficient to reach the Fast-Track target of a 90% reduction in HIV incidence. More efficiencies and higher coverage levels are needed, utilizing current prevention modalities (e.g. condom use, voluntary medical male circumcision, treatment as prevention, and PrEP) and ensuring equitable access to treatment in all population groups in order to meet the 2030 target for HIV incidence reduction [41].
|
v3-fos-license
|
2020-12-03T09:05:45.782Z
|
2020-11-26T00:00:00.000
|
229369644
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://gchron.copernicus.org/articles/2/355/2020/gchron-2-355-2020.pdf",
"pdf_hash": "91d25192234596d50d2d8bd27af5b0cd5b65b841",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3053",
"s2fieldsofstudy": [
"Geology",
"Physics"
],
"sha1": "d118db21b0e21698229148e3d5e7d493e4c68d0c",
"year": 2020
}
|
pes2o/s2orc
|
Production of 40 Ar by an overlooked mode of 40 K decay with implications for K-Ar geochronology
The decay of 40K to the stable isotopes 40Ca and 40Ar is used as a measure of time for both the K-Ca and K-Ar geochronometers, the latter of which is most generally utilized by the variant 40Ar/39Ar system. The increasing precision of geochronology has forced practitioners to deal with the systematic uncertainties rooted in all radioisotope dating methods. A major component of these systematic uncertainties for the K-Ar and 40Ar/39Ar techniques is imprecisely determined decay constants and an incomplete knowledge of the decay scheme of 40K. Recent geochronology studies question whether 40K can decay to 40Ar via an electron capture directly to ground state (ECground), citing the lack of experimental verification as reasoning for its omission. In this study, we (1) provide a theoretical argument in favor of the presence of this decay mode and (2) evaluate the magnitude of this decay mode by calculating the electron capture to positron ratio (ECground/β+) and comparing calculated ratios to previously published calculations, which yield ECground/β+ between 150 and 212. We provide support for this calculation through comparison of the experimentally verified EC/β+ ratio of 22Na with our calculation using the theory of β decay. When combined with measured values of β+ and β− decay rates, the best estimate for the calculated ECground/β+ for 40K yields a partial decay constant for 40K direct to ground-state 40Ar of 11.6 ± 1.5 × 10−13 a−1 (2σ). We calculate a partial decay constant of 40K to 40Ar of 0.592 ± 0.014 × 10−10 a−1 and a total decay constant of 5.475 ± 0.107 × 10−10 a−1 (2σ), and we conclude that although omission of this decay mode can be significant for K-Ar dating, it is minor for 40Ar/39Ar geochronology and is therefore unlikely to have significantly biased published measurements.
Introduction
40 K is a naturally occurring radioisotope of K with atomic abundance of 0.0117 % (Garner et al., 1975). 40 K undergoes a branched decay to 40 Ar and 40 Ca with a total half-life of ca. 1.3 Ga and is the basis of the K-Ca and the K-Ar geochronometers (Aldrich and Nier, 1948; Wasserburg and Hayden, 1955; Marshall and DePaolo, 1982). The K-Ar system is most often exploited using the variant 40 Ar/ 39 Ar method, wherein some of the 39 K in the sample is transmuted to 39 Ar by irradiation with fast neutrons, thereby allowing both the parent and the daughter nuclides to be measured as isotopes of Ar (Merrihue and Turner, 1966). The latter is widely used to date geological events that span Earth's history, from volcanic eruptions recorded in historical texts (e.g., Preece et al., 2018; Renne et al., 1997) to the earliest events in the solar system (e.g., Renne, 2000).
Advances in analytical precision have forced practitioners in geochronology to address systematic uncertainties that are inherent in all radioisotope dating methods, such as uncertainties in the measurement apparatus, prior assumptions made by the observer, or interference from environmental factors. For the K-Ar system, these uncertainties also include those that arise from imprecisely known decay rates of 40 K. In the geological literature, there have been two influential reviews of measurements of the 40 K decay rate. Beckinsale and Gale (1969) provided the first comprehensive review of measured and predicted decay rates, which became the basis of the convention adopted by Steiger and Jäger (1977) used by the geochronological community for the next 20 years. Subsequently, Min et al. (2000) provided a more lengthy, critical review of available specific activity data determined by direct measurements of decay and updated the derived decay rates for newer physical constants. More recently, the 40 K decay parameters were estimated by Renne et al. (2010, 2011), and although direct measurements of the 40 K decay were incorporated into the estimate, it was heavily weighted to an intercomparison with 238 U decay. The decay rates determined by Renne et al. (2010, 2011) and by Min et al. (2000) are the most frequently used in 40 Ar/ 39 Ar geochronology. These evaluations, along with those from the nuclear physics community, have been summarized recently by Cresswell et al. (2018, 2019). Despite decades of work and long-standing interest in 40 K decay, there remains uncertainty over the nature of the decay scheme. There is consensus that most 40 K decays by β − to 40 Ca or by electron capture to 40 Ar via an excited state and that a small amount (∼ 0.001 %) of 40 K decays to 40 Ar via β + . The early but influential review of 40 K decay by Beckinsale and Gale (1969) included these decay modes and also included a prediction of a second electron capture decay directly to the ground state of 40 Ar that would add an additional ∼ 2 % to the rate of decay from 40 K to 40 Ar. Many workers in geochronology (e.g., those who use the Min et al., 2000, decay constants) have ignored this prediction, and some nuclear physics tabulations do not clearly include it (Endt, 1990; Audi et al., 2003). The influential review by Min et al. (2000) described this decay mode as "unverified" and having a "questionable" existence.
However, the putative electron capture to ground-state decay-mode decay constant is of the same order of magnitude as the uncertainties in the decay rate of 40 K to 40 Ar and therefore may be a non-negligible and potentially important part of the geochronological system. Here, we describe the theoretical basis of this predicted decay mode and demonstrate the robust nature of the prediction via an analogous calculation of 22 Na decay. We describe experiments that could be made to measure this decay mode and also identify observations from nuclear physics experiments that offer evidence for its existence. We conclude that the evidence for this decay mode is strong and, despite the large uncertainty, should be considered in evaluations of the 40 K decay rate.
Historical overview
At present, 40 K has three experimentally verified decay modes (Fig. 1).
1. β − decay to 40 Ca; this mode can be verified by direct measurement of the β − emission.
2. Electron capture to an excited isomer of 40 Ar, followed by decay to the ground state of 40 Ar accompanied by emission of a 1.46 MeV γ -ray. Hereafter, we denote this decay mode as EC*. This mode can be verified by direct measurement of the γ emission.
3. β + decay from the ground state of 40 K to the ground state of 40 Ar (Engelkemeir et al., 1962); this is a very small component of the total decay rate and has been verified by direct measurement of the β + emission.
In their paper reporting the measurement of β + /β − , Engelkemeir et al. (1962), through private correspondence with Brosi and Kettle, proposed that an electron capture mode that goes directly to ground-state 40 Ar also exists, with an electron capture to positron ratio of 155. This decay mode is hereafter denoted EC ground . This decay mode has not been experimentally detected, in part because the measurement is much more difficult to make than the others. If it exists, it would contribute about 0.2 % to the total decay rate of 40 K or about 2 % to the 40 Ar branch. The EC ground decay mode was included in the review by Beckinsale and Gale (1969) and then subsequently in Steiger and Jäger (1977). This decay mode is also included in the widely used Evaluated Nuclear Structure Data File (ENSDF) and Decay Data Evaluation Project (DDEP) evaluations (Chen, 2017; Mougeot and Helmer, 2009, respectively). However, evaluations by Endt and Van der Leun (1973, 1978), Endt (1990), and Audi et al. (2003) do not explicitly include this decay mode, with Audi et al. (2003) giving a transition intensity that is the combined EC* and β + intensities. Min et al. (2000) have questioned its validity because there is no experimental verification and therefore do not include EC ground in their estimates.
Why there must be an EC ground decay mode
In nuclides that are too proton-rich and therefore radioactive, protons decay to correct this imbalance via two mechanisms. Either, (1) the nucleus undergoes electron capture wherein an orbital electron is captured by the nucleus, or (2) the nucleus emits a positron (β + ). Both processes are types of β decay and result in the transformation of a proton to a neutron to conserve charge, and they both also emit a neutrino in order to conserve lepton number and energy. These two processes are typically paired: coupled electron capture-β + is the second most abundant decay type on the chart of the nuclides, after β − decay (Audi et al., 2003). The electron capture (EC) and positron (β + ) decay modes are linked because both processes have the same initial and final nuclear states.
Figure 1. Decay scheme of 40 K (adapted from previously published decay schemes, including Leutz et al., 1965), where 1 is the electron capture branch to the excited state of 40 Ar with γ-ray emission (EC*), 2 is the electron capture direct to the ground state of 40 Ar (EC ground ), 3 is the positron decay to the ground state of 40 Ar, and 4 is the β − decay to the ground state of 40 Ca. The disputed decay mode, EC ground , is highlighted in red.

β + decay is always accompanied by EC, but the converse is not always true (Bambynek et al., 1977). This is because β + decay, unlike EC, requires a minimum amount of energy (∼ 1022 keV, equivalent to the combined rest masses of both a positron and an electron) in order to produce the emitted
positron and an electron (the latter to satisfy charge conservation). The energy between initial and final states is the Q value, which is a function of the difference between the initial and final masses. This energy is shared between the outgoing neutrino, atomic excitation of the daughter system, recoil energy, and nuclear excitation of the daughter system. The energy that dictates if positron emission is possible is denoted Q EC . In the decay of 40 K, the EC* branch has an energy difference (Q EC ) between the initial and excited isomer state of only 44 keV. In contrast, the energy difference between 40 K and the ground state of 40 Ar is 1504.4 keV (Wang et al., 2017), an energy greater than the combined rest masses of the positron and electron. Therefore, the EC* branch, with an energy difference of only 44 keV, cannot be the complement to the β + decay, and the EC ground must exist to provide the β + complement. The experimental observation of the β + decay mode comes from a single measurement by Engelkemeir et al. (1962). We rely on this measurement to make our argument for the existence of the EC ground . It is possible that the positrons observed arise from the pair production of the ∼ 1460 keV gamma; Engelkemeir et al. (1962) discuss this possibility in their experiment, calculating this as 55 %-60 % of the total positron detection rate. However, positrons produced by this mode of pair production would be monoenergetic at 440 keV, whereas the observed positron energy spectra exceed this value, with a maximum of 491 keV. A reasonable fit is also observed between the measured β + energy spectrum and the theoretical third forbidden unique energy spectrum, supporting the argument that these β + are from a decay mode rather than arising from pair production.
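The energy bookkeeping above can be restated numerically (a trivial check using only the values quoted in the text):

# Q values for 40K decay (keV), from the text
Q_EC_ground  = 1504.4    # 40K -> 40Ar ground state (Wang et al., 2017)
Q_EC_excited = 44.0      # 40K -> excited isomer of 40Ar
pair_threshold = 1022.0  # ~2 * m_e * c^2, minimum energy for beta+ emission

print(Q_EC_ground  > pair_threshold)   # True: beta+ decay to the ground state is energetically allowed
print(Q_EC_excited > pair_threshold)   # False: the EC* branch cannot be the beta+ complement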
Theory and calculation of EC ground /β +
In the decay of 40 K, the nuclide can reach a more stable state ( 40 Ca or 40 Ar) only by violating quantum selection rules. Quantum selection rules place formal constraints on the possible transitions of a system from one quantum state to another. In this case they place constraints on the possible set of transitions from the parent 40 K state to the daughter 40 Ar. Decays which violate these selection rules undergo slow, so-called "forbidden" unique transitions, which give 40 K its long ∼ 1.3 Ga half-life. The 40 K decay scheme itself is unusual because the coupled EC ground -β + and β − branches are the only third-order unique forbidden transitions known in nature. All 40 K decays undergo a parity reversal (where parity reversal is the change of sign in one of the spatial coordinates (x, y, z)) between the initial parent state and final daughter state. Therefore, we can define the selection rules as ΔJ = n + 1 together with a change of parity, where ΔJ is the change in spin from initial to final state and n is the order of forbiddenness, following Krane and Halliday (1987). We can characterize each decay mode of 40 K by its degree of forbiddenness from the above selection rule. The EC* mode undergoes a spin change of ΔJ = 4 − 2 = 2 and is classified as a first-order unique forbidden decay. The three other decay modes of 40 K, including EC ground , all undergo a spin change of ΔJ = 4 − 0 = 4 and are classified as third-order unique forbidden decays.
The EC process occurs because the atomic electrons have a finite probability to be in the nucleus with the likelihood of being captured highest for those closest to the nucleus. A theoretical description of β emission was first given by Fermi (1934), while the possibility of electron capture was first recognized by Yukawa and Sakata (1935) and later developed by Bethe and Bacher (1936). Here we use Fermi theory of β − decay to calculate the EC ground /β + in the decay of 40 K.
We can use the ratio of orbital electron capture and positron emission to infer the existence of EC ground . The ratio br is defined as

br = λ_EC / λ_β+ ,

where λ_EC and λ_β+ are the probabilities per unit time of electron capture and β + emission, respectively. In electron capture, orbital electrons can be captured from any orbital shell of the atom. The EC/β + ratio is therefore the summation of the individual capture ratios from each shell. Following Bambynek et al. (1977), the total electron-capture-to-positron ratio is

λ_EC / λ_β+ = Σ_x n_x C_x f_x / [ C(W) f_β+ ] ,

where x labels the shell, n_x is the relative occupation number, C_x contains the dependence of electron capture rates on nuclear structure giving the forbiddenness classification, similar to the shape factor in β decay (Emery, 1975), f_x is the integrated Fermi function in β decay, f_β+ is the integrated positron spectrum, and C(W) is the theoretical shape factor for allowed or forbidden transitions. A review of shape factors for 40 K transitions is provided by Cresswell et al. (2018, 2019). We initially simplify this equation to consider only the innermost K shell, the shell containing the electron with the highest probability to be captured by the nucleus,

λ_K / λ_β+ = n_K C_K f_K / [ C(W) f_β+ ] ,

where λ_K is the probability of K-shell capture. For this capture, f_K is defined as

f_K = (π/2) q_K^2 β_K^2 B_K ,

where q_K is the momentum of the neutrino, β_K is the Coulomb amplitude of the electron wave function, and B_K is the term for overlap and exchange corrections. Similarly, f_β+ is defined as

f_β+ = ∫_1^{W_0} F(−Z, W) C(W) p W (W_0 − W)^2 dW ,

where W is the total energy of the positron given by its kinetic energy E_T and rest mass m_e, the momentum of the positron is p, W_0 is the total normalized energy corresponding to the upper limit of the positron energy E_max (equal to the Q value of the decay), and F(−Z, W) is the Fermi function. We follow Bambynek et al. (1977) for the formula for C_K C(W) appropriate to unique forbidden transitions, in which L = ΔJ (and L = 1 for ΔJ = 0). The parameter λ_n entering this expression cannot be calculated in a straightforward manner and therefore we follow the typical assumption that λ_n = 1 (Huber, 2011), which simplifies the expression accordingly. In a given decay, the change in charge from the initial to final state can lead to an imperfect overlap of the wave functions of these states. Furthermore, given the indistinguishability of electrons, there is the possibility of an exchange effect wherein the captured electron does not necessarily come from the orbital where the vacancy appears. For instance, a vacancy may appear in the K shell, but the captured electron comes from an outer shell, whose vacancy is then subsequently filled by the inner-shell electron (Bahcall, 1962; Bambynek et al., 1977). We follow Bahcall (1962) in implementing corrections for these effects, resulting in B_K = 0.979. Following this, using nuclear data given in Bambynek et al. (1977), we estimate an EC ground /β + of 148. We first note that this value is in approximate concordance with the private correspondence value in Engelkemeir et al. (1962). However, this is only the capture ratio from the K shell, so we extend our model to a total electron capture ratio from all orbitals following Bosch et al. (1977). The total electron capture to positron ratio is then the sum of the individual shell contributions. We can simplify this sum by neglecting shells that make a negligible contribution: in 40 K the probability of capture is dominated by the two inner shells K and L1, with approximate probabilities of ∼ 90 % and ∼ 10 % and a negligible contribution from the shells further out.

We can therefore omit all shell captures except K and L1 to arrive at the total EC ground /β + ratio,

λ_EC / λ_β+ ≈ (λ_K / λ_β+) (1 + λ_L1 / λ_K) ,

where the ratio of the two shell captures can be evaluated as

λ_L1 / λ_K = n_L1 q_L1^2 β_L1^2 B_L1 / ( n_K q_K^2 β_K^2 B_K ) ,

with the symbols defined as above, now for the L1 shell. Using this we calculate a total EC ground /β + of 164.
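A one-line numerical check of this bookkeeping (the L1/K capture fraction below is an assumed value consistent with the ~90 %/10 % split quoted above, not a measured quantity):

ec_k_over_beta = 148.0     # K-shell-only EC_ground/beta+ ratio from the text
l1_over_k      = 0.108     # assumed L1/K capture fraction (~10% vs ~90%)
total_ratio = ec_k_over_beta * (1.0 + l1_over_k)
print(round(total_ratio))  # ~164, the total EC_ground/beta+ quoted in the text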
To further estimate the magnitude of the electron capture decay mode, we can perform another calculation of EC ground /β + following Fireman (1949). This simplified form of calculating EC ground /β + depends only on the Q value (the difference between the initial and final state energies) and is given by

λ_EC / λ_β+ = (η + 2)^8 / (0.450 η^{6.5}) · 1 / (0.0676 + 1.25 η + 8.48 η^2 + 12.5 η^3 + 1.74 η^4 + 0.079 η^6) ,

where η = Q/m_e − 2. We calculate an EC ground /β + of 272 using this method and the updated Q value of Wang et al. (2017). We note that despite discrepancies in these values for each method of evaluation, they are of the same order of magnitude. The differences in the values in these evaluations highlight the need for experimental measurement of EC ground /β + .
Comparison with other evaluations
Other theoretical evaluations of EC ground /β + for 40 K exist in the literature (Fig. 2). Pradler et al. (2013) and Mougeot (2018) report ratios of 150 and 212 ± 0.15, respectively (uncertainties are reported where they have been estimated). These workers use methods broadly similar to ours. Mougeot (2018) uses higher-order corrections for both exchange and overlap and accounts for the energy dependence of the parameter that we set equal to 1 above. Pradler et al. (2013) use the Fermi method and data from Bambynek et al. (1977) but only perform the calculation for K-shell electrons, resulting in a slightly different value from the one we report. Notably, all estimated values are of the same order of magnitude, similar to the ratio 155 reported in Engelkemeir et al. (1962) and our calculated value of 164. Currently, the most commonly used EC ground /β + value is calculated via the LogFT program, a program used in nuclear data evaluations (ENSDF collaboration, LogFT). However, the program is capable of only calculating first and second unique forbidden decay ratios, so the EC ground /β + value from LogFT of 200 ± 100 is an extrapolation, with the assumption that the increase in the ratio from second to third order is by the same factor as the increase from first to second order. Finally, Chen (2017) evaluates the 40 K decay data and reports an EC ground /β + value of 45.2 ± 1.4 without elaboration.
The variability between the modern estimates are driven primarily by choices when making the approximations necessary for these calculations to be tractable. Uncertainties on individual estimates that could be derived by propagating the uncertainties in the underlying experimental data are small, and where uncertainties are estimated, they are generally not explicated.
The value calculated by Mougeot (2018) of 212 ± 0.15 is currently the best estimate of the 40 K EC ground /β + . It is slightly higher than two other recent estimates, our value of 164 or that of Pradler et al. (2013) of 150. Given a broad consensus in calculated EC ground /β + over several decades and via a variety of methods, it appears highly likely that it falls in the range 150-212 (Fig. 2).
Comparison with 22 Na
To test the validity of our 40 K EC ground /β + estimate, we use the same calculations to estimate the experimentally constrained (EC/β + )* value for 22 Na decay.

Figure 2. Comparison of theoretically calculated EC ground /β + of 40 K in this study using three methods: (1) the Bambynek method (Bambynek et al., 1977), (2) the Fireman method (Fireman, 1949), and (3) LogFT (ENSDF collaboration, 2020, LogFT). The value of Chen (2017) is not included in the figure as it is an extreme outlier, and the authors do not explain the method they use to reach this value. Our calculated ratios are compared to previous evaluations in the literature (Engelkemeir et al., 1962; Pradler et al., 2013; Mougeot, 2018). Note the consistency in the estimated ratio from all of the methods. Uncertainties are unknown in all cases except Mougeot (2018), where the uncertainty is too small to plot.

22 Na is a radionuclide with a half-life of ∼ 2.6 years; it occurs in nature as a low-abundance cosmogenic nuclide produced by spallation of 40 Ar and is also produced synthetically by proton irradiation for use in positron emission tomography. Like 40 K, it decays by electron capture and positron emission. The main EC-β + pair for 22 Na decays initially to the excited state of 22 Ne, followed by a 1.27 MeV γ emission (Fig. 3; Bé et al., 2010). This pair has an (EC/β + )* of approximately 0.1 and accounts for > 99.9 % of the total decay. A second EC-β + pair decays directly to the ground state of 22 Ne with an (EC/β + ) ground of ∼ 0.02, but it is a minor component. Here, we calculate the (EC/β + )* for the main branch. Unlike 40 K, the dominant decay of 22 Na is the β + decay mode. This is due to the greater difference in energy between the initial and final states, as positron decay has a greater probability of occurring in decays with a greater mass difference between the initial and final states (Emery, 1975).

Figure 3. Decay scheme of 22 Na, adapted from Bé et al. (2010) and Leutz et al. (1965). An additional EC and β + decay pair that corresponds to approximately 0.056 % of the total decay of 22 Na has been omitted for clarity.

22 Na is not a perfect analogue; however, it is probably the best choice that
has both a tractable theoretical calculation and a wealth of experimental data that can be used readily for verification.
Unlike for 40 K, there are numerous measurements of the electron capture to positron ratio for decay to the excited state of 22 Ne (Fig. 4; Kreger, 1954; Vatai et al., 1968; Williams, 1964; McCann and Smith, 1969; MacMahon and Baerg, 1976; Bosch et al., 1977; Baerg, 1983; Schmidt-Ott et al., 1984; Sýkora and Povinec, 1986; Kunze et al., 1990; Nähle et al., 2008). Measurement of (EC/β + )* for 22 Na is accomplished by measuring both the gammas (which come from both the EC* and the β +* branches) and the x-rays (which come only from the EC branch). Relative to the 40 K EC ground /β + , the 22 Na (EC/β + )* is a more straightforward measurement because of the higher activity (meaning higher count rate) and the higher energy of the emitted x-ray or Auger electron, where an Auger electron is an electron from the same atom that is emitted as a vacancy in an inner shell is filled. In a decay to the excited state of 22 Ne, the de-excitation 1.28 MeV γ will be associated with both electron capture and positron decay. However, those measured 1.28 MeV γ that are not accompanied by two 0.511 MeV annihilation photons can be used to distinguish between the two processes. We use these experimental measurements to verify the calculations described above for 40 K.
Following a calculation using the Fermi method (our preferred method), similar to that used for our proposed estimate of the 40 K EC ground /β + , we estimate an (EC/β + )* of approximately 0.11. This is within the range of measured values of 0.105-0.115 (Fig. 4), suggesting that our calculation strategy for the 40 K EC ground /β + is accurate, and lends further confidence to the existence of the currently unmeasured 40 K electron capture to ground-state decay.

Figure 4. Comparison of experimentally measured (EC/β + )* ratios of 22 Na (grey circles), adapted from Kunze et al. (1990), with our calculated value (dashed black line). Note the concordance of the theoretical and experimental determinations. The uncertainty in the Baerg (1983) determination is smaller than the symbol.
Experimental verification of EC ground decay mode
In both β − and β + decay, an electron or positron is emitted that allows for direct detection and verification of the decay process. In contrast, electron capture cannot be detected directly. Methods to experimentally verify electron capture rely on indirect processes associated with the rearrangement of the atom following the capture of the orbital electron. Once the electron is captured, the atom will rearrange itself to fill the vacancy, resulting in the emission of a characteristic x-ray or Auger electron with an energy defined by the binding energy of the shell vacancy of the daughter nucleus.
In the case of 40 K, verification of the EC ground decay can be achieved by measuring the characteristic x-rays (Di Stefano et al., 2017). The orbital electron with the highest probability of capture is from the K-shell; if this electron is captured, the resulting vacancy in the K-shell may be filled by an electron from any of the other shells (e.g., L, M), and a characteristic x-ray is emitted with an energy dependent on the particular shell that fills the vacancy. It is not necessary, however, that all K-capture processes result in the emission of an x-ray. By the Auger effect, a radiationless transfer may occur wherein the K-shell vacancy is replaced by two vacancies in the next outer shell, L, or one in the next two shells; L and M. The energies of the Auger electrons emitted in these transitions depend upon the 40 Ar product resulting from K-capture. Both electron capture decays to the ground and excited state of 40 Ar ( 40 Ar 2+ ) result in the same electron configuration and x-ray emissions. Di Stefano et al. (2017) suggested tagging x-rays with the de-excitation γ associated with electron capture to 40 Ar 2+ , which has a lifetime on the order of ∼ 10 −12 s (Di Stefano et al., 2017). Measuring these tagged x-rays experimentally will be challenging since it requires identifying a low-probability decay mode with x-ray signals present against a high background from the 40 Ar 2+ state. Further, as illustrated in Di Stefano et al. (2020), it is expected that 50 EC* decays occur for every 1 EC ground decay; therefore, a detector efficiency of ≥ 98 % is required to make sure that there is fewer than one mis-tagged EC* decay for each true EC ground decay. The experiment therefore requires an x-ray spectrometer able to resolve the Ar-K x-ray from other x-rays in the background and accurately account for the x-ray-γ -ray coincidence efficiency (≥ 98 %) to quantify x-ray emission rates in excess of those from the 40 Ar 2+ state. Given the complexity involved in this experiment, a pilot study was conducted at SUERC to measure characteristic x-rays from a KCl source. The experiment was not successful because the detector was not able to resolve the Ar-K x-ray sufficiently but demonstrates the potential of this method to detect the x-rays, given a sufficiently high-resolution detector. Full details are provided in the Supplement.
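The coincidence-efficiency requirement quoted above follows from simple counting (a restatement of the ~50:1 EC*:EC ground ratio given in the text, nothing more):

ecstar_per_ecground = 50     # expected EC* decays per EC_ground decay (Di Stefano et al., 2020)
for efficiency in (0.95, 0.98, 0.99):
    mistagged = ecstar_per_ecground * (1.0 - efficiency)
    print(f"tagging efficiency {efficiency:.0%}: ~{mistagged:.1f} mis-tagged EC* per true EC_ground")
# only at >= 98% efficiency does the number of mis-tagged EC* events drop to about one or fewer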
Ongoing attempts are being made to verify this decay mode by careful detection of the characteristic x-rays by the Potassium decay (KDK) experiment (Di Stefano et al., 2017; Stukel, 2018). Experimental verification has implications for (1) rare event physics, as it is a vital component in constraining the irreducible background and verifying results in the DArk MAtter (DAMA) experiment (Pradler et al., 2013); (2) the theory of β decay (Fermi, 1934), as it is the only third-order unique forbidden electron capture decay known (Audi et al., 2003); and (3) K-Ar and 40 Ar/ 39 Ar geochronology, for which it is currently overlooked due to lack of experimental evidence. We further expand on the implications for geochronology below.
Relevance for geochronology
Geochronology with the K-Ar system requires either both the branching ratio and the total decay constant, or in the case of an 40 Ar/ 39 Ar age wherein the fluence monitor age is constrained independently of its K-Ar systematics (Merrihue and Turner, 1966), only the total decay constant. Using lower and upper bound values of EC ground /β + corresponding to 150 and 212 as described above, the decay constants calculated by Min et al. (2000) (λ EC * = 0.580±0.014×10 −10 a −1 and λ T = 5.463 ± 0.107 × 10 −10 a −1 ), and the β + /β − from Engelkemeir et al. (1962) (1.12±0.14×10 −5 ), we calculate a β + decay constant of 5.47 ± 0.69 × 10 −15 a −1 and a range of EC ground decay constants of 8.2-11.6 × 10 −13 a −1 . Combining these values with the Min et al. (2000) quantities yields a new partial decay constant for 40 K to 40 Ar (λ40 Ar ) that ranges from 0.588-0.592 × 10 −10 a −1 and total decay constant (λ T ) that ranges from 5.471-5.475 × 10 −10 a −1 . These ranges are within the uncertainties calculated by Min et al. (2000) for decay constants that do not include the EC ground decay mode. Existing and modified constraints on the decay modes are given in Table 1. Our preferred decay constants are those calculated with the EC ground /β + = 212 from Mougeot (2018). Consequently, K-Ar (and 40 Ar/ 39 Ar) ages calculated with these new decay constants will be younger than those calculated using the Min et al. (2000) decay constants. K-Ar dates are most sensitive to shifts in the decay constant because they incorporate the branching ratio, which is more strongly affected than the total 40 K decay constant. K-Ar ages will decrease by ∼ 1.4 %-2.0 % at 1 Ma, ∼ 1.1 %-1.5 % at 1 Ga, and ∼ 0.6 %-0.8 % at 4.5 Ga (Fig. 5). Ages determined using the 40 Ar/ 39 Ar method, for which the fluence monitor age is independently constrained (e.g., Kuiper et al., 2008;Rivera et al., 2011), are much less sensitive to the change in decay constant. Using Eq. (5) from Renne et al. (1998), and assuming calibration to a monitor with an age of 28.2 Ma, ages < 28 Ma increase only slightly, by < 0.002 %. There is no age difference at 28.2 Ma, the fluence monitor age. Ages then decrease for ages > 28.2 Ma, with ages decreased by 0.07 %-0.10 % at 2.5 Ga, and by 0.09 %-0.13 % at 4.5 Ga (Fig. 5).
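The size of the K-Ar age shift can be illustrated with a short calculation (a sketch only: it uses the rounded decay constants quoted above and the upper-bound EC ground /β + = 212 case, so the printed percentages approximately reproduce, but need not exactly match, the ranges quoted in the text):

import numpy as np

# Min et al. (2000) constants (no EC_ground) and revised constants from this work (upper bound)
lam_Ar_old, lam_T_old = 0.580e-10, 5.463e-10   # a^-1
lam_Ar_new, lam_T_new = 0.592e-10, 5.475e-10   # a^-1

def k_ar_age(ar_over_k, lam_Ar, lam_T):
    # standard K-Ar age equation
    return np.log(1.0 + (lam_T / lam_Ar) * ar_over_k) / lam_T

for t_old in (1e6, 1e9, 4.5e9):
    # 40Ar*/40K ratio implied by an age t_old under the old constants
    ratio = (lam_Ar_old / lam_T_old) * (np.exp(lam_T_old * t_old) - 1.0)
    t_new = k_ar_age(ratio, lam_Ar_new, lam_T_new)
    print(f"{t_old:.2e} a: {100.0 * (t_new - t_old) / t_old:+.2f} % age change")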
The age of fluence monitors such as the Fish Canyon tuff sanidine (e.g., Morgan et al., 2014) determined by intercomparison with astronomically tuned ages of ash beds (Kuiper et al., 2008;Rivera et al., 2011) is also sensitive to revision of decay constants. Using the data published by Kuiper et al. (2008), and incorporating an EC ground decay mode, we calculate a new age for Fish Canyon sanidine of 28.200 ± 0.044 Ma, nominally lower but indistinguishable from the published value of 28.201 ± 0.044 Ma. Overall, the effects of an EC ground decay mode are unlikely to be significant for most current applications of 40 Ar/ 39 Ar geochronology. However, given the levels of analytical precision attainable by the K-Ar dating approach when dating geologically recent materials by K-Ar (e.g., Altherr et al., 2019), the EC ground decay mode will impact the accuracy of this chronometer.
Conclusion
The Fermi theory of β decay has decades of experimental support and is well established. We demonstrate this here by using this theory to accurately calculate the electron capture to positron ratio of 22 Na, a quantity that has been experimentally verified. We have used this information to demonstrate the high likelihood that the suspected second electron capture decay mode of 40 K exists.

Table 1 (excerpt, this work): λ EC ground = 11.6 ± 1.5 × 10 −13 a −1 (13 % uncertainty), λ 40Ar = 0.592 ± 0.014 × 10 −10 a −1 (2.4 %), λ T = 5.475 ± 0.107 × 10 −10 a −1 (2.0 %).

Figure 5. The change in age, Δage, is the age of a given sample with the decay mode to ground state included, subtracted from the age with the decay mode to ground state omitted. Panel (a) shows the change in age using the 40 Ar/ 39 Ar equation with independently calibrated standards using both the lower bound (EC/β + = 150; grey) and upper bound (EC/β + = 212; black). Panel (b) shows the change in age using the K-Ar equation using both the lower bound (EC/β + = 150; grey) and upper bound (EC/β + = 212; black). Inset figures in each panel show the fractional differences in age from the inclusion of the upper- and lower-bound EC/β + values. The larger difference in ages for the K-Ar system is due to the dependence on both the total decay constant and the branching ratio.

Based on the calculations of Mougeot (2018), the best estimate of the partial decay constant for 40 K direct to ground-state 40 Ar is 11.6 ± 1.5 × 10 −13 a −1 (2σ), and other calculations are no lower than about 8.2 × 10 −13 a −1 . Combining this with the decay constants published by Min et al. (2000) results in revised values of λ 40Ar = 0.592 ± 0.014 × 10 −10 a −1 and λ T = 5.475 ± 0.107 × 10 −10 a −1 . This addresses a long-standing question in K-Ar and 40 Ar/ 39 Ar geochronology and provides future workers with confidence that the 40 K EC ground decay mode exists. Just as important as providing support for its existence, we also demonstrate that the magnitude of this decay mode is small enough that neglecting it has not yet resulted in significantly biased geochronological 40 Ar/ 39 Ar data. The same cannot be stated for the K-Ar dating approach, especially for geologically young materials.
Despite the strong grounding in theory, the EC ground decay mode has yet to be detected. The next step is experimental verification to determine the branching ratio. This will allow for a more complete evaluation of uncertainties associated with the decay mode and the branching ratio. This experiment is difficult but not intractable. Data availability. All data used are available in the Supplement and can be accessed publicly via https://doi.org/10.6084/m9.figshare.13280795 (Carter et al., 2020).
Author contributions. The study was conceived by JC and RBI. JC, AJC, and DCWS calculated the ratio of electron capture to beta activities and measured x-rays. JC, RBI, DFM and MMT calculated the effects for geochronology. JC wrote the manuscript with contributions from all authors.
|
v3-fos-license
|
2021-10-08T01:16:14.508Z
|
2021-10-06T00:00:00.000
|
238419178
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP07(2022)024.pdf",
"pdf_hash": "f9d86f9e97e8e7bb819dafe6abe0d1d4526a15d4",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3057",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "67befb799c9aebc3a48ef360ee6f3f5670711b01",
"year": 2021
}
|
pes2o/s2orc
|
Analytic Periods via Twisted Symmetric Squares
We study the symmetric square of Picard-Fuchs operators of genus one curves and the thereby induced generalized Clausen identities. This allows the computation of analytic expressions for the periods of all one-parameter K3 manifolds in terms of elliptic integrals. The resulting expressions are globally valid throughout the moduli space and allow the explicit inversion of the mirror map and the exact computation of distances, useful for checks of the Swampland Distance Conjecture. We comment on the generalization to multi-parameter models and provide a two-parameter example.
Introduction
The swampland program [1] aims to distinguish those effective field theories that can be UV completed to quantum gravity theories from those that cannot. So far, this program has produced many interrelated conjectures, see [2][3][4][5] for reviews. To test said conjectures and to better understand them a good comprehension of examples is required.
The main experimental testing grounds for these conjectures are compactifications of type II string theory or F-theory on Calabi-Yau (CY) manifolds. Compactifications to four dimensions are described in terms of effective N = 2 supergravity theories, or N = 1 in the presence of fluxes and orientifold planes. Many of the properties of the resulting effective theories are encoded in the periods of the CY, making their computation crucial in explicit analyses. For example, an N = 1 supergravity theory can be completely described in terms of three objects: the superpotential W , the Kähler potential K and the gauge kinetic functions. All of these quantities are expressible via the periods of the CY one compactifies on.
The usual strategy when studying flux compactifications is to obtain locally valid expressions for these objects by expanding the periods of the CY around a point in moduli space and then requiring the fluxes to be chosen in a self-consistent way such that the initially assumed approximation holds. As an example, the recently proposed constructions of exponentially small superpotentials [6][7][8][9][10][11][12][13] develop the periods around the large complex structure point (LCS) or the conifold and drop the higher order exponentially small corrections. Globally valid expressions for the periods remove this limitation and would allow easier scans over flux vacua.
Local expressions for the periods 1 were also used in [15][16][17] to study the swampland distance conjecture [18]. Recently, the structure of the periods close to boundaries was determined in [19]. Near the boundary the series expansion converges slowly enough to significantly modify the distances computed by integration over trajectories traversing said regions. This was quantified in [20], where some of the distances in [15] were computed exactly by exploiting heterotic-type IIA duality.
A necessary step in both computations described above is the inversion of the mirror map. The mirror map is initially defined around the LCS, characterized as the point of maximal unipotent monodromy in complex structure moduli space, and then extended to other regions of moduli space through analytic continuation, which for example defines the quantum volumes on the mirror Kähler moduli space. The monodromy structure around the LCS determines the well-known leading behavior of the mirror map in this regime, namely 2πit i = log(z i ) + a 1 z + a 2 z 2 + · · · , (1.1) where t i are the Kähler moduli and z i the complex structure moduli of the mirror manifold. The infinite series is responsible for the exponentially suppressed corrections upon inversion of the mirror map. This local expansion breaks down as we move away from the LCS and the local expression of the mirror map around other loci like the conifold is known to behave differently. As we will see in examples throughout the paper, one can combine the logarithmic term with the infinite series to form ratios of elliptic integrals that are valid throughout the whole moduli space, and that upon series expansion reproduce the expected local behavior of the mirror map. For hypersurfaces in projective spaces and complete intersection CYs the periods have been computed in [21,22] in terms of hypergeometric functions and parameter derivatives thereof. The main result of this paper is that these derivatives are in the cases of genus one curves and K3 manifolds expressible purely in terms of hypergeometric 2 F 1 functions. The origin of these expressions lies in the fact that the Picard-Fuchs operators of K3 manifolds are symmetric squares of second order differential operators [23,24]. In general, these second order operators are not nicely behaved, but the introduction of twists allows for a unified description in terms of a basis of hypergeometric 2 F 1 functions related to a small classified set of integer sequences. Symmetric squares have already been studied to find modular expressions for periods [23,25,26] or to systematically generate CY operators [27]. The appearance of modular expressions in periods was first noted in [28]. Similar expressions can be found in [29][30][31][32][33][34][35][36].
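The statement above, that the logarithm and the infinite series can be combined into a ratio of complete elliptic integrals valid throughout moduli space, can be illustrated in the simplest case (a sketch using the classical nome of the Legendre family; the normalization of the mirror map coordinate used in this paper may differ by conventions):

from mpmath import ellipk, exp, log, pi, mpf

m = mpf("0.01")
# ratio of complete elliptic integrals: a globally defined closed form, no series needed
nome = exp(-pi * ellipk(1 - m) / ellipk(m))
# its local behavior near m = 0 is the familiar logarithm plus power corrections
print(nome, m / 16)             # nome ~ m/16 + 8*(m/16)^2 + ...
print(log(nome), log(m / 16))   # the logarithms agree up to small corrections of order m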
The periods expressed purely in terms of hypergeometric 2 F 1 functions allow a global description of the moduli space, no longer dependent on infinite series representations. This can be used to directly test the refined swampland distance conjecture in this setup. We perform this computation for two examples, the manifolds P 4 1,1,2,2,2 [8] and P 4 1,1,2,2,6 [12], finding agreement with [20]. This paper is organized as follows. In Section 2 the idea behind the computation of the periods is discussed and applied to some examples. In Section 3 we generalize this method beyond the usual hypergeometric setup. Section 4 studies a two-parameter model. Finally, Section 5 summarizes our findings and gives an outlook on possible further generalizations.
Computation of the periods
Our goal is to compute distances along the boundary of the complex structure moduli space of CY threefolds. For this we require the metric g_{i\bar j} on said moduli space. The metric originates from a Kähler potential K as

g_{i\bar j} = ∂_i ∂_{\bar j} K ,   K = − log ( i ∫_X Ω ∧ Ω̄ ) .

Here Π denotes the period vector defined by

Π_α = ∫_{γ_α} Ω(x) ,

where Ω(x) is the unique holomorphic (3, 0)-form of the CY threefold X, the moduli space coordinates are denoted x, and {γ_α}, α = 0, . . . , 2h^{2,1}(X) + 1, is an integral symplectic basis of 3-cycles. The symplectic pairing of the periods is given by the intersection matrix Σ of this basis, in block form Σ = ( 0 , 1 ; −1 , 0 ), so that the Kähler potential can be written in terms of the periods as K = − log ( i Π̄^T Σ Π ). Thus a knowledge of the periods is crucial for the computation of the distances (a small numerical illustration is given after the list below). We will obtain closed forms for the periods using four ingredients: • the known formulae [21,22] for the periods around the LCS point, which express the periods in terms of parameter derivatives of hypergeometric functions; • Clausen's identity, reducing the problem to the solutions of second order operators; • a relation between the periods originating from supersymmetry [37], which allows expressing the periods in terms of the first parameter derivatives; • the so-called ǫ-expansion of a family of hypergeometric functions to first order [38,39].
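As a minimal numerical sketch of this logic (illustrative only: a two-component toy period vector of elliptic-curve type is used in place of the full (2h^{2,1}+2)-component vector, and the sign convention for the pairing is chosen so that the argument of the logarithm is positive):

import numpy as np

Sigma = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic pairing in block form

def periods(tau):
    # toy period vector Pi = (tau, 1); a genuine computation would use the
    # hypergeometric period expressions discussed in this section
    return np.array([tau, 1.0], dtype=complex)

def kahler_potential(tau):
    Pi = periods(tau)
    return -np.log((1j * np.conj(Pi) @ Sigma @ Pi).real)   # K = -log(i * Pi^dagger Sigma Pi)

def metric(tau, h=1e-4):
    # g_{tau taubar} = d_tau d_taubar K = (1/4)(d_x^2 + d_y^2) K, via central differences
    x, y = tau.real, tau.imag
    K = lambda a, b: kahler_potential(a + 1j * b)
    d2x = (K(x + h, y) - 2 * K(x, y) + K(x - h, y)) / h**2
    d2y = (K(x, y + h) - 2 * K(x, y) + K(x, y - h)) / h**2
    return 0.25 * (d2x + d2y)

print(metric(1j))   # ~0.25, i.e. the Poincare metric 1/(4 Im(tau)^2) at tau = i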
When combined, these four ingredients are sufficient to compute the periods of K3 manifolds given by hypersurfaces in weighted projective spaces. These appear as fibers of Calabi-Yau threefolds. Thus, the knowledge of these periods allows the analytic computation of the distances in moduli space along deformations of these fibers. In Section 3 we will generalize this construction to more complicated geometries, including, for example, complete intersections in Grassmannian ambient spaces, by identifying a basis of integral sequences which can be resummed to closed forms. In the remainder of this section we describe the computation of the periods in detail and calculate the distances in two hypergeometric examples. To denote the complete intersections in weighted projective spaces we employ the usual configuration matrix notation, listing the ambient (weighted) projective factors together with the degrees of the defining homogeneous polynomials in each of them. For these geometries the periods around the LCS point are well known. One can write them down explicitly in terms of derivatives of a fundamental period of the form

ω_0(x; ρ) = Σ_{n=0}^{∞} c(n + ρ) x^{n+ρ} .

Here and in the following we use a multi-index notation to simplify the expressions, i.e. x^n = x_1^{n_1} x_2^{n_2} · · · x_{h^{2,1}}^{n_{h^{2,1}}} and Σ_{n=0}^{∞} = Σ_{n_1=0}^{∞} Σ_{n_2=0}^{∞} · · · Σ_{n_{h^{2,1}}=0}^{∞}. The ρ_i are auxiliary variables that play a role in the computation of the rest of the periods and should be set to zero at the end. The expansion coefficients c(n + ρ) are given in [22] as ratios of Γ functions whose arguments are linear combinations of the n_i + ρ_i determined by the weights and degrees of the configuration. The full basis of periods is obtained by acting on the fundamental period with differential operators in the ρ_i, schematically ∂_{ρ_i}, K_{ijk} ∂_{ρ_j} ∂_{ρ_k} and K_{ijk} ∂_{ρ_i} ∂_{ρ_j} ∂_{ρ_k}, where the K_{ijk} are the classical triple intersection numbers and i = 1, . . . , h^{2,1}; the period vector Π is given by these derivatives of ω_0 evaluated at ρ = 0. This period vector represents a basis of solutions to the Picard-Fuchs system, but is not yet in an integer symplectic basis. The periods expressed in said basis are obtained by acting with a transition matrix m, Π_symplectic = m · Π. The matrix m can be fixed up to Sp(2h^{2,1} + 2, Z) transformations by known methods at the LCS [28]. By performing a resummation of the fundamental period we can express it as a sum of hypergeometric functions in which all of the dependence on one of the moduli space coordinates is encapsulated in the argument of the hypergeometric functions,

ω_0 = Σ_{n_2, ..., n_{h^{2,1}}} f(n_i, ρ_i) x_2^{n_2+ρ_2} · · · x_{h^{2,1}}^{n_{h^{2,1}}+ρ_{h^{2,1}}} pFq(a; b; x_1) .   (2.11)

Here f(n_i, ρ_i) denotes a combination of Γ functions which does not depend on the moduli. The parameters a and b of the hypergeometric function depend on the n_i and the ρ_i. We will be interested in one-dimensional subspaces of the moduli space where all but one modulus are fixed at the LCS point, i.e. x_i = 0 for i > 1. This immediately implies that terms in (2.11) with n_i > 0 will not contribute. Thus, (2.11) simplifies to the single term with n_i = 0 for all i > 1. This simplification holds as well for the ∂_{ρ_i} derivatives, since they yield terms proportional to x_i^{ρ_i} log(x_i). We keep the x_i strictly at x_i = 0, while we allow x_1 to take finite values along which we will integrate later on to compute distances along the boundary of the moduli space. The ∂_{ρ_i} f(ρ_i) give combinations of Γ and polygamma functions that are finite. Therefore, even if at certain x_1 we could have a competing divergence coming from the hypergeometric function or its parameter derivatives, this will happen at a zero-measure set that will not contribute to our discussion. Hence, for our purposes we can say that for the n_i = 0 case the leading terms in the x_i coordinates come from the log(x_i) pieces without further discussion. Thus, in the regime of interest, to obtain the relevant terms we only need to determine pFq(a; b; x_1) and its ρ_1 derivatives.
For this, the hypergeometric function is expanded into a power series in ǫ (2.14). This power series is called the ǫ-expansion, as the same expansions appear in the computation of scattering amplitudes in dimensional regularization [40]. The strategy to compute the derivatives is the following. First we compute the ǫ-expansion of a certain family of 2F1-functions corresponding to elliptic and genus one curves. Then, using Clausen's identity and generalizations thereof, we compute the ǫ-expansion of a class of 3F2-functions, which will turn out to describe the geometry of K3 manifolds. The relevant family of 2F1-functions is the Legendre family of hypergeometrics

f_a(x) = 2F1(a, 1 − a; 1; x) .    (2.15)

Here 0 < a < 1 is a real parameter. This type of hypergeometric function is closely related to the Legendre functions and can be expressed using them. Moreover, for a = 1/2 the function is the complete elliptic integral K(m):

f_{1/2}(m) = 2F1(1/2, 1/2; 1; m) = (2/π) K(m) .

We will give all results of this paper in terms of either the elliptic integral or the hypergeometric 2F1 functions. One could instead choose to describe them using the Legendre functions, but the hypergeometric version will allow for a unified treatment.
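As a quick numerical illustration of the objects just introduced (not part of the derivation in this paper), the relation f_{1/2}(m) = (2/π) K(m) and a parameter derivative of the Legendre family can be checked directly with Python's mpmath library. The particular ǫ-deformation used below, shifting all three parameters by ǫ, is only one possible choice and is an assumption made for the example; the sample values of a, x and m are arbitrary.

from mpmath import mp, hyp2f1, ellipk, diff, pi

mp.dps = 30  # working precision in decimal digits

# Legendre family f_a(x) = 2F1(a, 1-a; 1; x)
f = lambda a, x: hyp2f1(a, 1 - a, 1, x)

# Check f_{1/2}(m) = (2/pi) * K(m) at a sample argument
m = mp.mpf("0.37")
print(f(mp.mpf("0.5"), m) - 2 / pi * ellipk(m))  # ~ 0 up to working precision

# Leading term of an epsilon-expansion: numerical derivative of an
# epsilon-deformed 2F1 with respect to epsilon at epsilon = 0.
a, x = mp.mpf(1) / 3, mp.mpf("0.2")
g = lambda eps: hyp2f1(a + eps, 1 - a + eps, 1 + eps, x)
print(diff(g, 0))  # first epsilon-derivative, to be compared with closed forms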
The hypergeometric function (2.15) describes the fundamental period ω 0 of the four complete intersection elliptic curves P respectively. Note that these are exactly the four elliptic functions appearing in Ramanujan's theory of elliptic functions to alternative bases [41]. The Picard-Fuchs system of an elliptic curve has two independent solutions. The second solution is obtained from the fundamental period by computing Thus, we need the ǫ-expansion of a hypergeometric 2 F 1 function to leading order in ǫ. Similar problems have appeared quite often in the literature, see for example [40,[42][43][44][45][46][47]. For integer and half-integer parameters the computation is even automatized and a Mathematica package is available [48]. In the general case computing such expansions is a difficult task. However, for the very special form appearing in (2.18) we are in luck: a closed form can be found in terms of a harmonic series. The derivatives only with respect to the upper parameters can be found in [38] and the one with respect to the lower one in [39]. The details of the computation can be found in Appendix A. The result is where H n denotes the harmonic numbers 3 . This expression simplifies drastically when one of the four explicit values of a is inserted. For example, for a = 1/2 While computing ǫ-expansions directly is difficult, a surprisingly large group of periods can be computed using only this single result. For example, the four functions f a fulfill a very interesting identity due to Clausen: The 3 F 2 functions appearing in this identity are an example of the fundamental periods of one-parameter K3 surfaces. This allows us to compute the A-cycle periods of these K3s in a closed form, for details we refer again to Appendix A. For the B-cycle periods one also needs the second term in the ǫ-expansion. But here supersymmetry comes to aid: type II string theory compactified on a K3 surface results in an N = 4 supersymmetric theory. Therefore, there are no instanton corrections to the prepotential, which results in a constraint relating the periods among themselves. This can be used to express the periods purely in terms of the first term of the ǫ-expansion. For one-parameter models, such a relation can be derived directly from the properties of the hypergeometric system, with the resultω This is the kind of relation needed to express everything in terms of first derivatives, as onlyω 2 (x) depends on second derivatives. A derivation of this identity using the bilateral hypergeometric function is given in Appendix A. The relation also follows from the fact that the Picard-Fuchs operator forf a (x) is the symmetric square of the operator of f a (x). We can see this at work for the K3 periods ω, which can be written as In the first equation we see how Clausen's identity relates the K3 periods to those of elliptic surfaces. Moreover, the mirror map takes the especially simple form Comparing this to the mirror map of the elliptic curve one can see that the only added complexity lies in the rational function appearing in the argument.
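The classical form of Clausen's identity, which underlies the reduction from K3 periods to elliptic-curve periods described above, reads 2F1(a, b; a + b + 1/2; x)² = 3F2(2a, 2b, a + b; 2a + 2b, a + b + 1/2; x). The identity used in this paper is the version adapted to the Legendre family f_a, so the snippet below only verifies the classical statement numerically, with arbitrary sample parameters.

from mpmath import mp, hyp2f1, hyp3f2

mp.dps = 30

def clausen_residual(a, b, x):
    # Difference between the two sides of Clausen's classical identity
    lhs = hyp2f1(a, b, a + b + mp.mpf("0.5"), x) ** 2
    rhs = hyp3f2(2 * a, 2 * b, a + b, 2 * a + 2 * b, a + b + mp.mpf("0.5"), x)
    return lhs - rhs

print(clausen_residual(mp.mpf(1) / 6, mp.mpf(1) / 3, mp.mpf("0.15")))  # ~ 0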
Examples
In the rest of this section we will apply the developed methods to K3 fibered CY threefolds obtained by resolving singularities of degree eight and twelve hypersurfaces on P 4 1,1,2,2,2 and P 4 1,1,2,2,6 respectively and use the result to compute distances in the moduli space. Their quantum geometry has been analyzed in great detail using mirror symmetry in [21,22,28]. The Fermat hypersurfaces among them are defined by the polynomials At z 1 = z 2 = 0 these geometries present a curve C of A 1 singularities that when blown up lead to an exceptional divisor E having the structure of a P 1 fibration over C. The degree one polynomials generate a linear system |L| which is a P 1 fibration with fiber L = K3. The degree two polynomials generate a second linear system |H| = |2L + E|. The complexified Kähler cone can then be parametrized by Their structure as K3 fibrations can be made obvious by equivalently expressing these geometries as complete intersection Calabi-Yau manifolds (CICY) where the first factor corresponds to the K3 fiber and the second one to the P 1 base. With the parametrization of the Kähler cone chosen for the hypersurface the J i coincide with the induced Kähler forms from the i-th projective factor in the CICY representation [22], and therefore we will loosely refer to J 1 and J 2 as the Kähler form of the fiber and the base respectively. The intersection ring was computed in [28]. Since we are dealing with K3 fibrations we have L 2 = 0. The non-vanishing triple intersections are given by (2.32) The mirror duals of these hypersurfaces can be constructed as the families of hypersurfaces defined by quotiented by appropriate discrete groups. The phase structure of the (mirror) moduli space is described in [49] and depicted in terms of the (ψ, φ) variables in Figure 1. We will be interested in computing the distance along the boundaries of moduli space. Using the language of the A-side variables with the splitting where ξ i is the axion and τ i its saxionic partner, this translates to keeping one of the τ i strictly infinite. For example, the distance along the boundary of the P 1 phase between the small ψ and large φ point and the conifold point corresponds to keeping τ 2 → ∞ while varying t 1 , which greatly simplifies all the expressions involved.
In the large volume regime the N = 2 special geometry data is encapsulated in the prepotential.

Figure 1: Schematic plot of the (mirror) moduli space of P^4_{1,1,2,2,6}[12], taken from [15]. The shown distances are the ones computed in said paper. The moduli space of P^4_{1,1,2,2,2}[8] has an analogous structure. Among the depicted phases is the Landau-Ginzburg orbifold phase.

The a_ij appearing in the prepotential can be fixed modulo an irrelevant integer part by demanding that the prepotential gives periods with integer monodromies. The Euler characteristic of the manifold is denoted by χ(X) and the instanton corrections are given by F_inst and exponentially suppressed at large volume. From this prepotential one obtains an integral symplectic basis of periods. The Kähler potential from which the moduli space metric stems can then be reconstructed from the periods as explained in Section 2. Taking the leading terms in t_1 one obtains the corresponding leading-order Kähler potential, and similarly for the leading terms in t_2. Note that the instanton expansion breaks down in the non-geometrical phases of the moduli space and therefore demands an analytical continuation to these regions. These issues are best treated using the hypergeometric structure of the periods analyzed in Section 2. Using the procedure described there to obtain the leading terms on the mirror side, constructing the integral symplectic basis and applying the mirror map will yield the same result as above, as long as we only keep the leading terms in t_1 or t_2. When keeping τ_1 → ∞, the relevant component of the metric for the trajectory along the boundary of moduli space reproduces the distance computed in [15] along the boundary of the orbifold phase in moduli space. For the distance along the boundary of the P^1 phase we keep τ_2 → ∞ and compute the metric component (2.41). Before we can calculate the distance along this boundary we need to compute the initial and final points of the trajectory in the t_1 variable.
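For orientation, the large-volume prepotential and the symplectic period basis referred to at the beginning of this passage are commonly written as follows. Signs and normalizations differ between references, so this is a sketch in one standard convention rather than the exact expression used in this paper.

F(t) = -\frac{1}{6} K_{ijk}\, t^i t^j t^k - \frac{1}{2}\, a_{ij}\, t^i t^j + \frac{c_2 \cdot J_i}{24}\, t^i + \frac{\chi(X)\, \zeta(3)}{2\, (2\pi i)^3} + F_{\mathrm{inst}} ,
\qquad
\Pi = \left( 1, \; t^i, \; 2F - t^i \partial_i F, \; \partial_i F \right)^T .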
2.1 P 4 1,1,2,2,6 [12] The topological numbers corresponding to the P 4 1,1,2,2,6 [12] hypersurface and appearing in its prepotential are given by (2.42) To simplify some expressions it is convenient to define a new set of coordinates in the complex structure moduli space of the mirror, related to the ones used in (2.6) and subsequent equations by (2.43) Above we obtained the relevant metric to compute the distance along the boundary of the P 1 phase, i.e. for x 2 = 0. The trajectory starts on the B-side in the small ψ and large φ point corresponding to x 1 → ∞ and ends at the conifold point given by x 1 = 1, see Figure 1. This distance was numerically approximated in [15] to be ∆x 1 ≈ 0.27. Recently, in [20] this value was analytically computed by exploiting heterotic-type IIA duality, obtaining the result The fundamental period of P 4 1,1,2,2,6 [12] is given after resummation in the x 1 variable and keeping only the n 2 = 0 terms as explained in Section 2 by This expression is valid in the large complex structure regime and in other phases of the moduli space by appropriately choosing the branch of the hypergeometric function. In the large complex structure phase it can be associated to the fundamental period of the K3 fiber, which explains the appearance of a 4 F 3 hypergeometric function.
The factors in the rest of the periods coming from the action of the ∂_ρ2 derivative are dominated by the log(x_2) piece, which leaves the hypergeometric function intact. Therefore, we can set ρ_2 = 0 from the very start and notice that we only need to know how to compute the action of ∂_ρ1 on the hypergeometric function at ρ_1 = 0. This corresponds to the type of hypergeometric ǫ-expansion that we treat in detail in Appendix A, where we give an analytic closed form for it.
This allows us to obtain a closed form for the mirror map of the A-periods along the boundary. The A-periods obtained after dividing by the fundamental period take, at this level of approximation, a simple form. Exploiting the ǫ-expansion of the 4F3 hypergeometric function yields the boundary mirror map (2.47). From it we obtain the initial and final points of the trajectory. From (2.47) one can also explicitly check that all the points in the trajectory along the boundary of the P^1 phase lie on the circle of radius one in the complex plane. Parametrizing the trajectory as t_1(λ) = (λ, √(1 − λ²)) we arrive at the distance (2.49), which coincides with the result given in [20]. This agreement was already apparent from (2.47): using the fact that the inverse of the j-function can be written in terms of the same hypergeometric functions, we can rewrite the boundary mirror map accordingly, i.e. we have recovered the results of [28,50] from the ǫ-expansion of the boundary mirror map.
2.2 P^4_{1,1,2,2,2}[8]

We repeat the same steps for the P^4_{1,1,2,2,2}[8] hypersurface, with the corresponding topological numbers appearing in its prepotential. As before, we define a new set of coordinates on the mirror moduli space. The fundamental period of P^4_{1,1,2,2,2}[8] after resummation in the x_1 variable and keeping only the n_2 = 0 terms is given in (2.54). The boundary mirror map for t_1 follows from it. Using the closed form for the ǫ-expansion of the 4F3 hypergeometric function given in Appendix A we can express the boundary mirror map as (2.56), where K(m) = (π/2) 2F1(1/2, 1/2; 1; m) is the complete elliptic integral of the first kind. Note that the infinite sum combines with the logarithm to form the elliptic integrals. Of course, if the right hand side of (2.56) is expanded into a power series around x_1 = 0, the logarithm reappears and one obtains the usual form of the mirror map.
The initial and final points of the trajectory are given in (2.57). From (2.56) one can see that all the points in the trajectory lie on the circle of radius 1/√2 in the complex plane. Parametrizing the trajectory as t_1(λ) = (λ, √((1 − 2λ²)/2)) we arrive at the distance (2.58), which coincides with the result given in [20]. There, the boundary mirror map was given in terms of the Hauptmodul j_2^+ by considerations of modularity, cf. (2.59). An explicit expression for j_2^+ in terms of q_{t_1} = exp(iπ t_1) is given in [51]. Thanks to the hypergeometric treatment of the periods we have automatically obtained the exact inverse (2.56) of this relation.
Beyond hypergeometry
Up to now, we have focused on the computation of periods expressible via hypergeometric functions. This allows for the computation of the periods for one-parameter subspaces in toric CICYs. If one leaves this setup, more complicated GKZ systems appear. In this section we will venture beyond the hypergeometric case to obtain the periods of non-toric surfaces as well. This includes all one-parameter K3 surfaces of [23] and the fiber in all examples of [52] when taking the limit z_2 → 0 (see [53,54] for related constructions involving Grassmannians). Moreover, all Picard-Fuchs operators of one-parameter Fano threefolds can be computed this way. This includes, for example, sections of the Grassmannian G(2,6) by a codimension 5 plane. One-parameter Fano threefolds are classified; for the complete list see [55].
With the periods at hand one could repeat the analysis of Section 2 for the distances along the boundary of the moduli space for CY threefolds admitting a fibration by these surfaces. The list [23] contains examples with increasing values of N for the relevant congruence subgroup Γ 0 (N) + , going up to N = 30, which would allow for an analysis similar to that of [20].
The hypergeometric differential operators for elliptic curves are of the form

[ θ² − x (θ + a)(θ + 1 − a) ] f_a(x) = 0 ,

or, for the one-parameter K3 case,

[ θ³ − x (θ + 1/2)(θ + a)(θ + 1 − a) ] f̃_a(x) = 0 ,

where we have used the logarithmic derivative θ = x ∂_x. The general second order operator equation can be written as

[ Σ_{n=0}^{m} x^n P_n(θ) ] f(x) = 0

for some fixed m, known as the degree of the operator, and polynomials P_n of degree 2. In this paper we will only slightly venture beyond the hypergeometric case, where m = 1, by taking m = 2. The interesting operators of this type have been studied in [56-63]. In those papers the differential equation

[ θ² − x (A θ² + A θ + B) + C x² (θ + 1)² ] f(x) = 0     (3.4)

or the corresponding (Apéry-like) recursion for the coefficients of a power series Ansatz f(x) = Σ_{n=0}^∞ u_n x^n,

(n + 1)² u_{n+1} = (A n² + A n + B) u_n − C n² u_{n−1} ,     (3.5)

was considered. Here A, B and C are integer parameters. The solutions found with integer coefficients are all either polynomials, Legendre functions or hypergeometric functions. In addition to these infinite families of solutions, there are six sporadic solutions, denoted A-F by Zagier, and an additional hypergeometric solution that does not fall under the one-parameter family of hypergeometric solutions, denoted G. We list all solutions in Table 1.

Table 1: Solutions to Apéry-like recursions (3.5). Note that the 2F1(1/12, 5/12; 1; y) function in case D is related to 2F1(1/6, 5/6; 1; f(y)) by a rational transformation. Beukers [64] gives solutions for all of the cases A-F in terms of 2F1(1/12, 5/12; 1; x) functions which are related to our solutions by Möbius transformations.
We have defined to shorten the expressions in the table. The Legendrian cases as well as the polynomial ones are special cases of hypergeometric functions. Thus, all cases reduce to hypergeometric functions of the form 2 F 1 (a, 1 − a; 1; x) and rational functions. The ǫ-expansion of these function is discussed in Appendix A. Four of the sporadic cases, A, B, E and F, seem to follow a similar structure and case G is a sum of two hypergeometric functions, while the remaining two cases are more complicated. Especially case D appears a priori complicated, but noticing that and that the j-function of the corresponding elliptic curve is given by explains the rather complicated structure in terms of the modularity of the underlying curve.
We will see that the first four entries are related to the fundamental periods of K3 manifolds constructed as complete intersections in toric varieties [22]. Constructing the other cases is more involved, e.g. case D appears as a fiber in all examples of [52] when taking the limit z 2 → 0. Moreover, the sporadic solutions appear in the Picard-Fuchs systems of the 17 one-parameter Fano threefolds [55].
All solutions f (x) of (3.4) enjoy a generalization of Clausen's identity in the form of a twisted symmetric square, i.e. their square is related to the solutions of the differential equation The geometric origin of the term twisted symmetric square is the following: The K3 manifolds can be written as twisted products of two elliptic curves E t × E ı(t) , where ı denotes an Atkin-Lehner involution. Details of this construction can be found in [55,[65][66][67]. For this paper the important fact is that the twisted symmetric squares have similar properties to those of the usual symmetric squares, especially the relations between the solution spaces of the operators. There exist several twists for each second order operator. For example, the solutions to another third order operator θ 3 − 2x(2θ + 1)(Âθ 2 +Âθ +B) + 4x 2Ĉ (θ + 1) 3 f (x) = 0 (3.11) are obtained via When looking through the list of diagonal K3 Picard-Fuchs operators in [23], one finds that the operators in (3.9) and (3.11) are not sufficient to cover all cases. Indeed, allowing for another parameter D in the recursion, three more sequences were found in [68]. There the operator was studied. The additional solutions correspond to one of the two types, either 14) or Together with the symmetric squares of the operators in Table 1 these give solutions to all operators listed in [23]. Note that the exact parameter combination for the Γ 0 (2) − cases does not appear. The reason for this is that in this case the parameters take values outside the parameter range considered in [68], see Table 2. While the combination gives an integer sequence like in the other cases, we could not identify a generating function for this case. However, there is also no known geometric construction associated to it, so we will simply ignore it. These represent, as in the other cases, also symmetric squares of quadratic operators. If we know the ǫ-expansion of all f (x), we are also able to compute the ǫexpansion of thef (x). The latter fulfill the Picard-Fuchs equations of certain diagonal K3 manifolds. Thus, it is possible to use the methods of this paper to give closed forms for these manifolds. As an example we take the sequence E. This sequence follows the recursion n 2 u n − (12(n − 1) 2 + 12(n − 1) + 4)u n−1 + 32(n − 1) 2 u n−2 = 0 , (3.16) which has the solution [69] u n = n k=0 n k The generating function of this sequence, which will play the role of the fundamental period in our case, is given by [68]. The appearing hypergeometric functions can be transformed into the 2 F 1 (a, 1−a; 1; x) type. The last row follows from the Γ 0 (2) − case in [23]. This is the only case where a closed form is unknown.
We only need the first term in ǫ-expansion of this function, i.e. we need to compute The ǫ-expansion of the Γ functions and the rational function are easily computed, and the derivatives with respect to ǫ of the 3 F 2 function reduce to derivatives of 2 F 1 functions of the type discussed in Appendix A: As the next example we take the sequence B. This has the generating function Thus, for the second period we need to compute Now we encounter a new type of ǫ-expansion, that of hypergeometric functions of the type 2 F 1 (a, a; 1; x). But these are related to the Legendrian type via a rational transformation: Thus, for the ǫ-expansion we are interested in we have the relation This reduces the computation again to the known expansions.
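To make the recursion (3.16) concrete, the short script below iterates it starting from u_0 = 1 (u_1 is fixed by the n = 1 relation) and compares the result against a binomial-sum expression commonly quoted for Zagier's sequence E; that closed form is an assumption used for illustration and is not taken from the text above.

from math import comb

def seq_E_recursion(N):
    # Iterate n^2 u_n - (12(n-1)^2 + 12(n-1) + 4) u_{n-1} + 32 (n-1)^2 u_{n-2} = 0
    u = [1]  # u_0 = 1, the usual normalization of the fundamental period
    for n in range(1, N + 1):
        prev2 = u[n - 2] if n >= 2 else 0
        u_n = ((12 * (n - 1) ** 2 + 12 * (n - 1) + 4) * u[n - 1]
               - 32 * (n - 1) ** 2 * prev2) // (n ** 2)
        u.append(u_n)
    return u

def seq_E_closed(N):
    # Assumed binomial sum: u_n = sum_k C(n,k) C(2k,k) C(2(n-k), n-k)
    return [sum(comb(n, k) * comb(2 * k, k) * comb(2 * (n - k), n - k)
                for k in range(n + 1)) for n in range(N + 1)]

print(seq_E_recursion(6))  # [1, 4, 20, 112, 676, 4304, 28496]
print(seq_E_closed(6))     # matches term by term

Similarly, one standard rational transformation connecting 2F1(a, a; 1; x) to the Legendrian type is Pfaff's transformation, 2F1(a, a; 1; x) = (1 − x)^{-a} 2F1(a, 1 − a; 1; x/(x − 1)); whether this is precisely the transformation meant above is an assumption, but the identity itself is easy to verify numerically.

from mpmath import mp, hyp2f1

mp.dps = 30
a, x = mp.mpf(1) / 3, mp.mpf("0.25")
lhs = hyp2f1(a, a, 1, x)
rhs = (1 - x) ** (-a) * hyp2f1(a, 1 - a, 1, x / (x - 1))
print(lhs - rhs)  # ~ 0 up to working precision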
Two-moduli models
So far we have been concerned with one-parameter models, but most of the machinery carries over to multi-parameter models. In this section we will discuss the generalization to multi-parameter models and work out explicitly the example of the two-parameter K3 fiber of the three-parameter CY P 4 1,1,2,8,12 [24]. The two crucial ingredients necessary for the construction to work are the existence of a relation between the periods which allows the elimination of the second parameter derivative and the existence of a Clausen-like identity which expresses the fundamental period as products of 2 F 1 functions.
The required relation between the periods was observed in [37] to be where K a,b are the classical intersection numbers K a,b = X J a ∧ J b and the differential operators are defined as in (2.8) but substituting K ijk for K ab . This equation is the multi-moduli generalization of equation (2.22). The relation allows, independent of the number of moduli, the elimination of the ǫ 2 terms in the ǫ-expansion in favor of the first ǫ-derivatives. The origin of this relation is the absence of instanton corrections in an N = 4 supersymmetric theory. The relation (4.1) is exactly of the form to ensure the absence of such corrections.
The second relation required is a Clausen-like formula to express the fundamental period as products of 2 F 1 functions. A two-parameter example of such a formula is given by the Picard-Fuchs system [26] This is the Picard-Fuchs system of P 4 1,1,2,8,12 [24] in the limit y → 0, where it reduces to the Picard-Fuchs system of the K3 surface P 3 1,1,4,6 [12]. The solutions of this system can be expressed purely in terms of those of a further reduction along z → 0, where we obtain the Picard-Fuchs system for the family of elliptic curves P 2 1,2,3 [6]. Said solutions are explicitly given by x , (4.5) The combinations ω 0 (S) ω 0 (R), ω 1 (S) ω 0 (R) and ω 1 (S) ω 1 (R) are all annihilated by L 1 and L 2 , where S and R are algebraic functions given by the solutions to the system (4.8) These equations can be solved algebraically using only radicals, but the expressions are too convoluted to represent here. Thus, the periods of this twoparameter K3 are completely expressible using only hypergeometric functions. Note that the analytic continuation of 2 F 1 functions is well understood, so the resulting expressions are valid globally in moduli space.
The system (4.2) is of course not the only system of this sort. It is part of a family: for each of the four elliptic hypergeometric systems there exists a twoparameter K3 related as above. Similar constructions can be carried out for the sporadic solutions, albeit the algebraic functions appearing in the transformation become rather cumbersome. The proofs of these statements can be found in [26] and are based on a twisted version of the symmetric square.
Summary and outlook
The periods of K3 manifolds and Fano threefolds are obtained as solutions of the appropriate Picard-Fuchs operators. Obtaining a complete basis of periods involves taking the ǫ-expansion of the fundamental period up to second order. By noting that these differential operators are the (possibly twisted) symmetric squares of lower order Picard-Fuchs operators one obtains generalized Clausen identities that allow us to express the periods of the K3 and Fano surfaces in terms of the periods of the elliptic surfaces. The complete intersection elliptic surfaces correspond to the solutions of the hypergeometric Apéry-like sequences, while the curves with four singularities correspond to the sporadic solutions. In Appendix A we review how the first ǫ-derivative of these solutions to the Apérylike recursions is computed. Therefore, the generalized Clausen identities provide us with most of the periods of the surfaces. By taking into account that the N = 4 supersymmetry in the resulting compactifications demands the absence of instanton corrections an extra constraint on the periods emerges that allows for the computation of the remaining second ǫ-derivative. This was checked explicitly for the hypergeometric case in Appendix A, while the more general relation valid also for multi-parameter models is discussed in Section 4.
The expressions for the periods resulting from the previously described analysis are globally valid over the moduli space, allowing, for example, for computations of distances that do not suffer from the convergence problems that local expressions can present. As an application, we computed exact analytical distances along the boundary of the moduli space for some K3 fibrations, obtaining agreement with the results of [20]. Since our computation yields directly the periods on the Kähler side, we obtained as a byproduct the exact inverse of the mirror map used in [20] by considerations of modularity.
A few questions remain open for further investigation. First, it is interesting to note that Beauville's classification of elliptic curves with four singularities into six families [70] is not exactly in one-to-one correspondence with the six sporadic solutions of Zagier. Rather, the map is as given in Table 3 The minimal number of singular fibers in a stable family of elliptic surfaces over P 1 is four, with the classification just discussed above. It would be interesting to contemplate cases presenting a higher number of singularities and perform a similar analysis.
A natural next step beyond the cases of elliptic curves and K3 manifolds would be that of Calabi-Yau threefolds. In the same way that we took symmetric squares of hypergeometric 2 F 1 functions in order to move to the case of surfaces, we could take symmetric cubes to move to the case of threefolds. This is unfruitful as no CY threefold corresponding to the resulting operators is known in the literature. However, thus far only non-twisted symmetric cubes have been studied. It could therefore be that beyond these simpler cases an operator associated to a CY threefold can be obtained. In the case of elliptic curves and of the K3 surfaces the underlying modularity properties, although not exploited directly in this work, could very well be responsible for the simple expressions obtained after the computations. Since no analogue for the modularity of the elliptic curves is known in the CY threefold case this difference could constitute an obstruction to the application of the same techniques for this case in order to obtain global closed expressions for the periods. Also, no analogue of the period relations are known as there is less supersymmetry available. However, there still exist differential relations between the periods originating from the boundary Hodge structures [19] which could be used instead.
Finally, in the present paper we have only focused our attention on families of second order differential operators of degree one, the hypergeometric case, and of degree two, the beyond hypergeometric case. These are well studied and could be considered classified. The second order operators of degree three and above are less explored and it would be interesting to carry out a similar analysis for them. We plan on studying these cases and their K3 analogues in a future paper.
A Hypergeometric toolkit
In this section we will set up some mathematical identities for hypergeometric functions and their parameter derivatives needed for the computation of the periods as described in Section 2.
The generalized hypergeometric function is defined as

pFq(a; b; x) = Σ_{n=0}^∞ [ ∏_{i=1}^p Γ[a_i + n]/Γ[a_i] ] / [ ∏_{j=1}^q Γ[b_j + n]/Γ[b_j] ] · x^n / Γ[n + 1] .     (A.1)
These functions are defined by two parameter vectors a, b ∈ R n of respective lengths p and q. For the case we are interested in p = q + 1, i.e. there are as many Γ functions in the numerator as in the denominator. Moreover, p will be related to the dimension of the space we are interested in, i.e. a 3 F 2 will describe the fundamental period of a complex one-dimensional space, a 4 F 3 corresponds to a two-dimensional space and finally a 5 F 4 will describe a CY threefold. 5 Another function we will need is the bilateral hypergeometric function These bilateral hypergeometric functions are combinations of usual hypergeometric functions of argument x and 1/x. But it turns out that in the ǫ-expansion to order O(ǫ m ) only the hypergeometric function with argument x contributes, as given by the identity [71] m H m a 1 , . . . , a m 1, . . . , 1 The hypergeometric function on the right hand side of this equation is exactly of the form appearing as a solution of the GKZ system of CY manifolds. Thus, a combination of its first m − 1 parameter derivatives and the hypergeometric function itself can be rewritten in terms of the ǫ-expansion of a bilateral hypergeometric function to order O(ǫ m ). Interestingly, this correspondence holds exactly to the required order to compute all periods. The ǫ-expansion of certain bilateral hypergeometric functions takes a very simple form. In this paper we specialize to the parameter values a i = {1/2, a, 1 − a}. We define F i (x) and H i (x) functions as the ones appearing at order ǫ i in (A.3) for m = 2 and m = 3 respectively, i.e.
From the right hand side of (A.3) we see that Then, defining the transformed variablẽ we have Clausen's classical identity Thanks to a generalized version of Clausen's identity and to the structure of the bilateral hypergeometric functions involved we also have the relations [71] We can exploit these identities to express the second parameter derivative in the ǫ-expansion of 4 F 3 (1, 1/2 + ǫ, r + ǫ, 1 − r + ǫ; 1 + ǫ, 1 + ǫ, 1 + ǫ; x) in terms of the first parameter derivative and the function itself. Denoting the Euler-Mascheroni constant by γ and the polygamma function of order m by we obtain from the right hand side of (A.3) (12) .
We can make the observation that the first ǫ-derivative always leaves one of the b_i = 1 + ǫ parameters intact, which after setting ǫ = 0 cancels with a_1 = 1, cf. (A.19), thus reducing the problem to the computation of the first parameter derivative of a 2F1 hypergeometric function.
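As a practical check of this reduction, the first parameter derivative of a 2F1 can be computed term by term from the series definition using digamma functions and compared against direct numerical differentiation. The sketch below, with arbitrary sample parameters, is only an illustration of this mechanism and not the closed form of [38,39].

from mpmath import mp, hyp2f1, rf, digamma, diff, factorial

mp.dps = 25
a, b, c, x = mp.mpf(1) / 3, mp.mpf(2) / 3, mp.mpf(1), mp.mpf("0.2")

def d_hyp2f1_da(a, b, c, x, nterms=200):
    # Term-wise derivative of 2F1(a, b; c; x) with respect to the upper parameter a,
    # using d/da (a)_n = (a)_n * (psi(a + n) - psi(a)).
    s = mp.mpf(0)
    for n in range(nterms):
        term = rf(a, n) * rf(b, n) / rf(c, n) * x**n / factorial(n)
        s += term * (digamma(a + n) - digamma(a))
    return s

print(d_hyp2f1_da(a, b, c, x))
print(diff(lambda t: hyp2f1(t, b, c, x), a))  # should agree to high precision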
The derivative with respect to the upper parameters can be found in [38], where the closed expression is given. For the derivative with respect to the lower parameter we find the expression (A.21) in [39]. By applying the chain rule, the closed expression for the first ǫ-derivative of the Gaussian hypergeometric function follows.
(B.4)
• Case 2: The choice of parameters gives the symmetric square (B.6).
• Case 3: The choice of parameters gives the symmetric square
|
v3-fos-license
|
2022-08-11T06:16:10.347Z
|
2022-08-10T00:00:00.000
|
251470292
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21645698.2022.2107385?needAccess=true",
"pdf_hash": "a7c5fec23111e89638572bcfc6f1454e0a4801d7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3058",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"sha1": "05102b4bf099d43adcc3345312f70eb451cd7e7b",
"year": 2022
}
|
pes2o/s2orc
|
Individual and combined effects of land use and weeds on Cry1Ab/c protein expression and yield of transgenic cry1Ab/c rice
ABSTRACT Considering the anticipated commercial exploitation of insect-resistant transgenic rice and that the planting area of cultivated rice overlaps with wild rice, simulating an escape of transgenic rice from farmlands and exploring its fitness after entering semi-natural or natural ecosystems through uncontrolled seed dispersal or gene flow are critical to understand the resulting potential long-term environmental risks. The expression of foreign Cry1Ab/c protein and vegetative and reproductive fitness of insect-resistant transgenic rice Huahui1 (HH1) and its parental-line Minghui63 (MH63) were studied under four treatments combining land use and weed competition: farmland and uncultivated land under weed control (F-NW and U-NW, respectively), and farmland and uncultivated land without weed control (F-W and U-W, respectively). The expression of Cry1Ab/c was significantly lower in U-NW, F-W, and U-W than that in the control treatment, F-NW. Except for plant height, key vegetative (tiller number and biomass) and reproductive (grain number and grain weight per plant) growth indices of transgenic HH1 were significantly lower than those of the parental-line MH63 in F-NW and U-NW, indicating a significant fitness cost. In F-W and U-W, vegetative growth indices (plant height, tiller number, and biomass) were similar in HH1 and MH63; however, key reproductive indices including seed-set rate were significantly higher in HH1 than in MH63, indicating significant fitness benefits. Although these results support large-scale cultivation of insect-resistant transgenic rice in China, the ecological risk involved is high in farmland or uncultivated land without weed control (F-W and U-W).
Background
Genetically modified crops have already been planted commercially on a large scale in many countries, bringing important economic and social benefits. However, potential environmental and ecological risks, in addition to biosafety concerns due to the cultivation of genetically modified crops have attracted increasing attention of various stakeholders globally. 1 Researchers in China, the largest producer and consumer of rice, have conducted extensive research on insect-resistant transgenic Bt (Bacillus thuringiensis) rice aiming to reduce the application of chemical pesticides while simultaneously minimizing the impact of insect pests on rice yield.
On October 20, 2009, Huahui1 and Bt-Shanyou 63, two insect-resistant transgenic cry1Ab/c rice lines were firstly granted safety certificates by the Ministry of Agriculture of China for production and utilization. Those certificates were renewed in 2020, thereby promoting the commercialization of insect-resistant transgenic rice considerably. 2 Therefore, under the background of the anticipated commercial exploitation of transgenic rice on a large scale, the study of the fitness of transgenic rice in different semi-natural and natural environments is vital for anticipating the potential risks of its entry into natural ecosystems through different ecological routes, such as uncontrolled seed dispersal or gene flow.
The normal expression of a foreign gene is the biological basis for performing its function but it might influence the fitness of parental crops. In addition, the expression of foreign proteins is closely related to the external environment. Thus, for example, previous studies have demonstrated that, although the level of expression of a foreign protein was significantly lower under stress, such as drought, salinity, or phosphorus deficiency, than under normal conditions, nonetheless it was still expressed. [3][4][5] If a foreign protein is expressed normally in natural ecosystems, this may alter the fitness and influence the evolutionary potential of wild parental rice in natural populations. Therefore, investigating the expression of foreign proteins posed by insect-resistant transgenic rice under natural conditions might determine what potential factors are responsible for differential Bt expression.
Investigating the fitness of insect-resistant transgenic rice under natural conditions is also vital in view of its commercial promotion. To date, numerous studies have found that insect-resistant transgenic rice showed superior field performance to the parent rice line under high insect pressure or field conditions without insect control, which is the fitness advantage that a foreign gene confers to insectresistant transgenic rice over the parental rice lines, and the basis for its development and commercial promotion. [6][7][8][9] Xia et al. (2011) reported that, in the field and under high insect pressure, the yield of insect-resistant transgenic cry1Ab/c rice cultivars Huahui 1 and Bt-Shanyou 63, was significantly higher than the yield of the corresponding parent rice lines, Minghui 63 and Shanyou 63, respectively. 2 Another transgenic line, bt/cpTI, also showed resistance against insect damage in advanced-generation hybrid weedy rice and wild rice in the field under higher insect pressure, and offered a shelter that prevented insect feeding on hybrid rice, thus resulting in significantly higher yield than that of the parental line. [10][11][12] Conversely, other studies have reported that insectresistant foreign genes may introduce a fitness cost in parental rice either in the presence or absence of target-insect pressure. 3,13,14 However, very few studies have explored the fitness of transgenic crops upon entering natural ecosystems through uncontrolled seed dispersal or gene flow and other processes of population dynamics. 3,15,16 To the best of our knowledge, the fitness of insect-resistant transgenic rice in semi-natural and natural growing conditions (i.e., farmland without weed control, uncultivated land with weed control and uncultivated land without weed control), has not been reported.
The environmental safety concerns over the widespread use of transgenic crops are legitimately based on the potentially broad ecological consequences in time and space. Therefore, a current high-priority challenge is to conduct a comprehensive scientific assessment of all the environmental safety concerns over transgenic crops in the shortest possible time. As China is the center of origin of wild rice, inevitably, vast areas of cultivated rice overlap with the areas of distribution of wild rice. If insect-resistant transgenic rice is to be cultivated commercially on a large scale in the future, and it shows major fitness advantages after escaping into the surrounding natural ecosystems, gene flow might occur between cultivated and wild rice that might pose substantial ecological risks. In addition, as target-insect pressure would be controlled in farmlands and natural ecosystems following commercial cultivation of the insect-resistant transgenic Bt rice, the potential environmental risks associated with a change in fitness in natural ecosystems should be evaluated under low insectpressure conditions.
Based on the foregoing background, the present study simulated an escape of transgenic rice from farmland under conditions of: farmland under weed control (F-NW, control treatment), uncultivated land under weed control (U-NW, seminatural individual treatment), farmland without weed control (F-W, semi-natural individual treatment), and uncultivated land without weed control (U-W, natural combination treatment) in a glass greenhouse (i.e., low insect pressure without targetinsect), (1) to detect the level of expression of the Cry1Ab/c protein in different plant tissues of HH1 rice under different cultivation condition; and (2) to investigate the vegetative and reproductive growth fitness and explore the possible reasons for any changes in fitness of transgenic cry1Ab/c rice under the different experimental growing conditions. The findings of the present study will provide a sound theoretical basis for the evaluation of the long-term environmental risks resulting from extensive cultivation of transgenic cry1Ab/c rice.
Rice
The experimental materials used in the present study were the insect-resistant transgenic rice line Huahui-1 (HH1), which was granted a safety certificate by the Ministry of Agriculture of China for production and utilization, and its parental line, the elite restorer rice (Oryza sativa L.) Minghui 63 (MH63). HH1 was derived from the T51 transformation event of Minghui-63 containing a fused Bt cry1Ab/c transgene, obtained by particle gun (biolistic) transformation. The cry1Ab/c transgene was synthesized from the 1,344 bp cry1Ab gene (GenBank accession no. X54939) and the 486 bp cry1Ac gene (GenBank accession no. Y09787), and driven by the rice actin1 promoter. 6 Results of Southern hybridization showed that the HH1 rice contained one copy of the transgene. 6 HH1 showed a high level of expression of the delta-endotoxin, which led to 100% larval mortality. 6 These two lines were provided by the National Key Laboratory of Crop Genetic Improvement, Wuhan, China.
Land Use Type and Weeds
We simulated four growing conditions by combining two types of land use with or without weed control. Typical yellow topsoil (0-30 cm) representing uncultivated land was collected from natural uncultivated land in Jiangning District, Nanjing, Jiangsu (31°37'-32°07' N, 118°28'-119°06' E). Farmland topsoil (0-30 cm, control) was collected from a paddy field in Liuhe District, Nanjing, Jiangsu (32°11'-32°27' N, 118°34'-119°03' E). The physicochemical properties of the two soils were determined; organic matter, total nitrogen (N), total phosphorus (P), available P, total potassium (K), and available K were significantly lower in uncultivated land soil than in farmland soil (P < .01, Table S1). According to the Classification of Early Arable Land in Red and Yellow Soil in the Southern Mountains and Hills of the People's Republic of China (Agricultural Industry Standard NY/T309-1996), the fertility level of the test uncultivated land soil is 8-9, which satisfies the experimental requirement for the low concentrations of key nutrient elements that characterize uncultivated land soils.
Considering the high weed cover in wild uncultivated lands, weed seeds were sown to simulate about 100% weed cover. Selected weeds included barnyard grass (Echinochloa crusgalli L.), sedge grass (Cyperus rotundus L.), weedy rice (Oryza sativa L.), and sprangletop (Leptochloa chinensis L.), all of which grow in farmlands and uncultivated lands around Nanjing city. Seeds of the selected weeds were mixed randomly based on weight at a ratio of 1:1:1:1, and then uniformly sown in each test pot (840 mm length × 560 mm width × 360 mm height) with the same amount of seeds. Meanwhile, the rice seeds were simultaneously and directly sown into the above-mentioned pots at a density of 200 mm × 200 mm along with weed seeds. After 40 d of weed growth the density and the average cover of the weeds were calculated before transplanting rice. The average densities of barnyard grass, sedge grass, weed rice, and Chinese sprangletop per pot were approximately 500 ± 50 plants/m 2 (dominant weeds), 100 ± 25 plants/m 2 , 50 ± 10 plants/m 2 and 400 ± 50 plant/m 2 in farmland without weed control, respectively; and were approximately 455 ± 50 plants/m 2 (dominant weeds), 120 ± 40 plants/m 2 , 58 ± 12 plants/m 2 and 390 ± 60 plant/m 2 in uncultivated land without weed control, respectively, initial weed cover was 100%.
Experimental Design
A pot experiment was conducted from May to October 2016 in a glass greenhouse at the Nanjing Institute of Environmental Science of the Ministry of Environmental Protection. The site is surrounded by residential and office buildings but no farmland. No rice, vegetables, or any other crops were planted within a 5 km radius around the site, which ensured that the experiment was carried out in an environment free of target-insect pressure.
We simulated four treatments combining land use and weed competition: farmland under weed control (F-NW control treatment), uncultivated land under weed control (U-NW, semi-natural individual treatment), farmland without weed control (F-W, semi-natural individual treatment), and uncultivated land without weed control (U-W, natural combination treatment). Under these experimental conditions, we investigated the fitness of foreign cry1Ab/c in rice. Considering the fact that rice seedlings will germinate along with weed seeds under wild uncultivated land, rice seeds were indirectly sown to the large experimental pots (840 mm length × 560 mm width × 360 mm height) with or without weeds. To increase the emergence rate, eight seeds were simultaneously sown in each hole, other seedlings were pulled out to ensure one plant in each hole after emergence. A complete randomized block design was used with 12 plants distributed evenly in each pot with plant spacing at 19-21 cm. Each treatment comprised 10 replicates and spacing between pots was 60 cm. Although heteromorphic sedge exhibited high density at the seedling, tillering, and heading stages, they basically disappeared at grain filling stage, while other weeds emerged over the whole plant life cycle. Overall, weed cover first increased, reaching and remaining stationary at 100% before grain filling, and then decreased to 80% at rice maturity. In the F-NW and U-NW treatments, manual weeding was performed over the entire growing season. No insecticide was applied under any of the four experimental treatments, and all remaining test materials were incinerated and inactivated after the experiment was finalized.
Investigation of Target Insect Pressure
Parental rice MH63, which lacks the insect-resistance gene, was used as the indicator of target-insect pressure. Investigations were conducted at the jointing and heading stages, when the occurrence of rice borers is most severe in the field, and recorded the dead heart rate (including white ear) caused by stem borers and the leaf-roll rate caused by leaf borers.
The Expression of Foreign Cry1Ab/c Protein Was Determined by ELISA
On July 20 (tillering), August 5 (jointing), September 5 (heading), September 25 (grain filling) and October 25 (grain maturity), rice stems and leaves were sampled and frozen in liquid nitrogen: 5 HH1 rice plants were randomly collected from each pot and mixed into one sample, with 5 biological replicates, and samples were stored at −80°C. Foreign Cry1Ab/c protein was quantified using the QualiPlate Kit for Cry1Ab/c (EnviroLogix Inc., Portland, ME, USA). The detailed procedure was carried out according to Fu et al. (2018). 3
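For readers unfamiliar with plate-based quantification, the snippet below sketches how absorbance readings are typically converted to protein concentrations via a standard curve. The standard concentrations, absorbances, dilution factor, and linear fit shown here are illustrative assumptions only; they are not taken from the QualiPlate kit protocol or from Fu et al. (2018).

import numpy as np

# Hypothetical Cry1Ab/c calibration standards (ng/mL) and their absorbances (OD450)
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
std_od = np.array([0.05, 0.12, 0.21, 0.40, 0.78, 1.52])

# Simple linear standard curve OD = a * conc + b (a four-parameter logistic fit is also common)
a, b = np.polyfit(std_conc, std_od, 1)

def od_to_conc(od, dilution_factor=10.0):
    # Convert a sample absorbance to Cry1Ab/c concentration, correcting for dilution
    return (od - b) / a * dilution_factor

sample_od = np.array([0.35, 0.62, 0.28])  # example plate readings
print(od_to_conc(sample_od))              # estimated ng/mL in the original extract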
Determination of Fitness Components of Vegetative and Reproductive Growth of Rice
Four rice plants were randomly selected from each of 5 pots at tillering, heading, grain filling and grain maturity, to measure plant height and tiller number. Selected plants were marked to ensure that the same individuals were measured over the duration of the experiment. Next, 20 plants were randomly selected (evenly distributed in each pot) to measure the SPAD value on rice leaves with a portable SPAD-502 chlorophyll meter (Konica Minolta Co., Japan) at heading, filling and maturing stages. SPAD values were used as a proxy of total chlorophyll content. Rice plants were randomly selected at maturity and then cut off at ground level and dried at 80°C to constant weight, which was measured on a digital balance (PB602-N, Mettler Toledo) to determine aboveground dry biomass.
To estimate the reproductive growth abilities at the maturing stage of HH1 rice and MH63 rice, the following indices were determined according to Fu et al. (2018). 3 (1) effective number of panicles per plant; (2) panicle length; (3) panicle weight; (4) number of filled grains per plant; (5) total number of grains per plant; (6) filled grain weight per plant; (7) 1000-grain weight; and (8) seed setting rate. In all treatments, we calculated the average value of plants of the same genotype in each pot, and six pots were randomly selected.
Data Collection and Analysis
The expression of foreign Cry1Ab/c protein in the same plant tissues and growth stages was compared across the different growing conditions using independent-samples t-tests. Spatiotemporal dynamic changes in the expression of foreign Cry1Ab/c protein in different plant tissues and at different growth stages under the same growing conditions were analyzed by Duncan's multiple comparison test.
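A minimal sketch of these two comparisons in Python is given below. The arrays are placeholder data, and Tukey's HSD is used as a stand-in for Duncan's multiple comparison test, which is not available in SciPy.

import numpy as np
from scipy import stats

# Placeholder Cry1Ab/c concentrations for one tissue and growth stage
f_nw = np.array([8.2, 7.9, 8.5, 8.1, 8.3])  # farmland, weed control
u_w = np.array([6.1, 6.4, 5.9, 6.3, 6.0])   # uncultivated land, with weeds

# Independent-samples t-test between growing conditions
t, p = stats.ttest_ind(f_nw, u_w)
print(f"t = {t:.2f}, p = {p:.4f}")

# Multiple comparison across growth stages within one condition (requires a recent SciPy)
tillering = np.array([5.0, 5.3, 4.8, 5.1, 5.2])
filling = np.array([8.9, 9.2, 8.7, 9.0, 9.1])
maturity = np.array([4.1, 4.4, 3.9, 4.2, 4.0])
print(stats.tukey_hsd(tillering, filling, maturity))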
According to the methods of Burke et al. 17 and Song et al., 18 a t-test was used to test whether the relative fitness value (HH1 vs MH63) differed significantly from 1.00. The fitness components were grouped according to characteristics associated with the two important life-history stages, i.e. vegetative growth and reproduction, and the relative fitness related to each life-history stage was calculated as the mean of the relative fitness of all characteristics within that stage. Composite fitness across the whole life history was the mean of the fitness estimates of the two above-mentioned stages. All the aforementioned statistical analyses were computed with SPSS v. 16.0 for Windows (IBM Corp., Armonk, NY, USA).
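The fitness calculation described here can be sketched as follows. The per-pot trait values are invented placeholders, and the comparison against 1.00 is implemented as a one-sample t-test, which is the natural reading of the procedure described above.

import numpy as np
from scipy import stats

# Placeholder per-pot means for one trait (e.g., filled grain weight per plant, g)
hh1 = np.array([18.2, 17.5, 19.1, 18.8, 17.9, 18.4])
mh63 = np.array([14.1, 13.8, 14.9, 14.3, 13.6, 14.5])

# Relative fitness per pot (HH1 / MH63) and test against 1.00
rel_fitness = hh1 / mh63
t, p = stats.ttest_1samp(rel_fitness, popmean=1.0)
print(f"mean relative fitness = {rel_fitness.mean():.2f}, t = {t:.2f}, p = {p:.4f}")

# Composite fitness: mean of the stage means of the traits' relative fitness values
vegetative = np.array([0.95, 0.80, 0.78])    # e.g., height, tiller number, biomass ratios
reproductive = np.array([1.30, 1.33, 1.26])  # e.g., filled grains, grain weight, seed set
composite = np.mean([vegetative.mean(), reproductive.mean()])
print(f"composite fitness = {composite:.2f}")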
Results
Owing to the absence of pesticides, only a few non-target arthropods, including spiders (Arachnida), ladybugs (Coccinellidae), and locusts (Locusta migratoria manilensis), were present under the four treatment combinations. Further, we did not observe "dead heart" or leaf rolling caused by rice stem borers (Scirpophaga incertulas, Chilo suppressalis, or Sesamia inferens) or rice leaf borers (Cnaphalocrocis medinalis) on Bt transgenic rice under the greenhouse conditions. These results indicate that our pot experiments were free of target-insect pressure.
Expression of Exogenous Cry1Ab/c Protein
According to Figure 1, the expression of Cry1Ab/c protein in transgenic HH1 rice leaves and stems exhibited spatiotemporal dynamic changes during different growth stages under the same growing conditions. Cry1Ab/c protein expression first increased and then decreased along the growth stages, with the highest expression observed at the filling stage, and a subsequent significant decrease at the maturing stage.
Under different growth conditions, Cry1Ab/c protein expression in HH1 leaves and stems exhibited some differences in the same growth stages. For example, the expression levels of exogenous Cry1Ab/c protein in U-NW, F-W, and U-W treatments were significantly lower than in the control treatment, F-NW. In addition, according to the three-way ANOVA, different growth condition, different growing stage, different tissues, and their interaction significantly influenced Cry1Ab/c protein expression (Table S2).
Plant Height
Plants of the transgenic-line HH1 and of the parental-line MH63 differed in height at the same growth stage under different growing conditions. Thus, plant height in the transgenic and the parental line in U-NW, F-W, and U-W treatments was significantly lower than in the control treatment, F-NW (P < .01, Figure 2).
The shifting trends in plant height observed in both rice lines during the entire growing period were consistent under the same treatment, both showing a gradual increase concomitantly with plant growth. Except at tillering, HH1 were significantly taller than MH63 by approximately 9-11% at heading, filling, and maturing stages under F-NW conditions (P < .01). Similarly, HH1 plants were significantly taller than MH63 plant by approximately 7-9% (P < .01) at heading, filling, and maturing stages under U-NW individual treatment condition, with the magnitude of the difference in plant height between HH1 and MH63 being lower in U-NW than in F-NW. As for F-W and U-W combination treatments, there were no significant differences in plant height between transgenic HH1 and parental MH63 rice line over the entire growing season.
In addition, according to the three-way ANOVA, the foreign gene, different growing condition, different growth stage, the interaction between the foreign gene and growing conditions, the interaction between growing conditions and growth stage, and the interaction between the foreign gene and growth stages, all influenced rice plant height significantly (Table S3).
Tiller Number
Plants of the transgenic line HH1 and of the parental-line MH63 differed in tiller number at the same growth stage under different growing conditions. Thus, tiller number in the transgenic and the parental line in U-NW, F-W, and U-W treatments was significantly lower than in the control treatment, F-NW (P < .01, Figure 3). The shifting trends in tiller number observed in both rice lines during the entire growing period were consistent under the same treatment, both showing a gradual increase concomitantly with plant growth. HH1 tiller numbers were significantly lower than MH63 by approximately 19-29% at tillering, heading, filling, and maturing stages under F-NW conditions (P < .01). Similarly, HH1 plants tiller numbers were significantly lower than MH63 plant by approximately 15-41% (P < .01) at tillering, heading, filling, and maturing stages under U-NW individual treatment condition, with the magnitude of the difference in tiller numbers between HH1 and MH63 being higher in U-NW than in F-NW. As for F-W and U-W combination treatments, there were no significant differences in tiller numbers between transgenic HH1 and parental MH63 rice line over the entire growing season (tiller number was "1.00").
In addition, according to the three-way ANOVA, the foreign gene, growing condition, growth stage, the interaction between the foreign gene and growing conditions, the interaction between growing conditions and growth stage, the interaction between the foreign gene and growth stages, and the three-way interaction among the foreign gene, growing conditions, and growth stages all significantly influenced rice tiller number (Table S3).
SPAD Value of Flag Leaves
Plants of the transgenic-line HH1 and of the parental-line MH63 differed in SPAD value at the same growth stage under different growing conditions. Thus, SPAD value in the transgenic and the parental line in U-NW, F-W, and U-W treatments was significantly lower than in the control treatment, F-NW (P < .01, Figure 4).
Under F-NW and U-NW conditions, the HH1 rice SPAD values were significantly lower than that of MH63 rice by approximately 12.82% and 10.29% at the heading stage (P < .05 or P < .01), significantly lower than that of MH63 rice by approximately 16.58% and 11.68% at the filling stage, respectively (P < .01), and were significantly higher than MH63 rice at the maturing stage (P < .01). Under F-W and U-W growth conditions, there were no significant differences in SPAD values between the HH1 rice and the MH63 rice at the above-mentioned three stages.
In addition, according to the three-way ANOVA, the foreign gene, growing condition, growth stage, the interaction between the foreign gene and growing conditions, the interaction between growing conditions and growth stage, the interaction between the foreign gene and growth stages, and the three-way interaction among the foreign gene, growing conditions, and growth stages all significantly influenced rice SPAD value (Table S3).
Biomass
Plants of the transgenic-line HH1 and of the parental-line MH63 differed in biomass at the same growth stage under different growing conditions. Thus, biomass in the transgenic and the parental line in U-NW, F-W, and U-W treatments was significantly lower than in the control treatment, F-NW (P < .01, Figure 5).
At the maturing stage, HH1 biomass was significantly lower than that of MH63, by approximately 21%, under F-NW conditions (P < .01). Similarly, HH1 plant biomass was significantly lower than that of MH63, by approximately 21% (P < .01), under the U-NW individual treatment. As for the F-W and U-W combination treatments, there were no significant differences in biomass between the transgenic HH1 and parental MH63 rice lines.
In addition, according to the three-way ANOVA, the foreign gene, different growing condition and their interaction significantly influenced rice biomass (Table S3).
Reproductive Growth Indices
Overall, the reproductive indices of HH1 and MH63 showed differences in fitness under different growing conditions and were significantly lower in the U-NW, F-W, and U-W treatments than in the F-NW control treatment (P < .01, Table 1).
While mean panicle length of HH1 transgenic plants was significantly higher than that of the MH63 parent in F-NW, the effective panicle number, panicle weight, total grain number per plant, filled grain number per plant, and filled grain weight per plant were significantly lower in HH1 than in MH63 by approximately 38.6%, 19.8%, 32.3%, 34.5%, and 11.8%, respectively (P < .01). Similarly, while mean panicle length in HH1 was significantly higher than in MH63 in U-NW, the effective panicle number, panicle weight, total grain number per plant, filled grain number per plant, and filled grain weight per plant were significantly lower in HH1 than in MH63 by approximately 36.8%, 21.7%, 18.2%, 20.0%, and 18.0%, respectively (P < .05 or P < .01). On the other hand, panicle length, panicle weight, filled grain number per plant, and filled grain weight per plant in HH1 rice were significantly higher than in MH63 in F-W by approximately 15.8%, 10.0%, 32.2%, 27.8%, and 27.8%, respectively (P < .05 or P < .01), although there were no significant differences in effective panicle number, total grain number per plant or 1000grain weight between HH1 and MH63 rice. Lastly, in U-W, filled grain number per plant, filled grain weight per plant, and setting rate were significantly higher in HH1 than in MH63 by 30.3%, 33.3%, and 26.2%, respectively (P < .01), while effective panicle number, panicle length, panicle weight, total grain number per plant, and 1000-grain weight did not differ significantly between HH1 and MH63.
Therefore, according to the results of the two-way ANOVA performed on these data, foreign Bt gene expression, the growing conditions, and the interaction between the two had a significant effect on most reproductive indices of transgenic HH1 rice (Table 2).
Fitness
According to Table 3, with regard to vegetative fitness, HH1 rice showed a significant overall fitness cost compared with MH63 rice. For example, tiller number and biomass showed significant fitness costs under the F-NW and U-NW conditions. Under the F-W and U-W conditions, there was no significant difference in vegetative growth fitness between the HH1 and MH63 lines.
With regard to reproductive fitness, HH1 rice showed a significant overall fitness cost compared with MH63 rice under the F-NW and U-NW conditions. For example, effective panicle number per plant, panicle weight, total grain number per plant, filled grain number per plant, and filled grain weight per plant all showed significant fitness costs. Under the F-W and U-W conditions, the reproductive growth of HH1 rice showed significant overall fitness benefits; for example, filled grain number per plant, filled grain weight per plant, and seed setting rate showed significant fitness benefits.
Overall, for the estimation of composite fitness across the two life-history phases (vegetative growth and reproduction), and compared with the parent MH63, HH1 rice had a significant fitness cost under the F-NW and U-NW conditions and significant fitness benefits under the F-W and U-W conditions.
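As an illustration of how the line-by-line comparisons behind these fitness statements might be computed, the sketch below contrasts HH1 and MH63 for a single trait with an independent-samples t-test and expresses the difference as a relative value (HH1 mean divided by MH63 mean), so that values below 1 indicate a fitness cost and values above 1 a fitness benefit. This is a generic reconstruction under stated assumptions, not the authors' exact procedure; the array values are hypothetical.

```python
# Minimal sketch: trait comparison between HH1 and MH63 under one growing condition.
import numpy as np
from scipy import stats

# Hypothetical filled-grain-weight-per-plant values (g) for replicate plants.
hh1 = np.array([18.2, 17.5, 19.0, 16.8, 18.9])
mh63 = np.array([22.4, 21.8, 23.1, 22.0, 21.5])

t_stat, p_value = stats.ttest_ind(hh1, mh63)   # two-sided t-test between the lines
relative_value = hh1.mean() / mh63.mean()      # <1 implies a fitness cost, >1 a benefit

print(f"t = {t_stat:.2f}, P = {p_value:.4f}, relative value = {relative_value:.2f}")
```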
Discussion
The expected development and large-scale commercial exploitation of insect-resistant transgenic rice demand a scrupulous assessment of foreign protein expression and of the vegetative and reproductive fitness of plants that might escape farming systems and enter natural ecosystems. Here, we evaluated the above-mentioned indices in farmland and uncultivated land, in the presence or absence of weed competition and without target-insect pressure. This may contribute to understanding the potential long-term environmental risks when transgenic rice plants enter semi-natural and natural ecosystems in the future.
Growing Conditions Affect Foreign Cry1Ab/c Protein Expression
The normal expression of insect-resistance foreign genes is the biological basis for conferring insect resistance and for influencing the fitness of recipient crops. Here, we showed that the expression of the exogenous Cry1Ab/c protein in HH1 rice leaves and stems exhibited spatiotemporal dynamic changes over the entire growing cycle under the four growing conditions tested. Our results are consistent with previously reported spatiotemporal dynamics of Bt protein expression in transgenic crops grown in farmland. [19][20][21][22] In addition, a previous study of foreign Cry1Ab/c protein expression in HH1 rice leaves observed that Bt protein expression of transgenic cry1Ab/c rice under saline conditions was significantly lower than in farmland. 3 Other similar studies reported that the expression of the resistance-related foreign protein in insect-resistant transgenic cotton and maize correlated negatively with stress. [23][24][25][26][27][28] Overall, although the expression of foreign insecticidal Bt genes is driven by a constitutive promoter, the results reported herein and those previously reported demonstrate that Bt protein levels in transgenic crops are affected to some extent by environmental stress, plant growth and development, and parental background. 23,29,30 A possible explanation is that the expression of exogenous genes is usually regulated at the post-transcriptional or translational level and that transgene silencing is environmentally and developmentally regulated. Our study showed that weeds, appearing here as a new factor, compete with the transgenic variety HH1 for light, soil nutrients, and other resources, significantly affecting the growth and development of HH1 and thereby the expression of the insecticidal Bt protein in HH1. This is consistent with a previous report that the expression of exogenous proteins was significantly correlated with phenotypic differences in crops. 30 Nevertheless, the foreign protein can still be expressed, which might influence the vegetative and reproductive fitness of transgenic rice in such stressful environments.
Growing Conditions Affect Vegetative Growth Fitness of Transgenic cry1Ab/c Rice
Vegetative growth fitness refers to the ability of a crop to grow and compete. We showed that plant height, tiller number, SPAD value, and biomass were significantly lower under the individual and combined stress treatments U-NW, F-W, and U-W than under F-NW, the control condition in our experiments. This finding is similar to previous reports that the vegetative growth of insect-resistant transgenic rice, cotton, and maize was significantly lower under salinity or drought stress than under control conditions. 3,23,27,28 Except for plant height, which was significantly higher in HH1 than in MH63, other vegetative growth indices, such as tiller number and biomass, were significantly lower in HH1 than in MH63 over most of the plant life cycle in the F-NW and U-NW treatments without target-insect pressure, suggesting a significant fitness cost. Similarly, Chen et al. (2006) observed that tiller number was significantly lower in transgenic Bt/CpTI rice plants than in plants of the parental line under low target-insect pressure on a rooftop 31 ; furthermore, Jiang et al. (2013) observed that the biomass of insect-resistant transgenic cry1C* rice was significantly lower than that of the parental line under low natural insect pressure in the field 32 ; and Fu et al. (2018) reported that the tiller number and biomass of cry1Ab/c transgenic rice were significantly lower than those of parental MH63 rice in saline-alkaline soil in the absence of significant target-insect pressure. 3 In addition, significant fitness costs in terms of plant height, root length, and other vegetative growth indices have been reported for other Bt transgenic rice lines, compared with their parents, under natural low insect pressure in the field. 13,14 Overall, Bt genes often seem to confer obvious vegetative growth disadvantages on parental rice lines under low insect pressure because of their expression. Conversely, there were no significant differences in the aforementioned vegetative growth indices over most of the life cycle between HH1 and MH63 under the F-W or U-W treatments. The most likely explanation is that the expression of the foreign Cry1Ab/c protein was very low in the F-W and U-W treatments, so that the vegetative fitness cost of HH1 was not expressed.
Growing Conditions Affect Reproductive Growth Fitness of Transgenic cry1Ab/c Rice
Non-target effects of external Bt genes on plant traits have raised concern in transgenic rice breeding, as they usually cause negative variation in yield components. Reproductive fitness refers to the ability of a plant to produce offspring and thus to pose environmental risk; it is affected by an energy trade-off between vegetative and reproductive growth under some growing conditions. Here, the reproductive indices of HH1 and MH63 rice under the individual and combined stress treatments U-NW, F-W, and U-W, including filled grain number and filled grain weight per plant, were significantly lower than under the control treatment, F-NW. These results are consistent with previous studies reporting that the reproductive indices of insect-resistant transgenic cotton and parental cotton under flooding stress were significantly lower than under normal growth conditions. 27,31,34 In addition, other studies have reported that the total yield of transgenic cry1C* rice and transgenic cry2A* rice, owing to low seed-set rates, was significantly lower than that of the parental line MH63 under natural or low target-insect pressure in the field. 8,35,36 Significant fitness costs for different reproductive indices, such as grain number per panicle, seed-set rate, total yield, and 1000-grain weight, have also been reported for transgenic cry1Ac rice, transgenic cry1Ab rice, and transgenic cry1Ab/c rice under natural low target-insect pressure in the field. 13,14 Based on our own results and those previously reported, foreign Bt genes usually confer obvious reproductive growth disadvantages on Bt transgenic rice plants under greenhouse or field conditions with low target-insect pressure because of their expression. Other studies, together with our previous work, have shown that, compared with their parents, plants harboring foreign genes may demonstrate relatively low or high composite fitness across the whole life history, but they usually do not show uniform inferiority in all characteristics at every life-history stage. 3,17,18 Here, key reproductive indices of HH1 rice, including filled grain number per plant, filled grain weight per plant, and seed-set rate, were significantly higher than those of MH63 in the F-W and U-W treatments without target-insect pressure, evidencing significant fitness benefits. In addition, the high seed-set rate of HH1 might be a key factor explaining the yield advantage of the transgenic line, indicating that the reproductive and ecological risks associated with HH1 under the F-W and U-W treatments differ significantly from those under the F-NW and U-NW treatments. Fang et al. (2018) found that total grain number in transgenic EPSPS Arabidopsis thaliana was significantly higher than in parental A. thaliana under high temperature or drought stress without glyphosate application, indicating significant fitness benefits. 37 A likely explanation for these results relates to the expression of the foreign Cry1Ab/c protein: the fitness cost may be associated with high Cry1Ab/c protein expression in HH1 in the F-NW and U-NW treatments, whereas the fitness benefit may be associated with very low Cry1Ab/c protein expression and with an energy trade-off between vegetative and reproductive growth in HH1 under the F-W or U-W treatments. 31,38
Based on these results, from an agricultural production perspective, transgenic HH1 rice showed high reproductive growth ability in the absence of weed control (under the F-W and U-W treatments), and the expression of the foreign Cry1Ab/c protein significantly reduces the potential yield loss caused by target insects, which is highly beneficial for the future large-scale adoption and use of HH1 in China. However, the stronger reproductive ability shown by HH1 could also contribute to the establishment of its populations in natural or semi-natural ecosystems, with the associated potential ecological risks.
Conclusions
Here, we demonstrated that the transgenic cry1Ab/c rice line HH1 showed weaker vegetative and reproductive growth than its parent line MH63 in farmland or uncultivated land under weed control, indicating a significant fitness cost. In contrast, transgenic HH1 exhibited an overall stronger reproductive growth ability than parental MH63 rice in farmland or uncultivated land without weed control, which might be conducive to the establishment of transgenic populations and the associated potential ecological risks (Figure 6). Furthermore, we observed that abiotic stress factors, such as uncultivated land soil, and biotic factors, such as weed competition, had different effects on the fitness of transgenic HH1 rice. Notably, we did not include the impact of insect pressure in our study design, considering that the population of the insects targeted by Bt rice in natural ecosystems is far smaller than that in farmland ecosystems. If Bt rice is planted on a large scale in the future, most rice paddies will be under insect pressure; however, Bt rice will not be affected by the attack of these insects, which should promote yield. Therefore, further studies are needed to determine the combined effects of weed and target-insect pressure on the ecological fitness of Bt rice, and to jointly analyze the effect of low Bt protein expression on insect resistance under weed and target-insect pressure. Additionally, these findings were recorded during a single-year trial. Consequently, before large-scale commercialization of insect-resistant transgenic HH1 rice is promoted, other natural growth conditions should be simulated over longer periods on a case-by-case basis, particularly natural stress conditions such as saline-alkaline soil, natural marsh, and weed competition under low target-insect pressure, to explore the potential ecological risks of transgenic HH1 rice following large-scale release into natural ecosystems.
Managing the Strategic Development of Socio-Economic Systems
The relevance of the article stems from the need to rethink theoretical concepts and practical experience of how the functioning of socio-economic processes and systems changes under shifting technological modes, development cycles, and the evaluation of current reforms. Accordingly, the purpose of this article is to improve the management of socio-economic systems' strategic development, which requires clarifying the types of structural change, envisaging various scenarios of how events may unfold, and determining how sustainable development can be achieved. The paper is grounded in the ideas of the infological approach and the principle of continuous improvement of all processes in a system. It presents a structural model of information provision that supports the selection of meaningful information content for development management. The article also presents the author's system for ranking information sources on quality criteria, providing formalization and standardization of the source data used in strategic planning. The article is intended for business leaders, educational organizations, top managers, researchers, and others focused on the development of socio-economic systems.
Background
The contemporary critical political and socio-economic situation has again placed the problem of socio-economic systems' sustainable development, at all levels, before science and practice (Anisimov, 1995; Korneev & Nikitin, 2010; Novikov & Petrakov, 1999). The existing theory of socio-economic systems' development not only fails to meet the requirements of practice, but also does not withstand theoretical criticism: it does not match the nature of modern development, and it remains within a paradigmatic field of economic theory that does not treat development as a goal and value in itself (Volozhanina, 2010). The need to accelerate the progress of reforms increases the urgency of developing a methodological approach that reduces development risks, strengthens feedback, and improves the efficiency of socio-economic systems. The result is determined by the objective necessity to undertake development efforts for the transition to a qualitatively new state and a higher level of functioning. It is necessary to develop innovative methods, principles, and mechanisms that would reinforce the effect of the reforms while reducing their negative manifestations in the current period (Ansoff, 2009).
Status of a Problem
Managing the development of socio-economic systems means managing positive or productive changes under modern operating conditions (Kouchner & Carlina, 2011; Masalimova & Nigmatov, 2015). To implement productive development, it is necessary to justify the initial, fundamental ideas of managerial and developmental activity, as well as other important requirements whose fulfillment ensures the efficiency of development. The theoretical basis of this research comprises the works of the classics of philosophy and economic theory and the results of fundamental and applied studies by modern domestic and foreign experts in the field of socio-economic systems and related areas of knowledge (Thompson & Strickland, 2006; Buslenko, 1963; Novikov, 2005; Meskon, 2004; Hussey, 2004). The overall methodological framework of this study is the categorical apparatus of philosophy, general systems theory, economic theory, management theory, and computer science (Voronin, 2000; Karavaev, 2003; Webster, 2004; Kouchner, 2011; Volovikov & Rebrova, 2010). The need to shift the focus of research from selected aspects of economic theory and management theory to a polyparadigm level is also caused by post-modern, post-industrial changes in social and technological structures and by the accompanying revolutionary growth of knowledge and information technologies taking place at present.
Problem of Investigating
In the process of gradual accumulation of changes, at some point any system reaches its growth limit and becomes less and less able to continue its self-development and to maintain order. The result is determined by the objective necessity to undertake development efforts for the transition to a qualitatively new state and a higher level of functioning. Methodological and organizational approaches to development management must determine: the entities of development management, the orientation of management, the integration of management, and the kind of managerial response to changes.
Thus, the most realistic model treats the socio-economic system as a complex dynamic system consisting of endogenous nonlinear subsystems influenced by waves of exogenous and endogenous impacts. The chaotic nature of the educational process significantly complicates management, and this requires the organization and structuring of information provision.
Principles of Socio-Economic System's Development
At each stage of society's development, the state of the system is determined by the trends of socio-economic development and the technological modes of economic development; new conditions, opportunities, and needs arise (financial, legal, scientific-technical, organizational, informational, motivational, etc.). The need to identify systemic patterns allowed us to formulate a number of principles of socio-economic systems' development:
- taking into account the interests of all of the system's stakeholders;
- manageability: availability of a control model for the dynamic, active socio-economic system;
- information availability: a full informational description of the controlled object, with evaluation mechanisms and data interpretation;
- effectiveness: the results of the system's operation must satisfy its agents;
- predictability: mission, strategy, strategic plan, roadmap.
These positions are realized through: participatory management (with the participation of all agents of the system); the target orientation (beneficial effect) of each cycle of the system, after which the conditions or vector of development are changed while maintaining the integrity of the system; the systemic nature of innovations (developments), that is, the orientation of all processes of the system toward the end result; and anticipatory management with continuous monitoring for early detection of deviations of the development direction from the intended target.
Identification of the Development's Strategic Factors
The identified systemic patterns determine the targets for development management:
- the potential of a socio-economic system depends to a significant extent on how the interaction of its elements is organized;
- the socio-economic system is characterized by a high degree of instability; "cause" and "consequence" can be interchanged;
- the socio-economic system is able to maintain stability by countering external and internal conditions;
- the structural stability of the socio-economic system is determined by the resistance of its weakest subsystem;
- the development cycles of the system, its subsystems, and their component objects, which include smaller elements, have different durations, and these cycles are synchronized with one another in a complex manner;
- instability or crises of the socio-economic system are observed in both the external and the internal environment;
- the life of a socio-economic system corresponds to all of its available resources;
- the efficiency of the socio-economic system is substantially below 100%.
The Active Socio-Economic Systems Management Model Parameters
According to the theory of active systems (Novikov), a model of an active socio-economic system is specified by a set of parameters:
- the collective of participants and stakeholders of the system;
- the set of relationships between entities in the system (managerial, hierarchical, informational);
- the sequence of operation (the direction of control and information flows);
- the dynamics of functioning (the length of control cycles of the entities of the educational system);
- the level of impact (the degree of influence of the system's entities on the state of the system);
- the effectiveness parameters of the system state (management criteria);
- the conditions of operation (the set of acceptable states for the system's entities);
- the certainty of the system's participants and stakeholders (the level of awareness when making decisions).
These parameters define the mechanism of interaction of its elements, i.e., a possible control mechanism of its functioning and development from different perspectives.
Stages of Developing Information Support for Socio-Economic Systems' Strategic Development
The starting point in developing the information basis for the strategic development of socio-economic systems is the dominant strategy, which, in the form of needs and purposes, exerts a selective influence on the analysis and synthesis of all incoming information. The result of this selective analysis and synthesis is the formation of a dynamically reloadable array of information, including source, processed, launch, command, diagnostic (feedback), and management information, while the original goal serves as an array of information about the future and desired states of the social or economic system. The types and properties of information and the speed of its circulation and processing define the communication channels, which can have a reinforcing or weakening effect.
Information flows become a strategic resource for the system's development only when there is a uniform system of selection, analytics, and metadata management with defined information qualities and with automated circulation and content. This requires the integration of organizational, technological, and informational support, so that a dynamic database of accurate, complete, holistic, accessible, structured, and formalized information resources exists at the scale of the object.
Information provision is generally understood as the provision (use) of information for any activity, control system, or automated system. Information provision formed for systems' strategic development should coordinate all information work and support the user with on-demand information resources that have the necessary qualities and dynamic properties and are suitable for repeated use.
The development of information support for socio-economic systems' strategic development should include the following steps:
- selection of meaningful content for information support;
- identification of the sources of information support;
- methods of obtaining, classifying, and grouping information;
- identification of distributed information flows and their formalization;
- verification of the information obtained through interaction of the data and study of the degree of correlation;
- analysis of the data and forecasting of the state of the external and internal environment, taking into account existing economic and socio-political trends.
The information support thus formed for socio-economic systems' strategic development (Figure 1) is applicable at each of these stages and is an organic combination of scientific knowledge, scientific methodology, and techniques with the latest technology in all aspects of information work.
Figure 1. The procedure of information support formation for socio-economic systems' strategic development
Identifying the Quality of Information Support for Socio-Economic Systems' Development
Source selection is the initial stage of information support formation for socio-economic systems' strategic development; it determines the starting position of the information flow and its authenticity. Here, a source is understood as a documented carrier of any socio-economic information that directly or indirectly affects the interests of the system's potential agents (stakeholders).
The determining factor at this stage is the elimination of inconsistent data, which would otherwise erode the subsequent analytical presentation of the socio-economic situation. Another important factor is the cost of the consumed information: there is a likelihood of obtaining "cheap" but inadequate data, while accurate reports produced by specialized analytical groups carry a high price. Therefore, when choosing sources, it is necessary to take into account the cost of the basic information that subsequently forms the basis for strategic planning. It also makes sense to pay attention to the availability of information, that is, the way it is published in open access and the level of its replication. The required timeliness of basic information for strategic development is not too high, since strategy is a long-term process. Therefore, the accumulation and updating of the database should be done at set time intervals, taking into account the effect of forecast situations and socio-economic trends.
The information content of the sources is determined by the trends of the global and domestic socio-economic situation as a whole, by significant global political and socio-economic events, and by forecasts based on interpreted data about the functioning of the system's processes.
There is a method for evaluating information sources both by source criteria and by the quality of the information (Levina & Gumerov, 2015). Interpreting it for the purposes of this research, we propose the following method for determining sources on the basis of scaling and the formation of a ranks matrix: each of the sources' parameters is assigned an importance index (1-4) on each of the information quality criteria (Table 1).
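A minimal sketch of how such a ranks matrix could be assembled is shown below. It assumes that each source is scored 1-4 against each quality criterion and that sources are ranked by the sum of their scores; the source names, criteria, scores, and the simple-sum aggregation rule are hypothetical illustrations rather than the method prescribed in the cited work.

```python
# Minimal sketch: ranking information sources on quality criteria (importance indices 1-4).
criteria = ["reliability", "cost", "availability", "relevance"]

# Hypothetical importance indices assigned to each source for each criterion.
scores = {
    "official statistics": [4, 3, 4, 3],
    "analytical reports":  [4, 1, 2, 4],
    "open web sources":    [2, 4, 4, 2],
}

# Aggregate by summing the indices and rank sources from best to worst.
totals = {source: sum(vals) for source, vals in scores.items()}
ranking = sorted(totals.items(), key=lambda item: item[1], reverse=True)

for rank, (source, total) in enumerate(ranking, start=1):
    print(f"{rank}. {source}: total importance index = {total}")
```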
Comprehensive Assessment of Information with Regard to Possible Consequences for Strategic Decisions
The importance of managing the internal and external resources of socio-economic systems makes it necessary to define the "organizational boundaries of the environment" and to use all information resources cost-effectively, aimed at the development of the organization in the long term.
In this area there is an established international standard for stakeholder engagement, AA1000SES [4], which provides a regulatory framework for planning, executing, evaluating, reporting, and non-financially auditing the quality of interaction with stakeholders; it implements a functional interface between all interests through mechanisms and procedures that ensure the rights of stakeholders. From the point of view of information support formation, this standard influences the selection of content and the choice of information sources, ensuring that significant impacts on stakeholders, the boundaries of action, and resource constraints are taken into account.
According to the AA1000SES standard, it is recommended to determine the team of stakeholders of the socio-economic system and the significant areas, content, purpose, and mechanisms of interaction. These include foreign and domestic policy issues, financial objectives and problems, socio-ethical norms, and so on, which identify the potential issues of interaction, that is, the necessary parameters of the information base for information provision. The quality of interaction at this stage determines the degree of accountability in receiving and accepting information from stakeholders under the developed protocol, and the degree of stakeholder participation in planning activities for the rapid detection and distribution of information.
The next stage of information support formation is the classification, formalization, and structuring of information flows. The total volume of documented information obtained from fixed information sources and needed for the strategic development of the socio-economic system is what we call the information flow, whose speed is determined by the amount of information processed per unit of time. As a rule, incoming information flows are classified according to certain criteria (type of medium, data source, frequency, purpose, openness, type of transfer, exchange type, orientation). For solving problems of strategic development, the classification signs of the information flow are divided into fixed and variable. As fixed signs, we take the electronic information medium, the prescribed period of use, the off-line mode of exchange, and the anticipatory orientation. As variable signs of information flow classification, we take the kind of information source, the information destination, and the degree of openness of the incoming data.
Formalization of the socio-economic system's initial data means the standardization and qualitative-quantitative interpretation of the information flow. Formalization allows rational use and simplifies the content for subsequent work, including in automated systems. It is a method of recording information, intended for further use or analysis, in a uniform manner according to specified criteria that ensure order, simplicity, and convenience of recording, transmission, and presentation.
Information structuring is also a multifactorial problem. The formation of the initial structure for analysis is closely linked to the establishment of a database and to the methods for collecting, storing, and transmitting information and for analysis and monitoring. The information structure of formalized and classified data should, on the one hand, be unambiguously interpretable and, on the other hand, possess potential semantic content.
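To make the classification and formalization step more concrete, the sketch below models a single information-flow record with the fixed signs listed above (electronic medium, prescribed period of use, off-line exchange, anticipatory orientation) and the variable signs (source type, destination, degree of openness). The field names and example values are hypothetical, chosen only to mirror the classification signs described in the text.

```python
# Minimal sketch: a formalized record for one incoming information flow.
from dataclasses import dataclass

@dataclass
class InformationFlow:
    # Fixed classification signs (held constant in this setting).
    medium: str = "electronic"
    period_of_use: str = "prescribed"
    exchange_mode: str = "off-line"
    orientation: str = "anticipatory"
    # Variable classification signs.
    source_type: str = ""    # e.g. "official statistics", "stakeholder report"
    destination: str = ""    # e.g. "strategic planning unit"
    openness: str = ""       # e.g. "open", "restricted"

flow = InformationFlow(source_type="official statistics",
                       destination="strategic planning unit",
                       openness="open")
print(flow)
```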
The final stage of information support formation is data verification, related both to the conditions under which the information was obtained and to its content. In information analysis there is a possibility of inaccuracy in the original data, unreliability of the source, or subjectivity of expert examination. Therefore, verification, as the establishment of authenticity and the confirmation of the truthfulness, accuracy, and reliability of the received information, is mandatory for strategic planning.
Discussions
The problem of selecting the semantic content of information provision is closely connected with the formation of balanced indicators. Initially, this system is based on the selected areas of analysis of the external and internal environment of the socio-economic system's functioning, defining the boundaries of information requests and of interaction with the information environment. The strategic analysis subsystem supplies information to the entire strategy development process, highlighting four areas of strategic analysis: macro conditions, industry conditions, market conditions, and the conditions of the corporate environment.
Effective analysis of the environment external to the system provides a genuinely executable development strategy (and, if needed, its adjustment), levels risks, and anticipates potential threats, while knowledge of internal problems and potential opportunities allows operational management and planning of the use of internal resources.
Conclusions
One of the ways to achieve sustainable development of socio-economic systems is to use the possibilities of modern information and communication technologies. Their use can meet the current needs of socio-economic systems' practice, such as improving research methods, modeling, forecasting, management, and development.
From the standpoint of digital inclusion in socio-economic systems (i.e., increasing the share of knowledge and information as a public resource through the distribution of computers and computer networks), there is a need to create and organize a shared resource (information provision) that would allow information to be freely created and consumed in all aspects of functioning and in the interaction of system units, at all hierarchical levels, with standards and protocols that ensure interoperability of all components.
From the point of view of management, it is therefore necessary to organize the information management of socio-economic systems as the integration of the principles, forms, technologies, and rules for the formation, processing, analysis, and use of information resources. Information management is one aspect of systems management in general; it provides formalization, structuring, processing, and storage of metadata, which promotes the adoption of adequate management decisions.
Table 1. The information sources' ranking on quality criteria
Research on the construction mechanism of science and innovation corridor from the perspective of innovation chain
From the perspective of the innovation chain, this paper takes Highway 128 and the Guangzhou-Shenzhen Science and Technology Innovation Corridor as examples to explore the construction mechanism of the science and technology innovation corridor. The findings are as follows. First, the corridor can be regarded as a virtual organization containing core nodes and auxiliary nodes; not all nodes are equal, and the core node performs the core function. Second, if the function of the core node is weakened, the science and innovation corridor will show a deterioration trend. Third, the construction of the science and innovation corridor is a process along the whole science and innovation chain, from the flow and integration of knowledge resources, to the spillover of innovation incubation, and then to market application. However, there are still problems in the chain, such as obstacles to resource flow, breaks in innovation incubation, and structural imbalance. Therefore, in order to realize the benign interaction of multiple subjects in the science and innovation corridor, there needs to be a core subject that plays the role of agglomeration and diffusion, so as to realize the coordinated development of the corridor.
Introduction
The science and technology innovation corridor is a unique and efficient "organization" or innovation system. The science and innovation corridor has increasingly become an important way to promote regional innovation linkage. In the final analysis, the science and innovation corridor is a form of cross-regional organization, and how to build the corridor has become the focus of research.
This paper studies the science and technology innovation corridor as a form of cross-regional collaborative development. Multiple subjects in the corridor perform their corresponding functions, and there must be a core subject that plays the central role; all subjects being in their proper positions and performing their respective functions is a necessary condition for the effective operation of the entire corridor. Since Schumpeter put forward the concept and theory of innovation, most scholars have held that innovation is not only the development of new scientific knowledge and technology, but also the transformation of this knowledge and technology into productive forces and its diffusion through industrialization. Understood in this way, the science and innovation corridor starts from the source of scientific knowledge, contains a variety of subjects, passes through multiple links, and finally realizes the whole process of industrialization; this process involves the government, colleges and universities, enterprises, financial institutions, research institutions, intermediary service agencies, and other innovation subjects, among which colleges and universities, enterprises, and research institutes are the important functional nodes of the innovation chain, while government departments, financial institutions, and service organizations are the supporting points of the science and technology innovation chain [1]. The innovation chain is an important component of the innovation ecosystem [2], and structural imbalance in the innovation chain is the root of problems such as the "insufficient transformation of scientific and technological achievements" and the disconnect between science and technology and the economy that have long existed in China's economic development. Therefore, in combination with the actual situation of collaborative innovation in science and innovation corridors, this paper explores the construction mechanisms of Highway 128 and the Guangzhou-Shenzhen Science and Technology Innovation Corridor from the perspective of the innovation chain, so as to form a replicable and extensible construction mode [3], promote the in-depth development of science and technology innovation corridors, and strengthen the country's overall science and technology innovation strength. This study not only deepens the understanding of the science and innovation corridor, but also provides a reference for constructing regional collaborative innovation in science and innovation corridors based on the innovation chain.
Analyzing the construction mechanism of the science and innovation corridor from the perspective of the innovation chain
The innovation chain concept emerged from the academic community's thinking about the innovation process. Most domestic and foreign scholars hold that the innovation chain is the whole process from front-end basic research and development to back-end industrialization and diffusion, and that the key to this process lies in the effective transformation of scientific and technological achievements [4]. Since Marshall & Vredenburg put forward the concept, the innovation chain was initially regarded as an interactive process among multiple subjects such as raw material suppliers, product manufacturers, and product sellers [5]. Since then, researchers at home and abroad have conducted a series of studies on the meaning, composition, techno-economic connections, and functional nodes of the innovation chain [6]. In terms of connotation, the first category of definitions takes the perspective of process, emphasizing that the innovation chain is the whole process of innovation or of the transformation of scientific and technological achievements. The innovation chain is usually defined as the process from basic research to the formation of relevant scientific knowledge, which is used to create new technological products, and finally to the industrialization of those products [7]. Some scholars believe that the innovation chain is a whole process in which multiple participants such as the government, enterprises, universities, and research institutes rationally and optimally allocate resources and integrate with one another, finally industrializing knowledge and technological achievements [8]. The second category defines the innovation chain from the angle of knowledge innovation, holding that the innovation chain runs from scientific and technological knowledge, through the links of technological innovation, to industrialization; it revolves around a core subject and, guided by the market, links the relevant innovation subjects through knowledge innovation activities, forming a chain-structured model aimed at the economization of knowledge and the optimization of the innovation system. It emphasizes that the innovation chain is the creation, transfer, dissemination, and diffusion of knowledge. From this perspective, some scholars believe that the innovation chain is centered on one subject: it starts from an innovative idea and then connects multiple complementary subject nodes to realize knowledge innovation and technological innovation through mutual integration and division of labor. Some scholars also divide the innovation chain into two categories, a horizontal chain and a vertical chain. The horizontal chain refers to the process that takes product development as the core and realizes innovative design, research and development, industrialization, and commercialization through the integration of government, enterprises, and scientific research institutions. The vertical chain refers to the chain structure formed around the connections between the upstream and downstream links of the industrial chain. Other scholars emphasize that the innovation chain is a link between innovation subjects for the transfer and transformation of innovation elements, with the goal of producing innovative achievements that meet market demand [9].
The above scholars have mainly studied the concept of the innovation chain from two perspectives: first, that the innovation chain is a process running from innovative conception to industrialization; and second, that the innovation chain is a process in which multiple subjects must interact to realize innovation. As for models, some scholars have proposed a linear model that starts from basic research and ends in industrial application, as well as a cycle model composed of scientific knowledge, technological research and development, product development, market application, and other links.
Whatever the shape of a corridor, mathematically it can be thought of as a line. This paper treats the corridor as a unique innovation system, as shown in Figure 1, and therefore explores the construction mechanisms of the Route 128 corridor and the Guangzhou-Shenzhen corridor from the innovation chain perspective.
Analysis of the construction mechanism of the Highway 128 innovation corridor
The Highway 128 innovation corridor grew up around a semicircular highway built in 1951 around Boston, Massachusetts. Thousands of technology-oriented enterprises and research institutions are distributed linearly on both sides of the road, which lies about 16 km from downtown Boston and close to 65 universities, including MIT and Harvard University. It created the world-famous "Massachusetts miracle" and became known around the world as "America's technology road". The Route 128 innovation corridor experienced a boom in the 1960s and 1970s and a bust in the 1980s. In recent decades, with the development of aerospace, precision medicine, new materials, life and health, and other fields, it has risen again and has once more become a world-famous innovation centre.
Universities, as the source of knowledge, promote the development of regional innovation and entrepreneurship
The success of the Route 128 innovation corridor lies in the intellectual power behind it. Route 128 brings together 65 higher education institutions, including Harvard University and the Massachusetts Institute of Technology, which provide Boston's high-tech industrial district with a large number of senior scientists, engineers, and technicians. Colleges and universities, as the intellectual backing, play an important role in innovation research, participate widely in it, and continuously produce patented results. In the high-tech industrial zone along Highway 128, about 70% of the enterprises were founded by graduates of the Massachusetts Institute of Technology, so MIT has had the greatest influence on the development of science and technology in the Route 128 region. To promote the rapid transfer and transformation of scientific and technological achievements and to attract high-tech enterprises to settle, many industrial parks along Highway 128 have been laid out around universities, which greatly strengthens exchange and cooperation between enterprises and universities, speeds up the transfer and transformation of intellectual property, and helps enterprises gain rapid access to the universities' scientific and technological achievements. Universities are also supported in setting up platforms for the transfer of scientific and technological achievements, such as technology transfer services and technology transfer centres, and are encouraged to promote their existing achievements and to cultivate professional talents in a targeted way. Highway 128 has therefore gathered many industrial leaders [10], laying a solid foundation for the innovative development of this region.
Diversified innovation incubation measures promote regional innovation development
Innovation incubation in the Highway 128 region is mainly manifested in the following three points. First, the industry-university-research cooperation network. Regional innovation and development cannot be separated from the Highway 128 model of combining production, education, and research. Enterprises, universities, and research institutions should not be three independent individuals; otherwise it is difficult to form a whole-process innovation industrial chain running from basic research to the industrialization of achievements. The regional collaborative innovation centre maintains extensive contact with academia and industry, encourages R&D personnel of enterprises, universities, and research institutes to participate in the centre's cooperative technical research and development, and also encourages scientific and technological personnel to have the courage to establish technology-based enterprises.
Second, active venture capital. Boston's financial services industry is highly developed, and its venture capital investment ranks second in the United States. Such active venture capital not only promotes local innovation but also supports the growth of a large number of start-up companies. In addition, many venture capital firms provide seed money for high-tech enterprises, and local banks have promoted venture investment by establishing specialized venture capital companies, alongside some semi-official venture capital companies established by the government. All these measures have provided strong and favourable financing conditions for enterprises near Highway 128. Defence investment has also played an important role in the development of Route 128, and venture capital continues to contribute to the development of advanced technologies.
Coordinated development of regional integration
The Route 128 district has established an agency to provide coordination and information services, and the Massachusetts Department of Economic Development has established an Office of Business Development, staffed by experienced professionals who focus on providing a variety of unpaid information services to help businesses in the Route 128 region grow. For example, it answers questions about the tax policies of the region, the conditions for enterprises to settle in, the corresponding incentive measures, the contact information of relevant departments, and legal affairs, helping enterprises find solutions to their problems quickly. The Boston city authorities also organize a quarterly cocktail reception, to which governors, mayors, ministers, community representatives, and entrepreneurs are invited to exchange information and discuss the future. The local government has issued a policy under which scientific and technological innovation enterprises recognized as engaged in innovative research or new product development can enjoy a 3% tax reduction, as well as exemption from sales tax on the purchase of scientific research supplies and office supplies. Most enterprises in the Route 128 area are able to obtain tax reductions [11].
Therefore, as shown in Figure 2(a), the brilliance and successful rise of Highway 128 are mainly due to the knowledge source function and innovation incubation function provided by Boston as the core node: the whole science and innovation chain was complete, and the efficiency of its structure and functions was maximized through the spatial layout. However, it is well known that in the 1970s, owing to the Great Depression of the United States economy and the decrease in military orders from the government, a large number of companies went bankrupt and the unemployment rate began to soar, which had a serious negative impact on the economy of the Boston area; from then on, the Route 128 area was in economic crisis. In the early 2000s, however, the Route 128 area adjusted its economic structure, relied on Boston's solid industrial base and talent strengths, and developed high-tech industries; the innovation and entrepreneurship atmosphere began to flourish again, and Route 128 began to prosper once more. Thus, as shown in Figure 2(b), the decline of Highway 128 in the middle period can be attributed to the weakening of the function of the core node: in the late 1970s, no new node replaced Boston, resulting in the deterioration of the whole science and innovation chain, with no core and no peak. At the beginning of the 21st century, Boston again served as the core node incubating innovation, and the surrounding cities played a supporting role in bringing Route 128 back to its former glory.
Analysis of the construction mechanism of the Guangzhou-Shenzhen Science and Technology Innovation Corridor
The Guangzhou-Shenzhen Science and Technology Innovation Corridor is dumbbell shaped: Guangzhou has strong science and innovation capacity, Shenzhen has a high degree of marketization, and Dongguan has a developed manufacturing industry and complete supporting facilities and serves as the meeting ground between Guangzhou and Shenzhen. The corridor is therefore one that formed over time on the basis of the industrial economy and technology. It adopts the development approach of "one corridor linkage, ten cores driving, and multiple points supporting". According to the report of the 12th Party Congress of Guangdong Province, Guangdong should learn from the experience of the US Highway 128 science and technology corridor, and the construction of the Guangzhou-Shenzhen Science and Technology Innovation Corridor should seize all kinds of excellent innovation resources along the corridor, integrate them, radiate and drive the development of surrounding areas, and promote closer regional economic cooperation.
The source of knowledge lays a foundation for the development of regional scientific and technological innovation
The Guangzhou-Shenzhen Science and Technology Innovation Corridor shares obvious common characteristics with Silicon Valley and Boston in the United States. First, all are close to a world-famous greater bay area, with strong industrial support and a strong economic foundation. Second, innovation elements such as university research institutions are agglomerated, and the surrounding areas include a number of research and development institutions, new universities, venture capital funds, and so on; in addition, these areas have complete infrastructure and a beautiful ecological environment and can attract high-tech talents to settle. Guangzhou has 79 institutions of higher learning, 7 state key laboratories, and 233 provincial and ministerial key laboratories, and more than 70% of Guangdong Province's scientific and technological personnel and 95% of its doctoral students are concentrated there; Guangzhou is known as the "Innovation Brain". Shenzhen was once rated as "the pearl on the innovation crown" by the British magazine The Economist and hosts a number of world-class innovative technology enterprises such as Huawei, ZTE, DJI, and BYD [12].
Diversified innovation incubation measures promote regional innovation development
The innovation incubation of the Guangzhou-Shenzhen Science and Technology Innovation Corridor is mainly manifested in three aspects. First, giving full play to the role of enterprises as the main body of innovation. A combination of measures is taken to help enterprises improve their independent innovation ability: promoting increased investment in research and development, supporting enterprises in jointly setting up technology research and development institutions with universities and research institutes, improving the ability to tackle key and core technologies, developing and expanding high-tech enterprises, and leading the rapid development of modern industry with innovation. Enterprises are encouraged to undertake major science and technology projects, and the active flow of innovation factors toward enterprises is facilitated; the number of first-class innovative enterprises is increased and small and medium-sized innovative enterprises are developed and strengthened, so as to form a cluster of innovative enterprises with independent intellectual property rights, independent brands, and strong core competitiveness. To secure independent intellectual property rights, local support is provided for establishing overseas production bases and research and development centres, mastering key core technologies, raising enterprises' adoption of international standards, and increasing their influence over international product pricing.
Second, the construction of an industrial incubation system plays a driving role. The corridor has launched a whole-chain incubation system that takes the "makerspace - accelerator - professional incubator - industrial park" as its carrier, while drawing on the concentrated resources of colleges, universities, and research institutes and guiding all kinds of industrial parks and leading enterprises to build technology business incubation platforms, forming a pathway that covers the whole cycle of science and technology enterprise development. The incubation and growth plan for science and technology business incubators is being further implemented, and social and private capital is encouraged to invest actively in incubator construction. The government provides technical skills, science and technology finance, entrepreneurship guidance, and inspection and testing services, and guides incubation institutions to establish a sound ecological chain system, construct professional incubation modes, build multi-level service teams, improve the success rate of incubated enterprises, and promote the development of regional industries.
Coordinated development of regional integration
An innovation cluster refers to the techno-economic network composed of universities, research institutions, enterprises, venture capital companies, intermediary service organizations, and so on, which has the characteristics of agglomeration economies and knowledge spillover; its members form strategic alliances or carry out various forms of cooperation through the value chain and the knowledge chain. Owing to the high intensity of R&D investment, the innovation cluster has become an important vehicle of knowledge transfer and dissemination, forming an efficient innovation network with multi-subject participation and exerting influence on, and promoting, the economy and society. The concrete manifestations of regional integration are as follows. First, promoting policy interaction within the corridor: focusing on industrial co-construction, the opening and sharing of scientific and technological resources within the region, and the flow of innovative talents among regions, and coordinating major science and technology, talent, and industrial policies in the corridor to ensure the free flow of talents, capital, technology, information, and other innovation elements. Second, unimpeded channels for talent flow: promoting the flexible two-way flow of talents between enterprises, universities, and research institutions, and, by providing high-quality educational resources, medical resources, and living environments, improving the supply of urban public services, which helps to retain talents in different places and relieve their worries. Third, the Guangzhou-Shenzhen Science and Technology Innovation Corridor breaks down the administrative barriers that hinder technology and innovation, concentrates the innovation resources of the three cities, and forms an innovation economic belt with functions running through it, industrial linkage, and spatial linkage; it agglomerates high-tech innovation resources, high-tech talents, high-quality scientific research, and high-tech innovative enterprises from around the world, with the aim of building a global science and technology innovation centre, a national leading demonstration zone for system reform, and the carrier of the Greater Bay Area's science and technology innovation centre in Guangdong, powerfully promoting the implementation of the national innovation-driven development strategy. Fourth, government guidance and support: the government plays a pivotal role in the construction of the science and innovation corridor, and targeted government support helps to resolve the dilemmas of science and technology innovation.
The rapid development of the Guangzhou-Shenzhen Science and Technology Innovation Corridor therefore cannot be separated from the clear functional positioning of its cities. As shown in Figure 3, Guangzhou gives full play to its function as a source of knowledge, using its concentration of universities and research institutions to establish a national innovation centre and an international science and technology innovation hub. Dongguan plays a supporting role, forming a solid bridge between Guangzhou and Shenzhen and using its manufacturing clusters and industrial parks to build a world-class advanced manufacturing base that underpins the rapid development of the whole corridor. Shenzhen gives full play to its role as an incubator of innovation, speeding up the establishment of an international centre for scientific, technological, and industrial innovation and building a globally competitive innovation pilot zone on the strength of its concentration of market-oriented high-tech enterprises. Among these functions, we argue that innovation incubation in Shenzhen is the core one: it turns knowledge into productivity and drives the rapid development of the whole science and technology innovation chain.
Figure 3. Functional positioning along the Guangzhou-Dongguan-Shenzhen corridor: from source of knowledge through innovation incubation to industrial application.
Discussion
Route 128 is a case of innovation in which government, enterprises, and universities each play a role and work together to form an innovation ecological chain; within this chain they support, interact with, and influence one another, making the area highly creative and, in combination with the national innovation system, enabling the common prosperity and innovative development of the whole Route 128 region. In the Guangzhou-Shenzhen Science and Technology Innovation Corridor, Guangzhou and Shenzhen concentrate a number of first-class universities, research and development centres, corporate research parks, industrial parks, national laboratories, and industrial manufacturing bases. This allows information exchange, technical services, education and training, financial support, and other innovation elements to flow between entrepreneurs, which speeds up the transfer and transformation of scientific and technological achievements and effectively reduces business risk. Based on the above case analysis, we summarized the basic situation of each case and the functions of each node, as shown in Table 1 and Table 2. This paper describes the chain of the science and innovation corridor as running from the source of knowledge to innovation incubation and then to industrial application. Meanwhile, the whole corridor has its own distinctive geographical distribution and ultimately achieves regional integration and coordinated development. First of all, the knowledge source has many channels: universities, research institutions, research institutes, and platforms are all sources of knowledge in the corridor. Knowledge production and collaborative innovation involve enterprises, universities, research institutes, and other innovation actors that engage with one another, play to their respective strengths, and organically combine knowledge production, scientific research, and economic development; through resource sharing and complementary advantages, the innovation actors achieve win-win outcomes. In the science and innovation corridor, the division of labor among upstream, midstream, and downstream actors forms a networked organizational structure that realizes a virtuous cycle of resources and elements within the corridor.
Secondly, the transformation of scientific and technological achievements is the key link in turning science and technology into real productive forces. Scientific and technological knowledge is the foundation of enterprise innovation, but it is only potential capital and cannot create real productive value by itself. To promote the development of the science and innovation corridor, the key question is therefore how to convert this potential advantage into a real competitive advantage. A diversified innovation incubation system is needed to turn the knowledge source into productivity: multiple innovation actors working in the same direction create synergy in scientific and technological innovation and provide the driving force for the development of the corridor.
Finally, the science and innovation corridor is a distinctive collaborative innovation zone that crosses administrative boundaries. The key to its construction is to break down the barriers between city boundaries and strengthen the flow of elements among the cities along the corridor. The full flow of factors, including talent exchange, capital docking, and project cooperation, is an important basis for regional integration. To promote the steady and sustainable development of the corridor, it is necessary to strengthen interaction among the cities along it, build a coordinated urban network, and achieve regionally integrated development. The distinctive organizational structure of the science and innovation corridor also rests on the coordination of its various subsystems.
Conclusion
First, the construction mechanism of the science and innovation corridor. This paper describes the chain of the science and innovation corridor as running from the source of knowledge to innovation incubation and then to industrial application, which closely resembles the innovation chain model. The corridor contains a variety of actors and many links, but the actors are not equal: at least one node plays a leading role in gathering and guiding resources. Each science and innovation corridor should have a core node performing this core function, in the spirit of the saying that high peaks rise from high plateaus. If there is no core node, or if the core node is weak, we conclude that the corridor will not succeed and will deteriorate. Under a multi-actor governance structure, each governing body contributes a different function according to its positioning, and the elements in the corridor should actively gather toward the core nodes. As an organizational form, the corridor should not remain in a static balance between "exploitation" and "exploration" when dealing with the most fundamental problems of innovation, but should continuously coordinate a dynamic balance within the region so that the whole corridor develops soundly.
Second, the core nodes of the science and innovation corridor are the nodes that perform the innovation incubation function. A corridor needs resource agglomeration. In our case studies, the core nodes are Boston and Shenzhen, both large cities with the required capabilities, resources, and environment. Only as core nodes can they agglomerate and diffuse resources, exercise their core capabilities, and lead and support the coordinated development of the whole corridor.
Current research "chain" concept has been widely applied to various fields, this paper studies from the Angle of innovation chain corridor characteristics and its construction mechanism, through the study of corridor, in turn, we will find the "breaking point" of the innovation chain is mainly produced in knowledge resources agglomeration to achievements transformation stage, the innovation of the multivariate hatch is the key to the success of corridor construction, innovation chain is the link between weak point, improve the links of the core of the innovation chain is the key to optimize the link between the problem. Only by integrating various resource elements and getting through all links of the innovation chain at the same time can the whole chain give full play to its effect.
|
v3-fos-license
|
2020-04-27T20:38:18.659Z
|
2020-08-12T00:00:00.000
|
216325876
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://srcd.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cdev.13417",
"pdf_hash": "c5cdcddc5cbed7eae465a293c8fcd569ccd3b2b1",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3065",
"s2fieldsofstudy": [
"Sociology",
"Education"
],
"sha1": "261cc140efd062c2bfbdc2e16fa8ceac447d792c",
"year": 2020
}
|
pes2o/s2orc
|
How Diversity Approaches Affect Ethnic Minority and Majority Adolescents: Teacher–Student Relationship Trajectories and School Outcomes
This study aimed to relate school diversity approaches to continuity and change in teacher-student relationships, comparing Belgian-majority (N = 1,875, Mage = 14.56) and Turkish and Moroccan-minority adolescents (N = 1,445, Mage = 15.07). Latent-Growth-Mixture-Models of student-reported teacher support and rejection over 3 years revealed three trajectories per group: normative-positive (high support, low rejection) and decreasing-negative (moderate support, high-decreasing rejection) for both groups, increasing-negative (moderate support, low-increasing rejection) for minority, moderate-positive (moderate support, low rejection) for majority youth. Trajectories differed between age groups. Student and teacher perceptions of equality and multiculturalism afforded, and assimilationism threatened, normative-positive trajectories for minority youth. Diversity approaches had less impact on majority trajectories. Normative-positive trajectories were related to improved school outcomes; they were less likely, but more beneficial for minority than majority youth.
Schools are key social contexts for adolescent development (Eccles & Roeser, 2011), and school outcomes have long-lasting implications for future life chances (Heath & Brinbaum, 2014). Against the background of persistent inequalities in school outcomes between ethnic minority and majority students (Heath & Brinbaum, 2014), we focused on the quality of teacher-student relationships as a key protective factor (Sabol & Pianta, 2012). We asked how teacher-student relationship quality evolves throughout secondary school for ethnic minority and majority youth; how evolving relationships differ between individuals and schools; and how these relationships benefit later school outcomes. Building on García Coll et al.'s (1996) integrative model of minority child development and bridging theoretical approaches from ecological systems (Bronfenbrenner, 2005) and intergroup relations (Derks, Van Laar, & Ellemers, 2007), our study aimed to disentangle distinctive intergroup factors in the school environment for minority adolescents from protective factors for all adolescents.
Teacher-student relationship quality refers to students' positive as well as negative experiences with their teachers. Thus, youngsters can experience teacher support such as when they feel understood and teacher rejection such as when they feel treated unfairly. While supportive relationships promote achievement and adjustment (McGrath & Bergen, 2015;Sabol & Pianta, 2012), experiences of rejection or discrimination undermine these outcomes (Brown & Chu, 2012;Thomas, Caldwell, Faison, & Jackson, 2009). Moreover, early experiences can be transient or stable as relationships continue or change through adolescence. It is critical to look at change and continuity in these relationships because cumulative experiences of supportive relationships enable better school adjustment than do temporary experiences, whereas prolonged periods of relational difficulty are more harmful than are shorter periods (Ladd, Herald-Brown, & Reiser, 2008).
Our first objective was thus to identify distinct patterns of teacher-student relationship quality over time. We asked how trajectories of teacher support and rejection differed between ethnic majority and Turkish and Moroccan minority adolescents in Belgium. Turkish and Moroccan minority youth in Europe, most of whom are second-generation of immigrant origin, perform worse than their majority peers in school, even when controlling for parental and individual background (Heath & Brinbaum, 2014). They are also targets of anti-Muslim prejudice in Europe, as the majority of them are Muslim (Strabac & Listhaug, 2008). Our second objective was to predict distinct relationship trajectories from the way schools deal with cultural diversity as perceived by teachers and students. As critical components of diversity approaches in schools, we assessed perceived equality (i.e., equal treatment), multiculturalism (i.e., valuing diversity and combating discrimination), and assimilationism (i.e., expecting minorities to relinquish their minority culture for the majority culture). No previous research, to our knowledge, has examined how cultural-diversity approaches impact teacher-student relationships. Our third objective was to associate relationship trajectories with changes in school outcomes over time. We assessed "hard" outcomes such as grades and school compliance and "soft" outcomes such as school engagement and belonging.
Adolescence is the period when young people's views on equality and diversity are formed as they become more aware of, and susceptible to, discrimination or unfairness (Baysu, Celeste, Brown, Verschueren, & Phalet, 2016). At the same time, older adolescents are more at risk of experiencing negative relationships with teachers in secondary school (McGrath & Bergen, 2015). Our fourth aim therefore was to examine age differences in the relationship trajectories of adolescents and in the effects of cultural diversity approaches on these trajectories. In the following section, we will discuss continuity and change in teacher-student relationships, cultural diversity approaches as affordances, and school outcomes as consequences of these relationships.
Continuity and Change in Teacher Support and Rejection
Supportive relationships with teachers promote better school outcomes for minority and majority students alike (McGrath & Bergen, 2015;Roorda, Jak, Zee, Oort, & Koomen, 2017;Roorda, Koomen, Spilt, & Oort, 2011;Sabol & Pianta, 2012). According to Bronfenbrenner's (2005) ecological systems theory, at the level of microsystems (such as schools) proximal processes (such as interpersonal relationships between teachers and students) are the primary mechanisms that explain developmental outcomes. Although the teacher-student relationship is important for all students (Roorda et al., 2017), minority students are at risk of experiencing less support or more discrimination from teachers (McGrath & Bergen, 2015). From an intergroup relations perspective, we associate teacher support and rejection with social-identity affirmation and threat (Derks et al., 2007). As most teachers have a majority background, minority adolescents' relationships with them have a distinct intergroup dimension so that teacher support (vs. rejection) signals the (de)valuation of their minority identity in school (Verkuyten, Thijs, & Gharaei, 2019). In line with García Coll et al.'s (1996) integrative model of minority development, discrimination puts children at risk of poorer developmental outcomes through affecting their relationships and engagement with learning. Thus, minority experiences of teacher discrimination predict impaired school outcomes (Brown & Chu, 2012;Thomas et al., 2009). To disentangle distinctive minority experiences from those common to all adolescents, we investigated both minority and majority experiences of teacher support and rejection over time.
Longitudinal studies about teacher-student relationship quality generally document a decline with age (O'Connor & McCartney, 2007) and with transition from primary to secondary school (Hughes & Cao, 2018). Looking beyond general trends in relationship quality, five longitudinal studies modeled different trajectories of teacher-student relationship quality in primary school, using measures of teacher-reported closeness and conflict (Bosman, Roorda, van der Veen, & Koomen, 2018;O'Connor & McCartney, 2007;O'Connor, Collins, & Supplee, 2012;O'Connor, Dearing, & Collins, 2011;Spilt, Hughes, Wu, & Kwok, 2012). Most studied samples of largely ethnic majority children (11%-20% ethnic minority), with Spilt et al. (2012) as a notable exception using an ethnically diverse sample. These studies did not distinguish between trajectories of ethnic minority and majority children, but two (Bosman et al., 2018;Spilt et al., 2012) tested the effect of ethnic minority status on the trajectories.
Our study extends these studies in several ways. First, rather than reports of teachers, we focus on adolescents' own experiences of their relationships with teachers, which can affect school outcomes, regardless of their congruence with teacher perceptions (Wu, Hughes, & Kwok, 2010). Second, while our measure of teacher support (e.g., "teachers understanding you," "having attention for you") overlaps with closeness or warmth in teacher-student relationship, our measure of rejection differs from the conflict measures used in these studies. Although both rejection and conflict indicate negative relationships, conflict refers to disharmonious and coercive interactions between teacher and student (e.g., "This child and I always seem to be struggling with each other," in all five studies, "I often need to discipline this child," in Spilt et al., 2012), whereas rejection indicates when students feel that teachers treat them unfairly or make them feel unwelcome or incompetent. Such rejection experiences can undermine trust and perceived justice (Baysu et al., 2016;Yeager, Purdie-Vaughns, Hooper, & Cohen, 2017).
Third, we aimed to replicate previous findings for adolescents' experiences of teacher support and rejection. Because experiencing teacher support is normative and more likely than rejection, for both minority and majority youth, we expected that the majority of ethnic minority and majority adolescents would follow a normative-positive trajectory, experiencing moderate-to-high support (Bosman et al., 2018; O'Connor & McCartney, 2007; O'Connor et al., 2011, 2012) and low rejection (Bosman et al., 2018; O'Connor et al., 2012; Spilt et al., 2012). We also expected smaller numbers of minority and majority adolescents to have non-normative or problematic relationships, either initially or increasingly over time. Problematic trajectories can take the form of worsening relationships over time (increasing teacher rejection, decreasing support, or both) and of initially negative yet improving relationships over time (decreasing teacher rejection, increasing support, or both; Bosman et al., 2018; O'Connor et al., 2011, 2012; Spilt et al., 2012). One might also find a most negative relationship trajectory with low support and high rejection (O'Connor & McCartney, 2007; O'Connor et al., 2011, 2012; Spilt et al., 2012).
Fourth, relationship trajectories have not been analyzed separately for ethnic majority and minority youth. Minority students may experience less teacher support or more unfair treatment than majority peers in school (McGrath & Bergen, 2015;Verkuyten et al., 2019). Research on trajectories of teacher-student relationship has shown that teachers report less closeness and more conflict with minority children (Bosman et al., 2018), who are more often represented in increasing-conflict trajectories than their majority peers (Spilt et al., 2012). We expected that minority adolescents might be more at risk of having problematic relationship trajectories than majority peers. Minority adolescents might either experience specific forms of problematic relationships with teachers or face the common problematic trajectories more often. To allow for group differences in both form and frequency of relationship trajectories, we identified different trajectories within minority and majority samples separately.
Fifth, we shifted the focus from primary-school children to adolescents and their teachers in secondary school. Adolescents generally experience less supportive teacher relationships as they get older (McGrath & Bergen, 2015). Adolescence is also a period when minority children in particular become more aware of, and vulnerable to, discrimination (Baysu et al., 2016). Thus, middle-to-late adolescents can expect more ethnic victimization and more readily infer ethnic motives even in ambiguous situations (Killen, Henning, Kelly, Crystal, & Ruck, 2007). Against this background, we explored whether older majority and minority adolescents were more likely to develop problematic relationship trajectories relative to younger adolescents.
Finally, we contextualized different relationship trajectories for majority and minority adolescents by focusing on cultural diversity approaches in school.
Cultural Diversity Approaches in Schools
The quality of relationships in schools and how well schools manage diversity are key to a positive school climate (Thapa, Cohen, Guffey, & Higgins-D'Alessandro, 2013;Wang & Degol, 2016). From an ecological systems theory approach (Bronfenbrenner, 2005), teacher-student relationships as proximal processes are afforded by the school environment as a microsystem. Thus, the school environment shapes how teachers deal with their students and how students see their teachers. No research, to our knowledge, has related a positive diversity climate, such as when schools value fairness or diversity, to individual experiences of teacher-student relationship quality. By bridging theoretical approaches from ecological systems and intergroup relations (Derks et al., 2007), we connected cultural diversity approaches to teacher-student relationships, particularly for minority youth. In line with a social identity approach of intergroup relations, we assume that minority youth feel more included and perform better in schools that value their minority identities than in identity-threatening contexts (Baysu et al., 2016;Celeste, Baysu, Meeusen, Kende, & Phalet, 2019;Walton & Cohen, 2007). Extending existing evidence on diversity approaches, we proposed that schools could make a difference in relationship quality with teachers by signaling identity affirmation to minority students, for example, by ensuring equal treatment or valuing cultural diversity. To the extent that cultural diversity approaches affirm or threaten majority identities, majority students may also be affected (Plaut, Garnett, Buffardi, & Sanchez-Burks, 2011). Because minority identities are more likely to be targets of unequal treatment or devaluation in schools, we expected diversity approaches to be most relevant for minorities, but we also explored whether majorities might be affected as well.
Three different approaches to diversity can be identified in European societies and schools: equality, multiculturalism, and assimilationism (Guimond, Sablonnière, & Nugier, 2014). While few studies relate diversity approaches to school outcomes (Celeste et al., 2019;Schachner, 2019;Schachner, Noack, Van de Vijver, & Eckstein, 2016), none predicts teacher-student relationship quality. Yet, these approaches can have different consequences for teacher-student relationship quality.
The equality approach refers to student perceptions of the general school climate, whether schools treat everyone equally and fairly (Baysu et al., 2016). In line with a social identity perspective, perceived fairness promotes various positive outcomes for minority youth such as well-being, engagement, and trust in the organization (Morin, Maïano, Marsh, Nagengast, & Janosz, 2013;Schachner et al., 2016;Yeager et al., 2017) because it conveys the message that their minority identity is equally valued in the school context (Baysu et al., 2016). Thus, when minority adolescents saw their school as more fair, they reported less discrimination and better school outcomes (Benner & Graham, 2011;Juvonen, Kogachi, & Graham, 2018). Perceived school fairness not only increased minority school outcomes, but it also buffered disengagement in the face of discrimination (Baysu et al., 2016). Extending these findings, we expected that perceived equality would enable minority adolescents to develop normative-positive (rather than problematic) relationship trajectories. We explored whether a fair school would also improve the relationship of majority adolescents with their teachers.
A multiculturalist approach values diversity (Schachner et al., 2016) and challenges racism and discrimination on grounds of race, ethnicity or religion (Thijs, Westhof, & Koomen, 2012;Zirkel, 2008). From a social identity perspective, minority adolescents could benefit from a multiculturalist approach when they feel that their cultural identity is valued, and growing evidence attests to the benefits of multiculturalism for minority students' achievement and adjustment (Celeste et al., 2019;Vedder & van Geel, 2012). When schools and teachers were seen to value diversity, minority students reported less discrimination (Brown & Chu, 2012;Vedder & van Geel, 2012). Thus, we expected that perceived multiculturalism would afford normative-positive trajectories of teacher-student relationship quality. We had no hypotheses for majority adolescents because the existing evidence is mixed: benefits depend on whether the majority feels that their identity is valued or whether they feel excluded by multiculturalism (Plaut et al., 2011).
Assimilationism requires minorities to prioritize the mainstream culture over their heritage cultures (Guimond et al., 2014). From a social identity perspective, assimilationism harms minority outcomes when minorities feel that their cultural identity is disregarded (Hornsey & Hogg, 2000). Minority students experience more peer rejection in classrooms with assimilationist peer norms (Celeste, Meeussen, Verschueren, & Phalet, 2016) and feel alienated and less belonging when schools enforce assimilationism (Celeste et al., 2019;Pulinx, Van Avermaet, & Agirdag, 2015). We likewise expected that minority adolescents in assimilationist schools would be less likely to have normative-positive relationship trajectories. We had no hypotheses for majority adolescents. Because assimilationism does not threaten the majority cultural identity, majorities may be unaffected (Celeste et al., 2019). Alternatively, they might indirectly benefit if they feel affirmed in their majority identity.
Cultural diversity approaches are critical during adolescence. From a developmental intergroup perspective (Killen & Rutland, 2011), adolescents explore group identities as part of their social development and they develop a deeper understanding of fairness in their moral judgments. In line with changes in social-cognitive and moral development during adolescence, perceived equal treatment buffered minority adolescents against discrimination in school and this buffer effect was stronger for older adolescents (Baysu et al., 2016). We thus explored age-related differences in the consequences of perceived equality, multiculturalism and assimilationism for minority and majority adolescents.
Changes in Minority and Majority School Outcomes
The quality of teacher-student relationships shapes school outcomes both longitudinally and cross-sectionally (Baysu & Phalet, 2012; McGrath & Bergen, 2015; Roorda et al., 2011, 2017). Moreover, continued teacher support, or protection from adverse relationships with teachers, has more enduring effects than momentary perceptions of support or adversity (Ladd et al., 2008). Longitudinal studies of teacher-student relationship quality in primary school related normative-positive trajectories to higher achievement relative to negative (O'Connor & McCartney, 2007) and worsening relationship trajectories (Spilt et al., 2012). By combining the relationship trajectories of closeness, conflict and dependency into an overall risk measure, Bosman et al. (2018) found that those in the no-risk group (overall high-quality relationships) had higher motivation and achievement relative to either low or high-risk groups. We thus expected that normative-positive relationship trajectories would improve school outcomes for all adolescents.
From a social identity approach, minority students may be more responsive to the quality of relationships in school, which could protect them from negative consequences of identity threat (Walton & Cohen, 2007). Similarly, the academic risk hypothesis (Hamre & Pianta, 2001) states that (minority) students at risk of academic failure have more to gain from high-quality teacher-student relationships. Thus, supportive relationships with teachers enhanced minority students' school outcomes (Roorda et al., 2011). In other studies, however, teacher support (or lack of conflict) was no less important for majority students as well (Baysu & Phalet, 2012;Hughes, Luo, Kwok, & Loyd, 2008;Roorda et al., 2011). We explored whether teacher-student relationship trajectories were more consequential for the school outcomes of minority (vs. majority) adolescents.
The Present Study
Drawing on large-scale school-based longitudinal data (3 cohorts, 3 waves), our study aimed to elucidate what is distinctive about ethnic minority adolescents' schooling experiences compared to their majority peers (García Coll et al., 1996). We presented the following expectations: (a) Across groups, many ethnic minority and majority adolescents would develop normative-positive relationship trajectories, but problematic trajectories, though less frequent, would be more common among minority than majority adolescents. (b) Equality and multiculturalism approaches to cultural diversity would afford more frequent positive, and assimilationism more frequent problematic, relationship trajectories in minority adolescents. We did not formulate any hypotheses for majority adolescents. We measured equality, multiculturalism and assimilationism at the individual level of personal perceptions and at the school level of shared student and teacher perceptions. (c) Positive relationship trajectories would improve and problematic trajectories would undermine school outcomes for all adolescents, and the consequences might be stronger for minority than majority adolescents. (d) Older adolescents (both majority and minority) might be more at risk of developing problematic relationship trajectories. We also explored age-related variation in the associations of school diversity approaches with relationship trajectories.
Participants
This study was part of a large-scale longitudinal study (Children of Immigrants Longitudinal Study Belgium, Phalet, Meuleman, Hillekens, & Sekaran, 2018) with three waves of data collected 1 year apart in 70 randomly selected secondary schools in Flanders, Belgium (Time 1: February 2012-July 2013; Time 2: February 2013-July 2014; Time 3: February 2014-July 2015). Following consent from respective parties in line with the university ethical guidelines, students participated in the study during class hours in the presence of research assistants. Student nonresponse rate was 13.3% (Phalet et al., 2018). Schools were stratified by ethnic composition, ranging from low (< 10%) to moderate (10%-30% and 30%-60%) to high (> 60%) percentages of minority students, on the basis of administrative data on foreign languages spoken at home. Participants were in their first (28.2%), second (30.7%), or third (41.2%) year of secondary education. Ethnic majority and minority samples were selected based on self-reported parentage (i.e., all grandparents born in Belgium vs. one or more (grand)parents born in Turkey or Morocco). Majority adolescents (N = 1,875, 48.7% girls) were on average 14.56 years old in Wave 1 (SD = 1.06, range = 12.21-18.43). Turkish and Moroccan minority adolescents (N = 1,445) were on average 15.07 years old in Wave 1.
Measures
We identified trajectories using measures of teacher support and rejection as reported by students at each wave. Responses ranged from 1 (never) to 4 (always).
Teacher rejection (T1, T2, T3) was assessed with four items: "In your daily life at school how often do you experience that your teachers . . . 'treat you unfair or hostile', 'expect you cannot do anything right', 'talk to you as if you were stupid', 'let you know that you are not welcome'." This scale was adapted from the rejection subscale of Perceived Ethnic Discrimination Questionnaire (Brondolo et al., 2005;a = .88-.70). The original scale cited ethnicity as a motive, which was removed to measure general feelings of rejection for both groups (Minority, a T1-T2-T3 = .80, .83, .82; Majority, a T1-T2-T3 = .76, .76, .80).
Perceived diversity approaches (reported by students, teachers or both) were assessed as predictors at time 1 and rated on 1 (strongly disagree) to 5 (strongly agree).
Adolescents' Perceptions of Equality in School (T1) were measured with two items that were used in previous research (Baysu et al., 2016; two-item version, r = .56), adapted from the Experience of School Rules scale (Gregory, Cornell, & Fan, 2011; longer version a = .95): "The rules are applied equally to all students," "Some students are allowed more than others" (reversed). We used adolescents' own perceptions at the individual level (Majority a = .69, Minority a = .53), and aggregated perceptions of majority students and of Turkish and Moroccan minority students (a = .61) at the school level.
Adolescents' and Teachers' Perceptions of Multiculturalism in School (T1) were measured with the same items: "In my school" "different cultures and religions are treated with respect," "they take strong action against racism and discrimination," "teachers treat all students equally regardless of their religion or descent," "teachers say that you shouldn't discriminate students with another culture or origin" (from Teachers' Multicultural Attitudes Scale; Thijs et al., 2012; a = .70). We used individual-level student perceptions (Minority: a = .67; Majority: a = .68). We also aggregated majority and minority students' (a = .68) and teachers' perceptions (a = .73) at the school level.
Teachers' Perceptions of Assimilationism in School (T1) were measured with two items "In my school speaking another language than Dutch is not tolerated"; "it is forbidden to wear a headscarf or other religious clothing" (a = .40). The scale was based on the analysis of Belgian school policies (Celeste et al., 2019), which revealed assimilationism as a separate cluster with these two items (mentioned 25% of the time in the policies). A factor analysis of teacher reported multiculturalism and assimilationism confirmed that the two items load on a separate factor (factor loadings > .73). We aggregated teacher perceptions at the school level.
Several outcomes based on student reports were assessed at T3, controlling for scores at T1 on scales from 1 (strongly disagree) to 5 (strongly agree) unless otherwise indicated.
School engagement (T1, T3) was assessed separately for emotional engagement ("I like to learn new things in class," "I feel good in class," "I like to be in class"; minority a = .73-.70, majority a = .70-.67 in T1 and T3), behavioral engagement ("I work as hard as I can in class," "I listen carefully during the class," "I pay attention in class," minority a = .81-.82 majority a = .82-.83), and behavioral disengagement ("In class I am easily distracted," "I often think of other things during class," "In class I do not really do my best"; minority a = .59-.60; majority a = .70-.67). The scales were adapted from the Engagement versus Disaffection with Learning scale (Skinner, Kindermann, & Furrer, 2008; scale a = .61-.82, across constructs).
School belonging (T1, T3) was measured with four items (Wang et al., 2011; original scale a = .75): "I feel at home at this school," "I am proud to be a student of this school," "I would prefer to go to another school" (reverse item), and "I feel happy at this school" (Minority a = .84-.85; Majority a = .85-.86 in T1-T3).
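Because internal-consistency (Cronbach's alpha) coefficients are reported for each of the scales above, the following minimal sketch shows how such a coefficient can be computed from item responses. This is the standard formula, not the authors' code, and the simulated data are purely illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of one scale's item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with simulated responses on a 1-4 scale (four items, as for teacher rejection)
rng = np.random.default_rng(0)
fake_items = rng.integers(1, 5, size=(200, 4))
print(round(cronbach_alpha(fake_items), 2))  # random data, so alpha will be near 0
```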
Age (T1) was included as a predictor of the trajectories and as a moderator in the association between diversity approaches and the trajectories.
Control variables (T1) were gender and school track (vocational vs. nonvocational). Parental education as a proxy for socioeconomic status, ethnic school composition, and Turkish vs. Moroccan background were not significant and were dropped from further analysis.
Analytic Strategy
We used Mplus version 7.31 (Muthén & Muthén, 1998) for all analyses and handled missing data using full information maximum likelihood (FIML). FIML uses all available data without imputing missing values (imputation can introduce randomness into the data); thus, it is unbiased and preferable to other methods (Dong & Peng, 2013). Data analysis involved three parts. First, we identified different patterns of teacher support and rejection via multivariate Latent Growth Mixture Models. Second, we ran multilevel multinomial logistic regression analyses with trajectories as outcomes in a stepwise fashion, separately for majority and minority groups: (a) control variables; (b) age and individual-level perceptions of equality and multiculturalism and school-level aggregates of shared (majority and minority) student perceptions of equality and multiculturalism; (c) school-level aggregates of teacher perceptions of multiculturalism and assimilationism; (d) age interactions with equality and multiculturalism. Only significant interactions were kept in the model (see Supporting Information for model specifications). Third, we ran separate multilevel regression analyses with trajectories as predictors (by using dummy-coding) with T3 grades, noncompliance, engagement, and belonging as outcomes, controlling for T1 outcomes. We included T1 diversity approaches as covariates (see Supporting Information for model specifications). We also ran additional analyses with the two low-reliability scales (a < .60, school noncompliance and assimilationism), which confirmed the results reported here (see Supporting Information for details).
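To make the second analysis step more concrete, the sketch below shows a simplified, single-level analogue in Python: a multinomial logistic regression predicting trajectory membership from control variables and perceived diversity approaches. It is illustrative only; the variable names and input file are hypothetical, and the published models were multilevel (students nested in schools) and estimated in Mplus, which this sketch does not reproduce.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data set: one row per student, with the assigned trajectory class
# (0 = normative-positive reference, 1 = decreasing-negative, 2 = increasing-negative)
# and T1 predictors; all names are illustrative.
df = pd.read_csv("minority_trajectories.csv")

y = df["trajectory_class"]
X = sm.add_constant(df[["gender", "track", "age", "equality", "multiculturalism"]])

fit = sm.MNLogit(y, X).fit()
print(fit.summary())       # log-odds of each problematic class vs. the reference class
print(np.exp(fit.params))  # odds ratios, analogous to the ORs reported in Tables 3 and 4
```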
Results
For attrition analysis, we compared those who participated in all waves (minority: 40.3%; majority: 47.9%) to those who missed at least one wave. The details can be found in Supporting Information. Table 1 shows descriptive statistics of key variables (see Supporting Information for all study variables).
Trajectories of Teacher Support and Rejection
For both majority and minority adolescents, we decided that a three-class solution fitted the data best. As shown in Figure 1, among minority adolescents a positive relationship trajectory, labeled "normative-positive," was most frequent (77.7%). This subgroup combined high teacher support with a negligible decrease over time (I = 2.86, S = −0.04, p = .010) and low, stable teacher rejection (I = 1.38, S = −0.03, p = .099). We also found two problematic relationship trajectories. Adolescents in a "decreasing-negative" trajectory (13.8%) had moderate teacher support with a small increase over time (I = 2.32, S = 0.13, p = .030) and initially high teacher rejection that decreased over time (I = 2.90, S = −0.61, p < .001). A third subgroup followed an "increasing-negative" trajectory (8.5%) with moderate, stable teacher support (I = 2.46, S = −0.03, p = .667) and initially low teacher rejection that increased over time (I = 1.82, S = 0.60, p < .001).
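As a reading aid for the intercept (I) and slope (S) estimates above, the snippet below computes the model-implied class means at each wave, assuming the three waves are coded t = 0, 1, 2 (a common coding in growth models; the time scores are not stated explicitly in the text, so this coding is an assumption).

```python
# Model-implied means: y_hat(t) = I + S * t for waves t = 0, 1, 2,
# using the intercepts and slopes reported in the text.
examples = {
    "normative-positive, support":    (2.86, -0.04),
    "normative-positive, rejection":  (1.38, -0.03),
    "increasing-negative, rejection": (1.82, 0.60),
}
for label, (I, S) in examples.items():
    print(label, [round(I + S * t, 2) for t in (0, 1, 2)])
# e.g., rejection in the increasing-negative class rises from 1.82 to about 3.02
# on the 1-4 response scale, while normative-positive rejection stays near 1.3-1.4.
```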
Among majority adolescents, the three-class model also yielded the best fit. It had better fit statistics (lower BIC and AIC) and significantly improved model fit (better BLRT) over a two-class model. Comparing the three-class to a four-class model, although the BLRT suggested significant improvement, log-likelihood, AIC, BIC values, and entropy showed little improvement, and the percentage of students in the fourth class was too small to analyze. The fourth class resembled the increasing-negative trajectory among minorities, but it was not statistically and meaningfully differentiated as a separate trajectory based on model fit indices.
Comparing numbers of adolescents in the two common trajectories, majority adolescents were more often in the normative-positive trajectory, minority adolescents in the "decreasing-negative" trajectory, χ²(1) = 43.52, p < .001.
Cultural Diversity Approaches and Age as Predictors of Trajectories
Minority adolescents (Table 3). When these students perceived the school as more equal or more multicultural, they were more likely to have a normative-positive trajectory (vs. others). At the school level, when both minority and majority youth perceived the school as more equal, minority adolescents were more often in the normative-positive trajectory (vs. decreasing-negative). When teachers perceived the school as more multicultural or less assimilationist, minority youth were again more often in the normative-positive trajectory (vs. increasing-negative). Additionally, older minority adolescents were less likely to be in the normative-positive trajectory (vs. increasing-negative) than younger ones. A significant interaction between age and multiculturalism (Figure 3) showed that multiculturalism did not make a difference for younger adolescents; for older adolescents, lower multiculturalism made the increasing-negative trajectory more likely (vs. normative-positive). As for control variables, girls were more often in a normative-positive trajectory (vs. other trajectories).

Majority adolescents (Table 4). We found some similar results for majority students. Those who perceived their school as more equal or more multicultural were more often in the normative-positive trajectory (vs. others). These effects were not replicated at the school level: shared student perceptions of equality and multiculturalism had no significant effects, but teacher perceptions were inversely related to majority trajectories. When teachers reported more multiculturalism or less assimilationism, majority adolescents were less likely to be in the normative-positive trajectory (vs. moderate-positive). Older majority adolescents were also less likely to be in the normative-positive trajectory (vs. moderate-positive). Despite a significant interaction between age and perceived equality, there were no meaningful differences in simple slopes (see Supporting Information for estimated probabilities for this interaction). As for gender, girls were more likely to have a normative-positive trajectory.
Associations of Trajectories With Minority and Majority School Outcomes
Minority adolescents (Table 5). Relationship trajectories were unrelated to T3 school grades (controlling for T1 grades). Those in the normative-positive trajectory showed higher T3 school compliance and higher emotional engagement (vs. the other trajectories), lower behavioral disengagement and higher belonging (vs. the increasing-negative trajectory; controlling for T1 outcomes). Additionally, diversity approaches (T1) were directly related to school outcomes (T3): When minority students perceived their school as more equal, their school compliance and belonging increased over time. At the school level, however, student perceptions of multiculturalism predicted more behavioral disengagement for minorities. As for gender, girls reported more compliance, but less emotional engagement than boys.

Majority adolescents (Table 6). Relationship trajectories were unrelated to T3 grades and compliance. Majority adolescents in the normative-positive trajectory reported higher T3 emotional engagement (vs. the decreasing-negative trajectory) and belonging (vs. the other two trajectories). Additionally, school-level diversity approaches at T1 also directly predicted T3 outcomes: Shared student perceptions of equality predicted higher school compliance, whereas teacher perceptions of lower multiculturalism or higher assimilationism predicted higher engagement over time. As for gender, girls reported higher grades, more compliance, higher behavioral engagement and less disengagement than boys.
Discussion
Against the background of persistent ethnic disparities in European schools, we examined the quality of evolving teacher-student relationships during adolescence as a protective proximal process in an ecological systems approach to development (Bronfenbrenner, 2005). Drawing on large-scale multilevel (students within schools) and longitudinal (three waves, three cohorts) data, our study adds a unique comparative dimension by comparing Turkish and Moroccan minority and Belgian majority adolescents. Our goal was to disentangle generic change for all adolescents (e.g., a general decline in relationship quality with age) from distinctive challenges that minority adolescents face (e.g., increased risk of developing problematic relationships with teachers). Thus, we add to an emergent research stream on distinctive minority experiences in developmental science (García Coll et al., 1996; Juang et al., 2018; Syed, Santos, Yoo, & Juang, 2018). Rather than amalgamating minority students, we differentiated, within both ethnic minority and majority groups, the most common "normative-positive" trajectories of teacher-student relationship quality from specific problematic trajectories and assessed age-related changes in trajectories through adolescence. Moreover, we not only looked at developmental consequences (i.e., various school outcomes) of these trajectories (Bosman et al., 2018; Spilt et al., 2012) but also for the first time showed which cultural diversity approaches in school afforded normative-positive trajectories across minority and majority adolescents. In the following section, we summarize our contributions and consider limitations and implications for future research.
Trajectories of Teacher-Student Relationship Quality
Our findings extend the limited evidence on trajectories of teacher-student relationship quality (Bosman et al., 2018; O'Connor & McCartney, 2007; O'Connor et al., 2011, 2012; Spilt et al., 2012) by comparing how ethnic minority and majority adolescents' relationship quality with teachers changes through secondary school. The novelty in our comparative approach was to identify the trajectories within minority and majority samples separately, which allows for group differences in both form and frequency of relationship trajectories. We were also the first to look at trajectories of teacher-student relationships during adolescence. Looking at the common trajectories, we identified a normative-positive trajectory (with high teacher support and low rejection) for most adolescents, replicating earlier findings with primary-school children (Bosman et al., 2018; O'Connor & McCartney, 2007; O'Connor et al., 2011, 2012; Spilt et al., 2012). Thus, high-quality relationships with teachers appear to be the norm among both ethnic minority and majority adolescents. We also identified a decreasing-negative trajectory (with moderate teacher support and initially high, but decreasing rejection) for both minority and majority adolescents, replicating another common trajectory of improving relationship quality found in earlier work (Bosman et al., 2018; O'Connor et al., 2011, 2012; Spilt et al., 2012). We also found group-specific trajectories. An increasing-negative trajectory (with moderate teacher support and initially low, but increasing rejection), showing worsening relationship quality, was found for minority adolescents only, replicating Spilt et al.'s (2012) findings with minority children. However, other studies have found this trajectory in majority samples as well (Bosman et al., 2018; O'Connor et al., 2011, 2012). Possibly, increasing teacher rejection, as distinct from the conflict assessed in earlier studies, is more common among minorities (vs. majorities). Our finding of increasing rejection resonates with previously reported increases in school discrimination for ethnic minority children (Benner & Graham, 2011). Finally, we found a moderate-positive trajectory for majority adolescents only (with moderate support and low rejection), replicating earlier findings with majority samples (Bosman et al., 2018; O'Connor & McCartney, 2007; O'Connor et al., 2012). Although some earlier studies found a primarily negative relationship trajectory for very small subgroups (3%-5%; O'Connor et al., 2011, 2012; Spilt et al., 2012), this pattern did not emerge in our data.
Earlier studies either analyzed closeness and conflict measures separately (Bosman et al., 2018;Spilt et al., 2012) or aggregated them as one construct (O'Connor & McCartney, 2007;O'Connor et al., 2011); in contrast, our trajectories of relationship quality combined changes in both teacher support and rejection. Across different trajectories, teacher support varied less (high vs. moderate) than rejection (low, high-decreasing and low-increasing). This finding resonates with earlier studies, which reported more variation in teacher-reported conflict than in warmth (Bosman et al., 2018;O'Connor et al., 2012;Spilt et al., 2012). Thus, evidence on teacher-student relationship trajectories suggests more action on the negative side than on the positive side of relational experiences. More research is needed to clarify this asymmetry.
Comparing minority and majority trajectories, problematic relationships were more likely for minority adolescents. Only 5% of majority vs. 22% of minority adolescents were in problematic trajectories (increasing-negative or decreasing-negative trajectories). Even when we restrict the comparison to the common trajectories, minority adolescents were less often in a normative-positive trajectory. This finding extends the existing evidence on teacher-student relationship trajectories, which showed that ethnic minority children were underrepresented in low conflict (vs. increasing conflict) trajectories (Spilt et al., 2012), and that teachers reported lower closeness and higher conflict with ethnic minority children (Bosman et al., 2018). Overall, our findings suggest distinctive relational experiences for ethnic minority adolescents in school (McGrath & Bergen, 2015; Sabol & Pianta, 2012), in terms of both the form of relationship trajectories (more problematic relationship trajectories for ethnic minorities) and their frequency (a higher risk of being in problematic trajectories).
Cultural Diversity Approaches
Our findings showed, for the first time, that school diversity approaches made a difference in trajectories of teacher-student relationship quality. When students perceived their school as more equal and multiculturalist, both minority and majority adolescents were more often in the normative-positive trajectory (vs. other trajectories). Because minority adolescents are at risk of experiencing unfair treatment in school (Bottiani, Bradshaw, & Mendelson, 2016), our finding that the perception of an inclusive diversity approach helps both majority and minority adolescents to form and maintain positive relationships with teachers is promising. As reverse causation is less likely for longitudinal relationship trajectories, our findings strengthen recent evidence linking equality and multiculturalism to school outcomes (Celeste et al., 2019;Schachner et al., 2016). For minority youth only, we also replicated positive effects at the school level so that shared student perceptions of equality made normative-positive relationship trajectories more likely. Perceived fairness affords normative-positive relationships with teachers, particularly for minorities, in line with findings of enhanced minority school outcomes in 'fair' schools (Baysu et al., 2016;Schachner et al., 2016).
Interestingly, teacher perceptions of multiculturalism and assimilationism affected minority and majority trajectories differently. When teachers perceived schools as more multicultural and less assimilationist, minority adolescents were more likely, but majority adolescents less likely, to form normative-positive relationships with their teachers. The minority group results resonate with earlier findings (Pulinx et al., 2015). In contrast, majority students seem to benefit more from assimilationism than multiculturalism (as reported by teachers) as the latter makes them less likely to be in normative-positive trajectories. Reasoning from social identity threat versus affirmation, a multiculturalist approach may not be seen as all-inclusive by majorities (Plaut et al., 2011), whereas assimilationism might affirm the majority identity in highly culturally diverse schools.
To conclude, in schools with egalitarian or multiculturalist diversity approaches, minority adolescents were more likely to experience normativepositive relationships and less likely to experience problematic relationships. However, for majority adolescents, relationship quality was less consistently associated with school diversity approaches.
School Outcomes
Positive relationship trajectories were associated with longitudinal changes in affective school outcomes (i.e., emotional engagement and school belonging) for both minority and majority adolescents, in line with the protective function of supportive relationships with teachers for all students (Baysu & Phalet, 2012;Sabol & Pianta, 2012). We further asked whether relationship trajectories would affect the school outcomes of minority students more strongly if, in line with an intergroup relations perspective (Derks et al., 2007;Verkuyten et al., 2019), teacher support can affirm, and experiences of teacher rejection threaten, the distinct identity of minority adolescents in schools as an intergroup context. Consistent with this reasoning, relationship trajectories were most consequential for minority adolescents: a normative-positive trajectory uniquely protected them from behavioral problems so that they reported less behavioral disengagement and noncompliance. Confirming the academic risk hypothesis (Hamre & Pianta, 2001), minority adolescents seem to benefit more from supportive relationships with teachers, which can promote their engagement in school despite migration-related adversities.
Within both groups, the normative-positive group had significantly better outcomes compared to the most problematic trajectory in that group. Minorities who were in increasing-negative trajectories were most vulnerable, in line with the findings by Spilt et al. (2012). For majorities, a decreasing-negative trajectory had the worst outcomes, in line with the findings of O'Connor and McCartney (2007) in their largely majority sample. Unlike these studies, however, we did not find any association between trajectory patterns and achievement (grades). This is not inconsistent with evidence from two recent meta-analyses (Roorda et al., 2011, 2017) that found weak associations between teacher-student relationships and achievement. Moreover, the impact of teacher support on achievement in secondary school can be limited by structural inequalities in hierarchical tracking systems in Europe (Eccles & Roeser, 2011). Highly tracked educational systems stream students at an early age into different schools to follow vocational or academic secondary education; these decisions determine a student's achievement and are often irreversible (Baysu, Alanya, & de Valk, 2018;Eccles & Roeser, 2011). Even when teachers are supportive in already tracked secondary schools, it may be "too little, too late" to counteract existing discrepancies in achievement.
Age and Gender Differences
The finding that older adolescents were less often in normative-positive trajectories is suggestive of general developmental processes and replicates other findings of declining relationship quality with teachers during adolescence (McGrath & Bergen, 2015;Roorda et al., 2011). Protective effects of multiculturalism were also age-dependent for minority adolescents: older (vs. younger) adolescents in less multiculturalist schools were more at risk of increasingly negative relationships with teachers. From a developmental intergroup perspective, older minority adolescents are more vulnerable to discrimination (Baysu et al., 2016). Because they are more aware of their minority status (Umaña-Taylor et al., 2014) and the related risks of ethnic victimization (Killen et al., 2007), they may be more apprehensive about teacher rejection when cultural diversity is not valued in their school.
We also found several gender differences. Both minority and majority girls were more likely than boys to have normative-positive relationships with their teachers (Bosman et al., 2018; Hamre & Pianta, 2001; Spilt et al., 2012). In terms of outcomes, majority girls did better than boys on five out of seven outcomes (including grades), in line with the reported gender advantage in school outcomes favoring girls over boys (Baysu & de Valk, 2012). However, ethnic minority girls did better than boys on only one out of seven outcomes (and did worse on one). Baysu and de Valk (2012) similarly found that the gender advantage of girls did not hold for minorities when they compared the school careers of majority versus Turkish and Moroccan minorities in four European countries. More research is needed on the intersection of ethnicity with gender for school outcomes.
Limitations
We see diversity approaches as characteristics of the school environment, and teacher-student relationships and affective outcomes such as belonging as individual experiences. However, the quality of relationships can also be a school characteristic such that certain schools can be defined as having higher or lower quality relationships or connectedness (Morin et al., 2013;Thapa et al., 2013;Wang & Degol, 2016). From an ecological systems approach, these different layers are interrelated due to the shared experience of an ecological system (Thapa et al., 2013). We should thus acknowledge the possible reciprocal influences between cultural diversity approaches, teacher-student relationship quality, and school outcomes. Future research should look more closely at bi-directionality in these associations.
Our study had other limitations. First, teacher reports of relationship quality could complement our focus on adolescents' own experiences of relationship quality. Second, student-reported noncompliance and teacher-reported assimilationism had low reliability, but the additional analyses with these measures confirmed the results reported here. Although minority students' perceptions of equality also indicated low reliability, majority and shared student perceptions of equality had better reliabilities. Moreover, positive associations of minority perceptions of equality with trajectories were replicated at the school level with shared student perceptions, which strengthens the conclusions regarding this measure. Third, teacher and student perceptions of cultural diversity approaches were incompletely assessed (e.g., we had teacher, but not student perceptions of assimilationism), so we cannot fully separate out differences in perspectives from different approaches. Fourth, teacher data were available for only two thirds of the schools. This reduction may limit the generalizability of our findings on teacher perceptions (however, the FIML estimation in Mplus was robust). Fifth, school-level contextual effects should be qualified in light of low intraclass correlations. Finally, minority adolescents had a wider age range and were overrepresented in vocational tracks compared to majority adolescents. This is because ethnic minority students in Belgium are more often held back a year and referred to vocational education by their teachers (Baysu & de Valk, 2012;Baysu et al., 2016Baysu et al., , 2018. Still, we advise caution in inferring causal factors.
Applied Implications
Despite these limitations, our research findings have important applied implications. Our integrative models reveal continuity and change in adolescents' relationship quality with teachers as critical processes connecting early school environment to later school outcomes. Positive relationships with teachers are generally protective, but minority adolescents are more likely to benefit from positive relationships with teachers and at the same time more likely to lack such support. From an applied perspective, these findings suggest that schools can develop specific processes to protect at-risk minority adolescents from adverse outcomes. Our findings thus highlight the long-term benefits of practicing and communicating fairness and the value of diversity in building enduring positive teacher-student relationships, as well as the need for interventions to remedy worsening relational experiences.
Although majority adolescents benefitted from a positive diversity climate, we also found that in schools with stronger multiculturalist policies (as reported by teachers), majority adolescents were more likely to have moderate-quality rather than high-quality relationships with their teachers. Future research should ask whether actual teacher practices vary in schools with different cultural diversity approaches and which practices can make majority adolescents feel excluded. Thus, it is critical to work toward developing all-inclusive diversity approaches in which both ethnic minority and majority adolescents feel welcome in order to afford positive relationships and outcomes for all adolescents.
|
v3-fos-license
|
2022-06-03T15:12:31.992Z
|
2022-05-30T00:00:00.000
|
249289469
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://ojs.ijemd.com/index.php/ComputerScienceAI/article/download/83/32",
"pdf_hash": "c1be648b97a1c4432b030ed836190407073dfe91",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3067",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "ce9d6d1a1153a74d4db8ee9b1a7c3d1ec2395bfe",
"year": 2022
}
|
pes2o/s2orc
|
International Journal of Emerging Multidiciplinaries: Design of Wind Power Blades Using off-the-Shelf Software
The simulation allows several parameters to be changed, such as the height of the wind turbine and its siting on flat ground, on hills, or near the sea. The blade design can also be modified, including blade length, blade pitch, blade twist, tip shape, and airfoil shape. In addition, different components can be added to the turbine; some of them improve its efficiency, while others decrease it. Once the turbine has been configured, the simulation reports the results, such as how many watts of electricity are generated and how many houses are powered.
Introduction
Energy is one of the most important things in our lives. Without energy we cannot use the internet, cars, and many other things. One of the biggest problems with energy is that it mostly does not come from renewable sources. We cannot use coal, oil, and gas forever, since they are limited resources, and these non-renewable sources also cause large CO2 emissions that pollute our air and environment. That is why we need to find renewable sources of energy that are cleaner and more sustainable. We also need to increase the production of such renewable energy, especially wind and solar energy, due to their abundance. Although a lot of progress has been made in the last few years, it is still not enough to meet the world's energy demand while keeping the environment clean. Many studies have shown that the earth's temperature is steadily rising each year, which is a major concern. Although renewable energy is an old idea, it has still not reached its full potential in terms of wide utilization and efficiency. Increasing the number of wind turbines worldwide, the most reliable method of harvesting renewable wind energy, would have a very positive effect on our lives. Firstly, renewable energy will decrease the amount of CO2 in the air, which will let us live healthier. Secondly, it will create a lot of jobs, especially in engineering and technology. Thirdly, renewable energy will never run out; the wind will always be there, whereas fossil fuels need millions of years to regenerate. Lastly, renewable energy is cheap and affordable, because the resource itself is free, unlike fossil fuel sources [1].
Although renewable energy seems ideal, it still has some challenges, such as efficiency and the need for a wide distribution network, which must be resolved. These challenges are serious, and we need to deal with them as soon as possible. Many people think that renewable energy is better than fossil fuels simply because it relies on the environment: wind energy draws its energy from the wind, so the energy created is free, unlike fossil fuels. Although there is some truth to this, it is not the full story. Relying on the environment is an advantage, because we will never run out of wind and most of the time it is windy. Sadly, it can also be a weakness: what happens when the wind is not strong enough, or when the wind is blowing the other way? All of these scenarios decrease the amount of energy we generate. Another problem is that renewable energy is hard to store. For example, if we produce a large amount of energy but only need half of it, the other half will be wasted because it cannot easily be stored and used later. Another issue is that renewable energy takes up a lot of space. For wind turbines this is a big issue, because they can harm the surrounding environment; many studies have shown that wind turbines often kill or divert birds that fly near them. Wind turbines are also usually remote, far from cities, which means that a lot of expensive infrastructure is needed to deliver the renewable energy to people in cities.
Even though renewable energy has many issues and challenges, we cannot simply give up on it or ignore these problems. Many of these issues can be solved if we invest more in research and development. For example, we can address the weather issue by using advanced forecasting, which gives us the ability to predict weather patterns for the days ahead. Knowing when it will be windy helps us predict the amount of energy that the wind turbines can generate. This will not eliminate the problem completely, because weather forecasting is not 100% accurate, but it is accurate most of the time, which helps a lot. Another issue with a clear solution is energy storage, which can be addressed with batteries. Battery storage might be the way forward for renewable energy because it can store large amounts of energy to be used later. Finally, even though setting up wind turbines can be expensive, they have very low maintenance costs afterwards and usually need little maintenance during their life span [2].
Literature Survey
Renewable energy has many forms: hydro, solar, wind, and biomass. In this paper, we will review a specific type of renewable energy, which is wind energy. China is considered the leading country in wind power production, followed by the USA, Germany, India, and, in fifth place, Spain.
India shared 73% of its wind power with other countries. In addition, wind projects increased because the cost of controlling wind power decreased. The Indian government is concerned about adverse climate effects, which is why it created the National Action Plan on Climate Change; India therefore needs wind energy to help solve these major problems. Pakistan does not have much oil or fossil fuel, and producing nuclear energy is costly, so it had to find another option, namely renewable energy. The Pakistan Meteorological Department (PMD) carried out a survey to find the best areas of the country for producing wind energy and located sites around the whole country where wind turbines could be installed. According to the surveys, a good location for wind energy generation is the coastline of the Sindh and Baluchistan provinces.
Before installing a wind generator, some factors must be checked, such as the cost and the energy potential of the site; these two factors must be verified before installing a wind energy system so that it can work at full potential without errors. Researchers have studied the collected wind data and found that the Weibull and Rayleigh distributions were the best match for it. Researchers all over the world have studied ways to estimate and optimize wind data, and others have studied the wind energy potential of different areas [3].
One of the studies, carried out by Babur, presented the wind data of the Alacati region in Izmir; its purpose was to estimate the wind power potential accurately. In this way, wind energy assessments can help countries and regions that lack access to electricity. Many countries around the world use wind energy because it does not harm the environment. Wind turbines should be installed in places where the weather is suitable; otherwise serious problems can occur, such as damage to or even breakage of the turbines, which can harm people [4].
Wind energy is one of the most important resources in the world, as it has many benefits in various fields. Wind energy is characterized by its low price, costing 1-2 cents per kilowatt-hour after tax deductions for production. Electricity from wind farms is sold at a fixed price over a long period of time, and the fuel is free. Not only is it free, it is also clean, which distinguishes it from power plants that depend on the combustion of fossil fuels. When fossil fuels are burned, they produce particulates, nitrogen oxides, and carbon dioxide, causing health problems and considerable economic damage. Wind energy does not cause such problems, as it relies only on wind turbines.
At the country level, wind energy is one of the most important sources that every country should rely on, as wind supplies are abundant and inexhaustible. Over the past ten years, the United States of America has relied increasingly on wind energy, which has grown by 15% annually. This growth may benefit rural areas and raise their economies. As mentioned earlier, wind energy is clean, and the United States seeks to compete in the clean energy economy. Wind energy enables the growth of US industry, and new wind energy projects represent annual investments of more than $10 billion in the US economy, because the country has enormous local resources and highly skilled labor [5].
Simulation and Analysis
Wind energy is the fastest-growing renewable source. Wind power has advantages that solar panels and fossil fuels do not offer, and wind turbines are getting cheaper every year. Although current wind turbines perform impressively, they can still be made more efficient. Engineers and scientists are constantly trying to improve wind turbines in many respects, but in this paper we focus on one aspect: blade design. We use an online simulation to design a blade with different characteristics and evaluate how well it performs. Wind turbine blades are a tricky subject, because there is no single superior blade design; blade design depends heavily on the environment, wind speed, and weather. For this reason, we design the blades for the plains environment of North America, which reduces the room for error and helps us obtain accurate simulation results. We design the blades for three-bladed wind turbines, and we also use some 3M products in the simulation that enhance and maintain the blade [6].
Designing the Blades
In this design we consider all the variables that go into designing the blades. Firstly, we consider the height of the turbine. Although it is not strictly part of blade design, changing the height of the wind turbine will affect the results, so we set the height of the turbine in this design to 100 meters. Secondly, we add a blade twist: twisting the blade changes the angle at which the wind meets the blade along its length, and twisted blades are more durable and lighter than straight blades. Thirdly, we add a blade pitch of 3 degrees. Pitching the blade lets the wind hit it at a more favorable angle, increasing energy output, but it is also important to make sure that the blades do not exceed their maximum rotational speed, because this could damage the blades and the wind turbine. Fourthly, we add a thin airfoil shape. The airfoil may be the most important part of blade design, because it plays an important role in the aerodynamic performance and also helps control the noise of the wind turbine. Fifthly, we add a thin tip shape. Thin tip shapes are used in wind turbines to increase the maximum energy produced, because a thin tip lets the blades move faster than a wide tip. Sixthly, we make the blade length 45 meters, because a longer blade sweeps a larger area; the wind then acts on a much larger area, which makes the rotor spin faster and produce more energy. The figure below shows all the blade design variables.
Figure 1 Variables for Blade Designing
Now that the blade design is complete, we add 3M products to improve and maintain the blades. Firstly, we add a 3M protection tape to the blades. This tape protects the blade from hail, rain, and dust, but it also slightly decreases the performance of the wind turbine. We decided that maintaining the blade while losing a little energy is better than replacing the blade every time it is damaged. Secondly, we add a 3M filler that balances the blade and makes the blade surface and edges smoother, which in turn makes the blade spin faster and produce more energy. Thirdly, we add a 3M riblets film, which helps manage air turbulence and friction on the blade and thus increases energy production. Figure 2 below summarizes the inputs and the outputs.
From Figure 2 we can see our inputs and the corresponding outputs. The wind turbine blades that we designed produce 686,576 watts when the air density is 1.15 kg/m³ and the wind velocity is 7.5 m/s. We can also see that the wind turbine supplies energy for 458 American houses per year.
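As a rough cross-check (not part of the simulation itself), the reported output is consistent with the standard wind power equation P = 0.5 * Cp * rho * A * v^3 applied to 45 m blades; the power coefficient Cp of about 0.44 used below is an assumed value, since the simulation does not report one.

```python
import math

def rotor_power(blade_length_m, wind_speed_ms, air_density=1.15, power_coefficient=0.44):
    """Estimate rotor power with P = 0.5 * Cp * rho * A * v^3, where A is the swept area."""
    swept_area = math.pi * blade_length_m ** 2          # swept area of the rotor in m^2
    return 0.5 * power_coefficient * air_density * swept_area * wind_speed_ms ** 3

power_w = rotor_power(blade_length_m=45, wind_speed_ms=7.5)
print(f"Estimated power: {power_w / 1000:.0f} kW")       # roughly 680 kW, close to the simulated 686,576 W
```

Under these assumptions, the hand calculation lands within about 1% of the simulation figure, which suggests the tool reports the aerodynamic power captured by the rotor at the stated wind speed.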
Conclusion
Wind energy is a cheap and environmentally friendly source for generating energy. This work has discussed the importance of wind energy in a number of countries, such as India, Pakistan, and the USA. A thorough discussion was also made at the individual level of how wind energy can save time, effort, and money, and how it will benefit the world as a whole in terms of caring for the environment. In this paper we also used free off-the-shelf software to determine the shape of the blade and obtain the best possible results. We placed the turbine at a height of 100 meters, used a twisted blade that is slightly more durable, set a blade pitch of 3 degrees, chose a thin airfoil shape and a thin tip shape, and made the blade 45 meters long. With this configuration, the designed blades produce 686,576 watts and power 458 American homes annually. It is strongly recommended to invest more money in overcoming the remaining challenges so that wind energy can reach its full potential, becoming more efficient and widely distributed to all regions of a country.
|
v3-fos-license
|
2016-06-17T06:57:31.186Z
|
2015-12-01T00:00:00.000
|
16098173
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2015.01356/pdf",
"pdf_hash": "a816391e1a685a0a9f2beecb29595ecf851d6fd9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3070",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "a816391e1a685a0a9f2beecb29595ecf851d6fd9",
"year": 2015
}
|
pes2o/s2orc
|
Prevalence and Characterization of Shiga Toxin-Producing and Enteropathogenic Escherichia coli in Shellfish-Harvesting Areas and Their Watersheds
more strains formed a strong biofilm at 18 than at 30°C. Finally, more than 85% of analyzed strains were found to be sensitive to the 16 tested antibiotics. These data suggest the low risk of human infection by STEC if shellfish from these shellfish-harvesting areas were consumed.
INTRODUCTION
The microbiological quality of coastal environments can be impacted by urban and agricultural fecal wastes from watersheds. Moreover, shellfish can accumulate and concentrate pathogenic micro-organisms, such as Salmonella, pathogenic Escherichia coli (E. coli) and noroviruses present in surrounding waters by their filter-feeding activities (Potasman et al., 2002). This can lead to closures or downgrading of shellfish-harvesting areas and to outbreaks of food poisoning through consumption of contaminated shellfish (Iwamoto et al., 2010).
Enumeration of E. coli, a fecal bacterial indicator, is the standard way to assess the level of fecal microorganisms in water and shellfish and, indirectly, to estimate the associated potential risk to human health from all waterborne enteric pathogens (e.g., through classification of bathing areas and shellfish-harvesting areas; Anonymous, 2004). However, in addition to being a fecal indicator and a commensal bacterium, E. coli includes strains that can be pathogenic to humans. These can cause diarrhea and extra-intestinal diseases after acquiring virulence genes via mobile genetic elements such as bacteriophages, pathogenicity islands, and plasmids (Touchon et al., 2009). Pathogenic E. coli are distributed into diarrheagenic E. coli pathotypes including enterotoxigenic E. coli (ETEC), Shigella/enteroinvasive E. coli (EIEC), enteroaggregative E. coli (EAEC), diffusely adherent E. coli (DAEC), enteropathogenic E. coli (EPEC), Shiga toxin-producing E. coli (STEC; for review, Croxen et al., 2013) and into extra-intestinal E. coli pathotypes (Russo and Johnson, 2000).
Escherichia coli occurrence in seafood is considered a sanitary issue and may represent a risk to consumers if related to diarrheagenic E. coli (for review, Costa, 2013).
The study presented here focuses on EPEC (one of the main causes of diarrhea in infants) and STEC (an emerging zoonotic pathogen).
Enteropathogenic E. coli is an important cause of infantile watery diarrhea, which is more frequently encountered in low-income countries than in the industrialized world (Nataro and Kaper, 1998). These bacteria are known to create distinctive lesions on the surface of intestinal epithelial cells, called attaching and effacing (A/E) lesions. This property is encoded by genes, including eae, grouped together in a pathogenicity island referred to as the 'locus of enterocyte effacement' (LEE; Paton and Paton, 1998). EPEC is transmitted from host to host via the fecal-oral route through contaminated surfaces, water and food, and via human carriers. Humans, including symptomatic and asymptomatic children and asymptomatic adults, are the most likely source (Levine and Edelman, 1984). Animals, such as cattle and wildlife species, have been found to be additional sources (Singh et al., 2015). Twelve O serogroups have been recognized as EPEC by the World Health Organization: O26, O55, O86, O111, O114, O119, O125, O126, O127, O128, O142, and O158 (WHO, 1987).
Shiga-toxin producing Escherichia coli are responsible for the mucoid-bloody diarrhea that can progress to hemolytic uremic-syndrome (HUS), especially in children. One of the most important pathogenicity factors produced by STEC strains is the Shiga toxin (Stx), encoded by a lambdoid bacteriophage (O'Brien et al., 1984). Shiga toxins can be divided into two types, Stx1 (almost identical to Shiga toxin produced by Shigella dysenteriae type 1) and Stx2, encoded by stx1 and stx2 genes, respectively (Scheutz et al., 2012). In addition, the STEC strains are often able to produce the A/E lesions as a result of the presence of the LEE pathogenicity island, as in EPEC. This subset of STEC strains is also known as enterohemorrhagic E. coli (EHEC;McDaniel et al., 1995). Instead of this LEE pathogenicity island, they can also possess the auto-agglutinating adhesin factor designated Saa (STEC autoagglutinating adhesin; Paton et al., 2001). Adhesion to the intestinal mucosa is an essential step in the infection cycle of E. coli, which contributes to pathogenesis in humans. Other factors are involved in the virulence of STEC but also of EPEC, such as enterohemolysin A, encoded by the ehxA gene and associated with cytotoxic effects on endothelial cells that may contribute to the development of HUS (Jiang et al., 2015). STEC infections have been reported following the ingestion of contaminated food or water, after bathing in contaminated waters or contact with animals (for review, Croxen et al., 2013). The principal reservoir of STEC is the digestive tract of animals, particularly of cattle that are healthy carriers (Bibbal et al., 2015). Other animals, such as sheep, goats, swine, birds, and other wild animals, as well as humans, can also harbor STEC (Mora et al., 2012;Chandran and Mazumder, 2013).
Shiga-toxin producing Escherichia coli and EPEC contamination of the environment may occur through the spreading of livestock manure, animal waste on pastures, via wastewaters from slaughterhouses or from treatment plant effluents and by wildlife (Muniesa et al., 2006;Singh et al., 2015).
In such environments, STEC and EPEC strains are exposed to various stresses, such as low temperature or nutrient depletion, and the ability to form biofilm could be an advantage that increases persistence (Vogeleer et al., 2014).
To date, very few studies have focused on the detection and isolation of pathogenic E. coli belonging to the STEC and EPEC pathovars in coastal environments (Gourmelon et al., 2006;Bennani et al., 2011). The aim of the study presented here was to detect and characterize STEC and EPEC strains from French shellfish-harvesting areas and their upstream watersheds in order to assess the diversity of these pathogenic E. coli potentially present in this type of hostile environment. For this purpose, during a 2-year study, shellfish batches, freshwater, seawater, and surface sediment samples from three selected shellfish-harvesting areas and their upstream watersheds, which are locations of intensive livestock activities (cattle, swine, poultry, and/or sheep), were analyzed monthly to evaluate the presence of STEC and EPEC strains.
Sampling Locations and Sample Description
Shellfish, water, and surface sediment samples were collected from three shellfish-harvesting sites on the French coast within the Eastern English Channel and from their watersheds. One of these sites, located in Brittany (site 1), corresponded to a 121 km² watershed, characterized by intensive livestock farming (cattle, swine, and poultry), with a human population of about 9,000 inhabitants. The two others were situated in Normandy; site 2 was characterized by a 1,000 km² catchment, with large livestock farming (cattle, sheep, swine, and poultry) and about 40,000 inhabitants, while the other (site 3) corresponded to a 50 km² watershed with large livestock farming (cattle, sheep, and swine) and about 7,000 inhabitants. These two latter watersheds are geographically closer together than the Brittany site (locations of the sites are shown in Supplementary Figure S1). The three shellfish-harvesting areas are classified as category B for oysters (Crassostrea gigas) and mussels (Mytilus edulis) and as category C for common cockles (Cerastoderma edule) according to European regulation (European Directive 91/492/EEC; Anonymous, 2004). Shellfish from category B shellfish-harvesting areas must be depurated before being sold and shellfish from category C areas must be relayed at least 2 months prior to sale for consumption. Shellfish [oyster, mussel, and common cockle batches (site 1, n = 120; site 2, n = 72; and site 3, n = 46)] and freshwater samples from nine sampling sites upstream of the shellfish-harvesting areas (site 1, n = 96; site 2, n = 72; site 3, n = 48) were collected monthly from February 2013 to January 2015, whereas surface sediment samples (site 1, n = 13; site 2, n = 13; site 3, n = 13) were collected from February 2013 to January 2014 and seawater samples (site 1, n = 12) from February 2014 to January 2015.
Isolation of STEC and EPEC Strains
Samples were transported in insulated cooler boxes to the laboratory and analyzed within 24 h. After opening, total shellfish flesh, including shellfish flesh and intravalvular liquid, were homogenized in a commercial blender (Waring Products Division, Torrington, CT, USA) for 60 s at high speed.
Twenty-five grams of homogenized total shellfish flesh were inoculated into 225 ml of buffered peptone water (BPW). For surface sediments, 10 g were introduced into the same volume of BPW. For water samples, 1 L was filtered using 0.45 μm cellulose membranes (Pall Gelman GN-6 Metricel; Pall Corporation, St Germain-en-Laye, France) and the filter was placed in 225 ml of BPW. Incubation was performed at 37°C for 24 h.
Shiga-toxin producing Escherichia coli and EPEC strains were isolated from the environmental samples using three additional protocols. The first one, described by Balière et al. (2015), involves application of the ISO/TS-13136 method, which focuses on isolation of strains belonging to the five highly pathogenic serotypes and was applied to samples collected from February 2013 to February 2014. The two other protocols involve the isolation of STEC and EPEC with or without an enrichment step and independently from the serotype. These were applied to samples collected from February 2013 to January 2015.
For the protocol with an enrichment step (described in Balière et al., 2015), DNA was extracted from 500 μL of each BPW enrichment broth using NucliSENS Nucleic Acid Extraction Reagents for miniMAG (BioMérieux, Marcy l'Etoile, France), according to the manufacturer's instructions. The stx and eae genes were detected by real-time PCR (Agilent MX3000P, Waldbronn, Germany), using primers and probes published previously (Nielsen and Andersen, 2003;Perelle et al., 2007), according to the ISO/TS-13136: 2012 technical specification, with slight modifications to the PCR cycles [denaturation for 10 s at 95°C, primer annealing for 5 s at 55°C, and extension for 25 s at 60°C (45 cycles)]. BPW broths identified as positive for stx and eae were screened for STEC and EPEC isolates by streaking 1 μL of these broths onto Tryptone-Bile-X-glucuronide agar (TBX; AES Chemunex, Bruz, France) and onto chromID™ agar (BioMérieux), followed by incubation at 44°C for 24 h.
The final protocol involves the screening of E. coli isolated directly from the water and shellfish samples without an enrichment step. For this protocol, 1, 10, and 100 ml of water were filtered through 0.45 μm cellulose membranes and the filters were placed onto TBX agar. For shellfish, 10 g of blended total shellfish flesh were distributed onto five empty, sterile plates and covered with super-cooled TBX agar overlay. All TBX plates were incubated at 44°C for 24 h. Presumptive STEC and EPEC isolates were confirmed by real-time PCRs targeting the stx1, stx2, and eae genes, as described above, after DNA extraction from each isolate by boiling at 100°C for 15 min.
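The gene-based logic behind this screening can be summarized in a short sketch (not code from the study; the gene names follow the text, and the example isolate is hypothetical): isolates carrying stx1 and/or stx2 are treated as STEC, and those carrying eae without stx as EPEC.

```python
def call_pathotype(stx1: bool, stx2: bool, eae: bool) -> str:
    """Assign a putative pathotype from real-time PCR detection of stx1, stx2, and eae."""
    if stx1 or stx2:
        # stx-positive isolates are STEC; those also carrying eae correspond to the EHEC subset
        return "STEC (eae+)" if eae else "STEC"
    if eae:
        # eae without stx corresponds to the EPEC pathotype
        return "EPEC"
    return "non-STEC/non-EPEC E. coli"

# Hypothetical isolate positive for stx1 and eae, as reported for the O26:H11 mussel isolate
print(call_pathotype(stx1=True, stx2=False, eae=True))  # -> "STEC (eae+)"
```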
The STEC and EPEC isolates were characterized using several protocols as described below.
Serotyping
The serotypes of the STEC and EPEC strains were characterized using the serotyping method by agglutination, as described by Blanco et al. (2003).
More precisely, determination of the O and H antigens was carried out by agglutination as previously described (Guinée et al., 1981), employing all available O (O1-O185) and H (H1-H56) antisera. All antisera were absorbed with the corresponding cross-reacting antigens to remove the nonspecific agglutinins. The O and H antisera were produced in the Laboratorio de Referencia de E. coli (USC, Lugo, Spain). Isolates that did not react with O antisera were considered non-typeable (ONT), and non-motile isolates were designated HNM.
Detection of Enterohemolysin and Adhesin
The presence of the ehxA (encoding enterohemolysin A) and saa (encoding STEC autoagglutinating adhesin) genes in these isolates was investigated by conventional PCR using primers previously described by Paton and Paton (2002).
Phylogenetic Group
Isolates were classified into the four main E. coli phylogenetic groups (A, B1, B2, or D) using a conventional triplex PCR method based on the detection of two genes, chuA, required for heme transport in enterohemorrhagic O157:H7 E. coli, yjaA, initially identified in the recent complete genome sequence of E. coli K-12, for which the function is unknown, and of an anonymous DNA fragment designated TSPE4.C2 using primers described previously by Clermont et al. (2000).
Pulsed-field Gel Electrophoresis
The genetic relatedness of the isolates was studied by the pulsed-field gel electrophoresis (PFGE) method according to Bidet et al. (2005). Isolated strains were inoculated in nutrient broth containing 1.3% NaCl (Bio-Rad, Marnes-la-Coquette, France) and incubated at 37°C for 24 h. Bacterial DNA was extracted from 400 μl of the enrichment broth using the CHEF Bacterial Genomic DNA Plug Kit (Bio-Rad) according to the manufacturer's recommendations. Bacterial DNA was digested for between 16 and 20 h at 37°C with the restriction endonuclease XbaI (Roche Diagnostic, Meylan, France) according to the manufacturer's recommendations. Each electrophoresis was performed using a lambda ladder molecular mass marker (Bio-Rad) for the normalization of gel images. The migration was performed on a 1% agarose gel using the CHEF-DRIII apparatus (Bio-Rad) at 6 V cm⁻¹ for 27 h, with pulse times varying linearly between 2 and 49 s. The bacterial DNA restriction patterns were analyzed using the Bionumerics software 7.5 (Applied Maths, Kortrijk, Belgium). The similarity of PFGE profiles was compared and a dendrogram was created using the band-based Dice coefficient and the unweighted-pair group method using average linkages (UPGMA), based on 1% position tolerance and 0% position optimization. Branch quality was evaluated using the cophenetic correlation. PFGE patterns were considered clonally related when they had a similarity coefficient higher than 80%.
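For readers unfamiliar with band-based Dice/UPGMA analysis, the sketch below illustrates the idea outside Bionumerics; the band presence/absence matrix and the number of isolates are hypothetical, and only the 80% similarity cut-off comes from the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical presence/absence matrix: rows = isolates, columns = XbaI band positions
bands = np.array([
    [1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1],
])

def dice_similarity(a, b):
    """Band-based Dice coefficient: 2 * shared bands / (bands in a + bands in b)."""
    shared = np.sum((a == 1) & (b == 1))
    return 2 * shared / (a.sum() + b.sum())

n = len(bands)
# Condensed distance vector (1 - similarity) for UPGMA (average linkage) clustering
distances = [1 - dice_similarity(bands[i], bands[j]) for i in range(n) for j in range(i + 1, n)]
tree = linkage(distances, method="average")

# Profiles more than 80% similar (distance < 0.2) are grouped as clonally related
clusters = fcluster(tree, t=0.2, criterion="distance")
print(clusters)  # e.g., [1 1 2]: the first two isolates form one clonal group
```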
Static Biofilm Formation Assay
A biofilm formation assay was performed as previously described by Tremblay et al. (2015). In addition to the incubation temperature of 30°C, the ability to form biofilms was also tested at 18°C, in order to reproduce marine temperate environmental conditions (Moldoveanu, 2012). Briefly, overnight cultures at 37°C in LB media were diluted (1:100) in 5 ml of M9 medium with glucose (0.4% wt/vol) and minerals (1.16 mM MgSO4, 2 μM FeCl3, 8 μM CaCl2, and 16 μM MnCl2) and incubated for 24 h at 37°C. These cultures were diluted (1:100) in M9 medium supplemented with glucose and minerals and were inoculated in triplicate into microtitre plates (Costar 3370; Corning, NY, USA). After 24 h of incubation at 18 or 30°C, unattached cells were removed by washing three times with phosphate-buffered saline (PBS). Plates were dried at 37°C for 15 min and biofilms were stained with crystal violet (0.1% wt/vol) for 2 min. After removal of the crystal violet solution, the biofilms were washed three times with PBS and dried at 37°C for 15 min. The biofilm stain was dissolved with 150 μl of 80% (vol/vol) ethanol and 20% (vol/vol) acetone and biofilms were quantified by measuring the absorbance at 590 nm (OD590) with a microplate reader (Powerwave; BioTek Instruments, Winooski, VT, USA). The results for the static biofilms formed at 18 and 30°C were compared using two-way analysis of variance (ANOVA) followed by a Bonferroni post hoc comparison using GraphPad Prism, version 4.02 (GraphPad Software, San Diego, CA, USA). Strains were divided into three groups based on the OD590 of the bacterial biofilm: strong (A590 > 0.6), medium (0.6 ≥ A590 ≥ 0.3), and weak or none (A590 < 0.3).
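As a minimal illustration of this classification step (the absorbance readings below are hypothetical; only the thresholds come from the text), OD590 values can be assigned to the three biofilm categories as follows.

```python
def classify_biofilm(od590: float) -> str:
    """Classify biofilm strength from crystal violet absorbance at 590 nm."""
    if od590 > 0.6:
        return "strong"
    if od590 >= 0.3:
        return "medium"
    return "weak or none"

# Hypothetical mean OD590 readings for one strain at the two incubation temperatures
for temperature, od in {"18C": 0.72, "30C": 0.41}.items():
    print(temperature, classify_biofilm(od))  # 18C -> strong, 30C -> medium
```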
Environmental Data and Statistical Analysis
Rainfall data (2-day cumulative rainfall before the sampling date) were provided by the Meteo France meteorological stations at Pleurtuit (site 1) and at Coutances (sites 2 and 3). The water temperature was measured manually at each sampling. The temperature and precipitation data were categorized into three groups whose boundaries were defined so that they were likely to categorize the data meaningfully for the studied sites and so that each category contained a number of samples consistent with reliable statistical analysis. Comparisons of STEC and EPEC prevalence between the types of samples, the sites, the seasons, the temperature categories, and the precipitation categories were analyzed by the chi-square test. A p-value of <0.05 was considered statistically significant.
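For illustration, such a prevalence comparison can be run as a chi-square test on a contingency table; the counts below are hypothetical and do not reproduce the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = sample types, columns = [STEC-positive, STEC-negative]
table = [
    [12, 226],  # shellfish batches
    [12, 204],  # freshwater samples
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # p < 0.05 would indicate a significant difference in prevalence
```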
Detection and Isolation of STEC and EPEC Strains
The stx gene was detected in 30.3, 85.9, 41.7, and 28.2% of shellfish, freshwater, seawater and surface sediment enrichment broths, respectively ( Table 1). The eae gene was detected in 74.8, 100, 100, and 43.6% of shellfish, freshwater, seawater, and surface sediment enrichment broths, respectively ( Table 1). STEC were isolated from 5.0% of the 238 shellfish, 5.6% of the 216 freshwater, 8.3% of the 12 seawater, and 2.6% of the 39 surface sediment samples analyzed, whereas EPEC were isolated from 8.0, 21.3, and 33.3% of the shellfish, freshwater, and seawater samples, respectively. No EPEC were isolated from surface sediments ( Table 2). A total of 57 STEC and 117 EPEC isolates were obtained from these samples. However, as 29 STEC and 28 EPEC had identical serotypes, PFGE and MLST patterns, virulence gene profiles, and phylogroups to other isolates cultivated from the same samples, they were considered to be replicates and not retained. The remaining 28 STEC and 89 EPEC isolates represented 0.2 and 0.7% of the total E. coli (n = 12,016), respectively ( Table 3). Shiga-toxin producing Escherichia coli strains represented 0.2, 0.2, and 0.3%, of the isolated E. coli from sites 1-3, respectively. EPEC strains represented 0.8, 0.6, and 0.7% of the isolated E. coli from sites 1-3, respectively (Table 3).
For the three sites, the majority of STEC strains derived from freshwater samples (50, 57.1, and 44.4% of samples from sites 1-3, respectively) and from shellfish batches (41.7, 42.9, and 44.4% for sites 1-3, respectively; Table 3). Only two STEC strains were isolated from seawater and surface sediment samples (Table 4). The entire sample set demonstrated a seasonal effect for potentially pathogenic E. coli, as STEC strains were significantly more frequently isolated in summer and EPEC strains in summer and autumn (p < 0.05; Table 2). However, no correlation between the prevalence of either STEC or EPEC and rainfall or temperature was observed (Table 2).
Virulence Gene Profiles
By considering the presence of a single virulence gene or a combination of the four virulence genes investigated (i.e., stx, eae, ehxA, and saa) in the 117 STEC or EPEC strains, eight virulence gene profiles were found. The most frequent profile presented the eae gene only (70.1% of the strains) followed by the profile presenting the stx2 gene only (11.1%) and the profile presenting the stx1 gene only (7.7%). Seven strains (6.0%) were shown to possess the eae and ehxA genes. The stx1-stx2-ehxA-saa profile was found in three strains and three other virulence gene profiles were observed only once, i.e., the stx1-eae-ehxA, the stx1-stx2-saa, and the stx2-ehxA-saa profiles ( Table 4).
Seven STEC strains carrying stx1, three carrying stx2, and two carrying both stx1 and stx2 genes were isolated from the site 1 whereas three STEC strains harboring the stx1 gene, 11 the stx2 gene, and two presenting both stx1 and stx2 genes were recovered from sites 2 and 3.
Phylogroups
The STEC strains (n = 28) were mainly distributed among phylogroups A, B1, and D (39.3, 35.7, and 21.4%, respectively). Only one strain belonged to phylogroup B2. The EPEC strains (n = 89) belonged to all the phylogroups, with strains from phylogroups B1 and B2 (38.2 and 38.2%, respectively) being more prevalent than those from phylogroups A and D (18.0 and 5.7%, respectively; Table 5). More precisely, at site 1, the STEC strains were mainly divided between phylogroups B1 and D. At sites 2 and 3, the STEC strains were divided between phylogroups A and B1. At sites 1-3, the EPEC strains belonged to all the investigated phylogroups, with a majority belonging to phylogroups B1 and B2.
Eighteen different serotypes (O:H) were identified among the STEC strains. Only one STEC belonging to one of the five highly pathogenic serotypes was isolated: an O26:H11 stx1+, eae+, and ehxA+ strain, from a mussel batch. One strain of serotype O91:H21, carrying the stx1, stx2, ehxA, and saa genes, was also identified among the STEC strains. The most frequently detected serotype among the STEC strains was O100:HNM (n = 9); the remaining serotypes identified within the STEC strains each contained one or two isolates. Fifty-seven serotypes were identified among the EPEC strains. Eleven strains belonged to the highly pathogenic serotypes: O26:H11 (n = 6), O103:H2 (n = 3), and O145:H28 (n = 2). The remaining EPEC strains belonged to a large diversity of serotypes, listed in Table 4.
PFGE and MLST Profiles
The genetic relatedness of 26 STEC and 79 EPEC strains was investigated by PFGE and MLST analysis (Supplementary Figure S2). Seventy-nine distinguishable PFGE patterns (PTs) and 46 distinguishable sequence types (STs) were obtained. Seven other STs (seven strains, one STEC and six EPEC) were obtained but have not yet been described. These results demonstrate a high level of genetic diversity among the strains isolated. The highest diversity was observed for the EPEC strains, which represented 81.0% of the PTs identified (64/79) and 71.7% of the STs (33/46).
The nine STEC strains of serotype O100:HNM presented an identical PT (D) and ST (ST933; Figure 1A) despite three differences: they were isolated (a) from oyster, mussel, and common cockle batches, freshwater, and surface sediment samples, (b) from sites 2 and 3, and (c) during the sampling campaigns of May 2013, June 2013, March 2014, and June 2014. With regard to the major serotypes (Figure 1B), the six EPEC O26:H11 strains presented three additional PTs (i.e., L, BV, and AL) and two STs (i.e., ST29 and ST48) and were isolated from mussel and common cockle batches and freshwater samples. One of these, belonging to ST29, was isolated from the same mussel batch as the STEC O26:H11 belonging to ST21. A unique PT (I) was observed for the two EPEC O145:H28 strains (ST not yet described), which were isolated from mussel and common cockle batches sampled during the same campaign (June 2013). The two O103:H2 strains, isolated in two different months (i.e., June 2013 and November 2013), had distinguishable PTs (i.e., M and N) and STs (i.e., ST1146 and ST343). Additionally, identical PTs and STs were found among the other STEC and EPEC strains isolated from different types of sample (i.e., shellfish vs. freshwater) or between shellfish batches (i.e., mussel vs. common cockle) and between freshwater samples, often from samples taken from the same sites on the same date (e.g., the O153:H2 and O108:H21 serotypes; Supplementary Figure S2).
DISCUSSION
To our knowledge, this study is the first to focus on the detection and characterization of environmental STEC and EPEC strains from shellfish-harvesting areas and their upstream watershed. Overall, among the environmental samples analyzed (n = 505), very few STEC (0.2%, n = 28) or EPEC (0.7%, n = 89) strains were obtained from the E. coli that were isolated (n = 12,016) and in comparison with the number of shellfish, water or surface sediment samples that were found to be positive for stx and eae genes (54.1%, 273/505). The higher proportion of EPEC than STEC strains isolated from these environmental samples is in agreement with the results obtained in previous studies (Hamilton et al., 2010;Chandran and Mazumder, 2015). For example, only 3.6% EPEC and no STEC were detected among the 24,493 E. coli isolated from seawater collected in Santa Catalina Island, CA, USA (Hamilton et al., 2010). Conversely, more STEC strains (6.2%) than EPEC strains (1.8%) were isolated from water samples from the Yeongsan river basin in South Korea (n = 3,480 E. coli; Jang et al., 2014). The low level of isolation of STEC or EPEC vs. the high frequency of detection of genetic markers in the analyzed samples has also been observed in various studies focusing on cattle feces, food, and samples from the environment (Miyagi et al., 2001;Bai et al., 2015;Bibbal et al., 2015). The low level of isolation of STEC vs. the high detection of stx genes in the environmental samples could be explained by the presence of free stx-encoding bacteriophages in the environment (Martinez-Castillo et al., 2013) and the presence of viable but non-culturable or dead bacteria as a result of the stressful conditions (sunlight, salinity, oligotrophy, predation, etc.) in riverine and especially coastal environments (Gourmelon et al., 1997; for review, Rozen and Belkin, 2001). The difficulties of isolating these bacteria from environmental samples containing a significant background flora could also contribute to this low recovery of strains (Pradel et al., 2000;Gourmelon et al., 2006).
FIGURE 2 | Biofilm formation by EPEC and STEC strains at low and high temperatures. Biofilms were formed on polystyrene in M9 medium supplemented with glucose (0.4% wt/vol) at 18°C (gray bars) or 30°C (dark bars) for 24 h and were stained with crystal violet, and the absorbance at 590 nm was measured. The results are the average of at least three biological replicates and the error bars represent the standard error. The results for the static biofilms formed at 18 and 30°C were compared using two-way analysis of variance (ANOVA) followed by a Bonferroni post hoc comparison. *p < 0.05; **p < 0.01; ***p < 0.001. a stx2+, b stx1+, c stx1+-stx2+, d stx1+-stx2+-ehxA+-saa+, e stx1+-eae+, f eae+, g eae+-ehxA+.
In addition, a high genetic diversity among the 105 genotyped strains was observed, with 79 PFGE patterns and 46 distinguishable sequence types in agreement with the high genetic diversity observed by PFGE in other studies for STEC and EPEC strains (Bibbal et al., 2015;Singh et al., 2015). In this study, PFGE was found to be more discriminatory than MLST as previously described for bacteria such as Salmonella isolated in Californian coastal waters (Walters et al., 2013). For example, seven strains belonging to ST10 (seven different serotypes) were further discriminated into seven distinct PTs (S, AG, AH, AW, X, G, and Y). The identification of numerous PTs and STs highlights the potential presence of different strains in a same sample and the presence of genetic diversity between strains belonging to the same serotype (e.g., E. coli O26:H11 ST21 and ST29 in the same mussel batch).
The non-detection of E. coli from the O157:H7 serotype in the shellfish, water and surface sediment samples investigated (from February 2013 to February 2014; n = 282; Balière et al., 2015) is in agreement with the low detection or absence of E. coli O157 in shellfish and environmental water previously observed (Miyagi et al., 2001;Manna et al., 2008).
Several EPEC strains belonging to the highly pathogenic serogroups (i.e., O26, O103, and O145) were also isolated from some of the shellfish batches or freshwater samples that were analyzed. The STEC O26:H11 ST21 found in a mussel batch was shown to be implicated in STEC infections and has been detected in cattle in Europe (Zweifel et al., 2013).
The EPEC O26:H11 ST29 strains isolated in this study may be strains that have had no previous contact with stx-bacteriophages, or bacteria that have lost the stx-bacteriophage either during their passage from their original source to water or shellfish or during the isolation steps. The presence of these bacteria in coastal environments could present a risk to human health, as these EPEC could be lysogenized by stx1- or stx2-converting bacteriophages, which are present in the same environment, and could become STEC of the highly pathogenic serotypes. In fact, Bielaszewska et al. (2007) demonstrated that STEC O26 strains can lose their stx-bacteriophages and become EPEC O26, and conversely EPEC O26 can be lysogenized by stx1- or stx2-bacteriophages and become STEC O26. Even if the conversion of strains was found to occur in the digestive tract of different animals (Toth et al., 2003) and in various food matrices (Imamovic et al., 2009), the environment could also provide the conditions for conversion of strains. However, the potential conversion of E. coli strains in the environment still needs to be evaluated in more detail (Dumke et al., 2006). Interestingly, Solheim et al. (2013) have demonstrated the conversion of an E. coli strain (serotype O103:H25) by bacteriophages in a biofilm at 37°C, but also at 20°C.
In addition, another STEC strain isolated in a freshwater sample could present a potential human risk. Indeed, an O91:H21 E. coli was found to belong to ST442, a sequence type that had previously been isolated from adult patients in Germany with symptoms that ranged from diarrhea to hemolytic uremic syndrome (Mellmann et al., 2009).
The majority of strains isolated in this study would present low virulence, as most of the isolated strains (87.2%) possessed only one of the five virulence genes (stx1, stx2, eae, ehxA, or saa); i.e., 70.1% of the strains carried only the eae gene, 7.7% only stx1, and 11.1% only stx2. The STEC O26:H11 was the only STEC isolate to carry the eae gene. These results are in agreement with previous studies describing STEC strains isolated from the environment (García-Aljaro et al., 2005), with the exception of the analysis of water samples from a Californian central coast agricultural region, where Cooley et al. (2014) showed that the majority of STEC strains isolated contained the stx1, stx2, and eae genes. The low level of isolation of STEC strains carrying the saa gene, encoding another adherence factor, the STEC autoagglutinating adhesin, or the ehxA gene, encoding enterohemolysin A, was in agreement with the results obtained from water samples in Spain by García-Aljaro et al. (2005).
Most of the STEC strains in this study were classified into the A, B1, and D phylogroups. Phylogroups A and B1 were also the main phylogroups of environmental STEC strains isolated by García-Aljaro et al. (2005) in Spain. In the Yeongsan river basin of South Korea, STEC strains isolated belonged mainly to phylogroup D and to a lesser extent to phylogroups A, B1, and B2 (Jang et al., 2014). In Brittany and Normandy, overall, the EPEC strains belonged mainly to phylogroups B1 and B2 and to a lesser extent to A and D. EPEC strains isolated from water samples in South Korea belonged mainly to the B2 phylogroups (Jang et al., 2014). The frequent isolation of E. coli from B1 phylogroup in the present study is in agreement with recent data showing that environmentally persistent E. coli belong mainly to the B1 phylogroup (Berthe et al., 2013).
Phenotypic differences in the ability to form biofilms among the tested strains underline the genetic diversity of the STEC and EPEC strains. Our study demonstrates that more than half of the tested strains (17/22) were able to form biofilms on polystyrene at 18 or 30°C, and most of these strains were able to form a strong biofilm at 18°C, a temperature close to marine environmental conditions. A similar result had been observed previously when E. coli K12 biofilm was grown at low temperature (White-Ziegler et al., 2008). It has been shown that low temperatures (<30°C) promote the expression of genes associated with biofilm development, including genes involved in curli (csgA and mlrA) or cellulose (yaiC) production (Olsen et al., 1989;Arnqvist et al., 1992;White-Ziegler et al., 2008). Interestingly, although the O26:H11 strains formed weaker biofilms than other strains, they formed significantly stronger biofilms at 18°C than at 30°C. Also, all but one of the O26:H11 strains were negative for pgaA, a gene coding for the export of poly-N-acetyl-D-glucosamine (PGA), which promotes biofilm formation (Itoh et al., 2008; data not shown). In addition, the pgaA sequence is also absent from the sequenced genomes of STEC O26 strains available in GenBank. In conclusion, the ability of strains to form a biofilm might contribute to their persistence in coastal environments.
This study highlights the presence of a specific geographic distribution of some of the STEC and EPEC serotypes and the persistence of some of these serotypes in the coastal environments of Brittany and Normandy investigated in this study. The isolation of the stx2-positive serotype O100:HNM (PT D and ST993) at different dates (over a period of 1 year), in shellfish, waters, and surface sediments from both sites in Normandy (sites 2 and 3), but its absence in Brittany (site 1), highlights potential specific contamination sources in these regions and the higher persistence of some of these specific strains. This had previously been shown in water samples from California for O157 strains isolated up to 19 months apart by Cooley et al. (2014). The different types of livestock bred in the three watersheds, i.e., mainly swine, poultry, and cattle in Brittany and sheep, cattle, and swine in Normandy, could explain differences in strain detection at specific sites. A potential explanation for the frequent isolation of the STEC O100:HNM is a high carriage of this E. coli strain in the animals in the upstream watersheds. Bibbal et al. (2015) have identified farms harboring STEC bovine carriers, highlighting the fact that STEC of a given serotype can be carried by several animals belonging to the same farm. A probable explanation for the presence of these STEC strains in the coastal environment is their re-introduction to the water, and consequently to shellfish, from animal reservoirs, which enables persistence at high titer for months (Cooley et al., 2014). Another explanation could be that they are present in surface sediments, in which better persistence could occur, and are then re-introduced to the water and subsequently filtered and accumulated by shellfish. The persistence of STEC and EPEC (especially those isolated several times in this study, i.e., E. coli O100 and E. coli O26) in freshwater, seawater, and shellfish needs to be studied to better understand their frequent detection in such shellfish-harvesting areas. Several studies have been carried out to evaluate the persistence of STEC in water or superficial sediments, and these have shown that some E. coli strains are able to persist in the environment for periods of a few days to several months (Fremaux et al., 2010) and that persistence can vary according to the serotype (Ma et al., 2014).
This study critically evaluated the nature of STEC and EPEC strains present in coastal environments. Knowledge of strains circulating in the environment is crucial to the investigation of potential new STEC serotypes and their human health risk. These results confirm that the environment is a reservoir for these strains. The presence of both EPEC strains and stx-converting bacteriophages in the same samples could lead to new pathogenic E. coli.
The risk of a human infection by STEC caused by shellfish consumption seems to be limited for two reasons. First, a depuration step or relaying step has to be performed before shellfish from category B and C areas, respectively, reach market. Secondly, STEC were present in only a few samples and the majority of STEC strains lacked genes associated with high human virulence, such as eae, and few of the STEC isolated in this study have previously been shown to be involved in human infections.
ACKNOWLEDGMENTS
This work was funded by the European Regional Development Fund Interreg IVA Programme, as part of the collaborative project RiskManche. The thesis of CB was supported by a grant from Ifremer and the Agence de l'Eau Loire-Bretagne. We thank Clémence Balière from the University of Caen and Céline Courroux from Hôpital Robert-Debré for their participation in the sampling and technical assistance, and Dominique Hervio-Heath (Ifremer) and Huw Taylor (University of Brighton, UK) for their critical review of the manuscript. Work in the LREC-USC laboratory was financed by grant CN2012/303 from the Consellería de Cultura, Educación e Ordenación Universitaria (Xunta de Galicia) and the European Regional Development Fund (ERDF).
|
v3-fos-license
|
2020-07-05T14:10:06.774Z
|
2020-07-05T00:00:00.000
|
220337027
|
{
"extfieldsofstudy": [
"Economics",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11356-020-08214-4.pdf",
"pdf_hash": "83c256a06570350eeb94a18ef5cfde05cdc5c26f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3072",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"sha1": "83c256a06570350eeb94a18ef5cfde05cdc5c26f",
"year": 2020
}
|
pes2o/s2orc
|
Greenhouse gas emissions convergence in Spain: evidence from the club clustering approach
This study examines the convergence of greenhouse gas emissions per capita across the 19 Spanish regions using the Phillips-Sul club convergence approach over the period spanning from 1990 to 2017. The results indicate the presence of four clubs which converge to different equilibria in emissions per capita and three clubs in terms of income per capita, which involves different regions. These findings suggest that mitigation policies should explicitly consider the presence of different clubs of regions with different convergence paths in terms of emissions and income per capita and address the distributional effect of transfers across regions.
Introduction
Greenhouse gas emissions and their impact on climate change and global warming have become one of the main concerns of policymakers over recent decades, leading countries to sign international agreements, such as the Kyoto Protocol, aimed at reducing emissions at the global level. Under such agreements, countries are committed to reducing their emissions primarily through national-level measures. This implies that they are primarily responsible for designing their own mitigation policies in order to meet specific targets.
An important issue for policymakers at the national level is the distribution of emissions across regions, as well as their evolution over time. A key question is whether differences in emissions across regions tend to increase or decrease over time (Burnett 2016). This issue could affect the design of mitigation policies and the principles used to share the burden of emission reduction across regions. If emissions converge over time while their overall growth rates decrease, the distributional impact of mitigation schemes, such as equal per capita allowances, is less concerning for policymakers, given that transfers across regions are reduced as they converge to the same level of emissions per capita. However, if differences in emissions tend to increase over time, mitigation policies could have distributional costs across regions, resulting in increasing transfers of resources or the reallocation of emission-intensive industries. Therefore, policymakers should explicitly take such regional differences into account when designing mitigation policies (Burnett 2016), including indicators beyond uniform per capita allowances in the emission allocation schemes, such as the ability to pay or adjustment costs. This issue has attracted the interest of researchers, who have studied the presence of convergence in emissions at the sub-national level across the US states (Aldy 2007; Bulte et al. 2007; Li et al. 2014; Payne et al. 2014), as well as across Chinese provinces (Huang and Meng 2013; Wang and Zhang 2014; Zhao et al. 2015; Wu et al. 2016; Yang et al. 2016), using different methodologies and definitions of convergence.
In this paper, we extend the research on greenhouse gas emissions convergence to the case of the Spanish regions. Spain, as a member of the European Union, is framed within the Horizon 2030 programme, which established a target of reducing greenhouse gas emissions by 40% by 2030 relative to their 1990 levels (European Commission 2014). Since 1990, Spanish emissions have evolved in the same direction as economic growth, following an increasing trend until 2007, when a turning point was reached and emissions started to decrease. Nevertheless, emissions have remained relatively stable since 2013, despite the higher economic growth rates experienced in recent years (Ministerio para la Transición Ecológica 2019). In 2017, emission levels were 17.9% above 1990 levels, the benchmark for reduction targets, implying that an important effort is still needed for Spain to meet its commitment. If we focus on emission intensity, measured as emissions per unit of GDP, the picture is different. In particular, a downward trend can be observed since 1995, at an increasing pace after 2005 (Ministerio para la Transición Ecológica 2019), implying that emission growth has been lower than GDP growth over the same period.
In regard to the regional economic performance, several works suggest that the convergence process of Spanish regions stopped after the 1980s (Leonida and Montolio 2004;Tortosa-Ausina et al. 2005;Castro 2007;Diez-Minguela et al. 2018) and it has given rise to the formation of two convergence clubs or groups of regions (Montolio and Leonida;Tortosa-Ausina et al. 2005;Montañés and Olmos 2014;Diez-Minguela et al. 2018). Given that income growth and greenhouse gas emissions have been linked at the national level, it is important to test whether this fact holds for Spanish regions; convergence patterns would imply that regional emissions are diverting or converging over time, arising important policy implications in relation to the design of mitigation measures and their economic impact. If different convergence patterns are found, a uniform mitigation scheme would involve sizeable and increasing transfers of income across regions, favouring the introduction of other principles for sharing the burden of emission reduction. Therefore, the goal of this paper is to explore the convergence patterns for greenhouse gas emissions across all Spanish regions. Our study extends the literature on the long-run convergence in regional emissions per capita on certain fronts. First, as the first study on Spanish emissions convergence, we focus our attention on all 19 regions geographically dispersed across Spain (Andalusia, Aragon, Asturias, Balearic Islands, Basque Country, Canary Islands, Cantabria, Castille-La Mancha, Castille-Leon, Catalonia, Extremadura, Galicia, Madrid, Murcia, Navarre, Rioja, Valencia, Ceuta and Melilla). Second, while previous studies in emissions convergence have primarily implemented unit root and cointegration tests, which rely on the assumption of variable stationarity, this current work employs the time-varying nonlinear approach recommended by Phillips and Sul (2007). This approach identifies that convergence does not depend on the assumptions regarding the stationarity of variables and allows us to test both overall convergence and the identification of convergence clubs across regions.
Methodology
The panel includes N = 19 Spanish regions observed over T = 28 time periods. The approach employs a time-varying common factor representation,

$$y_{it} = \delta_{it}\,\mu_t, \qquad (1)$$

where $y_{it}$ is the log of emissions (income) per capita of region $i$ at time $t$, $\mu_t$ is a common stochastic trend component, and $\delta_{it}$ is a time-varying idiosyncratic component that captures time- as well as individual-specific effects and measures the distance between $y_{it}$ and the common factor $\mu_t$. The loading $\delta_{it}$ is modelled as

$$\delta_{it} = \delta_i + \sigma_i \xi_{it}\, L(t)^{-1} t^{-\alpha}, \qquad (2)$$

where $\delta_i$ is fixed, $\xi_{it} \sim iid(0,1)$ across regions $i = 1, 2, \ldots, N$ and weakly dependent over time $t$, $\sigma_i$ is an idiosyncratic scale parameter, and $L(t)$ is a slowly varying function of time with $L(t) \to \infty$ as $t \to \infty$; the parameter $\alpha$ is the rate at which the cross-section variation of the transitions decays to zero over time. The null hypothesis of convergence is $H_0: \delta_i = \delta$ and $\alpha \geq 0$, against the alternative that $\delta_i \neq \delta$ for some $i$ or $\alpha < 0$; $y_{it}$ and $\mu_t$ do not need to be trend stationary. Phillips and Sul (2007) employ the quadratic distance measure

$$H_t = \frac{1}{N}\sum_{i=1}^{N}\left(h_{it}-1\right)^2, \qquad h_{it} = \frac{y_{it}}{N^{-1}\sum_{j=1}^{N} y_{jt}},$$

where $h_{it}$ is the relative transition coefficient, which captures the transition path of region $i$ with respect to the panel average. When there is common behaviour across individual regions, $h_{it} = h_t$ for all $i$, and convergence occurs when $h_{it} \to 1$ for all $i$ as $t \to \infty$; if convergence does not hold, the distance $H_t$ remains positive as $t \to \infty$. Following Phillips and Sul (2007), with $L(t) = \log t$ in the decay model (2), the empirical log t regression used to test for convergence (and convergence clubs) is

$$\log\!\left(\frac{H_1}{H_t}\right) - 2\log(\log t) = a + b\log t + u_t, \qquad t = [rT], [rT]+1, \ldots, T, \qquad (3)$$

where the fitted slope satisfies $\hat{b} = 2\hat{\alpha}$; the term $-2\log(\log t)$ acts as a penalty function, and omitting it would bias the least-squares estimator of $b$ upward under the alternative hypothesis.
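To make the test concrete, the core computation, building the relative transition paths h_it, the cross-sectional variance H_t, and running the log t regression (3), can be sketched as follows. This is a minimal illustration in Python assuming the data are already arranged as a T × N matrix of log per-capita emissions; the trend-smoothing that Phillips and Sul apply before the regression and the full club-clustering and club-merging algorithms are omitted, and the HAC bandwidth is an arbitrary choice, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

def logt_test(y, r=0.30):
    """Phillips-Sul log t convergence test on a T x N panel of log series."""
    T, N = y.shape
    h = y / y.mean(axis=1, keepdims=True)            # relative transition paths h_it
    H = ((h - 1.0) ** 2).mean(axis=1)                # cross-sectional variance H_t
    start = int(np.floor(r * T))                     # discard the first r*T observations
    t = np.arange(start, T) + 1                      # time index up to T
    dep = np.log(H[0] / H[t - 1]) - 2.0 * np.log(np.log(t))   # includes the penalty term
    X = sm.add_constant(np.log(t))
    fit = sm.OLS(dep, X).fit(cov_type="HAC", cov_kwds={"maxlags": 3})
    b_hat, t_stat = fit.params[1], fit.tvalues[1]    # b_hat = 2 * alpha_hat
    return b_hat, t_stat

# convergence is rejected (one-sided, 5% level) when t_stat < -1.65
```

The club-clustering algorithm then applies this same regression repeatedly to candidate subgroups of regions, adding members while the test does not reject, and the merging step re-runs it on adjacent clubs.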
Empirical analysis
Panel A of Table 2 reports the panel convergence results for r = 0.30. The first row displays the results of testing for convergence in emissions per capita across all regions. The coefficient estimate is γ = −0.588 with a t-statistic of −4.5116, statistically significant at the 1% level, indicating rejection of the null hypothesis of overall panel convergence. Next, the analysis proceeds to determine whether club clusters can be identified. The results from the club clustering algorithm illustrate the presence of three distinct clubs. Club 1 consists of the regions of Extremadura, Aragon and Asturias, with γ = 0.0935 and a t-statistic of 0.4028, which fails to reject the null hypothesis of convergence. Note that Phillips and Sul (2007, 2009) suggest that using the sign criterion may lead to over-estimation of the true number of clubs. To address this potential issue, the analysis also performs club-merging tests via regression (3) to determine whether merging adjacent numbered clubs into larger clubs is warranted. Panel B of Table 2 shows that the merging tests support the merger of Club 1 and Club 2, which implies that these regions are converging to the same level at a slower pace.
The findings reported in Panel A of Table 3 illustrate the presence of three convergence clubs on an income per capita basis. The test for the full sample rejects the null hypothesis of convergence, with γ = −0.5280 and a t-statistic of −14.2760. The first convergence club consists of the regions of Madrid and the Basque Country, which are the regions with the highest income per capita, i.e. converging to a higher GDP per capita level. A second club is formed by the regions of Extremadura, Rioja, Navarre, Cantabria, Murcia, Valencia, Galicia, Aragon, Asturias, Castille-Leon, Catalonia and the Balearic Islands; it includes regions with high income per capita converging to a lower GDP per capita level (e.g. Catalonia), and regions with low and middle income per capita, with an increasing trend, converging to a higher GDP per capita (e.g. Extremadura). Finally, the third club is made up of the regions of Andalusia, the Canary Islands, Castille-La Mancha, Ceuta and Melilla, low-income regions which are converging to a lower level of GDP per capita than Clubs 1 and 2.
Conclusion and policy implications
This study tested overall emissions convergence across the Spanish regions. To this end, the analysis used the club-clustering approach of Phillips and Sul (2007). The findings documented the presence of different convergence patterns in terms of emissions per capita and income per capita. Some of the high-income regions, such as Madrid, Catalonia or the Basque Country, are converging to lower levels of emissions per capita than some low- or middle-income regions, such as Extremadura, Aragon or Asturias. This implies that a regional mitigation scheme based only on per capita allowances would entail significant transfers of income from high emission-intensive regions to low emission-intensive regions, some of the latter being the wealthiest regions, thus producing a regressive redistribution between regions and increasing the differences in income levels. This evidence supports emission-reduction policies that also take account of the ability to pay; that is, emission allowances should be allocated not only on the basis of population size (equal per capita emissions), but also inversely to GDP per capita, reducing the differences between the Spanish regions in terms of income. These findings also suggest that the level of economic activity, measured by GDP per capita, is not the main determinant of the level of emissions per capita across the Spanish regions. Other aspects, such as the regional economic structure, could also play an important role in determining emission levels, which suggests the need for further research on emissions convergence across Spanish industries and the policy implications it could entail.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
v3-fos-license
|
2023-06-23T06:17:13.378Z
|
2023-06-21T00:00:00.000
|
259222987
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cre2.755",
"pdf_hash": "c18a355bcbfb0a127062de4f418413f03bc08b9f",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3073",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "fa9c7b453dfcd23a4e632b60ecb5b150257c2b26",
"year": 2023
}
|
pes2o/s2orc
|
Comparative efficacy of topical povidone‐iodine and chlorhexidine gel on dental plaque regrowth in toddlers: A randomized controlled trial
Abstract Objectives This study aimed to compare and evaluate the efficacy of topical use of povidone‐iodine (PVP‐I) solution and chlorhexidine (CHX) gel on dental plaque regrowth after 3 and 7 days in toddlers aged 24–36 months. Materials and Methods A randomized controlled trial that included 45 healthy toddlers aged 24–36 months, who were randomly assigned to three groups. The first group received a placebo (distilled water (DW)) (negative control). The second group received topical CHX gel (0.2% w/v) (positive control). The third group received topical PVP‐I solution (10% w/v). Plaque accumulation was measured at the baseline (t0), after 3 days (t1) and after 7 days (t2) using the Turesky‐modified Quigley–Hein plaque index (TMQHPI). Oral hygiene practices were prohibited during the trial period. The trial ID is ACTRN12623000567628. Results In the DW group, the mean of the TMQHPI score was 1.89 ± 0.67 at t0 and decreased to 1.45 ± 0.66 at t1 (p = .028). Similarly, in the CHX group, the mean of the TMQHPI score was 1.83 ± 1.06 at t0 and decreased to 1.02 ± 0.99 at t1 (p = .033). Regarding the PVP‐I group, the mean of the TMQHPI score went from 1.84 ± 0.85 to 1.01 ± 0.61 at t1 and then increased to 1.57 ± 0.74 at t2. Those changes were statistically significant (p = .001) and (p = .002), respectively. No statistically significant difference was noted between TMQHPI scores at t0 (p = .789). Regarding t1 and t2, no statistically significant difference was found between the three groups (p > .05). Conclusion CHX and PVP‐I efficacy lasted only for 3 days, and PVP‐I was not superior to CHX in terms of plaque control in toddlers. However, further studies are needed to determine the long‐term efficacy of these antiplaque agents in toddlers.
| INTRODUCTION
Dental plaque is a microbial community that consists of microorganisms and their extracellular matrix adhering to dental surfaces and can be either supragingival or subgingival. Dental plaque accumulation is the initiating etiological factor for gingivitis and dental caries (Valm, 2019). There is a correlation between dental plaque accumulation and early childhood caries (ECC) in children aged 12-36 months. ECC is an alarming oral health issue and the most widespread chronic condition in both toddlers and infants (Meyer & Enax, 2018). However, it can be prevented by different prophylactic measures at various prevention levels, including primary, secondary, and tertiary. Primary prevention measures aim to intervene at the youngest age possible, before dental caries formation, through caregivers, dental professionals, and the community (Meyer & Enax, 2018; Sitthisettapong et al., 2021). Antiplaque agents used alongside mechanical plaque control are a primary prevention method; hence, chemical plaque control agents should not be regarded as an alternative to mechanical methods. However, mechanical plaque control is challenging in toddlers due to the lack of compliance and commitment on the part of parents (Jafer et al., 2016; Vyas et al., 2021).
Antiplaque agents suppress the growth of specific microorganisms and can be either bactericidal or bacteriostatic depending on their concentration (Vyas et al., 2021).
Chlorhexidine (CHX) is, to date, the gold standard of chemical plaque control due to its antimicrobial efficacy and cationic structure. Its cationic property results in a phenomenon named substantivity, meaning that the antimicrobial efficacy of CHX extends over a long period (Balagopal & Arjunkumar, 2013). In addition, CHX is biocompatible and safe for topical use in toddlers older than 2 months of age (Chapman et al., 2012).
CHX is available in various forms; the gel type is easier to handle, provides prolonged release, and has an analgesic effect (Ahmedi et al., 2023). Nevertheless, CHX has several adverse effects, including xerostomia, glossodynia, hypogeusia, tooth discoloration, epithelial desquamation, and parotid gland swelling (McCoy et al., 2008). Therefore, several antiplaque agents have been suggested to overcome the aforementioned drawbacks. Povidone-iodine (PVP-I) is a broad-spectrum microbicidal agent that is economical and widely available (Eggers, 2019; Lepelletier et al., 2020). Furthermore, PVP-I suppresses the levels of Streptococcus mutans and caries progression in young children (Amin et al., 2004). In addition, it has been shown to be safe for topical use in toddlers older than 2 months of age (Chorney et al., 2020). However, it is contraindicated for patients with thyroid disorders or iodine allergies (Chundamala & Wright, 2007).
To the best of the authors' knowledge, no study has ever evaluated the efficacy of CHX gel and PVP-I in terms of plaque control in toddlers. Hence, the aim of the current study was to compare and evaluate the efficacy of topical use of PVP-I solution and CHX gel on dental plaque regrowth after 3 and 7 days in toddlers aged 24-36 months.
| Study design and ethics
This was a three-arm, double-blind, randomized, parallel-group controlled trial. It was conducted from December 2022 to March 2023 at Damascus University. The study was approved by the ethics board at Damascus University (N 725/2022) and was conducted in accordance with the Declaration of Helsinki (2013) and the CONSORT statement. Written informed consent was provided by the patients' legal guardians. The study was registered in the Australian New Zealand Clinical Trials Registry (ACTRN12623000567628).
2. Healthy children in terms of medical and dental status.
| Blinding
This study was double-blind, since both the examiner and the statistician were blinded to group allocations. The examiner was unaware of which experimental arm each participant had been assigned to, and the statistician was kept unaware of which records belonged to which interventional arm.
| Clinical procedures and evaluation
The toddlers were randomly assigned to three groups. The first group received a placebo (DW) (negative control). The second group received topical CHX gel (0.2% w/v) (positive control). The third group received topical PVP-I solution (10% w/v). Each antiplaque agent was applied to the tooth surfaces in a clockwise direction using a cotton-tipped swab. It was applied only once, owing to the lack of compliance on the part of toddlers. Plaque accumulation was measured at baseline (t0), after 3 days (t1) and after 7 days (t2) using the Turesky-modified Quigley-Hein plaque index (TMQHPI).
Dental plaque was disclosed using the two-tone erythrosine-free plaque-disclosing dye solution Mira-2-Ton® (Hager & Werken). The toddlers' legal guardians were instructed to avoid feeding their child for 30-60 min post-application, and oral hygiene practices were prohibited during the trial period. TMQHPI scores were assigned as described by Quigley and Hein (1962) and Turesky et al. (1970).
| Statistical analysis
Statistical analysis was performed using IBM SPSS software version 24 (IBM Corp.). The Kolmogorov-Smirnov test was used to assess the normality of the data. Regarding the demographic profile, Fisher's exact test was used to compare sex and caries risk, and a one-way ANOVA test was used to compare patient age among the study groups. The Kruskal-Wallis test was applied to compare the TMQHPI scores between groups at each time point. The Wilcoxon signed-rank test was performed to compare dental plaque accumulation between the different time points within each group. Statistical significance was set at 0.05.
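As a rough illustration of this analysis plan, the snippet below shows how the two nonparametric comparisons could be run in Python with SciPy. The TMQHPI values are invented placeholders rather than the trial data, and a real analysis would use one score per child, with paired arrays for the within-group comparison.

```python
import numpy as np
from scipy import stats

# Hypothetical TMQHPI scores for the three groups at one time point
dw  = np.array([1.9, 1.4, 2.1, 1.6, 1.8])
chx = np.array([1.1, 0.8, 1.3, 0.9, 1.0])
pvp = np.array([1.0, 0.7, 1.2, 1.1, 0.9])

# Between-group comparison at a single time point (Kruskal-Wallis)
h_stat, p_between = stats.kruskal(dw, chx, pvp)

# Within-group comparison between two time points for the same children
# (Wilcoxon signed-rank), e.g. baseline (t0) versus Day 3 (t1)
chx_t0 = np.array([1.8, 1.9, 2.2, 1.5, 1.7])
chx_t1 = np.array([1.0, 1.2, 1.4, 0.8, 0.9])
w_stat, p_within = stats.wilcoxon(chx_t0, chx_t1)

print(f"Kruskal-Wallis p = {p_between:.3f}; Wilcoxon signed-rank p = {p_within:.3f}")
```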
| RESULTS
Based on the selection criteria, 45 of the 56 patients assessed for eligibility were selected. Slightly more than half of the patients (53.30%) were female, and more than two-thirds of them (71.10%) were at high risk of caries. The mean age was 29.42 months (SD 3.68; range 24-36 months), as listed in Table 1. No significant difference was found between the study participants regarding their demographic characteristics (p > .05). In addition, no statistically significant difference was noted between TMQHPI scores at t0 (p = .789) (Table 2), suggesting that the baseline data were homogeneous. Regarding t1 and t2, no statistically significant difference was found in the TMQHPI scores between the three groups (p > .05) (Table 2).
Table 3 shows the descriptive statistics of the TMQHPI scores at the different follow-up time points for each group. In the DW group, the mean TMQHPI score was 1.89 ± 0.67 at t0 and decreased to 1.45 ± 0.66 at t1 (p = .028). However, the TMQHPI score increased to 1.83 ± 1.06 at t2, but this change was not significant (p = .078) (Table 4). Similarly, in the CHX group, the mean TMQHPI score was 1.83 ± 1.06 at t0 and decreased to 1.02 ± 0.99 at t1 (p = .033), then increased to 1.29 ± 0.74. However, no statistically significant difference was noted in the TMQHPI score at t2 (p = .438) (Table 4). Regarding the PVP-I group, the mean TMQHPI score went from 1.84 ± 0.85 to 1.01 ± 0.61 at t1 and then increased to 1.57 ± 0.74 at t2. Those changes were statistically significant (p = .001 and p = .002, respectively) (Table 4) (Figure 2).
| DISCUSSION
CHX gel and PVP-I solution are two commonly used antiplaque agents, but their efficacy in toddlers has not been extensively studied. The current randomized controlled trial was conducted to compare and evaluate their efficacy on dental plaque regrowth in toddlers.

TABLE 1: Characteristics of the study participants.

Mechanical plaque control is the cornerstone of preventing gingivitis and caries. However, it requires patient compliance and motivation, making it less effective in controlling dental plaque.
Therefore, chemical plaque control using antimicrobial agents seems promising. However, the long-term use of some antiplaque agents can lead to bacterial resistance, so clinicians should pay close attention when applying antimicrobial agents repeatedly (Anil et al., 2016). This explains the one-time use of chemical antiplaque agents, combined with the mechanical effect of the cotton-tipped swab, in the current study. Antiplaque agents should mainly be applied by a pediatric dentist in a knee-to-knee position, owing to the lack of compliance on the part of toddlers (Fux et al., 2019).
In this study, CHX was used as an antiplaque agent due to its antimicrobial efficacy, substantivity, and safety. In addition, it is the gold standard in terms of chemical plaque control (Balagopal & Arjunkumar, 2013; Chapman et al., 2012). The CHX gel form was used because it is effective and provides a long-lasting release. It is also safe and easy to use, making it a preferred option for many dentists (Ahmedi et al., 2023). PVP-I is a common antiseptic used in dentistry that can be found in many over-the-counter products. It is a broad-spectrum disinfectant that is effective against both Gram-positive and Gram-negative bacteria (Eggers, 2019; Lepelletier et al., 2020).
PVP-I works by breaking down plaque and removing bacteria.
However, the prolonged use of PVP-I seems unfavorable due to its absorption through the oral mucosa (Anil et al., 2016). In the current study, PVP-I as an antiplaque agent was not superior to CHX (positive control) nor to DW (negative control) in terms of plaque regrowth inhibition. This result could be explained by the fact that the two antiplaque agents are similar in their antimicrobial efficacy (Eggers, 2019; Lepelletier et al., 2020). In addition, according to Narayan et al. (2017), CHX varnish was not superior to 10% PVP-I in suppression of plaque and salivary Streptococcus mutans. According to the study by Twetman and Grindefjord (1999), 1% CHX gel was used twice daily in toddlers aged 12-18 months, and CHX suppressed the levels of Streptococcus mutans. In addition, Wan et al. (2003) concluded that applying 0.2% CHX gel once a week for 3 months in toddlers aged 10 months was also effective in reducing Streptococcus mutans. However, according to Neeraja et al. (2008), CHX was more effective than PVP-I when used as a mouth rinse for 3 months.
Regarding PVP-I efficacy, Berkowitz et al. (2009) found that using 10% PVP-I in children aged 2-5 years was effective in suppressing Streptococcus mutans for 90 days. However, in the current study, the antiplaque agents were applied only once, suggesting the need for further trials to evaluate the efficacy of repeated applications.
In the CHX group, there was a statistically significant decrease in the mean TMQHPI score at t1. However, the mean TMQHPI score increased at t2. This result could be explained by the fact that CHX has a long-lasting effect only when it is applied intensively, i.e., 3-4 times per day or 10 to 14 daily applications (Ribeiro et al., 2007). Thus, the current result is in agreement with the findings of Slot et al. (2014), which suggested that brushing with CHX gel was not effective in dental plaque control. In addition, Plonka et al. (2013) found that brushing with 0.12% CHX gel was not superior to 10% casein phosphopeptide-amorphous calcium phosphate paste in terms of controlling ECC. On the contrary, Slot et al. (2007) found that 1% CHX gel applied with trays was superior to 0.12% CHX dentifrice gel and to 0.2% CHX mouthwash. This variation could be attributed to the fact that trays ensure that the CHX gel reaches all tooth surfaces simultaneously, compared with a cotton-tipped swab.
In the PVP-I group, there was a statistically significant decrease in the mean TMQHPI score at t1, followed by an increase at t2. However, PVP-I was not superior to DW in plaque control. This result could be attributed to the difficulty of achieving optimal moisture control.
According to Reilly et al. (2016), the repeated use of a combination of 10% PVP-I and 5% sodium fluoride varnish led to optimal plaque control. Furthermore, according to several studies, combining PVP-I and sodium fluoride varnish leads to Streptococcus mutans suppression (Hashemi et al., 2015; Milgrom et al., 2021). According to Simratvir et al. (2010), the repeated use of 10% PVP-I yielded satisfactory outcomes in terms of preventing dental caries and reducing Streptococcus mutans counts. In addition, according to Lopez et al. (2002), the topical application of 10% PVP-I improves disease-free survival in toddlers at high caries risk. Furthermore, Amin et al. (2004) suggested that PVP-I was effective in reducing Streptococcus mutans counts at 6 months. Therefore, the repeated application of PVP-I appears beneficial.
This study has limitations. First, the follow-up period was short. Second, the antiplaque agents were applied only once.
Therefore, there is a need for further studies to evaluate the efficacy of their repeated use. In conclusion, PVP-I did not demonstrate superior plaque control to CHX in toddlers. Chemical plaque control agents should be considered an adjunct to mechanical methods, not an alternative. Further studies are needed to confirm these findings and to determine the long-term efficacy and safety of these antiplaque agents in toddlers.
3. Children with full primary dentition. 4. Both male and female toddlers.

Exclusion criteria: 1. Children with special health care needs (SHCN). 2. Children receiving any antibiotic therapy over the past 4 weeks. 3. Children with thyroid disorders or iodine allergies. 4. Children with CHX allergies.

The CONSORT flow diagram is illustrated in Figure 1. Fifty-six patients were assessed for eligibility. Based on the selection criteria, 45 patients were enrolled in the current study and were randomly allocated into three groups according to the antiplaque agent used: Group 1: negative control or placebo, distilled water (DW) (n = 15); Group 2: positive control, CHX gel (0.2% w/v) (PerioKIN, KIN Dental) (n = 15); Group 3: PVP-I solution (10% w/v) (Betadine Antiseptic, BETADINE®) (n = 15).
Note (Table 1): Fisher's exact test was used for sex and caries risk, and a one-way ANOVA test was used for patient age.

TABLE 2: Comparison results of the Kruskal-Wallis test of the TMQHPI scores between groups at different time points.

TABLE 3: Descriptive statistics of TMQHPI scores at the different time points between the study groups.

TABLE 4: Comparison results of the Wilcoxon signed-rank test of dental plaque accumulation between the different time points. Abbreviations: CHX, chlorhexidine; DW, distilled water; n, sample size; PVP-I, povidone-iodine; t0, baseline; t1, at Day 3; t2, at Day 7.
|
v3-fos-license
|
2021-09-28T05:27:40.843Z
|
2021-09-01T00:00:00.000
|
237939497
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4360/13/18/3018/pdf",
"pdf_hash": "7a4c8b4c0c5f64e4c0b55ea6779aec50ce74727c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3075",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "7a4c8b4c0c5f64e4c0b55ea6779aec50ce74727c",
"year": 2021
}
|
pes2o/s2orc
|
Eight-Fold Interpenetrating Diamondoid Coordination Polymers for Sensing Volatile Organic Compounds and Metal Ions
Reactions of divalent metal salts with 4,4-oxybis(N-(pyridine-4-yl)-benzamide), L, and naphthalene-1,4-dicarboxylic acid (1,4-H2NDC) in various solvents gave [Zn(L)(1,4-NDC)·H2O]n, 1, [Cd(L)(1,4-NDC)(H2O)·MeOH]n, 2, and [Co(L)(1,4-NDC)(H2O)0.5·MeOH]n, 3, which have been structurally characterized. Complexes 1–3 show eight-fold interpenetrating frameworks with the dia topology, which exhibit porosities substantiated by CO2 adsorption, whereas 1 and 2 manifest stability in aqueous environments and show high selectivity toward sensing of mesitylene molecules and Fe3+ ions with low detection limits and good reusability up to five cycles.
Introduction
Coordination polymers (CPs) have shown potential applications in luminescent sensing, gas storage and separation, drug delivery, catalysis, ion exchange and magnetism [1][2][3][4][5]. Metal ions are linked together by organic ligands to form extended frameworks, which are often porous and extend into infinite arrays. In addition, polycarboxylates play a vital role in the organization of diverse CPs with different dimensionalities, as they offer multiple potential coordination sites that readily coordinate to metal centers. Linear and angular polycarboxylate ligands have been used extensively in the preparation of CPs in mixed-ligand systems [6,7]. Entanglement involving interpenetration, polycatenation and self-catenation is a very interesting phenomenon and a major factor contributing to the structural diversity of CPs, owing to their aesthetic and intriguing topological features [8]. Despite many such interesting CPs having been reported, the control of this structural diversity is still a challenge.
Considerable effort has been invested into understanding the entanglement of CPs containing bis-pyridyl-bis-amide (bpba) ligands [9][10][11][12][13]. Two highly interpenetrated Zn(II) and Cd(II) CPs with eight-and nine-fold interpenetrating diamondoid modes, respectively, have been reported by using the long flexible N,N -di(4-pyridyl)adipoamide ligand and the short rigid 1,4-benzenedicarboxylic acid [13]. Construction of a high-fold interpenetrating diamondoid framework with linear bidentate ligand may thus be ascribed to the length and flexibility of the spacer ligands and the geometrical arrangements of the metal centers that may modulate pore size and shape in a single network. While the entanglement may reduce the solvent accessible volume required for the molecules/metal ions adsorptions, the modulated pore size and shape may enhance the selectivity. Moreover, the tunable porosity of luminescent CPs has a great impact on the sensitivity and selectivity for the detection of volatile organic compounds (VOCs) and metal ions that are harmful to the environment and public health [14].
General Procedures
Elemental analyses of (C, H, N) were performed on a PE 2400 series II CHNS/O (PerkinElmer Instruments, Shelton, CT, USA) or an Elementar Vario EL III analyzer (Elementar Analysensysteme GmbH, Hanau, German). Infrared spectra were obtained from a JASCO FT/IR-460 Plus spectrometer with pressed KBr pellets (JASCO, Easton, MD, USA). Thermal gravimetric analyses (TGA) were carried out on an SII Nano Technology Inc. TG/DTA 6200 over the temperature range of 30 to 900 • C at a heating rate of 10 • C min −1 under N 2 (SEIKO Instruments Inc., Chiba, Japan). UV-vis spectrum was performed on a UV-2450 spectrophotometer (Dongguan Hongcheng Optical Products Co., Dongguan, China). Emission spectra for the solid samples were determined with a Hitachi F-4500 fluorescence spectrophotometer (Hitachi, Tokyo, Japan). Gas sorption measurements were conducted using a Micromeritics ASAP 2020 system (Micromeritics Instruments Co., Norcross, GA, USA). Powder X-ray diffraction patterns were carried out with a Bruker D8-Focus Bragg-Brentano X-ray powder diffractometer equipped with a CuKα sealed tube (λ α = 1.54178 Å) at 40 kV and 40 mA (Bruker Corporation, Karlsruhe, Germany).
X-ray Crystallography
A Bruker AXS SMART APEX II CCD diffractometer, which was equipped with graphite-monochromated MoKα (λ α = 0.71073 Å) radiation, was used to collect the diffraction data for complexes 1-3 [16], and the data were reduced by using the well-established computational procedures. The structure factors were treated with Lorentz and polarization corrections and an empirical absorption correction based on "multi-scan" [17]. The Direct or Patterson method was applied to locate the positions of some of the heavier atoms, and the remaining atoms were identified in several alternating difference Fourier maps and least-square refinements. Hydrogen atoms were added by using the HADD command in SHELXTL. The coordinated water molecule, O(8), of complex 2 is disordered such that two orientations of the oxygen atom can be found, and the occupancy of each orientation was set to be 0.5 and were refined isotropically. Moreover, the occupancy of O(8) of 3 was refined to be 0.5. Table 1 lists the basic information regarding the crystal parameters and structure refinement.
Crystal Structures of 1-3
Complexes 1-3 are isostructural and their single crystals conform to the monoclinic system. While the crystal structure of 1 was solved in the space group P2 1 /c, those of 2 and 3 were in P2 1 /n. Their asymmetric units are mainly composed of one divalent metal ion [Zn(II), 1; Cd(II), 2 and Co(II), 3], one L and one 1,4-NDC 2− ligands, and there is also one cocrystallized water molecule in 1, one coordinated water and one cocrystallized MeOH molecule in 2, and a half coordinated water and one cocrystallized MeOH molecule in 3, respectively. Considering the divalent metal ions as four-connection nodes and the ligands as linkers, the structures of 1-3 can be simplified as 3D nets with the (6 6 )-dia topology ( Figure 1d) revealing the 8-fold [4 + 4] interpenetration (Figure 1e), determined using ToposPro [18]. Noticeably, the diamondoid Zn(II) and Cd(II) CPs constructed from the flexible N,N -di(4-pyridyl)adipoamide and 1,4-benzenedicarboxylic acid show eight-and nine-fold interpenetrating modes, respectively [13], indicating the nature of the metal center alters the degree of interpenetration. In marked contrast, complexes 1 and 2 with the Zn(II) and Cd(II) ions, respectively, are both eight-fold interpenetrated, demonstrating that the degree of interpenetration of the diamondoid CPs in mixed systems is also subject to the changes of the shape and flexibility of the polycarboxylate and bpba ligands.
Ligand Conformations and Bonding Modes
For the bpba ligand, the two C=O (or N-H) groups that are on the same and opposite directions define the cis and trans conformations, respectively [19]. Accordingly, all of the L ligands in 1-3 adopt the cis conformation. A schematic drawing defining the C-O-C (θ) angle and dihedral angle (d) is shown in Figure 2, and the angles were calculated and listed in Table 2. Clearly, although all of the L ligands in 1-3 adopt the same cis conformation, they differ in the dihedral angles, presumably due to the different metal identity.
Moreover, the 1,4-NDC2− ligand of 1 bridges two Zn(II) ions through two carboxylate oxygen atoms from two carboxylate groups to reveal coordination mode I, while those of 2 and 3 chelate and bridge two metal ions through four carboxylate oxygen atoms (coordination mode II) and three carboxylate oxygen atoms (coordination mode III), respectively (Figure 3). Noticeably, while the Zn(II), Cd(II) and Co(II) metal centers of 1-3 adopt the distorted tetrahedral, pentagonal bipyramidal and octahedral (square pyramidal) geometries, respectively, giving rise to different coordination modes for the 1,4-NDC2− ligands, their structural topologies are not subject to the metal identities.
PXRD Patterns and Thermal Analysis
As shown in Figures S2-S4, the experimental PXRD patterns of complexes 1-3 are consistent with their corresponding simulated ones, demonstrating the purity of the bulk samples. On the other hand, their thermal gravimetric analysis (TGA) curves, recorded in a nitrogen atmosphere from 30 to 800 °C (Figure S5), display two-step weight losses involving the removal of the cocrystallized solvents and of the coordinated ligands. From Table 3, it can be seen that the starting temperature for the removal of the organic ligands of 3, 300 °C, is higher than those of 1-2, probably indicating that the framework of 3 is more robust than those of 1-2 upon heating.
Chemical Stability
To estimate the chemical stability, the PXRD patterns of complexes 1-3 in water, as well as their dehydration and rehydration products, were measured. Complexes 1-3 were first immersed into water for seven days and then dried under vacuum. Figures S6-S8 show that only complexes 1 and 2 are stable in water. Moreover, dehydration (heated at 150 °C for 1 h) and rehydration (immersed into water for 1 h) of complexes 1 and 2 gave PXRD patterns comparable to the simulated ones, indicating their stability upon solvent removal and adsorption. The stabilities of 1 and 2 in acidic and basic solutions at 50 °C for 24 h were further investigated. As shown in Figures S9 and S10, the PXRD patterns of 1 in pH 2-12 and 2 in pH 4-10 are almost identical to the simulated ones, suggesting the better acid and base resistance of 1 compared with 2.
Gas Sorption
The low-pressure gas sorption properties at 273 K and 298 K were examined for complexes 1 and 2 and confirmed by CO₂ adsorption and desorption isotherms, which show type II isotherm behavior [20,21]. As illustrated in Figures S11-S13, the experimental amount of CO₂ adsorbed at 273 K is higher than that at 298 K. While desolvated samples of 1 and 2 adsorbed 1.40565 and 1.55795 mmol/g of CO₂ at 273 K, they adsorbed 0.83070 and 1.05312 mmol/g at 298 K, respectively. The Brunauer-Emmett-Teller (BET) surface areas of 1 and 2 evaluated from the CO₂ uptake are 411.3 and 302.4 m² g⁻¹, and the Langmuir surface areas are 482.6 and 330.5 m² g⁻¹ at 273 K, with total pore volumes of 0.05 and 0.06 cm³ g⁻¹, respectively. The pore size distribution and the sorption/desorption hysteresis are presumably related to very narrow pore sizes. Calculations using nonlocal density functional theory based on the CO₂ sorption data at 273 K suggest that the pore sizes of 1 and 2 are similar, around 1.06 nm, indicating microporous materials. The PXRD patterns of the samples of 1 and 2 measured after the gas sorption experiments are shown in Figure S13c,d.
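For orientation, the way a BET surface area is commonly extracted from a handful of isotherm points can be sketched as follows. This is a generic illustration, not the routine used by the ASAP 2020 instrument software; the isotherm points are hypothetical and the CO2 cross-sectional area is an assumed literature value that should be verified before reuse.

```python
import numpy as np

def bet_surface_area(p_rel, v_ads, cross_section_nm2=0.170):
    """Linearised BET fit.

    p_rel : relative pressures p/p0 within the BET range
    v_ads : adsorbed amounts at those pressures, in mmol per gram
    cross_section_nm2 : assumed adsorbate cross-section (0.170 nm^2 is a
        value often quoted for CO2; check before reuse)
    Returns the BET surface area in m^2 per gram.
    """
    N_A = 6.02214076e23                       # molecules per mole
    y = p_rel / (v_ads * (1.0 - p_rel))       # BET transform p/(v*(p0 - p))
    slope, intercept = np.polyfit(p_rel, y, 1)
    v_m = 1.0 / (slope + intercept)           # monolayer capacity, mmol/g
    # mmol/g -> molecules/g -> area in nm^2/g -> m^2/g
    return v_m * 1e-3 * N_A * cross_section_nm2 * 1e-18

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # hypothetical points
v_ads = np.array([0.60, 0.78, 0.90, 1.00, 1.08])   # mmol/g, hypothetical
print(f"BET surface area ≈ {bet_surface_area(p_rel, v_ads):.0f} m²/g")
```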
Photoluminescence Properties
Numerous studies have shown that CPs with d10 metal centers exhibit excellent sensing properties [22][23][24][25][26]. Therefore, the solid-state emission spectra of the water-stable 1 and 2, as well as of the 1,4-H2NDC and L ligands, were investigated at room temperature, and the results are shown in Figure S14 and Table 4. The emission band of L appears at 468 nm upon excitation at 376 nm, while 1,4-H2NDC exhibits an intense emission band at 480 nm upon excitation at 280 and 370 nm, which may be attributed to intra-ligand π* → π or π* → n transitions. Because the as-synthesized complexes 1 and 2 may contain water molecules in the pores, they were first activated by immersion in methanol for two days and heating at 80 °C for 1 h to give 1a and 2a, respectively. It can be seen (Figure S15) that complexes 1a and 2a show emission wavelengths identical to those of the original 1 and 2. Moreover, the PXRD patterns (Figure S16) of 1a and 2a match quite well with those of 1 and 2, respectively, indicating no framework changes and suggesting ligand-based emissions with little contribution from the cocrystallized solvents. The different emission wavelengths of 1 and 2 are probably due to the difference in metal identity and in the coordination mode of the 1,4-NDC2− ligand. Zn(II) and Cd(II) atoms are less prone to oxidation and reduction reactions, owing to their closed-shell electronic configurations; therefore, the emissions of complexes 1 and 2, as well as of 1a and 2a, are probably due to ligand-to-ligand charge transfer (LLCT), although metal-to-ligand charge transfer (MLCT) is also possible.
Detection of Volatile Organic Compounds (VOCs)
The CO₂ adsorption studies confirm the permanent pores in the eight-fold interpenetrating CPs 1 and 2, which provide a unique opportunity to investigate the role of the entangled bpba-CPs in the sensing of abundant VOCs [26][27][28][29][30][31][32] and metal ions [33]. The activated complexes of 1 and 2, 1a and 2a, were exposed to various VOCs by immersing ground samples (30 mg) in 5 mL of the different organic solvents, which were then sonicated for 10 min, collected by centrifugation and air dried. The emission spectra of the VOC-absorbed complexes upon excitation at 350 nm were measured at room temperature (Figure 4a), revealing that the emission intensities of VOC-absorbed 1a and 2a reached their maximum for mesitylene. The PXRD patterns are shown in Figure S16, and the emission intensity enhancements are 1.91 and 1.65 for 1a and 2a, respectively. The organic uptakes were estimated from the weight losses in the TGA curves of VOC-loaded 1a and 2a (Figure S17), showing that the uptakes of mesitylene molecules per unit cell are 2.53 in 1a and 2.97 in 2a, respectively. The emission wavelengths of mesitylene-absorbed 1a and 2a show red-shifts of 13 and 11 nm, respectively, compared with solvent-free 1a and 2a (Figures S18a and S19a), indicating possible interligand couplings [34]. Mesitylene may have the strongest C-H···π interactions with the L linkers among the exposed solvents, inducing the largest bathochromic shift and intensity enhancement [35].
Recently, it has been shown that C-H···π interaction can be modulated to enhance the luminescence [36].
To estimate the selective sensing performance of 1a and 2a toward mesitylene, luminescence titration experiments were performed by adding increasing amounts of mesitylene (0-5 mL) to 1a and 2a. The volume of mesitylene and the emission intensity show a linear dependence with R² = 0.983 and 0.985 (Figures S18b,c and S19b,c). Table S1 lists the intensity enhancements of several reported complexes due to interactions with specific organic solvents, showing comparable luminescence enhancements for 1a and 2a in the detection of mesitylene molecules. Further, to determine the recyclability of mesitylene adsorption in complexes 1a and 2a, their luminescence sensing abilities were explored over five regeneration cycles. The PXRD patterns in Figure S16 indicate that the main diffraction peaks of 1a and 2a are essentially identical to those of the mesitylene-treated samples, while Figure 4b,c display their similar emission spectra and intensities, respectively, suggesting that their frameworks remain rigid over five cycles.
Detection of Metal Ions
In order to explore the potential application of 1 and 2 in luminescent sensing of metal cations, 30 mg samples of 1 and 2, respectively, were immersed into 10 mL aqueous solutions of nitrate salts M(NO 3 ) x (M = Al 3+ , Cd 2+ , Cu 2+ , Mg 2+ , Ni 2+ , Zn 2+ , Na + , K + , Cr 3+ and Fe 3+ ) with a concentration of 1 × 10 −3 M. After 1 h, the solids were filtered and then the solid-state emission spectra were measured at room temperature. As shown in Figure S20, remarkable luminescence quenching of about 95% for 1 and 92% for 2 were found in the detection of Fe 3+ ions. To further explore the quenching effect of Fe 3+ ions, sensing dependence of luminescence intensity on the concentration of Fe 3+ was investigated by immersing finely grounded samples (30 mg) of 1 and 2 into Fe 3+ aqueous solutions with various concentrations (0.005-0.15 mM) for 1 h. As shown in Figure 5b,c (left), the emission intensities were getting lower and almost completely quenched upon increasing the concentration of Fe 3+ . Quantitatively, the quenching capacity of Fe 3+ ion can be rationalized by the Stern-Volmer equation: I 0 /I = 1 + K sv × [Q], where [Q] is the concentration of Fe 3+ , K sv is the quenching constant, and I 0 and I are the emission intensities in the absence and presence of Fe 3+ , respectively [37]. As demonstrated in Figure 5b,c (right), the titration curves for Fe 3+ ions in 1 and 2 are virtually linear at low concentrations, which gave the linear correlation coefficient (R 2 ) of 0.997 for 1 and 0.975 for 2, respectively, while the S-V curves at higher concentrations became nonlinear, which can be described by concurrent dynamic and static quenching or the self-absorption process. The Stern−Volmer constant (K sv ) values are 6.895 × 10 5 for 1 and 9.940 × 10 5 M −1 for 2, respectively, in the presence of Fe 3+ .
Furthermore, the detection limits were calculated according to the standard equation 3σ/k, where σ is the standard deviation of the blank measurements and k is the absolute value of the slope of the calibration curve at low concentration [38], giving 2.35 and 1.01 µM for 1 and 2, respectively. Noticeably, complexes 1 and 2 show Ksv values comparable to those of some previously reported complexes for the detection of Fe3+ ions in aqueous solution (Table S2). The recyclability test showed no significant differences in the PXRD patterns (Figure S21) or the luminescence intensities (Figure S22) over five regeneration cycles, indicating reusability and demonstrating that the quenching effect was not due to structural decomposition of 1 and 2 upon interaction with the Fe3+ ions. Several mechanisms for luminescence quenching have been suggested, such as framework collapse, cation exchange, and interactions between the incoming metal ion and the organic linker that result in competitive absorption of the excitation energies by the sensed metal ion and the host CPs [39]. As mentioned above, comparison of the PXRD patterns and luminescence intensities rules out framework collapse. The interactions between Fe3+ ions and complexes 1 and 2 through the amide carbonyl oxygen atoms may thus lead to the luminescence quenching [40]. The UV-vis absorption spectrum of Fe3+ in aqueous solution and the corresponding excitation and emission spectra of complexes 1 and 2 are shown in Figure S23. Partial overlaps between the absorption spectrum of the Fe3+ ion and the excitation spectra of complexes 1 and 2 are observed, indicating that the excitation energies of 1 and 2 can be partially absorbed by the Fe3+ ions, and the luminescence quenching can most probably be ascribed to competitive energy absorption [39].
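As a numerical illustration of how the Stern-Volmer constant and the 3σ/k detection limit follow from titration data, the short sketch below uses invented intensities rather than the measured data for 1 and 2; the blank replicates and the fitting ranges are likewise assumptions.

```python
import numpy as np

# Hypothetical quencher concentrations (M) and emission intensities
conc      = np.array([0.0, 5e-6, 1e-5, 2e-5, 5e-5, 1e-4])
intensity = np.array([1000.0, 780.0, 640.0, 470.0, 260.0, 150.0])

I0 = intensity[0]
# Stern-Volmer plot: (I0/I - 1) versus [Q]; the slope is Ksv (linear regime)
ksv, _ = np.polyfit(conc[1:], I0 / intensity[1:] - 1.0, 1)

# Detection limit 3*sigma/k: sigma from replicate blank measurements,
# k from the slope of the low-concentration intensity-vs-concentration line
blank = np.array([998.0, 1003.0, 1001.0, 997.0, 1002.0])   # hypothetical blanks
sigma = blank.std(ddof=1)
k, _ = np.polyfit(conc[:4], intensity[:4], 1)
lod = 3.0 * sigma / abs(k)

print(f"Ksv ≈ {ksv:.2e} M^-1, detection limit ≈ {lod * 1e6:.2f} µM")
```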
Conclusions
Three eight-fold interpenetrated 3D CPs with the (6⁶)-dia topology have been successfully prepared by using the angular ligand L and the linear dicarboxylic acid 1,4-H2NDC. The Zn(II), Cd(II) and Co(II) metal centers of 1-3 adopt distorted tetrahedral, pentagonal bipyramidal and octahedral/square pyramidal geometries, respectively, resulting in different coordination modes for the 1,4-NDC2− ligands and indicating that the structural topology is not governed by the metal geometry. Their structural stabilities were assessed, particularly in aqueous environments, revealing that complexes 1 and 2 with closed-shell Zn(II) and Cd(II) ions are much more stable than 3 with the open-shell Co(II) ion. The surface areas and pore structures of 1 and 2 were characterized by CO₂ gas sorption measurements, and the activated samples of 1 and 2, explored by emission spectra and PXRD patterns, demonstrate efficient detection of mesitylene and Fe3+ ions with high selectivity and recyclability. This investigation demonstrates the sensing capability of highly entangled bpba-based CPs toward VOCs and metal ions.
|
v3-fos-license
|
2023-06-07T15:01:58.079Z
|
2023-05-24T00:00:00.000
|
259094456
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://drpress.org/ojs/index.php/jceim/article/download/8710/8483",
"pdf_hash": "42909d7464bdd92a84d64c6b77625483ebf001cb",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3076",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "35adbd468177e10a29bbf608e974a073cbb09bee",
"year": 2023
}
|
pes2o/s2orc
|
Research on anti-islanding detection technology strategy
In recent years, the conditions and environment of China's economic development have undergone, or are about to undergo, many major changes. At the same time, China's electricity consumption ranks first in the world, with a huge total volume and a considerable annual net increase. The profound impact of the new normal of economic development on the growth of China's electricity demand therefore needs to be understood dialectically. Faced with ever-increasing demand, the power industry still bears a heavy responsibility for ensuring a steady supply, and the share of electricity in energy consumption will continue to grow. The central position of electric power in the energy field is steadily rising, and power security has become one of the core elements of the new energy security strategy. In actual grid operation, instabilities occur, and a local grid fault can easily escalate into a large-scale blackout. This threatens not only everyday electricity consumption but also, to a certain extent, the national economy. Distributed generation systems alleviate this problem, but they also introduce new ones: the islanding effect brings additional hazards to grid operation. Addressing these hazards, this paper puts forward detection technology strategies for the islanding effect.
Definition and research significance of island effect
The so-called islanding effect refers to the situation in which users' distributed grid-connected generation systems (such as photovoltaic, wind or fuel-cell generation) continue to energize the local load after the utility supply has been disconnected because of a fault or scheduled outage maintenance. The islanding effect is a phenomenon unique to grid-connected generation systems and is considerably harmful: it can damage the whole distribution system and the users' equipment, and it endangers the lives of personnel maintaining the transmission lines [1].
Research on the islanding effect can be divided into two kinds: preventing the islanding effect (anti-islanding) and deliberately utilizing the islanding effect.
Mechanism and causes
The photovoltaic grid-connected system is connected to the local load and connected to the power distribution network through the gate switch. Its topology is shown in Figure 1. When the power grid is cut off, an island is formed.
Islanding can result from one or more of the following [2]: (1) the grid detects a fault and the grid-side switch trips, but the distributed generation device does not detect the fault and continues to operate; (2) accidental interruption of the normal supply due to a failure of grid equipment; (3) supply interruption caused by network maintenance; (4) operator error or deliberate damage; (5) natural disasters.

Figure 2 is the power diagram of the grid-connected photovoltaic generation system. The inverter operates in unity-power-factor sine-wave control mode, and the local load is represented by a parallel RLC circuit. During normal grid operation, the active and reactive power supplied by the inverter to the load are P and Q, and the active and reactive power supplied by the grid to the load are △P and △Q, so that Pload = P + △P and Qload = Q + △Q. When the inverter output matches the load, that is △P ≈ 0 and △Q ≈ 0 (Pload = P, Qload = Q), the voltage and frequency at the point of common coupling a change very little when the switch K on the network side trips because of line maintenance or a fault, and the inverter continues to supply the load. The photovoltaic grid-connected generation system and the surrounding load then form a self-powered island. After the island is formed, the voltage va (instantaneous value) at point a is determined by Ohm's law for the RLC load and is monitored by the inverter control system [3]. To keep the output current i synchronized with the terminal voltage va, the inverter drives the frequency of i to change until the phase difference between i and va is zero, that is, until the frequency of i reaches its one and only steady-state value, the resonant frequency f0 of the load. The reactive power balance after the grid trips can be described by the phase balance relationship θload + θinv = 0, where θinv is the phase angle by which the inverter output current leads the terminal voltage (determined by the adopted anti-islanding scheme) and θload is the load impedance angle, which for a parallel RLC load is θload = arctan[R(1/(ωL) − ωC)]. Therefore, the sufficient and necessary conditions for the islanding effect to occur are: (1) the active power supplied by the photovoltaic grid-connected generation device matches the active power of the load; (2) the reactive power supplied by the device matches the reactive power of the load, that is, the phase balance relationship θload + θinv = 0 is satisfied.
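The phase-balance condition θload + θinv = 0 can be illustrated numerically. The Python sketch below, with assumed component values that are not taken from the paper, computes the resonant frequency f0 and the impedance angle of a parallel RLC load; for a unity-power-factor inverter the islanded frequency settles where the load impedance angle is zero, i.e. at f0.

```python
import numpy as np

# Illustrative parallel RLC local load (assumed values, not from the paper).
R, L, C = 10.0, 0.1013, 80e-6          # ohms, henries, farads
f_grid = 50.0                          # nominal grid frequency (Hz)

# Resonant frequency of the parallel RLC load: f0 = 1 / (2*pi*sqrt(L*C)).
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))

def theta_load(f):
    """Impedance angle of the parallel RLC load at frequency f (radians)."""
    w = 2 * np.pi * f
    y = 1 / R + 1j * (w * C - 1 / (w * L))   # load admittance
    return -np.angle(y)                      # impedance angle = -(admittance angle)

theta_inv = 0.0   # unity-power-factor inverter: current in phase with terminal voltage

# The island frequency settles where theta_load(f) + theta_inv = 0, i.e. at f0,
# so with this load the frequency would drift from 50 Hz up toward roughly 56 Hz.
print(f"f0 = {f0:.1f} Hz")
print(f"theta_load(f_grid) = {np.degrees(theta_load(f_grid)):+.2f} deg")
print(f"theta_load(f0)     = {np.degrees(theta_load(f0)):+.2f} deg")
```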
Harmfulness
After a hazardous island forms, voltage fluctuations, frequency fluctuations and harmonics may all appear at the same time, or none of them may appear. The following problems can arise when an island is generated [4]: (1) If the island operates with an unbalanced (non-three-phase) supply, high harmonic content and unstable frequency, the islanding problem will be aggravated.
(2) The voltage phasor of the island will drift relative to the main network, which may interfere with reclosing when the grid recovers quickly.
(3) If the island is not eliminated before reclosing, asynchronous grid connection will result: if the island is reconnected while its voltage phasor differs from that of the grid, a large current is generated at the interface, damaging the inverter.
(4) It will cause safety hazards to personnel conducting maintenance work in isolated island power grid.
(5) A three-phase load may end up being supplied by the single-phase grid-connected system, causing the three-phase load to run with a missing phase and suffer damage.
Anti-island detection strategy
As noted above, the islanding effect is a phenomenon unique to grid-connected generation systems and is considerably harmful to the distribution system, to users' equipment, and to the safety of line maintenance personnel. Solving the islanding problem is therefore essential, and effective ways of preventing it must be sought. Islanding detection techniques can be divided into active detection and passive detection within the grid-connected inverter, and detection on the grid side. Passive detection does not disturb the grid and has no influence on output power quality; however, when the load matches the inverter's output power it has a large non-detection zone and a long detection time, so it needs to be combined with active detection [5].
Islanding detection is one of the indispensable protection functions of grid-connected inverters and an important factor in grid-connection safety, so appropriate and effective detection methods must be adopted. In this section, the various anti-islanding strategies are analyzed, and the active frequency drift (AFD) method is selected as the anti-islanding detection scheme for this project.
Passive detection method
The passive method detects the islanding effect from the changes in voltage, frequency, phase or harmonics at the inverter output terminal when the grid is disconnected. However, when the output power of the photovoltaic system is balanced with the local load power, the passive method loses its ability to detect the island, leaving a large non-detection zone (NDZ). Passive anti-islanding schemes for grid-connected inverters do not require additional hardware circuits or separate protective relays [6].
(1) Over/under-voltage and over/under-frequency detection scheme. All grid-connected inverters should include over/under-voltage and over/under-frequency protection. When the remote switch trips because of a fault or maintenance, if the output power (active and reactive) of the grid-connected inverter does not match the load demand, that is △P ≠ 0 and △Q ≠ 0, the voltage or frequency will drift. Once it leaves the normal range, the over/under-voltage and over/under-frequency set points specified in the system's software and hardware detect the condition, the local switch (the inverter's grid-connection switch) is tripped, and the inverter stops running, preventing an island. However, when the load matches the output power of the photovoltaic system, or △P is very small, the voltage fluctuation is too small to leave the normal range and the protection circuit cannot detect the island. When the inverter is operating, the permitted voltage and frequency windows should be set reasonably to allow for the normal fluctuations of the grid voltage and frequency [7]. For a 220 V, 50 Hz grid, the working ranges of voltage and frequency are generally 194 V ≤ V ≤ 242 V and 49.5 Hz ≤ f ≤ 50.5 Hz, respectively.
This method is simple, easy to implement and low cost. Its function is not only limited to detecting the island effect, but also can be used to protect user equipment. Other anti-island schemes that generate abnormal voltage or frequency also rely on over/under voltage and over/under frequency protection schemes to trigger the grid-connected inverter to stop working. However, there is a large non-detection area, and the response time of this scheme is unpredictable.
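A minimal sketch of this over/under-voltage and over/under-frequency check is shown below, using the 220 V / 50 Hz windows quoted above; the function name and structure are illustrative, not from any particular inverter firmware.

```python
def over_under_protection(v_rms, freq,
                          v_min=194.0, v_max=242.0,
                          f_min=49.5, f_max=50.5):
    """Return True if the inverter must trip, i.e. the RMS voltage or the
    frequency is outside the permitted window for a 220 V / 50 Hz grid."""
    return not (v_min <= v_rms <= v_max and f_min <= freq <= f_max)

# A matched-load island may keep both quantities inside the window,
# which is exactly the non-detection zone of this passive scheme.
print(over_under_protection(230.0, 50.02))   # False -> island not detected
print(over_under_protection(250.0, 50.02))   # True  -> trip on over-voltage
```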
(2) Phase jump detection scheme. The phase jump scheme detects the islanding effect by monitoring the phase difference between the terminal voltage and the output current of the grid-connected inverter. To operate at unity power factor, the grid-connected inverter normally controls its output current to be in phase with the grid voltage. A sudden change in the phase difference between the inverter terminal voltage and its output current therefore indicates that the main grid has tripped and can no longer hold the inverter terminal voltage: the phase of the terminal voltage jumps to the impedance angle of the load [8].
After the grid is cut off, that is, once an island has occurred, the voltage at point a is determined by the product of the photovoltaic system's output current and the load impedance. Because the frequency and phase of the photovoltaic system's output current are controlled by the phase-locked loop, the current and voltage are synchronized only at the voltage zero crossings; outside the crossings, the current frequency and phase are determined by the sinusoidal look-up table inside the system, and the current waveform remains sinusoidal. As shown in Figure 3, before the power failure the phase difference between the current and voltage at the zero-crossing point is zero. After the power failure an island occurs; since the current is a sinusoid of constant frequency and phase, an inductive load makes the voltage lead the current, producing a sudden change in the voltage phase, and a capacitive load likewise causes a sudden voltage phase change. Therefore, the occurrence of an island can be judged by detecting the magnitude of the phase difference between voltage and current.
However, it is difficult for this scheme to choose a threshold that does not lead to maloperation. If the threshold is set too low, it causes false tripping of the grid-connected inverter; and when the load impedance angle is close to zero, that is, the load is approximately resistive, the method fails because of the limitation of the set threshold.
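The following sketch illustrates the phase-jump idea on synthetic waveforms: the phase of voltage and current is estimated by correlation against quadrature references, and an island is flagged when the phase difference exceeds a threshold. The 10-degree threshold, the sampling rate and the signals are assumptions for illustration only.

```python
import numpy as np

def detect_phase_jump(voltage, current, fs, f_nom=50.0, threshold_deg=10.0):
    """Flag an island when the phase difference between terminal voltage and
    inverter output current exceeds a threshold (degrees)."""
    t = np.arange(len(voltage)) / fs
    ref_c = np.cos(2 * np.pi * f_nom * t)
    ref_s = np.sin(2 * np.pi * f_nom * t)

    def phase(x):
        # Phase of a sinusoid at f_nom, estimated by quadrature correlation.
        return np.arctan2(np.dot(x, ref_c), np.dot(x, ref_s))

    dphi = np.degrees(phase(voltage) - phase(current))
    dphi = (dphi + 180.0) % 360.0 - 180.0          # wrap to [-180, 180)
    return abs(dphi) > threshold_deg, dphi

# Example: inductive load after grid loss -> voltage leads current by ~20 degrees.
fs, f = 10_000, 50.0
t = np.arange(0, 0.2, 1 / fs)
i = np.sin(2 * np.pi * f * t)                      # inverter current (reference phase)
v = np.sin(2 * np.pi * f * t + np.radians(20.0))   # terminal voltage after grid trip
print(detect_phase_jump(v, i, fs))                 # (True, ~20 degrees)
```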
(3) Voltage harmonic detection scheme. The principle of the voltage harmonic detection scheme is that the grid-connected inverter monitors the total harmonic distortion (THD) of the voltage va; if the THD exceeds the permitted range, the grid-connected inverter stops running.
As shown in Figure 2, when the PV power generation system is connected to the grid, its output current harmonics will flow into the grid through the common coupling point a.
Since the network impedance of the power grid is very small, the total harmonic distortion rate of the voltage at point a is usually low. Generally, the THD of va at this time is always lower than the threshold value (THD of grid-connected inverters is generally required to be less than 5% of the rated current).
After the power grid is disconnected, the harmonic components of the PV system's output current flow into the load. Because the load impedance is usually much larger than the grid impedance, the voltage at point a (the product of the harmonic current and the load impedance) now contains large harmonics, so whether the PV system is in an islanded state can be determined by detecting the voltage harmonics or their change.
Theoretically, the voltage harmonic detection scheme can detect the island effect in a large range, without dilution effect when the system is connected with multiple inverters, and the island effect can be detected even in the case of power matching. However, in practical application, due to the existence of nonlinear load and other factors, the power grid voltage harmonics are large, and the action threshold of harmonic detection is not easy to determine, so this method has limitations.
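A small sketch of the THD-based check is given below: the THD of the terminal voltage is computed from an FFT and compared against the roughly 5% level mentioned above. The waveforms and harmonic level are synthetic placeholders.

```python
import numpy as np

def thd_percent(signal, fs, f_fund=50.0, n_harmonics=20):
    """Total harmonic distortion of a waveform, in percent of the fundamental."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)

    def mag_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = mag_at(f_fund)
    harmonics = [mag_at(k * f_fund) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / fund

# Example: clean grid-clamped voltage vs. distorted island voltage (5% threshold as above).
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
v_grid   = np.sin(2 * np.pi * 50 * t)
v_island = v_grid + 0.08 * np.sin(2 * np.pi * 150 * t)    # 8% third harmonic
print(thd_percent(v_grid, fs), thd_percent(v_island, fs))  # ~0%, ~8% -> island flagged
```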
Active Detection Method
The active island detection method deliberately perturbs the output power, frequency or phase of the inverter through its control. When the grid is operating normally, these disturbances have no noticeable effect because of the balancing action of the grid. Once the grid fails, the disturbance at the inverter output accumulates rapidly and exceeds the allowable range, thereby triggering the islanding detection circuit. This method has high detection accuracy and a small non-detection zone, but the control is more complex and the quality of the inverter output power is reduced. At present, the anti-islanding strategies of grid-connected inverters typically combine a passive detection scheme with an active detection scheme [9].
(1) Output power perturbation method. The principle of the output power perturbation method is that the inverter perturbs its output current so that its output power changes, which in turn changes the output voltage. The active current perturbation method is essentially an output power perturbation method. Without current perturbation, the inverter output current is controlled to follow the given signal vg (generally the grid signal, or a sinusoid of the same frequency and phase as the grid), so that iinv = vg. When the interference signal is added, the current reference becomes the difference between the sinusoidal signal vg and the interference signal vgi, so that iinv = vgd = vg − vgi. In the grid-connected case, if the inverter output matches the power consumed by the load and no disturbance is added, the voltage at point a does not change when the grid is cut off, which leads to the formation of an island. With the current disturbance added, the voltage at point a after the grid is cut off depends on the inverter output current and the local load.
(2) The main advantage of this scheme is that the non-detection zone is very small for a single grid-connected inverter whose local load impedance is greater than the grid impedance. If the load matches the output power of the grid-connected inverter when the grid trips, the output variation scheme disturbs the matching condition and leads to under-voltage protection. It does not inject harmonics into the power grid, but it does affect the output power of the inverter and also produces a dilution effect, whose influence grows as the number of grid-connected inverters in the local area increases. This means that the output variation scheme is only applicable to a small system with a single grid-connected inverter, and cannot work effectively in a small system with multiple grid-connected inverters or in a large system with a single grid-connected inverter.
(3) Reactive power compensation detection method. The reactive power compensation detection method is also an output power disturbance scheme; it uses an adjustable reactive power output to achieve island detection. When the system is grid-connected, the load terminal voltage is clamped by the grid voltage and is essentially unaffected by the inverter's reactive power output. Once the system enters the island state, if the reactive power output of the inverter does not match the load demand, the load voltage amplitude or frequency will change. Because the inverter's reactive output current is adjustable while the load's reactive demand is fixed for a given voltage amplitude and frequency, setting the inverter output to compensate the load's reactive power only partially prevents the islanded system from reaching reactive power balance, so the load voltage or frequency drifts continuously until it reaches the detection threshold and the existence of the island is finally determined.
(4) Sliding mode frequency shift detection method. Sliding mode frequency shift detection applies positive feedback to the phase of the grid-connected inverter's output current relative to its terminal voltage, causing a phase offset and hence a frequency offset. The frequency of the grid itself is not affected by the feedback.
In this scheme, the phase of the grid-connected inverter's output current is defined as a function of the deviation between the frequency f of the previous cycle and the grid frequency fg.

(5) Active frequency drift detection (AFD). AFD is a common island detection method based on perturbing the output frequency. Working principle of the AFD method: by controlling the inverter, the system introduces a small error △f between the frequency finv of the output voltage and the grid voltage frequency (within the range allowed by the grid-connection standard). While the grid operates normally, the corrective action of the phase-locked loop keeps the inverter working within a small frequency range. In the next power-frequency cycle the system takes finv as the baseline and again adds the set frequency error △f, so △f increases further; this process repeats until finv exceeds the limits of the grid-connection standard and the island protection acts. Figure 3 shows the control schematic diagram.
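A very simplified numerical sketch of the AFD drift is shown below. It ignores the load dynamics (effectively assuming a resistive load) and simply applies the cycle-by-cycle frequency error △f described above; the numerical values of △f and the frequency window are assumptions.

```python
def simulate_afd(f_grid=50.0, delta_f=0.3, f_limits=(49.5, 50.5),
                 grid_connected=True, max_cycles=50):
    """Iterate the AFD drift: each cycle the inverter output frequency is set
    delta_f above the frequency measured at the terminal voltage. While the grid
    is present the measured frequency stays clamped at f_grid; in an island
    (simplistic resistive-load model) the terminal voltage follows the inverter
    current, so the error accumulates cycle by cycle."""
    f_inv = f_grid
    for cycle in range(1, max_cycles + 1):
        f_measured = f_grid if grid_connected else f_inv
        f_inv = f_measured + delta_f
        if not (f_limits[0] <= f_inv <= f_limits[1]):
            return cycle          # over-frequency protection trips
    return None                   # never trips (frequency clamped by the grid)

print(simulate_afd(grid_connected=True))    # None
print(simulate_afd(grid_connected=False))   # 2 -> island detected after a few cycles
```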
However, for parallel RLC loads, AFD has detection blind spots. Suppose the load impedance angle is less than zero, that is, the load is resistive-capacitive. In the k-th cycle after the island occurs, if the phase lag of the load impedance angle and the phase advance produced by △f cancel out, the frequency and voltage do not exceed the preset thresholds and the system is unable to detect the island. A similar problem arises for a negative frequency offset with a load impedance angle greater than zero. To prevent the load characteristics from weakening the AFD detection method, the inverter output can be periodically and continuously disturbed in both the positive and negative directions, eliminating the ability of the load characteristics to balance out a single-direction frequency disturbance.
Figure 4 shows the control principle of the improved AFD island detection scheme. In the figure, cf1 and cf2 are the two disturbance signals applied in opposite directions (cf1 = 5%, cf2 = −5%), and △f1 and △f2 are the errors between fi and fg after the respective perturbation signals are applied. The scheme is easy to implement in a DSP-controlled grid-connected inverter. Although AFD can reduce the non-detection zone of the islanding effect, the current harmonics it introduces degrade the output power quality of the PV system. In addition, when multiple PV systems are connected to the grid, their effects will cancel each other if the frequency-shift directions are not consistent.
Figure 4. Detection method of the periodic-disturbance islanding effect
(6) Frequency jump detection. Frequency jump detection is a modification of the active frequency shift method in which dead zones are inserted into the output current waveform in a preset pattern, not in every cycle but every few cycles. While the grid is connected, the frequency hopping occasionally distorts the inverter output current iinv, but the grid controls the voltage waveform va. After the grid trips, the frequency-hopping scheme can either prevent a sustained island by driving the frequency shift as in the active frequency shift scheme, or detect the island by detecting that the frequency of the voltage va now follows the inverter output current waveform.
When a single grid-connected inverter adopts the frequency hopping scheme, the island detection performance is relatively effective provided a sufficiently complex current wave shape is used. Like the impedance measurement and active frequency shift schemes, this scheme suffers dilution effects in systems with multiple grid-connected inverters unless the current-waveform changes are synchronized.
(7) Voltage feedforward positive feedback disturbance detection method. In this detection method, the voltage at point a is fed forward and, after processing by a preset algorithm, the output is used as the disturbance applied to the given current amplitude.
The disturbance △U is set to its preset value every other cycle. When an island occurs, the positive-feedback behaviour of the peak voltage Ua at point a relative to the grid peak voltage Um can be analyzed for three cases:
(1) Ua > Um. If an island occurs at time t, the peak current increases in the next cycle because Ua > Um. As a result of the positive feedback, the voltage at point a keeps rising; when the voltage peak exceeds the preset threshold, the system detects the island. If △U acts, that is, △U changes from 0 to △Ud, the voltage at point a is pushed up further until it exceeds the preset voltage threshold.
(2) Ua < Um. If an island occurs at time t, the peak current decreases in the next cycle because Ua < Um. As a result of the positive feedback, the voltage at point a continues to drop; when the voltage peak crosses the preset (under-voltage) threshold, the system detects the island. If △U acts, that is, △U changes from 0 to △Ud, the voltage at point a is pushed down further until it crosses the preset voltage threshold.
(3) Ua = Um. Suppose an island occurs at time t. Because the voltages are in equilibrium, the PV system cannot detect the island from the feedback alone. However, since △U acts every other cycle (△U = △Ud), the output current of the PV system decreases, the peak voltage at point a then becomes Ua < Um, and the subsequent behaviour is the same as in case (2).
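The positive-feedback argument above can be sketched numerically as follows. The model is deliberately crude: the island is represented by a purely resistive load, and the gain, perturbation size and trip band are made-up values chosen only to show that a matched island drifts out of the voltage window once the loop gain exceeds one.

```python
def simulate_voltage_feedback(u_grid=311.0, load_r=20.0, gain=1.2, delta_u=2.0,
                              trip_band=(0.85, 1.10), islanded_at=10, cycles=60):
    """Sketch of the voltage-feedforward positive-feedback idea: the current-amplitude
    command adds a term proportional to the terminal-voltage deviation (loop gain > 1,
    so the deviation grows once islanded) plus a small perturbation every other cycle."""
    i_base = u_grid / load_r                     # current amplitude matched to the load
    u_a = u_grid
    for k in range(cycles):
        perturb = delta_u if k % 2 == 0 else 0.0
        i_cmd = i_base + (gain * (u_a - u_grid) + perturb) / load_r
        u_a = i_cmd * load_r if k >= islanded_at else u_grid   # grid clamps u_a before islanding
        if not (trip_band[0] * u_grid <= u_a <= trip_band[1] * u_grid):
            return k                             # over/under-voltage protection trips
    return None

print(simulate_voltage_feedback())   # trips roughly ten cycles after islanding (returns 20 here)
```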
The scheme is easy to implement in a DSP-controlled grid-connected inverter, and the non-detection zone is small. However, it has two shortcomings: first, the positive feedback gain slightly degrades the output power quality; second, the working efficiency of the photovoltaic grid-connected inverter is reduced. A photovoltaic grid-connected inverter normally operates at the maximum power point, but under this scheme the small changes in the amplitude of va force the inverter to reduce its power output and thus deviate from the maximum power point.
(8) Main monitoring unit with a series switching device (ENS). The main monitoring unit equipped with a series-connected switching device is in effect an automatic isolation device (ENS) that integrates several island protection schemes. The automatic isolation device consists of two independent main monitoring units in parallel; each unit has its own series-connected switching element, and the two switching elements are controlled separately. Each unit monitors the grid continuously by measuring voltage, frequency and impedance. The reliability of the scheme is improved by the redundant design and by automatic testing before each grid connection. The redundant design and the periodic self-test at start-up allow users to install the equipment without periodic checks to verify that the anti-islanding circuit is still active, and the scheme also retains the advantages of an impedance measurement scheme. Predictably, hundreds of such devices connected to the same feeder will not interfere with each other. However, because the ENS determines impedance by injecting current pulses into the grid, it can also degrade power quality and cause some system problems.
In a system with multiple grid-connected inverters, if the number of ENS devices connected to the same branch is large enough, they will eventually interfere with each other and cause false tripping. Most grid-connected inverters using this equipment therefore need to adjust the timing of the current injection to reduce mutual interference. In addition, the redundant design and the hardware circuits dedicated to measuring grid impedance add to the cost.
Anti-island detection method on the power grid side
(1) Impedance insertion detection method. Impedance insertion is implemented by installing a low-impedance element (usually a capacitor bank) within the area of the grid where an island could form. In Figure 2-8, for example, a capacitor bank is connected at point b on the grid side through a switch K that is normally open. When the grid-side switch K1 trips, switch K closes after a short delay and connects the capacitor bank. If the local load matched the output power of the inverter before the grid tripped, the additional large capacitance breaks the state of energy balance, causing a sudden change in the current-voltage phase and a sudden drop in frequency, which triggers the under-frequency protection. The short delay between the tripping of the grid-side switch K1 and the closing of the capacitor-bank switch K is necessary because the additional large capacitance might otherwise compensate an inductive load, re-establishing the power-matched state and defeating the islanding detection. If the inductive load is very large and the grid trips, a large frequency shift will occur before the capacitor bank is connected, and the short delay allows ample time for this frequency change to be detected. Another type of impedance, such as a large resistance, could in principle be used and would cause a sudden change in the voltage amplitude at node a; however, capacitors have the additional benefit of providing reactive power compensation to the grid [10].
(2) This scheme is very effective for detecting the islanding effect as long as an appropriate time delay is allowed between the grid trip and the capacitor bank insertion, ensuring that the insertion does not itself create a power balance between the grid-connected inverter and the load. In addition, the required capacitor banks are readily available and also provide reactive power compensation.
However, impedance insertion also has four serious disadvantages: (1) The capacitor bank increases the cost, making the grid-connected inverter using this scheme economically unfeasible. Meanwhile, if the installation time of multiple grid-connected inverters in the system is different, it is impossible to determine who is responsible for the cost of the capacitor bank; (2) There are multiple series switches in the power grid, all of which may lead to the occurrence of island effect, that is to say, island systems of different structures and forms may be formed, so each switch has to be equipped with a capacitor bank when adopting this scheme; (3) Because the input of capacitor banks needs to be delayed, the response speed is slower than other schemes, which may not meet the requirements of the grid connection standard on the detection time; (4) This scheme requires the capacitor bank to be installed on the grid side, which complicates the installation process, and this arrangement also has adverse effects on the grid.
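Despite these drawbacks, the basic mechanism is easy to illustrate: inserting a capacitor bank shifts the resonant frequency of the islanded load, so the frequency drifts away from its pre-insertion value and the under-frequency protection can act. The component values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def resonant_freq(L, C):
    """Resonant frequency (Hz) of a parallel LC combination."""
    return 1.0 / (2 * np.pi * np.sqrt(L * C))

# Illustrative island: load tuned near 50 Hz before the capacitor bank is inserted.
L_load, C_load = 0.1013, 100e-6
C_bank = 150e-6                         # additional low-impedance capacitor bank

f_before = resonant_freq(L_load, C_load)
f_after = resonant_freq(L_load, C_load + C_bank)
print(f"{f_before:.1f} Hz -> {f_after:.1f} Hz")   # ~50 Hz -> ~31.6 Hz, under-frequency trip
```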
Conclusion
This paper has introduced the concept and hazards of the islanding effect, analyzed and discussed its mechanism, causes and harm, and put forward three classes of anti-islanding schemes, namely the over/under-voltage (over/under-frequency) detection scheme, the reactive power compensation detection method, and grid-side anti-islanding detection. Each type of islanding detection scheme has been discussed and technically analyzed, which should provide a useful reference for the broad community of researchers. The research content of this project focuses on the characteristics of electric vehicles, integrated management of multi-energy power, and vehicle-grid interactive integration and complementarity; it establishes an integration and complementarity model for electric vehicles, the smart grid and distributed generation, studies the interactive integration strategy of charging facilities and the multi-energy smart grid, and will formulate an energy-scheduling integration mechanism for electric vehicles and the multi-energy smart grid. 3) Key Science and Technology Project of Henan Province: vehicle high-beam violation detection and capturing system, Project No. 182102310784; the main research covers the detection of high-beam violations, computer recognition, and the development of the software and hardware system for capturing the relevant information of offending vehicles. 4) Key Science and Technology Project of Henan Province, No. 222102240117: research on fusion technology of electric vehicles, smart grid and distributed power generation; the research centres on the interactive integration features of electric vehicles, distributed generation, energy storage and the power grid, targets energy management and control strategies for safety management and economic operation, establishes a security integration model for electric vehicles, the smart grid and distributed generation systems, studies the fusion strategy for charging infrastructure, grid safety and distributed generation, and formulates a security integration mechanism for electric vehicles, the smart grid and distributed power generation in multiple scenarios.
|
v3-fos-license
|
2020-06-25T09:06:33.736Z
|
2020-06-19T00:00:00.000
|
221352402
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jwcn-eurasipjournals.springeropen.com/track/pdf/10.1186/s13638-020-01777-3",
"pdf_hash": "b7e19c437220f4e9bcd9daa95d934c295582c408",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3078",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "dbd554dd225faccad64367e680745a2730bc0a61",
"year": 2020
}
|
pes2o/s2orc
|
Research on simulation technology of wireless sensor in swimming mechanics
In order to improve the quantitative level of swimming teaching, quantitative analysis of swimming mechanics is required, and a method of swimming mechanics analysis based on computer simulation technology is proposed. The ARMA model is used to analyze the mechanics of the limb movements in swimming, the Lagrange dynamics model is used to model the mechanical characteristics of the swimming movement, and the distributed characteristic parameter model of swimming dynamics is analyzed in the space of six degrees of freedom. The dynamic distribution space of the swimming motion is obtained using the forward kinematics analysis model, the inverse kinematics of the multiple degrees of freedom is solved analytically, and the global and local joint forces and force parameters in swimming are estimated, realizing the mechanical model of swimming. The simulation results show that the proposed method estimates the mechanical parameters of swimming movement accurately and can effectively guide swimming training.
Introduction
A swimming race consists of five main phases: the start, the mid-pool swim, the turns, the sprint, and the finish. As the first phase of the whole race, the start is of great significance. With the improvement of competitive swimming worldwide and the increasing intensity of competition, differences in feel for the water have different effects on athletes' performance, and only by improving their feel for the water can swimmers become more relaxed during the race. To achieve better results, this study considers two main methods for practising feel for the water: fist-clenched swimming and stroke-limited paddling practice. If a swimmer has mastered the key points of the technique but lacks a good feel for the water, the fist-clenching method can be used: the athlete clenches both fists, avoiding palm contact with the water, and uses the fists to sense the direction and strength of the current; care must be taken to control the stroke force of the fists and not make it too large [1]. By limiting the number of strokes and using the palm to magnify the resistance encountered during the stroke, the paddling method enhances the athletes' feel for the water and helps them adjust the frequency and amplitude of their own strokes. Analysis of swimming movement mechanics is therefore of great significance for improving the training level and for building swimming mechanics analysis models that refine swimming technique [2].
To carry out the kinematic analysis of swimming, a dynamic model of the body must be constructed and the body dynamics analyzed. From the perspective of modelling the lower-limb and knee movement chains, the geometric and behavioural control mechanisms of the lower- and upper-limb movement chains during swimming should be analyzed [3]. In reference [4], the kinetic energy decomposition and mechanical model of a moving arm with 7 rotational degrees of freedom are solved by a geometric method: first, the CCD algorithm is used to reduce the inverse-kinematics error between the virtual pose and the current pose, and then the mechanical structure is combined with the inverse kinematics of the human body to decompose the characteristics of the lower limbs and knees during swimming and analyze the global forces. With this approach, however, the local force pattern of each limb joint cannot be estimated, and the human motion model cannot be constructed accurately. Other work decomposes the lower limbs and knees into 6 degrees of freedom and carries out dynamic mathematical modelling and mechanical analysis, but the resulting algorithm is complex and adapts poorly [1].
Aiming to solve the above problems, this paper presents a mechanical analysis method for swimming based on computer simulation technology [3]. The ARMA model is used to analyze the mechanics of the limb movements in swimming, the Lagrange dynamics model is used to model the mechanical characteristics of the swimming movement, and the distributed characteristic parameter model of swimming dynamics is analyzed in the space of six degrees of freedom. The dynamic distribution space of the swimming motion is obtained using the forward kinematics analysis model, the inverse kinematics of the multiple degrees of freedom is solved analytically, and the global and local joint forces and force parameters in swimming are estimated, realizing the mechanical model of swimming. Finally, the performance is tested through a simulation experiment, which shows the superior performance of this method in improving the capability of swimming mechanics analysis.
Structure of human sports chain in swimming
In this paper, the Lagrange dynamic model is constructed, and the structure of the human body's kinematic chain in swimming is analyzed in six-degree-of-freedom space, combined with the construction method of the six-degree-of-freedom spatial impact dynamics model. The kinetic energy of the sagittal knee impact force in swimming is decomposed and described, and the local mechanical parameters of the knee joint are solved [5]. The global analysis of the mechanical parameters during swimming is carried out using the 7-bar driving structure, and the optimal solution vector is obtained. Firstly, the spatial position model of swimming is analyzed, as shown in Fig. 1.
In Fig. 1, the swimming venue is a standard pool, and the amplitude of the space vector of the human body entering the water, the array spacing of the spatial positions in swimming [6], the elevation angle of the body entering the water, and the azimuth θ of the waist and the left (right) arm are all correlated with the distance S. The parameters of the three rotational degrees of freedom are estimated accordingly. Because the spatial positions of water entry in swimming form a uniform linear array, it can be shown from motion-planning theory that the dynamic decomposition structure model of the swimmer's inverse kinematics is obtained. In the initialization of the mechanical decomposition, the quantity ∑_{i=1}^{M} cos φ_i is obtained, and the azimuthal distance R of the body's position and posture during entry into and exit from the water can then be deduced. With the world coordinate system constructed at the palm joint, the force on knee-joint unit i and the kinematic chain of the arm entering the water can be expressed accordingly. In the kinematic chain structure of the swimming human body, each joint can be approximated as a fixed centre of rotation [7], and the state equation of the human body's kinematic chain in swimming is written in terms of m, the mass of the force unit in swimming, P, the moment of momentum of the right foot during water entry, and θ, the rotation vector of the limb joints. This provides the model basis for the kinematic and mechanical analysis [8].
Model analysis of swimming parameters
On the basis of the above model construction and kinematic chain analysis, the kinematic parameter model of swimming is analyzed. The kinematics of swimming is examined in six degrees of freedom, and the joint-angle space is obtained by the forward kinematics algorithm, from which the kinetic energy of the body's functional segments in the water is obtained. The upper-limb mass m3 is treated as a point mass at its centre of mass, and the inverse kinematics of the multiple degrees of freedom is solved analytically; the potential energy of the swimmer after entering the water is then expressed accordingly. Under the stated constraints, the equal-position equivalent kinematic chain structure for continuous swimming using the Lagrange dynamic model is obtained, together with the energy equivalence relationship of the swimming motion. Under macroscopic conditions [9], the equivalent stress expression of the two-dimensional periodic structure for swimmers entering the water is derived. Taking the reference coordinate system of the right arm with 7 degrees of freedom at water entry, the joint vector of the right-arm elbow position in the water state is calculated, and the homogeneous coordinate transformation matrix of the swimming motion is obtained from the preceding calculations. Based on this analysis of the swimming motion parameter model, the dynamic analysis of the motion mechanics can be carried out [10].
Model improvement design and implementation
The ARMA model is used to analyze the mechanics of the limb movements in swimming, the Lagrange dynamics model is used to model the mechanical characteristics of the swimming movement, and the distributed characteristic parameter model of swimming dynamics is analyzed in six-degree-of-freedom space [11]. Using the inverse kinematics analytical model, the total joint structure z_i of swimming under six degrees of freedom is decomposed and the force components are calculated. Based on grid decomposition, the Lagrange dynamic model of the swimmer's redundant right arm in the water is used for continuous mechanical decomposition [12], and the forward and inverse kinematics equations of the left arm entering the water are obtained, in which ξ ∈ R^10 is an arbitrary vector, the pseudoinverse term gives the inverse kinematic solution for the joint velocity, and the Jacobian matrix of the swimmer's arm entering the water is J(θ) ∈ R^(6×10); from these the energy equivalence relation of the motion burst is obtained. The swimmer's waist moves in three-dimensional space relative to the inertial reference frame ∑_I of the lower extremity. The Lagrange dynamics experiment of swimming is carried out [13], and the dynamic analysis model of the ankle joint is obtained, in which p_t = [x_t, y_t, z_t]^T, and the relationship between the translation constants m_L3 and m_R3 in the three directions is expressed accordingly. Following the classical gradient method proposed by Liegeois, the swimming mechanics are predicted and the relationship between the predicted and true values is obtained. The distributed characteristic parameter model of swimming mechanics is analyzed in six-degree-of-freedom space, the dynamic distribution space of the swimming motion is obtained from the forward kinematics analysis model, and the inverse kinematics of the multiple degrees of freedom is solved analytically; the estimates of the swimming mechanical parameters and of the inertial-force characteristics are then expressed accordingly. In summary, the distributed characteristic parameter model of swimming mechanics is analyzed in six-degree-of-freedom space, and the dynamic distribution space of the swimming motion is obtained using the forward kinematics analysis model, which realizes the mechanical analysis and modelling of the swimming movement.
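As a point of reference for the redundant-arm formulation above (ξ ∈ R^10, J(θ) ∈ R^(6×10)), the sketch below shows the standard pseudoinverse solution for the joint velocities of a redundant kinematic chain, θ̇ = J⁺ẋ + (I − J⁺J)ξ. The Jacobian here is a random placeholder rather than the swimming model's actual Jacobian.

```python
import numpy as np

def redundant_ik_velocity(J, x_dot, xi):
    """General least-norm solution for a redundant kinematic chain:
    theta_dot = J^+ x_dot + (I - J^+ J) xi,
    where the null-space term (I - J^+ J) xi leaves the end-effector motion unchanged."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    return J_pinv @ x_dot + (np.eye(n) - J_pinv @ J) @ xi

# Illustrative 6x10 Jacobian for a 10-DOF arm chain (random placeholder values).
rng = np.random.default_rng(0)
J = rng.normal(size=(6, 10))
x_dot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])   # desired end-effector twist
xi = rng.normal(size=10)                               # arbitrary preference vector

theta_dot = redundant_ik_velocity(J, x_dot, xi)
print(np.allclose(J @ theta_dot, x_dot))               # True: task-space velocity is reproduced
```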
Experiment
In order to test the performance of this method in swimming mechanics analysis, a simulation experiment is carried out, implemented in Matlab 7. After the research subjects were determined, the experimental results were compared using a controlled experiment: 10 swimmers were divided evenly into a control group and an experimental group, a 200-m backstroke race was held and the times recorded, and the swimmers then trained for 3 months. During the experiment, the experimental group trained with the effective backstroke technique while the control group used the normal training method. The two groups kept the same daily training time, and the other training content and intensity remained unchanged. At the end of the experimental period, both groups swam a 200-m backstroke again; the results of the two groups were compared with each other and with their pre-experiment backstroke results. Finally, the comparison was examined to find the gap between the control group and the experimental group and to judge the effectiveness of the backstroke technique. After three months of training, the average 200-m backstroke results of the control and experimental groups were compared with those before training, and the simulation results for the mechanical parameters of the swimming movement were obtained as shown in Table 1.
Based on the results in Table 1, a descriptive statistical analysis is carried out, and the results of mechanical simulation are shown in Fig. 2.
The results of the analysis show that the backstroke results of the experimental group improved markedly, with a large time difference between the experimental group and the control group: the average backstroke result of the control group was 2.1 s slower than that of the experimental group, which shows that the effective backstroke technique works and plays an important role in improving swimmers' technique. The network transmission simulation test during the transmission of the swimming dynamics data is shown in Fig. 3, and the network packet loss rate test during the transmission is shown in Fig. 4.
There is a significant difference between the two groups; with P < 0.05 the difference is statistically significant, which shows that improving swimmers' backstroke technique improves their backstroke results. It can also be seen from the chart that the backstroke results of both the control group and the experimental group improved significantly between before and after training: the time of the control group decreased by 2.10 s and that of the experimental group by 3.90 s, a more marked improvement than in the control group. The accuracy of the estimation of the swimming mechanical parameters by different methods was also tested; the comparison results are shown in Fig. 5, and the analysis shows that the accuracy of the proposed method is higher than that of the other methods.
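The before/after comparison described above can be carried out with standard t-tests, for example as in the sketch below. The times are made-up placeholder values (only the mean improvements of 2.10 s and 3.90 s match the text), and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

# Placeholder 200-m backstroke times (seconds); not the study's actual data.
control_before = np.array([161.2, 158.7, 163.4, 160.1, 159.5])
control_after  = control_before - 2.10 + np.random.default_rng(1).normal(0, 0.4, 5)
experim_before = np.array([160.8, 162.3, 158.9, 161.7, 159.9])
experim_after  = experim_before - 3.90 + np.random.default_rng(2).normal(0, 0.4, 5)

# Within-group improvement (paired) and between-group comparison of post-training times.
t_paired, p_paired = stats.ttest_rel(experim_before, experim_after)
t_group, p_group = stats.ttest_ind(control_after, experim_after)
print(f"experimental before vs after: p = {p_paired:.4f}")
print(f"control vs experimental after training: p = {p_group:.4f}")
```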
Results and discussion
In this paper, a method of swimming mechanics analysis based on computer simulation technology is proposed. The ARMA model is used to analyze the mechanics of the limb movements in swimming, the Lagrange dynamics model is used to model the mechanical characteristics of the swimming movement, and the distributed characteristic parameter model of swimming dynamics is analyzed in the space of six degrees of freedom. The dynamic distribution space of the swimming motion is obtained using the forward kinematics analysis model, the inverse kinematics of the multiple degrees of freedom is solved analytically, and the global and local joint forces and force parameters in swimming are estimated, realizing the mechanical model of swimming. The simulation results show that the proposed method estimates the mechanical parameters of swimming movement accurately and can effectively guide swimming training. The method has good application value in the mechanical modelling of swimming and in training instruction.
Abbreviation ARMA: autoregressive moving average
Fig. 1 Human body spatial position model during swimming
Fig. 3 Network transmission simulation test during swimming dynamics transmission
Fig. 4 Network packet loss rate test during swimming dynamics transmission
Table 1 Description of simulation results of mechanics parameters
|
v3-fos-license
|
2023-08-27T15:05:12.459Z
|
2023-07-01T00:00:00.000
|
261191897
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/21582440231193939",
"pdf_hash": "0f6edbd3e5e96663ccf14b2d0bf2e3e18d494244",
"pdf_src": "Sage",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3079",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "4f31e8f0988cbede2b166ce8764825bc2c087195",
"year": 2023
}
|
pes2o/s2orc
|
A Mixed Methods Study of the Challenges and Rewards of Fatherhood in a Diverse Sample of U.S. Fathers
Fathers contribute to healthy child development, but there are limited data that provide an in-depth understanding of fathers’ perceptions of the challenges and rewards of fatherhood. We recruited 122 fathers from three different ethnic groups living in Atlanta, Georgia to conduct a mixed-methods research study on fathers’ perceptions of the challenges and rewards of fatherhood. Challenges included financial responsibilities (56%), sleep-deprivation (47%), work-family conflict (44%), negative changes in their relationship with their partner (43%), and children crying and whining (23%). Ninety seven percent of fathers agreed that having children added meaning to their life when asked. Many indicated that having children infused their life with an invaluable sense of meaning and purpose, and inspired them to become better people. The most common spontaneously mentioned rewards included witnessing developmental milestones (38%), experiencing the child as an extension of oneself (26%), witnessing children’s accomplishments (26%), and feeling loved by children (20%). In quantitative analyses, higher self-reported quality of family life was associated with living with children full-time, having fewer children, getting more sleep, having planned the first child, and minimal work-family conflict and financial stress. Finally, fathers reported increases in positive affect and decreases in negative affect over the course of the interviews, suggesting that they benefited psychologically from reflecting on and sharing their experience as fathers.
While there is abundant evidence that fatherhood impacts child development, a smaller body of research has demonstrated that fatherhood also impacts men's adult development (Kotelchuck, 2022).The act of becoming a father is a major transition in the life course (Palkovitz, 2002).Fatherhood has been shown to impact men's health, social lives, and work lives (Eggebeen D et al., 2012).Health-related changes may be both positive and negative.In one study, three-quarters of men reported positive changes to their health-related behaviors.These changes included improved diet, increased physical activity and decreased alcohol usage (Garfield et al., 2010).Another study found that the majority of expectant fathers identified as current smokers tried to quit in the past year, or indicated an intention to quit in the near future (Everett et al., 2005).Changes like these may partially explain reduced mortality rates among married men with children in the home compared with married men without children in the home (Kobrin & Hendershot, 1977;Modig et al., 2017).On the other hand, there is evidence that parents with children consume significantly more fat than adults without children (Laroche et al., 2007), and that men gain weight across the transition to fatherhood (Kotelchuck, 2022;Saxbe et al., 2018).In addition, new fathers suffer depression at twice the rate of non-fathers (Paulson & Bazemore, 2010;Paulson et al., 2006).In the social realm, marital quality tends to decrease after having children (Kowal et al., 2021;Twenge et al., 2003).On the other hand, fathers report more contact with their parents compared with men who are not fathers.Fathers are also more likely to be involved in service and community organizations, but spend less time socializing with friends, going out to bars and playing recreational sports than nonfathers (Eggebeen D et al., 2012;Gray & Anderson, 2010).In the occupational realm, there is evidence that fathers are more likely to be employed, to work longer hours, and to earn higher wages than non-fathers (Glauber, 2008;Hodges & Budig, 2010;Kotelchuck, 2022).
There has been less research on how men experience these changes that accompany the transition to fatherhood.That is, which aspects of becoming a father do men find rewarding or challenging and, importantly, how does this change across child development and how does it vary by socioeconomic status and ethnicity (Parke, 2000)?One notable exception is a detailed qualitative study of 40 American fathers (Palkovitz, 2002).This study identified several themes related to the benefits of fatherhood, including: the satisfaction of watching one's children grow, a sense of pride in one's child, love received from one's child, personal growth, expanded self, extension of the family line, fun, continued learning, additional meaning and purpose, and enhanced marriage.The study also identified several themes related to the costs of fatherhood, including: reduced personal time, personal sacrifice, financial strain, reduced marital closeness, and children growing up and not needing you as much.Overall, the men in this study reported that the benefits of fatherhood outweighed the costs, and that they viewed fatherhood as a generative process that shaped their lives for the better.
The nature and extent of paternal involvement varies considerably across cultures.Among small-scale, non-industrial societies, fathers are quite consistently involved in provisioning, but involvement in direct caregiving is highly variable (M.J. Konner, 2010).Mothers typically receive help from a variety of potential allomothers, including but not limited to fathers (Hrdy, 2009).Fathers are often more involved in caregiving when other allomothers are less available (Fouts, 2008;Meehan, 2005;Winking et al., 2009).Even among the Aka hunter-gatheres from central Africa, where fathers do more caregiving than men in any other known society, mothers hold infants far more than fathers do (Hewlett, 1991).
Among modern nation states, such as China, Russia and India, traditional fathers were strict, emotionally detached, authoritarian breadwinners who were minimally involved in caregiving.However, modern fathers in these and many other nations around the world are now becoming more involved, expressive and affectionate toward their children (Shwalb et al., 2013).Fathers in some countries deviate from the nuclear family model so common in many parts of the world.For example, distributed fathering, in which many fathers have responsibility for children other than their own, is common in South Africa (Townsend, 2013).
Due to increasing rates of divorce and non-marital childbearing, the U.S. has the highest rate of single parenting in the world (Pew Research Center, 2019).That rate is particularly high among Black Americans (United States Census Bureau, 2021).However, many nonresident fathers remain highly involved, and studies show that their children benefit from this involvement (Amato & Gilbreth, 1999;Choi & Pyun, 2014).Among resident fathers, Black fathers reported being more involved in instrumental caregiving activities like bathing, diapering and dressing their children, as well as helping with homework, compared with white and Hispanic fathers (Jones & Mosher, 2013).Due in part to discriminatory criminal justice practices, one in three Black American men will be imprisoned at some point in their lives and this also contributes to high rates of single parenthood in the Black community (Morsey & Rothstein, 2016).On the other hand, grandmothers are more often involved in raising Black children (Wilson, 2021).
Despite high rates of non-resident fathers, resident American fathers have more than tripled their time spent on childcare over the last several decades (Livingston & Parker, 2019), likely because the majority of American mothers now work a full-time job (Horowitz, 2019).As a consequence of shared breadwinning responsibilities, the current American cultural model of fathering is that the care of children should also be shared between men and women (McFadden & Tamis-LeMonda, 2013).Despite this, the U.S. government is an outlier among high income nations in not offering any paid paternity leave (C. C. Miller, 2021), and this likely places limits on paternal involvement and contributes to high levels of work-family conflict experienced by American fathers (Aumann et al., 2011).
Conceptions of fatherhood are changing in the U.S. and in other parts of the world.As women's educational outcomes have steadily improved, they have achieved access to high-paying jobs that used to be occupied by men and the gender gap in wages has decreased (M.Konner, 2015).The cumulative result is that more families are opting for mothers to be the family breadwinner and more fathers are staying at home as the primary caregiver.In the U.S., the number of stay at home fathers (SAHFs) has more than doubled in the last 20 years to the point that approximately 20% of all stay at home parents are now fathers (Lee et al., 2020).
The primary objective of the current study was to identify the challenges and rewards perceived by a diverse sample of modern American fathers by way of a semistructured qualitative interview.Conceptually, we envision that the rewards and challenges of fatherhood will vary by the developmental stage of the child, and perhaps, by the social class and ethnicity of the father.We recruited equal numbers of fathers who self-identified as Asian, Black and white fathers from the Atlanta area.We also investigated the effects of participant ethnicity and age of their children on reported challenges and rewards.This allowed us to compare across self-identified subgroups and across different developmental stages.
A secondary objective of our study was to conduct a quantitative analysis of variables associated with fathers' self-reported overall subjective quality of family life. Based on a conceptual model in which economic, physical and social stressors impact fathers' quality of life, we made several predictions about factors that would influence fathers' reported subjective quality of family life. Consistent with previous research (Beutell & Wittig-Berman, 1999; Kowal et al., 2021; Md-Sidin et al., 2010), we predicted that quality of family life would be negatively associated with factors suspected to increase work-family conflict, such as more time spent at work and a greater number of children at home. We also generated a series of novel predictions. We predicted that non-residential fathers would report lower quality of family life compared with resident fathers, since non-residential status has been associated with less paternal involvement (Amato et al., 2010; Castillo et al., 2011), although this is not always the case (Amato et al., 2010). We also expected non-residential status to be associated with a greater likelihood of relationship stress with mothers, which would decrease quality of family life. Fathers of unintended children are less likely to live with or visit their children, and are less likely to participate in caregiving activities (Lindberg et al., 2017). As such, we predicted that fathers who did not plan to have their first child would report lower quality of family life than fathers whose first child was planned. Similarly, we predicted that fathers who were experiencing financial stress would report lower quality of family life. Given the negative impact of prolonged sleep deprivation on mental health (Roberts & Duong, 2014), fathers who slept fewer hours were also predicted to report worse quality of family life. Finally, we suspected that talking about the challenges and rewards of fatherhood would be therapeutic for fathers, and that men would experience increases in positive affect and decreases in negative affect over the course of the interview.
Methods
This study was approved by the Emory University Institutional Review Board. All data were collected from 2016 to 2017.
Participants
Participants were recruited by posting physical flyers in and around the Emory University community. Interested individuals contacted research staff by phone or email. Study staff then further explained the study, and interested and qualified individuals were scheduled for an interview. Participants included 40 Asian, 41 Black and 41 white fathers. Participants were also roughly evenly distributed across four groups based on the age of their eldest child. Twenty-eight participants had an eldest child between 0 and 2 years of age, 32 had a child between 2 and 5, 31 had a child between 5 and 12, and 31 had a child between 12 and 30. Participants averaged 37.5 years of age and had between 1 and 5 children (M = 1.8). Combined years of education for the father and mother ranged from 22 to 44, with a mean of 33 (SD = 4.5). Thus, while many of the fathers in our sample were college-educated or beyond, 13 men had a high school education or less. Another 11 completed some college but did not earn a degree. Fifty-four participants were research personnel (n = 20), students (n = 15), professors (n = 6), physicians (n = 5), postdoctoral fellows (n = 5) or lawyers (n = 3) from the University.
Combined income ranged from $0 to $750,000, with an average of $104,772 (SD = $92,176). Three of the fathers were unemployed. In addition, fathers who were currently in school often had no income. Seventy-four percent of participants were married to the mother of all of their children. Eight percent of participants were married to the mother of some but not all of their children. One percent were married, but not to the mother of any of their children. The remaining participants were unmarried. Seventy-six percent of participants lived with their children full time. Twelve percent of participants lived with their children some of the time. The remaining 12% lived apart from their children. Sixty-six percent indicated that their first child was planned. Of men with more than one child, 82% had all children with the same mother. Additional demographic information, including annual income and weekly hours worked, is provided in Table 1.
Protocol
All interviews were conducted by the first author (JR) in his office on the Emory University campus. After providing written informed consent, participants completed the Positive and Negative Affect Schedule (PANAS) to characterize their current affective and emotional state. The PANAS is a self-report questionnaire that includes one 10-item scale to measure positive affect and another 10-item scale to measure negative affect. Each item is rated on a 5-point scale from 1 (not at all) to 5 (extremely) (Watson et al., 1988). After the PANAS, the interviewer asked the participant 14 demographic-related questions and recorded their answers. Afterward, participants were asked 26 open-ended interview questions regarding the challenges and rewards of fatherhood (see Supplemental Materials for questionnaires). Interviews lasted between 32 and 122 min (Mean = 72.25, SD = 17.32). After the interview, participants were asked to rate their current overall quality of family life on a 10-point Likert scale, and completed the PANAS a second time. The interviewer left the room while the participant completed both PANAS surveys to provide privacy. Upon returning to the room after the second PANAS, the interviewer compensated the participant with $100. The Likert-scale quality of family life question and the interview question, ''Did having children add meaning to your life?'' were added part-way through the study, so analyses with these variables have smaller sample sizes (n = 73 and n = 61, respectively).
Analysis
All interviews were audio recorded, transcribed, and imported into MaxQDA software. Themes were defined using a bottom-up approach in which respondents' voices guided the identification of themes. After conducting the interviews and reviewing the transcripts, the first author (JR) defined a set of themes based on challenges and rewards that were expressed repeatedly across all 122 interviews; these were defined as instances where the respondent manifestly or implicitly discussed the difficulties that come with parenting and the benefits. To allow respondents' experiences to drive the findings, we did not pre-specify these challenges and rewards, but rather let them emerge from the interviews (Guest et al., 2012; Ryan & Bernard, 2003). Once the point of theme saturation had been reached, that is, where no new expression of challenges and rewards emerged, these themes were then systematically applied to the transcripts using MaxQDA (Hennink et al., 2017). Themes related to rewards included: witnessing children's development, experiencing the child as an extension of one's self, children's accomplishments, feeling loved by children, the child's appearance and affective expressions, teaching and preparing children for the world, receiving compliments about the child, emotions surrounding the birth, and improved relationship with one's partner. Themes related to challenges included financial stress, work-family conflict, worse relationship with partner, lack of sleep, and children crying and whining. Statements that related to multiple themes were double-coded. All coding was carried out by JR and, following Kuckartz (2019), the analysis focused on describing key themes, providing illustrative quotes, and exploring the distribution of themes across socio-demographic categories. To further explore the relationship between themes and socio-demographic variables, we coded the absence/presence of each theme for each respondent, creating a binary matrix indicating whether each subject mentioned each reward theme and each challenge theme.
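To make the coding step concrete, the sketch below shows one way such an absence/presence matrix could be assembled in Python once coded segments have been exported from MaxQDA. The respondent IDs, theme labels, and table layout are hypothetical illustrations, not the study's actual export format.

```python
import pandas as pd

# Hypothetical export of coded segments: one row per coded statement,
# with the respondent it came from and the theme it was coded under.
coded_segments = pd.DataFrame({
    "respondent": ["FA1", "FA1", "FA2", "FA3", "FA3"],
    "theme": ["financial_stress", "lack_of_sleep",
              "work_family_conflict", "financial_stress", "feeling_loved"],
})

# Binary respondent-by-theme matrix: 1 if the respondent mentioned the theme
# at least once (repeated mentions collapse to presence), 0 otherwise.
theme_matrix = (
    pd.crosstab(coded_segments["respondent"], coded_segments["theme"])
      .clip(upper=1)
      .astype(int)
)
print(theme_matrix)
```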
Self-reported quality of family life was tested for associations with continuous variables using the Pearson product-moment correlation coefficient, and with dichotomous variables using two-sample t-tests. The proportion of participants mentioning each reward or challenge was compared across ethnic groups and child age groups using independent-samples Kruskal-Wallis tests.
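For illustration only, the snippet below sketches how the tests named above might be run with SciPy on a per-respondent data table; the column names and file name are placeholders rather than the study's actual variables.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("fathers.csv")  # hypothetical per-respondent table

# Continuous predictor vs. quality of family life: Pearson correlation.
r, p_r = stats.pearsonr(df["n_children"], df["family_life_quality"])

# Dichotomous predictor: two-sample t-test (Welch's form when equal variances are not assumed).
resident = df.loc[df["lives_with_children"] == 1, "family_life_quality"]
nonresident = df.loc[df["lives_with_children"] == 0, "family_life_quality"]
t, p_t = stats.ttest_ind(resident, nonresident, equal_var=False)

# Theme endorsement compared across the three ethnic groups: Kruskal-Wallis.
groups = [g["endorsed_financial_stress"].values
          for _, g in df.groupby("ethnic_group")]
h, p_kw = stats.kruskal(*groups)
```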
Challenges and Rewards of Fatherhood
Fathers are referred to by their ID number below to protect confidentiality. For example, the father with ID = 1 is referred to as FA1, etc.
Many fathers felt that it was primarily their responsibility to provide for their children, and were willing to make any necessary sacrifices.FA36 was a 47 year-old Black father with a college degree who managed a company.He had three biological children between the ages of 5 and 19.He was separated from his wife and was living with a new partner, who also had three children that he was providing for.He described the responsibility as follows, ''Yes, it's a lot of pressure.I mean, I take.I take that very seriously, and.yeah, I work sun up to sun down if I have to, but it's stressful, though.'' For some men, the responsibility of providing was allconsuming and negatively impacted their quality of life.FA118 was a 53 year-old Chinese father with a college degree who came to the U.S. as an adult, and then had two children who were now teenagers.He described the weight of financial responsibility as follows, ''I remember when I had my second boy when I took on the three jobs .I need to bring more money .I only have like four hours sleep.It's just like every day is just working .even on Saturdays I took a job.'' Several men indicated that being the sole wage earner in the family exacerbated the stress and pressure they felt to provide.For example, FA56 was a 32 year-old Black father of a 3 year-old girl who he did not currently live with.He expressed this idea as follows, ''I'm stressed out all the time, because, yeah, child support, and her mom doesn't have, like .a real job and so it's all on me, that's very stressful, so I have a job I don't really want to do because I have to do it, you know, and my job's not very fun, or fulfilling.''Work-Family Conflict.All fathers were asked if they had difficulty balancing work and family life.Fifty-four fathers (44%) answered in the affirmative.For example, FA2 was a 35 year-old white father of two children under the age of 5.He was an MD/PhD who had completed his medical residency and was in the midst of a postdoctoral fellowship.He noted explicit conflict between time spent at work and time spent with children, saying ''a challenge has been to put the family first yet make sure that work is there at a high enough level that I'm going to succeed and sometimes that means you gotta put family second, and that's been a hard concept for me to deal with.''FA114 was a 49 year-old Asian accountant with two adolescent sons who described the pressures of tax season on him and his family, ''I remember every tax season .My wife was not happy.We always have some arguments .during tax season because that pressure accumulates .When it accumulates to a certain level, maybe when I come home, I really-I don't look good.I look tired, I look really mad .And maybe when I talk to her, say something maybe I didn't, you know, sounds very well, and she has so much pressure accumulate as well.''Some fathers described being physically but not fully psychologically present with their children, as thoughts of work intruded.FA45 was a 35 year-old white MD/ PhD student who was focused on finishing his PhD, ''And it's not only the time working, but it's the attention in your mind, even if you're not working.If I spend time with my kids, thinking about work, is it time working or is it time with my kids?.''For FA8, a 26 year-old white medical student, it was school that intruded on family life, ''.sometimesI'm still like thinking about school.When we were in biochemistry .those two weeks were hard and I felt like I was missing out on my time with her even when I was with her and I 
couldn't really focus on her cause I was just stressed.'' In some cases, work intruded on family time more explicitly.Modern technology (i.e., smart-phones) made it challenging for some fathers to separate work from home.FA23, a 30 year-old Black father of two children who worked as a research administrator, provided one good example, ''like sometimes I'm with the kids, and then an email comes through and somebody's like looking for this and that.And even though I have my out-of-office up, they're still expecting me or they still need me to tell them where it is .So that can be a little bit difficult sometimes'' (FA23).
Relationship With Partner. All men were asked how having children affected their relationship with their child's mother. Fifty-three men (43%) indicated that it had a negative impact on the relationship.
Many fathers noted that the attention and affection they had previously received from their partner shifted to the child after the birth.FA118, a 53 year-old Asian father, noted that after having children, ''She is spending more time on child.Yeah, sometime he the center.I feel it, personally.I feel less care, care about me.''FA67, a 59 year-old white accountant and father of two, described this shift in more detail, ''so that first year, I mean, she really doted on me, but then my daughter was born .the focus for her became the kids . in a lot of ways, I am looking forward to him going off to college.'' Closely related to this shift in the partner's focus of attention and affection, many men noted that their relationship with their partner was strained due to a lack of time to devote to it.FA12 was a 37 year-old white physician with three children who was married to a physician.He said, ''it's been very difficult to focus on our specific romantic relationship, and to have any sort of meaningful conversation that doesn't involve childcare .and it's hard to have any specific interests outside of the children .And I think that the fatigue is a factor also that contributes to all of that, whether that's with physical or emotional intimacy.''FA15 was a 33 year-old white assistant professor and father of two children.After having a second child, he lamented, ''there is not so much time to have those conversations and learn how each other is doing.We may get 15 minutes a night, which is not enough and that has been problematic and I think we have had more emotional strain with that reduced communication.''Some fathers alluded more specifically to a loss of physical intimacy.FA1 was a 48 year-old white IT specialist and father of three children, ''.intimacy is definitely diminished with children .I mean, it's like a switch turned off.''FA3, a 34 year-old white father of two, regretted that he and his wife sometimes feel, ''more like were roommates that share a bed,'' and FA89, a 45 yearold Korean father of one boy, described his wife transforming from a ''young, attractive girl'' before having children to a ''co-worker'' afterward.
Sleep. Sleep deprivation was also a pervasive theme. All men were asked if they felt they were getting enough sleep. Forty-three men (35%) said that they were not.
The newborn period, when infants were feeding throughout the night, was a particularly challenging time for some fathers in terms of sleep.Many men said their infants slept through the night from an early age.However, this was not the case for others.Three different Asian fathers noted their struggles with sleep deprivation during this period.FA121 (age 39), ''And he is crying so we have to hold him to sleep, and if we put him to bed, in two hours he will stop crying .So either my wife or me will have to hold him.Sometimes you know we get maybe two hours sleep during the night.''FA91 (age 36) said, ''The first three months were very tough.My boy just waked up frequently -every three hours .the boy needed to be hugged for sleeping.Not good for that.Yeah, it was very hard..'' Finally, FA115 (53 years) noted, ''I was deprived of sleep almost every night, you never know when the baby would cry..'' Fathers of other ethnicities also struggled.FA60, a 53 year-old white father of two children, exclaimed ''sleep deprivation!That's a big one, that's probably number one because that was for like six years .very stressful on both of us.''Despite setting aside enough time for sleep, many men complained of disrupted, fragmented sleep and poor sleep quality.FA21, a 34 year-old Black father with a 2-year old son, described his situation as follows, ''our two-year-old is in and out of the bed with us and he's a wild sleeper so even those hours are usually broken, so it almost nothing.''Crying and Whining.Several fathers (n = 28, 23%) described stress and frustration surrounding their children's crying and whining.FA22 was a 35 year-old Asian PhD student with a 2 year-old son who woke up crying several times at night, ''so he cried really hard and he try to find us.We have to put him in our bed.That may happen like three times or four times at night so, it was pretty bad.''He went on, ''Horrible time.Nightmare.''FA24 was a 32 year-old white father and Captain in the Army who was currently a graduate student.He had four children ages 6 and under.He described his frustration with infant crying as follows, ''he's just screaming cause we try to feed him but we can't hold him.That's all he wants is to be held by mommy right now.So it's just aggravating cause he's just very loud, very, he won't give up.''FA48, a 40 year-old Black father of three, and his wife were similarly frustrated, ''High pitched noise all night, crying, but everything whining.Oh, my goodness.It was a very tough time.Even my wife.She was like 'ahh boy I am gonna fight him'.'' Infant crying was particularly challenging when inconsolable, as described by FA6, a 30 year-old white father of a newborn boy, ''when I couldn't console him when he was crying.likeI couldn't figure out the problem, I was just frustrated.''FA 50, a 33 year-old white father of two girls, similarly expressed frustration, ''She would go to bed and then wake up and cry.She would cry when you changed her clothes, gave her a bath, if she sat in a seat for too long.And you just couldn't console her -it was like screaming crying.''Some men viewed their inability to console their infant as a personal failure.FA71, a 31 year-old Black father with a newborn daughter, felt rejected by her inconsolable crying, ''When she's crying and crying and I can't get her to stop crying, it makes me feel bad like she doesn't like me or something.''Inconsolable crying made FA19, a 29 year-old Asian father with a 2 year old daughter, feel inept, ''I don't have any power.What she needed was just 
mother's hugging and also feeding . I was very disappointed with my powerlessness.'' FA3, a 34 year-old white father with two sons, including a newborn, conveyed a similar idea, ''it stresses me out a lot when I'm trying to take care of him and let her rest because he does so much better with her than with me.'' For a few men, the crying pushed them to the brink of physically harming their infant. FA13 was a 34 year-old Black father with three daughters, ''I just remember like wanting her to be quiet and I'm squeezing harder like a hug but this is like out of anger, like what am I doing. Like, this is not good. You know?'' FA57, a 56 year-old white father with five adult children, provided another example, ''you know, a couple of three times over 20 years or so, I got to where I didn't do the shaking, but I was . I mean I never succumbed, I never hurt them. But there were . maybe 3 or 4 times over the whole rearing period I might've gotten to a place where I didn't want to be,'' as did FA59, a 38 year-old Asian father of an infant boy, ''And he keep on crying. It just gets to me, you know, you almost become little bit more rough with him because you will just try to hold him down more. Say, 'Stop it.' I now can realize why people will have Shaken Baby Syndrome.''
Challenges Summary Statistics. Fathers mentioned an average of 2.1 total challenges (SD = 1.3). For three of the five challenge categories, there was no difference among the three ethnic groups in the proportion of fathers who endorsed the challenge. ''Worse relationship with partner'' was mentioned more by both white (63%) and Asian (45%) compared with Black (22%) fathers (p < .05). Financial challenges were mentioned more by white (66%) than by Asian (40%) fathers. There was a similar trend for Black (61%) fathers to mention this challenge more than Asian fathers (Supplemental Table 1). For four of the five challenge categories, there was no difference among the child age groups in the proportion of fathers who endorsed the challenge. Only ''infant crying and whining'' differed by child age group, with fathers of children in age group 1 (0-2 years old, 46%) mentioning this challenge more than those with children in each of the other age groups (age group 2 = 22%, age group 3 = 19%, age group 4 = 7%; Supplemental Table 2). To assess whether class or SES was associated with endorsement of specific challenges, we compared combined years of education between men who did and did not endorse each challenge. There was no difference in combined years of education between fathers who did and did not endorse any of the challenge themes (Supplemental Table 3).
Rewards
Meaning and Purpose. A sub-set of fathers (n = 61) were specifically asked if having children added meaning to their life. Fifty-nine (97%) answered in the affirmative. FA71, a 31 year-old Black father with a 3-month-old daughter, articulated the idea most explicitly, ''Before my wife was pregnant, I was having this on-going philosophical debate about how life HAS no meaning in the grand scheme of things on this one planet, this one universe . But after I had [my daughter], I was like, I mean everything to this kid and she means everything to me, and it's a solid meaning. You can't really toss it out the window. It's crazy. Never thought something could mean so much.'' Other men noted the sense of purpose fatherhood provided them. FA20, a 34 year-old white Ph.D. student with one infant son, stated, ''. being a dad. gave me like another sense of purpose . If all things fall apart ., but I can provide and keep him happy, I'll be okay.'' This idea was echoed by FA81, a 33 year-old Asian Ph.D. student and father of a newborn infant boy, ''if you have no other purpose in life . you can still go home and take care of your kid and that is a great thing. So, that definitely adds purpose.'' FA87 was a 45 year-old Asian father of a 15 year-old girl and a 9 year-old boy who immigrated from China 15 years ago, and now worked as a research scientist. He became overwhelmed by emotion when talking about what his children meant to him, ''I'm not in any religion, but one word from my friends . he said that having child is a prayer from. [subject crying, interviewer asks, ''from God?'' and subject affirms, ''Yeah''].
Several men said that having children motivated or inspired them to become better people. One 35 year-old white father with an infant daughter (FA92) said, ''I feel like I became a better person now that I'm a dad . Definitely think about, you know, your actions before you take them and um, something about how they would impact her.'' FA96 was a 26 year-old Black father who worked as a merchandizer. He was unmarried, but lived with the mother of his infant daughter. He discussed how having his daughter helped him shift his life in a positive direction, to stop ''partying'' and to settle his legal troubles: ''[becoming a father] gave me a reason to stop with all the bull and get my life together.'' He continued, ''I had some warrants .. and I . finally went to court and got everything situated . So now, I'm back, legit, and I got my license back and everything now.'' Some men felt a need to take fewer risks, and to take better care of themselves so that they would be available for their children. FA97 was a 23 year-old Black father of
an infant son, ''I'm not taking risks that I would, had I not had a child.I know he's gonna need me and stuff, so I gotta think about myself in order to think about him, too.''Witnessing Development.This theme was defined as being rewarded by witnessing any aspect of the child's physical, cognitive or social development, including the attainment of developmental milestones.Forty-six fathers (38%) spontaneously endorsed this theme.Several fathers expressed fascination with this process.FA5 was a 40 year-old white father of two boys who worked for the federal government, ''To me it was really fascinating to see these young babies .suddenly they can open their eyes and look around and focus, and the next day they can smile, and the next day they can sit up on their own .it's really amazing just to watch through the first 18 months of life.''FA124 was a 44 year-old Black father of two who described witnessing their development as deeply gratifying, ''Just watch them grow, it's a most beautiful thing you can see.When they take their first step.They cut they first tooth.They say they first word.Aye man, it just make your heart big.''Child as an Extension of Self.This theme was defined as the father being rewarded by feeling that his children were an extension of himself, either in the present or into the future.This theme was endorsed by 32 fathers (26%).Men expressed this theme in a variety of ways.Some were able to see aspects of themselves in their children.For example, FA77, a 35 year-old white professor and father of two young children, noted ''.you see yourself in the children.I'm raising myself .I see time bending back at me, I see, you know, an image of myself when I was a boy, .so you see your ego, you see yourself.''Others described a sense of re-birth through their children.FA29 was a 31 year-old unmarried Black father of two children who worked as a custodian, ''Like, anything that I do is for my kids, like they, having them was like me being rebirthed.Like I was reborn twice ..'' Related to the theme of rebirth, several fathers viewed raising a child as an opportunity to ''start over'' in life.They hoped that their children would lead a better life than they had, or that they would eclipse their own accomplishments.FA25 was a 30 year-old white Ph.D. student and father of two young children, ''you do get to kind of live vicariously through, I mean, kids have a clean slate. 
.They can literally be an astronaut, be the president, you have just got to guide them.It's just such a wonderful thing.You look at all the mistakes that you've made .they can really grow up to be great, great people.''Others echoed this sentiment.FA31, a 28 yearold Black father of three, had his first child at age 19 before he felt ready to be a father.For him, one of the rewards of being a dad was helping his children to, ''have a chance to be better, and make better choices than I did.''Similarly, FA21 was a 34 year-old Black father with a 2-year old son who described one of the ''biggest rewards'' of being a father as, ''having a hand in influencing someone that can influence the world.To me that's amazing, that I may never have this huge impact in the world but he has the potential to.''Similarly, FA35 was a 29 year-old Black father of four children who said, ''I want my guys to be better and do bigger things that I did, and .do better and accomplish more goals that I didn't accomplish because that would make me feel pride as a father.'' Children's Accomplishments.This theme was defined as witnessing a child's accomplishments, beyond simple developmental milestones.Perhaps in part because fathers often viewed their children as an extension of themselves or as opportunities to relive, or improve on, their own childhoods, they often reported finding their children's accomplishments rewarding (n = 32, 26%).Some fathers described this reward in a general fashion.FA36 was a 47 year-old Black father of three whose daughter attended The United States Military Academy at West Point, ''Seeing 'em accomplish something that they were working hard at and really wanted.Those are the some of the best feelings, best rewards you will ever see.''Others discussed their children's specific achievements.FA35, a 29 year-old Black father of four, focused on his children's scholarly achievements.''When your kid come, and they be like, 'Dad I got all A on my report card!', .that is the most, that's it right there'' For others, it was athletic achievements that they found rewarding.FA29, a 32 year-old Black father of two children, noted, ''. 
it was his first year playing football, he practiced 4 weeks and worked his way up to the starting team and his first game out there he got the interception, oh my god.''Feeling Loved by Children.This theme was defined as the father feeling that his child loved him.Many fathers (n = 25, 20%) noted that the simple act of being loved brought immense reward.This theme was succinctly stated by FA23, a 30 year-old white father of two boys, ''I feel like there's nothing more rewarding than-it's just having your child, someone that looks up to you and the love that they show; there's no way to really describe it.Like there's nothing that you have that's comparable.''Others agreed, stating ''to be loved by someone I love so much, yeah, that's, that's very rewarding'' and that this was true even when ''I'm not the best dad in the world, but [my children] still love me and that feels good.''(FA56).Not surprisingly then, many fathers mentioned the reward of simply hearing their child say, ''I love you'' or feeling loved by their child's actions.Several fathers described the reward they felt when they arrived home from work and their children ran excitedly to greet them.FA12, a 37 year-old physician and father of three, said, ''when you open up that door, running up to you and giving you a hug, like there's no, there's nothing that I've encountered in life that really can replicate that.''Child's Appearance and Affective Expressions.This theme was defined as being rewarded by visual cues from the child, including their appearance and their facial expressions of emotion.Several fathers (n = 19, 16%) endorsed this theme as a reward.FA111 was a 56 year-old Asian father of two children who worked as a CPA.Referring to his 5 year old daughter, he said, ''when she smiles, you know, this is the whole world to me.''FA71 was a 31 yearold Black graduate student and first-time father of a 3 month-old infant girl.He described a transformation in his feelings for his daughter when she began to smile, ''For the first month and a half, she was kind of like, I don't want to say empty, but there wasn't much there in her eyes.Like there was nothing behind there .she started smiling a couple of weeks ago.Just makes my heart fall out.''Teaching Skills, Knowledge and Values.This theme was defined as teaching the child skills, knowledge or values.Many fathers took pleasure in teaching their children (n = 19, 16%).Some emphasized the knowledge that they imparted to their children, while others referred to the values they instilled.FA2, a 35 year-old white father, delighted in successfully teaching his son the difference between left and right, when his son ''had it down cold''; ''those things are so gratifying.''FA5, a 40 year-old white father, took joy in his ability ''to teach and train and meld'' his two sons ''into the kind of person that I knew they would become.''Some fathers felt that they could give their child an advantage by passing on their hard-earned knowledge; that their child would not need to learn everything on their own from scratch.
FA37, a 29 year-old Black father of three who worked as a chef, said, ''. I wanna give him all the keys to everything that I had to figure out through life, through all these different situations . So, I felt like I had a lot to teach them.''
Relationship With the Child's Mother. This theme was defined as improved quality of the relationship between the father and the child's mother. Men were specifically asked how having a child influenced their relationship with their child's mother. Forty-four men (36%) said that it improved aspects of their relationship. Many spoke of the strong sense of partnership it created-''My daughter was the bridge between us'' (FA28), ''It was the glue. It was the cement that bonded'' (FA49)-that they were embarking together on a life project with enormous significance and that they would need to work together. Some men expressed sympathy and appreciation for what their partner had endured during childbirth, as well as increased respect, and this drew them closer. Witnessing childbirth led some fathers to have increased respect for women and their partners in particular. FA76 was a 27 year-old Black father of two young children and an amateur boxer who worked at a gym, ''actually being there, seeing my children born . it made me see her (his wife) in a different light. She's . now I see how powerful a woman is.'' FA89 was a 45 year-old Korean father with a 9-year-old son who said that before his wife gave birth, he loved her ''90%'' and respected her ''10%,'' ''but after having a baby . it's 50/50.''
Rewards Summary Statistics. Fathers mentioned an average of 2.2 total rewards (SD = 1.3). For eight of the nine reward categories, there was no difference among the three ethnic groups in the proportion of fathers who endorsed the reward. Only ''viewing the child as an extension of the self'' differed by ethnicity, with white fathers (10%) mentioning this reward less often than both Black (39%) and Asian (30%) fathers (p < .05, Supplemental Table 1). For eight of the nine reward categories, there was no difference among the child age groups in the proportion of fathers who endorsed the reward. Only ''children's accomplishments'' differed by child age group, with fathers with older children (age groups 3 (36%) and 4 (42%)) mentioning this reward more than those with children in age group 1 (4%) (p < .05, Supplemental Table 2). To assess whether class or SES was associated with endorsement of specific rewards, we compared combined years of education between men who did and did not endorse each reward. Fathers who endorsed children's accomplishments as a reward had lower combined years of education than fathers who did not endorse this reward (t(119) = 2.62, p = .01). Similarly, fathers who endorsed feeling loved by their children as a reward had lower combined years of education than fathers who did not endorse this reward (t(119) = 2.20, p = .03). There was no difference in combined years of education between fathers who did and did not endorse the other reward themes (Supplemental Table 3).
Quantitative Predictors of Self-Reported Quality of Family Life
Self-reported quality of family life (10-point Likert scale) was generally high (M = 8.3, SD = 1.4). We predicted that this variable would be negatively correlated with number of children and number of hours per week at work, since these variables would be expected to increase work-family conflict. While there was no significant correlation between quality of family life and the number of hours per week worked (r = −.05, p = .68), quality of family life was significantly negatively correlated with number of children (r = −.30, p = .01; Figure 1a). We also predicted a positive correlation between amount of sleep and quality of family life, which was confirmed (r = .29, p = .02). Contrary to predictions, the total number of endorsed rewards was not positively correlated with quality of family life (r = .15, p = .20). On the other hand, there was a marginally significant negative correlation between quality of family life and the number of endorsed challenges of fatherhood (r = −.23, p = .05) (Table 2).
As predicted, quality of family life was higher in men who lived with their children full-time versus part-time or not at all (equal variances not assumed; t(22) = 2.54, p = .02; Table 3, Figure 1b). Quality of family life was also significantly associated with the marital status of the participants. Men who were married to the mother of all of their children were significantly happier with their family life than other participants (equal variances not assumed; t(26) = 2.42, p = .02; Table 3). Also as predicted, men who endorsed financial stress as a challenge reported lower quality of family life than men who did not (t(71) = 2.95, p = .004, Table 3, Figure 1c), and the same was true for fathers who endorsed work-family conflict as a challenge (t(71) = 2.16, p = .03, Table 3). On the other hand, fathers who said their relationship with their partner improved after having children reported higher quality of family life (t(71) = 3.19, p = .002, Table 3), as did fathers who endorsed ''child as extension of themselves'' as a reward of fatherhood (t(71) = 2.17, p = .03, Table 3). Our prediction that quality of family life would be higher for men whose first child was planned (vs. unplanned) was also confirmed (t(71) = 2.90, p = .01, Table 3).
Changes in Mood and Affective State Across the Interview (PANAS Results)
Pre- and post-interview PANAS ratings were compared using paired t-tests. Fathers reported significant increases from pre to post interview in several positive affect items, including ''excited,'' ''strong,'' ''enthusiastic,'' ''proud,'' ''inspired,'' and ''determined.'' In addition, they reported significant decreases from pre to post interview in several negative affect items, including ''irritable,'' ''nervous,'' and ''jittery.'' There was no significant change in the remaining items. For comparison, we examined pre-to-post changes in PANAS ratings from a previous experiment in our lab that involved monetary compensation but did not involve an interview. This experiment involved imaging brain function with functional MRI while subjects played a dyadic social interaction task (Chen et al., 2016; Feng et al., 2015). In contrast to the current study, participants in our fMRI study reported pre-to-post experiment decreases in several positive affect items, including ''interested,'' ''excited,'' ''enthusiastic,'' ''alert,'' ''determined,'' and ''attentive,'' as well as significant decreases in several negative affect items, including ''scared,'' ''nervous,'' ''jittery,'' and ''afraid'' (Figure 2).
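Because each father provided both a pre- and a post-interview rating, the comparison for each PANAS item is a paired one. A minimal sketch of such a comparison for a single item is shown below; the column names and file name are hypothetical, not the study's actual data files.

```python
import pandas as pd
from scipy import stats

panas = pd.read_csv("panas_ratings.csv")  # hypothetical wide-format table, one row per father

# Paired t-test for one positive-affect item, post- vs. pre-interview;
# a positive t would indicate an increase over the course of the interview.
t, p = stats.ttest_rel(panas["proud_post"], panas["proud_pre"])
print(f"t = {t:.2f}, p = {p:.3f}")
```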
Challenges
Provisioning is a cross-culturally consistent responsibility of human fathers, and has likely been so throughout human history (M. J. Konner, 2010). This responsibility was felt acutely by the fathers in our sample, with many men describing significant stress and pressure related to adequately performing this role, especially those who were the sole wage-earner in their family. This may be related to the tendency of American men to endorse ideals of traditional masculinity to a greater extent than men in other Western nations (Gattario et al., 2015; Shafer et al., 2021). Indeed, traditional, hegemonic masculine ideals dictate that fathers should be breadwinners and helpers to primary caregiving mothers. However, many scholars have posited a shift in ideology among Western fathers toward more ''caring'' or ''nurturing'' masculinities that emphasize caregiving, father-child relationship quality, positive emotions, and gender equality rather than male dominance (Cherry & Gerstein, 2021; Elliott, 2016; Pleck, 2010a). Attempts to fulfill both traditional and contemporary father roles that encompass both provisioning and caregiving responsibilities have resulted in increased work-family conflict (Aumann et al., 2011; Galinsky et al., 2011; Harrington, 2022; Lewington et al., 2021; Machin, 2015; T. Miller, 2010), another commonly endorsed theme in our study. Several fathers indicated that the demands of work prevented them from spending as much time with their children as desired, and some reported thoughts of work intruding on their time with children. Work-family conflict was associated with lower self-reported quality of family life (Table 3).
Another major challenge frequently cited by fathers was the loss of physical and emotional intimacy with their partners (the child's mother). This was attributed to insufficient time and energy for spousal interactions after parenting responsibilities had been fulfilled, and to a shift in the focus of their partner's affection and attention from father to child. Some parents struggled with sleep deprivation and infant crying, especially during the newborn period when infants would wake frequently throughout the night to feed or be held. Many parents co-slept with the infant, and these fathers seemed to report fewer problems related to infant sleep and crying, which is consistent with the published literature (McKenna & McDade, 2005). A few men described extreme frustration in response to inconsolable infant crying and acknowledged transient, though unrealized, impulses to harm the infant. Inconsolable infant crying is a known trigger for Shaken Baby Syndrome, and men are more often perpetrators than women, so these are potentially dangerous impulses (Barr, 2012).
Most of these challenges showed similar distributions across ethnicity, SES, and age of eldest child. However, Black fathers were less likely to report decreased relationship quality with their partners. Black fathers were also less likely to be married to the mother of all of their children (equal variances not assumed; t(55) = −5.40, p < .001), and less likely to have planned their first child (equal variances not assumed; t(68) = −4.20, p < .001). One speculative possibility is that these unmarried couples who had unplanned children had less time to develop a close relationship with their partner before the child was born, and as such experienced less decline in relationship quality across the transition. However, Black fathers also had lower average years of education (t(119) = −5.6, p < .001), raising the possibility that lower SES drove this relationship rather than ethnicity. This possibility is supported by the fact that fathers endorsing decreases in relationship quality had a trend (p < .10) for more years of education on average than those not endorsing this theme (Supplemental Table 3). Another ethnic difference was that Asian fathers were less likely to report financial challenges. Several of the Asian fathers in our sample were immigrants from China who had grown up in, and perhaps adapted to, a lower standard of living in China. This may have led them to experience their current financial situation in the U.S. as relatively good. Finally, infant crying and whining was endorsed more often by fathers of infants than by fathers of older children, as would be expected.
The themes that were identified as challenges in this study echo themes identified in an earlier study of 40 American fathers by Palkovitz (2002). These included the financial obligations of fatherhood, loss of marital intimacy, sleep loss, and work-family conflict. More generally, Palkovitz emphasizes how the diverse demands of involved fatherhood can be at odds with the demands of the partner, worker, or community member roles, and that fathers must work to balance these competing, and ever shifting, demands.
Rewards
These challenges of fatherhood were balanced by a number of rewards. Ninety-seven percent of fathers said that having children added meaning to their life. This finding is consistent with previous research showing that parents score higher than nonparents on life meaning, even though they score lower on affective and life satisfaction measures of well-being (Umberson & Gove, 1989). It is also consistent with recent research showing that becoming a parent increases eudaimonic well-being, which refers to meaning, purpose and self-growth, in both parents. Interestingly, however, increases are reportedly larger in fathers (Brandel et al., 2018). This increase in meaning and purpose may help explain why men continue to have children despite the many challenges it entails (Shenkman & Shmotkin, 2014).
Many men considered their child to be an extension of themselves. They found it gratifying to see some of their own traits in their children, and rewarding to relive aspects of their own childhood through their children. They viewed their child as a legacy that they would leave into the future. They saw their child as a way of ''starting over'' in life, and they valued the opportunity to assist this younger version of themselves to achieve more than they had. As such, they also took great pleasure in the accomplishments of their children, and in the compliments they received about their children.
Many fathers were fascinated by their child's development and found witnessing this process to be rewarding and satisfying. Physical, psychological and linguistic development were all of great interest.
While many fathers described decreases in relationship satisfaction after having children, and this is the predominant theme in the published literature (Twenge et al., 2003), a smaller but significant number of fathers emphasized that their relationship improved after having children, either because both parents were working together toward a common and meaningful goal, or because they developed increased appreciation and respect for their partner after witnessing them overcome the challenges and demands of childbirth and motherhood.
Fathers in our study were rewarded by their child's smile, laughter, and by their behavioral and verbal expressions of love. Children have likely been designed by evolution to make adults emotionally bond with them and want to care for them. They do this by way of both their appearance and their actions (Hrdy, 2009).
Most of these rewards showed similar distributions across ethnicity, SES and age of eldest child. However, white fathers were less likely to endorse ''viewing the child as an extension of the self.'' This finding raises the possibility that white fathers are less likely to live vicariously through their children and to consider them as their legacy into the future. Fathers of other ethnicities seemed to have experienced more hardship throughout their childhood, and believed their children were likely to have a better life and achieve more than they had. This may have led them to emphasize living vicariously through their children as more of a reward. Related to this, fathers who endorsed children's accomplishments as a reward had lower combined years of education than fathers who did not endorse this reward. Again, this may relate to the perception that their children would have more opportunities to achieve than they did. Finally, children's accomplishments was endorsed more by fathers with children over the age of 5, suggesting that accomplishments in later developmental stages, such as academic and athletic accomplishments, are more salient to fathers.
As with the challenges, Palkovitz (2002) identified reward themes that overlap significantly with those outlined here. These included the amazement that fathers experience as they watch their children develop, being loved by their children, personal growth, an expanded self, increased life meaning, and, infrequently, an enhanced marriage. Palkovitz studied 40 fathers from the Eastern U.S. nearly two decades ago, 23% of whom were not white. The similarity between our results and those of Palkovitz suggests that the challenges and rewards that men experience as fathers have not changed much in the past two decades, and are consistent across geographic region and ethnicity within the U.S.
Quality of Family Life
Self-reported quality of family life was significantly associated with several variables. Fathers with more children reported lower quality of family life, likely because the demands of parenthood increase with additional children. Fathers who lived with their children full-time reported higher quality of family life than those who did not. For the latter, this probably reflects strained relationships with the child's mother, as well as challenges in arranging to spend time with the child. In addition, fathers whose first child was planned reported higher quality of family life. We might expect that men who become fathers intentionally and deliberately do so because they are amenable to the idea of family life, whereas this may not be true of men with unintended children. Financial stress and work-family conflict were very significant challenges for many men, so it is understandable that both variables were related to decreased quality of family life (Table 3). Unsurprisingly, quality of family life was also positively correlated with hours of sleep. This association might result from parenting challenges interfering with sleep, or from sleep deprivation interfering with the ability to enjoy family life.
Ethnic Group Comparisons
Previous research has described differences in ideals of fatherhood among the three ethnic groups that were the focus of this study. One study of U.S. college students compared Black, white and Asian men's perceptions of their paternal relationships. The study determined that Black students often valorized their fathers as distant role models of ''laid-back'' or ''cool'' masculinity. White students were more likely to describe their fathers as ''friends'' whom they shared activities with and asked for advice. Asian American students were more likely to criticize their fathers for excessive emphasis on breadwinning and lack of intimacy, and seemed to embrace norms of more involved fatherhood typically endorsed by white Americans (Ide et al., 2018). Other scholars have suggested that social fatherhood, which includes men who are not biological fathers but who provide significant nurturance, moral guidance and emotional support, is more common in the Black community (Connor & White, 2007). Although the above studies might lead to the expectation that there would be differences in the endorsement of themes across ethnic groups, our study was not specifically designed to expose differences in fatherhood ideology across ethnic groups. Furthermore, these ideals may now be converging, perhaps especially among certain social classes. For example, many of the Asian fathers in our study were immigrants from China, where there has been a redefinition of the fatherhood ideal to include both provisioning (yang) and raising confident, competitive, independent, happy children (jiao) (Cao & Lin, 2019), which is quite similar to modern American ideals of fatherhood. As such, it may be less surprising that there was no quantitative difference in the expression of most themes, nor were there obvious qualitative differences in the types of statements that were made. This is evidenced by our ability to extract relevant quotes from each of the ethnic groups for most themes. Thus, our data suggest that many of the challenges and rewards experienced by U.S. fathers seem to generalize across class and ethnicity.
Interview-Induced Changes in Mood and Affective State
Men experienced marked increases in positive affect and decreases in negative affect from pre to post interview. Notably, there were no increases in any positive affect items in another study from our lab that also involved financial compensation but did not involve an interview about fatherhood. Therefore, the increases in positive affect in this study are not likely attributable to financial compensation or generalized social interaction with experimenters. Three positive affect items showing increases in the current study actually showed significant decreases in our comparison study, including ''excited,'' ''enthusiastic'' and ''determined.'' A few men spontaneously offered that they found the interview therapeutic. These findings suggest that men benefit psychologically from having the opportunity to reflect upon and share their experience as a father with an attentive listener.
Limitations
One limitation of our study is the potential for bias in our sample of fathers. Fathers were recruited by posting flyers in and around the Emory University area. Emory University is an expensive private college surrounded by middle-class to affluent neighborhoods, and our sample includes a large number of highly educated fathers. Although the sample also included several lower-SES fathers from outside the University community, results may have been different if we had sampled more fathers from other parts of Atlanta. Interested fathers took the initiative to contact study personnel and express interest in the study. This may have biased our sample toward fathers who had more positive experiences with fatherhood and wanted to share those. It also may have biased the sample toward fathers who had more time outside of work and family responsibilities. In addition, since this was a paid study, our sample may have been biased toward fathers who were motivated by financial compensation. As such, our findings may not fully generalize to the population of fathers at large. Another limitation is that all interviews were conducted by the first author, who is a white male. Therefore, only white fathers were interviewed by a member of their own ethnic group. It is possible that Asian and Black fathers felt less comfortable with the interviewer and may have been less candid in revealing their perspectives and feelings about fatherhood to him. As a final limitation, we note that the large-scale nature of our study limited our ability to provide detailed qualitative exploration of the meaning of individual participants' statements in their context.
Summary and Conclusion
Paternal involvement has well-documented benefits for children. However, our study suggests that fatherhood also brings many rewards to fathers themselves, including increased meaning and purpose. In many cases, fatherhood inspires men to become better people and take fewer risks. Therefore, society is likely to benefit in multiple ways from facilitating positive paternal experiences that keep men supportively involved. Our data suggest that this can be accomplished in part by limiting excessive work demands that compete with family time, limiting financial stress, and devoting effort to sustaining physical and emotional intimacy between mothers and fathers in support of a healthy relationship. Work-family conflict could be ameliorated in the infancy period through more generous paternal leave policies. This is particularly important in the U.S., which is an outlier as the only high-income country lacking state-supported paternity leave. It is also important to recognize that fathers are more likely to physically abuse infants than mothers are, and that abuse mostly occurs in response to frustration induced by inconsolable infant crying. It is therefore important that educational programs about Shaken Baby Syndrome be particularly targeted at fathers. Sleep-deprived fathers may be at particular risk, since sleep deprivation can interfere with emotion regulation, further underscoring the need for paternal leave that might allow fathers more opportunities to sleep throughout the day. At later ages, work-family conflict could be further ameliorated by companies permitting shorter work weeks. Financial stress can be limited by programs that provide fathers with employment opportunities; most fathers consider provisioning their primary paternal responsibility and will embrace the chance to fulfill it when given the opportunity. Finally, couples could be better educated about the risk that having children poses to their marital relationship and the need to invest in sustaining some level of intimacy to preserve a healthy relationship.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by a grant from the Emory College Program to Enhance Research and Scholarship (PERS).
Figure 1 .
Figure 1. Variables associated with fathers' self-reported quality of family life. Quality of family life was: (a) negatively associated with number of children, (b) higher in fathers who lived with their children full-time, and (c) lower in fathers reporting financial stress.
Figure 2 .
Figure 2. Pre and post study PANAS ratings for: (a) the current study and (b) a previous fMRI study in our lab.
Table 2 .
Associations Between Quality of Family Life and Continuous Variables.
Bold indicates that results are significant at p < .05.
Table 3 .
Associations Between Quality of Family Life and Dichotomous Variables.
Bold indicates that results are significant at p < .05.
Presence of the Neurotoxin BMAA in Aquatic Ecosystems: What Do We Really Know?
The neurotoxin β-N-methylamino-l-alanine (BMAA) is suspected to play a role in the neurological diseases amyotrophic lateral sclerosis, Alzheimer’s disease, and Parkinson’s disease. BMAA production by cyanobacteria has been reported and contact with cyanobacteria infested waters or consumption of aquatic organisms are possible pathways to human exposure. However, there is little consensus regarding whether BMAA is present in cyanobacteria or not, and if so, at what concentrations. The aim of this review is to indicate the current state of knowledge on the presence of BMAA in aquatic ecosystems. Some studies have convincingly shown that BMAA can be present in aquatic samples at the µg/g dry weight level, which is around the detection limit of some equally credible studies in which no BMAA was detected. However, for the majority of the reviewed articles, it was unclear whether BMAA was correctly identified, either because inadequate analytical methods were used, or because poor reporting of analyses made it impossible to verify the results. Poor analysis, reporting and prolific errors have shaken the foundations of BMAA research. First steps towards estimation of human BMAA exposure are to develop and use selective, inter-laboratory validated methods and to correctly report the analytical work.
Introduction
β-N-Methylamino-L-alanine (BMAA) is a neurotoxin that has been linked to the progressive neurological diseases amyotrophic lateral sclerosis (ALS), Alzheimer's disease and Parkinson's disease [1][2][3][4]. BMAA was first discovered on the island of Guam in seeds of the cycad Cycas micronesica [5], which were used as food by the native Chamorro people [6]. As BMAA was shown to be neurotoxic [5], exposure to BMAA was considered as a possible cause of the high incidence of ALS/Parkinsonism-dementia complex (ALS/PDC) on this island [2]. However, the role of BMAA in the aetiology of ALS/PDC on Guam was heavily debated (e.g., [7][8][9]) and BMAA exposure is at present regarded as one of the possible causes of Western Pacific ALS-PDC [10].
BMAA research expanded beyond Guam after studies revealed the presence of BMAA beyond the seeds of the cycad, namely in the symbiotic cyanobacteria in the cycad's coralloid roots [11,12], as well as in free living cyanobacteria unrelated to the cycad [13]. The possibility of a global presence of BMAA, and, thus, of widespread human exposure to this neurotoxin led to the hypothesis that BMAA might be related to the global presence of neurodegenerative diseases [14]. The current state of knowledge recognizes the neurotoxicity of BMAA at the cellular and in vivo levels [1,15], but an animal model for BMAA-induced ALS is still lacking [15,16]. Recently, additional toxicity mechanisms have been proposed that might better explain the relation between BMAA exposure and the chronic nature of ALS/PDC [10,17]. Reports of BMAA in the brain of deceased patients suffering from ALS, PDC, or Alzheimer's disease support the BMAA ALS/PDC hypothesis [3,18,19]; however, these results could not be replicated by another research group [9,20,21]. Some, but not all, of the differences between these studies might be traced to the analytical procedures applied [22].
A possible important pathway for human exposure to BMAA is through cyanobacterial blooms in water or through consumption of higher aquatic organisms exposed to such blooms [13,23,24]. Recently, it was reported that planktonic diatoms and dinoflagellates also contain BMAA [25,26]. Therefore, in addition to on-going research on the role of BMAA in causing human neurodegenerative diseases, studies also focus on estimating concentrations of BMAA in aquatic ecosystems. However, reported BMAA concentrations in aquatic systems vary widely between studies. Several studies have detected BMAA in all tested cyanobacteria samples, whereas others have not detected it in any sample (Table 1). Furthermore, cyanobacterial BMAA concentrations vary by orders of magnitude between studies (Table 1). Likewise, several studies have found BMAA in higher trophic levels such as mollusks and fish [23,[27][28][29][30][31][32][33][34], but others have not [35,36]. Bioaccumulation of BMAA in higher aquatic organisms has been reported [23]. However, BMAA concentrations in the two food web studies performed so far differ greatly: those reported for the Baltic Sea (mostly ng/g dry weight (DW), [23]) were a few orders of magnitude lower than those for Florida (high µg/g up to mg/g DW [28]).
A possible explanation for the striking variations in BMAA concentrations (Table 1) could be that BMAA is produced in detectable amounts in some cyanobacteria and not in others. Concentrations of cyanobacterial secondary metabolites can vary within species, between species and between locations (e.g., [37][38][39]), but the variation within studies is usually larger than the variation between studies (e.g., [40,41]), though this is not the case for the BMAA results reported. Indeed, there is a strong bimodality in the absence/presence of BMAA in cyanobacteria samples, and analysis of similar [42,43] or comparable samples [44,45] with different methods yields different results (Table 1). This strongly suggests that factors in addition to those influencing cyanobacterial metabolite production play a role in the reported divergences in cyanobacterial BMAA concentrations. In fact, the use of non-selective analytical methods is likely a major cause of the observed differences between studies [43], as is discussed in the next section. Additionally, even in cases where the appropriate analytical techniques are used, many research articles contain reporting errors such as an incomplete description of methods and results. In this setting, it is difficult to tell when BMAA has in fact been detected, as is shown in Section 3. Furthermore, the absence of critical discussions in many studies hinders the comparison of data and findings, as is shown in Section 4.
Table 1. Reported β-N-methylamino-L-alanine (BMAA) concentrations in free living cyanobacteria. Data from studies that have tested more than five independent samples are included; free and protein associated concentrations are summarized. Merged rows represent single studies. Method abbreviations are explained in Appendix information 1.
The objective of this review is to elucidate the current state of knowledge on the presence of BMAA in aquatic ecosystems, based on studies in which appropriate analytical techniques have been employed and that were correctly reported. For this, I analyzed primary research articles on the analysis, occurrence and production of BMAA in phytoplankton and higher aquatic organisms. Moreover, in the Appendix information, I discuss some key articles on BMAA analysis, BMAA production by cyanobacteria, and human exposure through cyanobacteria to illustrate the effect of reporting errors in their context (Appendix information 2 to 6).
The main outcome of this review is that there is evidence for the presence of BMAA in aquatic organisms, but that this evidence is based on only a fraction of the published work. The assumed widespread occurrence of BMAA in aquatic ecosystems and its production by cyanobacteria could, therefore, not be verified. I find that unclear reporting and unsupported conclusions in key articles have shaken the foundations of BMAA research, an issue that needs to be tackled to determine human BMAA exposure routes and to provide a solid foundation for follow-up studies.
The Role of Analytical Methods in the BMAA Controversy
The use of different analytical methods in BMAA research has recently been discussed extensively [43,52] and is summarized in this section, as it plays an important role in explaining observed differences in BMAA concentrations.
The most selective analytical methods used for BMAA analysis are ¹H-NMR and LC-MS/MS. ¹H-NMR was only used in one study [53], but the sensitivity of this method is very low (LOD 5 mg/L). LC-MS/MS is the most frequently applied technique (Figure 1), and it is selective because it relies on four criteria for the identification of analytes (retention time, mass-to-charge ratio (m/z) of the parent ion, m/z of product ions after collision induced dissociation, and ratio between these product ions). Therefore, the chance of misidentification is minimized. In LC-MS and GC-MS, no collision induced dissociation is used and these methods are, therefore, less selective than LC-MS/MS. In LC-FLD, LC- or CE-UV and AA analysis, analytes are identified by retention time and optical signal. As only two identification criteria are used by these methods, they are regarded as non-selective. An analytical technique that is based on different principles than the ones discussed before is ELISA, in which antibodies are used for identification. Recently, an ELISA for BMAA determination in environmental samples became commercially available. However, this test was shown to be unsuitable for its intended use [54].
A recent review on analytical techniques for BMAA research [52] recommends the use of MS/MS instead of optical methods (e.g., FLD) for detection. Indeed, there is a substantial chance of misidentification and overestimation of BMAA concentrations with optical detection based methods, as has experimentally been shown for one LC-FLD method [43]. In that study, FLD analyses resulted in overestimation or false positives in three out of eight tested samples. A group of scientists argued that BMAA could be successfully separated from its isomer α,γ-diaminobutyric acid (DAB) [55] and other diamino acids [56] by a diversity of analytical methods, including those with optical detection. However, they do not properly answer the most important questions of possible misidentification by optical methods. It is unclear whether BMAA was separated from DAB in six of the ten investigated methods, because the chromatograms showing separation, or their legends, are incomplete or incorrect (Appendix information 2). Furthermore, only a narrow selection of possibly interfering compounds was tested, while in real samples many more compounds could possibly interfere (Appendix information 3). Finally, it is not clear whether BMAA was correctly separated from the tested compounds in earlier studies by these authors, as it is not explicitly stated how the presented results relate to previous work (Appendix information 2 and 3).
MS/MS is currently generally accepted as the preferred detection technique for BMAA analysis, but when combined with LC separation, this technique also has its drawbacks. A main concern with LC-MS and LC-MS/MS analyses is the possible loss of signal by ion suppression [22,57], when sample components other than the analyte decrease (and in some cases also enhance) the analyte signal [58]. The severity of this effect should, therefore, be estimated and reported for each LC-MS(/MS) method [22].
To enhance its compatibility with different analytical methods, BMAA is sometimes derivatized. Derivatization is used to change the properties of BMAA, e.g., to enhance its volatility for GC-MS analysis, to add chromophores for optical detection or to reduce polarity for reversed phase LC separation. While derivatization adapts analytes to each technique, it does not necessarily influence the selectivity of each method. Therefore, the observed differences in cyanobacterial BMAA concentrations are not related to whether or not samples were derivatized (Table 1). Nor can the observed differences be explained by a lack of sensitivity, as the detection limits of most methods with which no BMAA was detected were generally below the average concentrations found by others (Table 1).
Optical detection methods are still used in BMAA research (Figure 1). However, in the context of European guidelines for pesticide residue analysis, these techniques are only acceptable for frequently found residues, and always in conjunction with additional confirmatory methods, while more selective methods like MS/MS are preferred [59]. Identification by single MS is only regarded as reliable when two or more diagnostic ions are used [59].
Review of Reported Methods and Results
As detailed above, BMAA can only be reliably detected if the appropriate methods are used. Furthermore, for results to be clear and comparable, it is essential to report methods and results adequately. Here I evaluate the methods and results sections of studies on BMAA detection in aquatic ecosystems. For each method, I checked whether the following basic information was well reported: sample origin and storage, sample processing, sample analysis, method performance and BMAA identification.
Sample Origin and Storage
Sample origin and storage conditions were well described in nearly half of the performed analyses (Figure 2). Most of the studies for which no data on sample origin and storage were provided focused on method development, but five studies focused on BMAA detection in cyanobacteria [13,[60][61][62][63]. Especially for these latter studies, information on sample origin and storage is required for the right interpretation of the detected BMAA concentrations, as cyanobacterial amino acid and toxin concentrations can change with changing growth conditions (e.g., [37,38,64,65]). Incomplete description of sample origin in combination with incomplete data presentation can undermine the conclusions of a study. For instance, in a study on exposure of Gulf War veterans to BMAA [60], samples were taken from different locations in the Qatar desert. It is unclear how many samples were taken at each location, and which of these samples contained BMAA. The amount of BMAA in each sample was also not reported. As it is unclear which locations in the Qatar desert contain which amount of BMAA, it is difficult to estimate the possible exposure of the veterans. The suggestion that BMAA exposure through desert dust may be linked to the increased incidence of ALS in Gulf War veterans is, therefore, not supported by the presented data (Appendix information 4).
Sample Processing
Essential information on sample processing was lacking for half of the analyses (Figure 2), and it was impossible to estimate the workup efficiency for most of these analyses. Most details were lacking on volumes and weights during sample processing and on the derivatization protocol (Appendix information 7). Information on volumes and weights is required because volume-to-weight ratios partly determine extraction efficiencies, and the amount of sample injected in LC-MS(/MS) can influence the signal strength during analysis. Furthermore, derivatization efficiency is also dependent on the sample/reagent ratio [52,66]. The derivatization procedure was only sufficiently described sixteen times, while derivatization was used in 49 analyses (Appendix information 7). For some analyses, an estimation of the total sample processing efficiency (including the derivatization step and/or cleanup) could be derived from the use of internal standards and/or recovery data (e.g., [29,44,45,67,68]). For most analyses, however, it remains unclear whether derivatization was efficient and, therefore, whether BMAA concentrations were correctly determined or underestimated.
Sample Analysis
In most studies the sample analysis was well described (Figure 2), but information on quantification was often missing (Appendix information 7). For 18 out of the 43 analyses in which BMAA concentrations were determined, it was unclear how this was done. Quantification can be performed in different ways, e.g., against a calibration curve of pure standards or spiked matrices, and with or without correction for internal standards or recoveries. Different methods of quantification can give different results, so this information is essential to allow comparison of studies.
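To make concrete how the choice of quantification approach can shift a result, the sketch below works through one hypothetical peak area with two common approaches; all numbers, including the response factor and internal-standard values, are invented for illustration and are not taken from any of the cited studies.

```python
# Illustrative sketch (hypothetical numbers): the same BMAA peak area can yield
# different concentrations depending on the quantification approach used.

def external_calibration(peak_area, slope, intercept=0.0):
    """Concentration from a calibration curve of pure standards (no correction)."""
    return (peak_area - intercept) / slope

def internal_standard_corrected(peak_area, is_area, is_conc, response_factor=1.0):
    """Concentration corrected with a co-analyzed internal standard (e.g., labelled BMAA)."""
    return (peak_area / is_area) * is_conc / response_factor

sample_area = 12_000          # BMAA peak area in the sample (hypothetical)
curve_slope = 1_500           # area per (µg/L), from pure standards (hypothetical)
is_area, is_conc = 8_000, 10  # internal standard: observed area and spiked µg/L

print(external_calibration(sample_area, curve_slope))               # 8.0 µg/L
print(internal_standard_corrected(sample_area, is_area, is_conc))   # 15.0 µg/L
```

The point of the sketch is only that the two approaches can disagree by a factor of two or more for the same raw signal, which is why omitting the quantification procedure makes results hard to compare.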
When analytical methods are poorly described, they cannot be reproduced by other scientists and the results of the study cannot be validated by independent replication. For instance, some methods could not be reproduced because it was unclear how the elution programs were performed [25,69], and for another study, it was even impossible to tell which analytical procedure had been followed [33].
Method Performance
Method performance and validation data are used to show that the applied method is suitable for its intended purpose [70]. However, for most analyses these data were incomplete or missing (Figure 2). For instance, recovery was only correctly reported for one third of the methods (Appendix information 7). Unless internal standards are used, recovery data should be used to correct the analyzed BMAA concentration for possible losses during processing and/or analysis. One of the studies for which no recovery data are available is the study on BMAA concentrations in the Baltic food web [23]. The cyanobacterial BMAA concentrations found in this study are far lower than those found in other studies (Table 1). The validity of these results cannot be evaluated, because recovery data are neither given in the article in which the study is described [23], nor in the methodological article that preceded this study [27].
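As a worked example of why missing recovery data matter (the numbers below are hypothetical and purely illustrative), a measured concentration is corrected for incomplete recovery as follows:

```latex
% Recovery correction (hypothetical numbers): a measured 0.6 µg/g DW at 75% recovery
% corresponds to an actual concentration of 0.8 µg/g DW.
C_{\mathrm{corrected}} = \frac{C_{\mathrm{measured}}}{R}
                       = \frac{0.6\ \mu\mathrm{g/g\ DW}}{0.75}
                       = 0.8\ \mu\mathrm{g/g\ DW}
```

Without the recovery R, the measured value cannot be placed on the same footing as values from studies that did correct for losses.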
For sixteen methods, most data needed for method validation (detection limits, linear range, precision and recovery) were provided [14,27,28,32,34,36,43,45,47,50,53,68,71,72]. However, unvalidated methods (or methods for which insufficient validation data were provided) were repeatedly referred to as 'validated'. This was mainly the case for one LC-FLD method [14,28,30,46,55,56,62]. In addition, it was also stated that 'the' 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC) based method for BMAA analysis has been validated by other methods [57]. However, it is unclear what 'the' AQC based method is, as AQC derivatization has been used in combination with many different analytical methods (e.g., Table 1). In the only studies where multiple AQC methods were quantitatively compared, there was a discrepancy in results between the AQC LC-FLD and the AQC LC-MS/MS method [43] and between two AQC LC-FLD methods [42].
BMAA Identification
Correct identification of BMAA can be shown by comparing a sample chromatogram (for methods using optical and mass spectrometry detection) or spectrum (mass spectrometry) to that of a BMAA standard or a sample spiked with BMAA. In 27 out of 65 methods, chromatograms or spectra provided enough information to prove that BMAA was correctly identified (Appendix information 7). For 21 methods, BMAA identification could not be verified because no (six methods) or only one chromatogram/spectrum was shown (fifteen methods, Appendix information 7) and the response of a sample could not be compared to that of a standard. For other methods, chromatograms were incorrectly displayed.
Figure 3. (Panels A, B and C; chromatograms and spectra reprinted from [44], with permission from Elsevier.)
For LC-MS/MS analysis, the four analyte identification criteria (retention time, m/z of the parent ion, m/z of product ions after collision induced dissociation and ratio between these product ions) should be shown to be the same between a BMAA standard and BMAA detected in a sample. However, for many LC-MS/MS analyses this was not correctly demonstrated: none of the studies that used LC-MS/MS to confirm positive findings by LC-FLD [14,24,28,30,46,60,62,63,73] reported the LC-MS/MS identifications correctly. LC-MS/MS identification was only correctly presented in studies that used LC-MS/MS as their primary method [25,27,31,32,34,43,[47][48][49]68,71,74]. Examples of incorrectly displayed LC-MS/MS identifications are spectra of standards and samples acquired at different collision energies [28,60], which makes them incomparable. Furthermore, the ratio of the product ions between the BMAA standard and the sample differed in two studies [26,30], in other studies the integration method used for the different product ions was inconsistent [29,73] and different BMAA retention times between spiked pure water and a spiked cyanobacterial extract were shown without explanation [72].
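For illustration only, the following sketch spells out what a full four-criteria LC-MS/MS identification check amounts to; the tolerances, retention times and product-ion values are hypothetical placeholders rather than values from any specific published method.

```python
# Illustrative sketch (hypothetical tolerances and values): BMAA is only considered
# identified when all four LC-MS/MS criteria match between standard and sample.

def identified(standard, sample, rt_tol=0.1, ratio_tol=0.2):
    same_rt = abs(standard["rt"] - sample["rt"]) <= rt_tol
    same_parent = standard["parent_mz"] == sample["parent_mz"]
    same_products = set(standard["product_mz"]) == set(sample["product_mz"])
    # Relative tolerance on the product-ion ratio (e.g., ±20% of the standard's ratio)
    same_ratio = abs(standard["ion_ratio"] - sample["ion_ratio"]) <= ratio_tol * standard["ion_ratio"]
    return all([same_rt, same_parent, same_products, same_ratio])

std = {"rt": 11.2, "parent_mz": 119, "product_mz": (102, 88, 76), "ion_ratio": 2.5}
smp = {"rt": 11.25, "parent_mz": 119, "product_mz": (102, 88, 76), "ion_ratio": 3.6}

print(identified(std, smp))  # False: the product-ion ratio deviates too much
```

Spectra acquired at different collision energies, or inconsistent integration of product ions, break exactly this comparison, which is why such identifications cannot be verified from the published figures.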
There are also problems with the proof of correct identification of BMAA in the only GC-MS study on cyanobacterial BMAA concentrations [44]. Chromatograms of standards, spiked, and unspiked samples were provided, but the peaks in the unspiked samples that were attributed to BMAA did not exceed the noise level (Figure 3B,C). Furthermore, the spectrum of the standard differed from the spectra of the samples (Figure 3), so additional fragments could not be used to confirm presence of BMAA, as is required in single MS analysis [59].
Bias through Selective Literature References and Lack of Discussion
As described in the previous sections, for many studies it is unclear whether BMAA is correctly detected and quantified. In addition, literature interpretation is hindered by a lack of critical reflection on the quality and limitations of some studies. Moreover, certain studies selectively cited only literature on positive findings of BMAA, thereby creating a biased view of the subject.
Selective Use of References
The selective use of references has in some studies resulted in a biased view towards positive findings of BMAA. For the first articles on BMAA in cyanobacteria [13,24,44], knowledge on presence of BMAA in cyanobacteria was limited and the results of these studies were in agreement with each other. However, results started to differ when no BMAA was detected in subsequent studies [47,53,75]. As a consequence, a public discussion on the suitability of the applied analytical methods and the correctness of results started (e.g., [47,49]). These conflicting data and/or methodological issues were mentioned in most articles from 2009 and later, but were ignored by some (e.g., [28,30,35,56,61,63,73]). Especially in studies in which optical detection was used as the primary analytical method and in which high BMAA concentrations were found using these methods [30,42,50,69], no reference to the debate on concentrations and methods was made.
Also in an experimental study on method performance [57], a biased view was created by selective referencing to previous research. In this study, adduct formation in underivatized LC-MS analysis was investigated and it was concluded that for this type of analysis, adduct formation and complex formation may lead to an extreme underestimation of BMAA concentrations. However, the authors do not adequately discuss the underivatized LC-MS or LC-MS/MS publications in which complex and adduct formation do not seem to play a major role. Neither do they refer to the only study in which underivatized and derivatized LC-MS/MS analyses were directly compared, and in which underivatized LC-MS/MS performance was slightly better than derivatized LC-MS/MS performance. Finally, the authors recommend using derivatized LC-MS analysis with LC-FLD as a confirmatory technique, but do not discuss the points raised against the use of LC-FLD for BMAA analysis in several other publications (Appendix information 5).
Discussion of Quality and Limitations of the Study
In most articles, presented work was not critically discussed and limitations of the study were rarely addressed, which contributes to uncertainty about the validity of some results. For instance, one group published an article on derivatization optimization [66], which was followed by a methodological article on SPE and LC-MS/MS analysis of samples [27]. This method was subsequently used for a food web study [23]. In 2012, the same group published two more methodological articles, one on separation of BMAA from isomers [31] and one describing quantification by LC-MS/MS [68]. These articles have greatly contributed to aquatic BMAA research, but at the same time give rise to some questions. For instance, why was an optimum ratio between sample protein and derivatization reagent advised in the first study [66], and why was this ratio far exceeded in the second study [27]? Furthermore, the LC-MS/MS method was adjusted in 2012 because the ratio between product ions used for BMAA identification in samples did not always correspond to the ratio in a BMAA standard [31]. If these ratios do not correspond, it is uncertain whether BMAA is present in the samples. It was concluded that the difference in ratios might have been caused by an interfering isomer, but the question whether this interference was also present during the food web study [23], and therefore whether BMAA was correctly identified in that study, was not addressed. Finally, a subsequent article by this group described an optimized LC-MS/MS method that could be used for quantification [68]. Sensitivity was improved in comparison to the first published method when expressed as fmole/injection (70 in [27] and 4.2 in [68]), but it is not discussed why, when expressed in µg/g dry weight, this method was a hundred times less sensitive (LOD of 0.1 µg/g DW) than the first method [27], by which a concentration of 0.001 µg/g DW had been detected [23].
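The two sensitivity figures are linked by the sample workup, which is not spelled out here; the sketch below shows, with hypothetical workup parameters, how an instrumental LOD in fmol per injection translates into a method LOD in µg/g dry weight and why the latter depends strongly on sample mass and extract volume.

```python
# Illustrative sketch (hypothetical workup parameters): converting an instrumental
# LOD in fmol/injection into a method LOD in µg/g dry weight.

MW_BMAA = 118.13  # g/mol

def lod_ug_per_g_dw(lod_fmol_per_injection, injection_volume_ul,
                    extract_volume_ul, sample_mass_mg):
    lod_fmol_per_ul = lod_fmol_per_injection / injection_volume_ul
    lod_fmol_total = lod_fmol_per_ul * extract_volume_ul      # in the whole extract
    lod_ug = lod_fmol_total * 1e-15 * MW_BMAA * 1e6           # fmol -> mol -> g -> µg
    return lod_ug / (sample_mass_mg / 1000.0)                 # per g dry weight

# The same 4.2 fmol/injection translates into very different µg/g DW LODs depending
# on how little sample is extracted and how much the extract is diluted.
print(lod_ug_per_g_dw(4.2, injection_volume_ul=10, extract_volume_ul=1000, sample_mass_mg=10))
print(lod_ug_per_g_dw(4.2, injection_volume_ul=10, extract_volume_ul=1000, sample_mass_mg=1000))
```

In other words, a better per-injection LOD can still correspond to a worse per-dry-weight LOD if less biomass is extracted or the extract is more dilute, which is exactly the kind of detail that the articles leave undiscussed.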
Similarly, another group published several articles on method development and sample testing [44,45,67] and one on BMAA production by cyanobacteria [76]. In two of these articles, BMAA concentrations were determined in multiple cyanobacterial isolates. Although the tested isolates were not identical, they were described in both articles as being representative of the region and they were cultured under similar conditions [44,45]. The average BMAA concentration determined by GC-MS in one study [44] was nearly a hundred times higher than the average concentration determined by LC-MS in the other study [45] (Table 1), but possible causes of this difference were not adequately discussed [45]. Furthermore, from the experimental study [76], the authors conclude that cyanobacteria produce BMAA in response to nitrogen starvation. The authors suggest that some other studies did not detect BMAA because only nutritionally replete cyanobacteria were analyzed. However, the authors do not check this against their previous work, in which they frequently reported BMAA in cyanobacteria that were grown on BG11 [44,45], a medium that is very rich in nitrogen [77,78].
Presence of BMAA in Aquatic Ecosystems
There is evidence that BMAA can be present in cyanobacteria dominated samples [23,27,48,68,74], while in some equally credible studies, BMAA has not been detected in cyanobacteria [31,43,47,49,71,75]. The evidence for presence of BMAA is generated by studies that have used LC-MS/MS, which is at present regarded as one of the most suitable techniques for BMAA analysis due to its high selectivity and sensitivity [43,52]. In addition, these studies have correctly shown the appropriate chromatograms, so it is likely that BMAA has been correctly identified. The negative results are based on well reported LC-MS and LC-MS/MS studies. LC-MS is less selective than LC-MS/MS, but this method is included because less selective methods do not have a higher risk of creating false negative results than more selective methods.
Based on studies in which the quantification method has also been properly reported, BMAA concentrations in positive cyanobacterial samples are 0.73 µg/g DW in a cyanobacteria culture [68] or range from 4 to 42 µg/g DW in field material dominated by, but not necessarily solely comprised of, cyanobacteria [48]. These concentrations are close to the detection limits of the previously mentioned LC-MS(/MS) studies in which no BMAA was detected, which range from 0.1 to 10 µg/g DW [31,43,47,49,75] (the LOD in [71] is not included because it is expressed per unit of wet weight). No BMAA was detected in cyanobacterial samples by ¹H-NMR, another highly selective, but rather insensitive method (LOD of 5 mg/L) [53].
One LC-MS/MS based study has convincingly shown that axenic diatom cultures can contain BMAA at concentrations between 1.1 and 3.3 ng/g DW, the BMAA concentration in a cyanobacteria/diatom dominated field sample was 27.6 ng/g DW [25].
Although it is shown that BMAA can be present in aquatic ecosystems, this conclusion is only based on a narrow selection of articles ( Figure 4). There is too little evidence to conclude that BMAA is occurring worldwide in aquatic ecosystems. Independent confirmation from a number of different laboratories is needed to verify this hypothesis. More work is also needed to identify the BMAA producers in aquatic systems. Only one study has confirmed presence of BMAA in an axenic cyanobacterial culture [27]. The only study so far on BMAA production by cyanobacteria [76] does not provide evidence as the experimental setup is flawed, there is too little evidence for correct BMAA identification and not all results are presented (Appendix information 6). Diatoms might also be BMAA producers, as one study has shown that axenic diatom cultures contain BMAA [25]. Taken together, only two studies have identified cyanobacteria and diatoms as possible BMAA producers, but their findings have not yet been confirmed by other, independent laboratories. The question whether other organisms than phytoplankton are capable of BMAA production has so far only been explored for cycads [79].
For the majority of the published work on BMAA in aquatic ecosystems, it is unclear whether the results were correct. The field of BMAA analysis is still developing, and the lack of certified reference material and of an inter-laboratory validated method has resulted in the use of different analytical techniques. Among these, non-selective analytical methods have frequently been used, which may have caused false positives or overestimations. In addition, the appropriateness of some selective analytical methods could not be verified because BMAA identification was insufficiently reported.
Nevertheless, the use and description of analytical techniques are not the only sources of confusion in aquatic BMAA research. It lacks a solid foundation as the conclusions drawn in some of the key articles on analysis [55][56][57], production by cyanobacteria [76] and human exposure through cyanobacteria [60] were either unclear or not supported by the presented data. Finally, literature interpretation is hindered by occasional selective referencing to positive findings and by lack of a critical evaluation of the presented work.
Improving the Science
The first steps to advance BMAA research in aquatic ecosystems have been made by the increased use and development of selective analytical methods. An inter-laboratory comparison and validation of these methods, preferably using certified reference material, would be a next step toward a more unified discussion on which analytical methods to use in BMAA research. However, the field can only move forward if the use of appropriate methods is combined with correct and complete description of research. Future studies should include an extensive and correct description of methods and results (see e.g., [22]), with special attention to recovery data, quantification procedure and identification. Furthermore, the work should be critically evaluated and should be put in its scientific context.
A solid base of knowledge relies on good insight into past results. Most evidence for correct BMAA analysis is provided in sixteen studies [23,25,27,31,32,[34][35][36]43,[47][48][49]68,71,74,75]. While it is quite possible that BMAA has been correctly identified (and/or quantified) in other studies, there is no publicly available evidence for it. Full analysis disclosure, or availability of sample material for comparative analyses, is essential to improve the current state of knowledge in BMAA research.
In conclusion, the current knowledge on the presence of BMAA in aquatic ecosystems is more limited than the literature suggests. The state of knowledge will increase if appropriate, inter-laboratory validated methods are developed and used, and if the analytical work is correctly reported. This progress is needed to establish to what extent humans are exposed to BMAA through, for instance, ingestion of surface water during recreation and consumption of seafood.
Conflict of Interest
I am the first author of three of the articles evaluated in this review [43,48,54]. I declare no other conflict of interest.
LC Separation of BMAA and DAB
The LC section starts with an overview of methods that can distinguish DAB from BMAA, and chromatograms are shown to prove separation. However, the legends of six of the ten provided chromatograms are incomplete or incorrect, so it is unclear by which methods the chromatograms were produced:
• Figure 2A in [55]: Legend refers to a 60 min gradient in [80], but in this reference a 49 minute gradient was employed.
• Figure 2C in [55]: Legend refers to studies in which no BMAA analysis was performed [81,82].
• Figure 4 in [55]: Sample used from [60], method unknown. Retention time of BMAA does not match the retention time in [60]. Figure shows BMAA and DAB, while in [60] only BMAA was shown in the LC-MS/MS chromatogram.
• Figure 6 in [55]: No method given; this seems to be a later published method [45].
• Figure 8 in [55]: DAB not shown, no methods given, but the figure is identical to Figure 1 in a later published study [67]. The legend states that samples have been derivatized according to [44], but [44] is not an LC-MS but a GC-MS study in which a different derivatization procedure has been used. This is reflected by the different reported m/z for the BMAA derivative: 130.2 in [44] and 333 in this figure.
The authors suggest that negative findings by underivatized hydrophilic interaction liquid chromatography (HILIC) MS/MS analysis are caused by inferior performance of this method. However, they did not compare method performance data like LODs and recoveries to verify whether the performance of underivatized methods was indeed worse than that of derivatized MS/MS methods. An example is given to support the claim that HILIC separation is less efficient and results in broad peaks; however, the peak that is referred to (Figure 2A in [49]) is of equal width to one of the peaks shown for derivatized separation (Figure 2A in [55]). Furthermore, the authors state that DAB was routinely used in sample queues for LC methods from the first article in 2003 onwards [11], but in many of the articles published after 2003 by the authors, DAB is not mentioned (e.g., [13,24,46]). Furthermore, they state that negative findings of another study [75] were confirmed by the authors' own LC-MS/MS methods, but no method details or references are provided.
GC Separation of BMAA and DAB
The question whether BMAA and DAB are separated in GC based methods [44,83] is not answered in this manuscript, although there is a paragraph dedicated to this subject. It is only stated (in the LC paragraph) that in one GC-MS method [44], BMAA is separated from alanine, sarcosine and tryptophan. However, the retention times of these three amino acids have not been reported in [44]. Instead, they are taken from another study in which different conditions are used [84]. As different chromatographic conditions can lead to differences in retention times, the results of [84] cannot be extrapolated to [44]. Furthermore, it was stated that a GC-MS method developed by another group [83] might be too insensitive for BMAA detection; however, only the LOD for the protein-bound fraction is referred to, and not the LOD for free BMAA, which was 50 times lower [83].
Discussion
In the discussion, the authors recommend a comparative exercise between laboratories that should include a repetition of validated methods (for issues with the term validated, see § 3.4 in this review), a balanced view on whether HILIC chromatography is adequate for BMAA analysis and a check for unambiguous determination of BMAA. They hereby again question the suitability of HILIC based methods without providing fact-based arguments against it.
In Conclusion
This article suffers from many reporting deficiencies: not all addressed questions were answered, parts were suggestive and incorrect references were given. Methods that had not yet been published were included in this review, although it was stated that previous results were considered. Furthermore, separation of DAB and BMAA by GC-MS was not adequately discussed. As it is not made explicit which studies have and which have not separated BMAA from DAB, the conclusion that these compounds are "routinely and clearly distinguished by having different retention times during chromatographic separation" is not supported by the presented data.
Appendix information 3: Article discussion 'Distinguishing the cyanobacterial neurotoxin β-N-methylamino-L-alanine (BMAA) from other diamino acids [56]'
In this 2011 article, a group of scientists reports the separation of BMAA from other diamino acids. Five analytical techniques are described and chromatograms showing separation of BMAA from other compounds are presented.
Justification and Research Aim
It is unclear why this research has been performed, as it lacks a description of the justification and aim. In the introduction, the authors only state that it is important to distinguish BMAA from similar molecules. The method section does not provide clarity either. It consists of the description of five analytical techniques; all but one are not referred to, and are therefore assumed to have been developed for this study. The one method referred to (LC-FLD, § 2.7 in [56]) should also be regarded as a newly developed method. Although the references [11,14,80] suggest it has already been used before, the elution program used in two of these studies differs from the one described in § 2.7 [14,80]. In the other study [11] no elution program was reported, but a reference to yet another study [85] is made. The elution program in this reference, however, also does not correspond to the elution program described in § 2.7.
Relevance of the Work
Having reached this point in the manuscript, it looks as if new methods were developed to enable good separation of BMAA from other diamino acids. When placed in the context of the main debates in BMAA research from 2009 to 2011, some questions on the relevance of the work arise:
• The interference in methods based on optical detection is not limited to diamino acids, but extends to compounds with amino groups [43,47]. Why are only a few diamino acids tested for these methods?
• Diamino acids with a different molecular weight from BMAA are not the most likely candidates to interfere in methods with mass-spectrometry detection [43]. Why are all but two of the tested diamino acids compounds with a different molecular weight?
Relation to Previous and Future Work
In the results section, the authors state that the methods have indeed been adapted in this study and show that BMAA can be distinguished from the tested diamino acids by these methods. It then, however, becomes confusing whether the subsequent conclusions are based on methods used in this study or on previously used methods. The authors state that standard LC-MS/MS methods distinguish BMAA from other amino acids, with a reference to previously used methods [14,27]. As these previous methods differ from the ones presented in this study, this statement would imply that the conclusion is based on previous methods, but this conclusion is not supported by data or chromatograms. Furthermore, it is mentioned that the BMAA isomer N-2(amino)ethylglycine coeluted with BMAA in the current LC-UV/MS method, but that in previous studies that included LC-UV or LC-MS analysis [14,46,60,86], BMAA identification was correct because other methods were used for confirmation. Also in this case, three [14,46,86] of the four methods referred to are different from the tested method, so it is unclear whether N-2(amino)ethylglycine coelution had also occurred in these studies. In the discussion, "standard methods" are mentioned without definition: "We found that standard methods of amino acid analysis clearly distinguish BMAA from the twelve amino acids tested".
The repeated referring to "standard" and "previously used" methods implies that BMAA was already separated from other diamino acids in previous work. As it is neither discussed how the methods used in this study relate to previously used methods, nor whether, in the opinion of the authors, BMAA was correctly identified in previous studies, the implications of the study remain unclear.
In Conclusion
It is unclear which questions are answered by this study. Five methods have been shown to separate BMAA from a selection of diamino acids, but the relevance of this selection is questionable and not discussed. As these methods have all been adapted in this study, no conclusion about separation of BMAA from the selected compounds in previous studies can be drawn. However, the repeated reference to "standard" and previously used methods could mislead a reader into thinking that this article shows that BMAA was also separated from the selected compounds in previous work.
Appendix information 4: Article discussion 'Cyanobacteria and BMAA exposure from desert dust: A possible link to sporadic ALS among Gulf War veterans [60]'
In this 2009 article, cyanobacteria crusts from the Qatar desert were collected and analyzed for BMAA in order to assess whether veterans had been exposed to BMAA during the Gulf War. It was concluded that BMAA was present in the cyanobacteria crusts, which led to the suggestion that BMAA exposure through desert dust may be linked to the increased incidence of ALS in Gulf War veterans.
Incomplete Description of Methods
Cyanobacteria crust samples were taken from one location in 2007 and from three locations in 2008. It is unknown how many samples were taken from each location. After sampling, some crust samples were dried and analysed, while others were cultured and subsequently analysed. It is unclear how many samples were cultured and what the origin of this/these samples was.
Four different methods were used for BMAA analysis. It is unclear which samples were analysed by which methods. The only thing that is clear is that samples that were positive for BMAA in AA and/or LC-FLD analysis were reanalysed by LC-MS/MS and that the cultured samples were analysed by LC-MS and AA.
Incomplete Description of Results
The authors state in the Results section that BMAA was detected in desert crust samples from one location and that BMAA and DAB were detected in the cultured crust. It is unknown whether BMAA was detected in the samples from the other locations. The provided chromatograms do not provide more clarity, as it is not made clear to which samples they belong; it is only stated whether they represent dried or cultured samples. Furthermore, some samples were analyzed by multiple methods, but the results of these analyses per sample are not presented, so it is unclear whether they are in agreement. BMAA concentrations are not reported.
Incorrect Data Visualisation
The provided LC-MS/MS chromatogram consists of three panels, one of which gives information on the sample and the other two give information on a BMAA standard. In two of these panels, product ions for a standard and for a sample are shown. However, the product ions of the standard were acquired at different settings than those of the sample. This results in different ratios of product ions between the standard and the sample (see also § 3.5 of this review). It is unclear whether these ratios would have been the same when analysed under the same conditions and therefore whether BMAA has correctly been identified by LC-MS/MS.
In Conclusion
The major problem of this article is that it is unclear how many samples were analyzed and how much BMAA was present in each sample. As the presence of BMAA in cyanobacteria was linked to BMAA exposure of veterans, it is essential to know the BMAA concentrations and distribution in the sampled areas. Another problem is the confirmation by LC-MS/MS. The crust samples that tested positive for BMAA with the less selective methods LC-FLD and AA were reanalysed by the more selective method LC-MS/MS. However, the presented LC-MS/MS chromatograms do not provide enough information to prove that BMAA was correctly identified with this method.
Appendix information 5: Article discussion 'Reactivity of β-methylamino-L-alanine in complex sample matrixes complicating detection and quantification by mass spectrometry [57]'
This 2012 article explores a possible explanation for the frequent lack of detection of BMAA by underivatized LC-MS(/MS) analyses. For this, the effect of different solutions on the BMAA signal in LC-MS analysis is determined. The authors hypothesise that the formation of adducts and complexes hinders electrospray ionisation MS analysis and can distort chromatography.
Adduct and Complex Formation and the Detection of Mass-to-Charge Ratio (m/z) 119
Adduct formation is determined by adding BMAA to different trichloroacetic acid (TCA)/salt solutions. These mixtures are scanned at different cone voltages, and the intensity of the ion with an m/z of 119 (singly charged BMAA) and the m/z's of BMAA adducts and dimers are recorded. As the response of m/z 119 is low in most solutions and for most settings, it is concluded that even under optimal circumstances, m/z 119 accounts for less than 10% of the total BMAA ions in solution. This is, however, contradicted by Figure 2A in [57], which shows a 70% presence of m/z 119, and by Figure 1 in [57], which shows a 30% presence. Furthermore, this conclusion implies that other MS settings are optimized for BMAA analysis at m/z 119 and that TCA is the optimal solution for BMAA detection. This is contradictory to earlier work, which is not discussed in the current article. In this earlier work, m/z 119 is found to be the main peak after infusion of an aqueous BMAA solution [47].
Complementary to the evaluation of adduct formation, the authors explore whether metal complexes of BMAA can form in sample matrixes by chemically synthesizing a BMAA-Zinc complex. However, chemical synthesis of a BMAA-Zinc complex does not prove that this complex will be formed in real samples. To prove this, real samples should be analysed for the presence of such a complex, but this is not done.
Implications for Sample Analysis
The authors conclude that signal suppression and alteration of chromatographic behaviour due to adduct and complex formation may lead to an extreme underestimation of BMAA concentrations in underivatized LC-MS sample analysis, especially when external calibration curves are used for quantification. However, nearly all underivatized MS(/MS) studies have anticipated this by determining recovery [43,48,49,75,87], by using matrix based calibration curves [71] or by using D3-BMAA as an internal standard [43,47]. In these studies, recovery rates generally exceed 80% and matrix based calibration curves have slopes similar to those of neat curves, so there is no indication that strong signal suppression indeed occurs in real samples. The authors, however, suggest that the reaction time used for recovery determination is too short for complex formation, and that reported recoveries are therefore highly overestimated. This is an important suggestion, as it supports the main conclusion of the article. However, it is not grounded in arguments or data. In the method section, it is not mentioned that BMAA was allowed to react with TCA or salt solutions for a given period of time, nor are data on reaction time shown in the results section. Also in an older article that describes BMAA-metal complex formation [88], no reaction times are given. Furthermore, the authors suggest that the solutions used in this study are representative of sample extracts, but they do not provide data on metal concentrations in cyanobacterial extracts. The trace element composition of marine phytoplankton [89], however, suggests that metal concentrations in cyanobacterial extracts will be much lower than the concentrated (9 mM salts and 10 mM TCA for each treatment) solutions used in this study. The authors conclude that BMAA reactivity may complicate analysis of many different types of samples, but have not verified this conclusion by analysing real samples.
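For readers unfamiliar with the slope comparison mentioned above, the following minimal sketch (with hypothetical slopes) shows how a matrix effect is typically expressed from matrix-matched versus neat calibration curves; values near 100% indicate little suppression or enhancement.

```python
# Illustrative sketch (hypothetical slopes): matrix effect estimated from the ratio
# of a matrix-matched calibration slope to a neat (solvent) calibration slope.

def matrix_effect_percent(matrix_slope, neat_slope):
    """Values near 100% indicate little suppression or enhancement."""
    return 100.0 * matrix_slope / neat_slope

neat_slope = 1500    # area per (µg/L) in pure solvent (hypothetical)
matrix_slope = 1380  # area per (µg/L) in spiked cyanobacterial extract (hypothetical)

print(matrix_effect_percent(matrix_slope, neat_slope))  # 92.0 -> only mild suppression
```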
Recommended Analytical Procedure
The article ends with the recommendation to use derivatization combined with reversed phase chromatography for sample analysis, and to use at least two orthogonal detection methods such as FLD and MS. Finally, it is recommended to use multiple m/z's in BMAA analysis. This latter recommendation is supported by the presented work, but the other recommendations are not. Firstly, no adequate comparison between derivatized and underivatized MS analysis is carried out in this study. In the only study in which these methods have been compared, derivatized MS/MS analysis did not perform better than underivatized MS/MS analysis [43], but the authors do not refer to this study in their discussion. Furthermore, FLD detection is suggested as additional technique, without referring to recent articles that provide arguments against the use of this method [43,52].
In Conclusion
A possible weak point in MS analysis is pointed out in this article, which is valuable. However, the relevance of the experiments for real sample analysis is not shown and most conclusions are not supported by data. Finally, not all relevant publications are discussed, resulting in a biased view on the subject.
Appendix information 6: Article discussion 'Nitrogen starvation results in the production of β-N-methylamino-L-alanine [76]'
This 2011 article was the first to investigate conditions under which cyanobacteria produce BMAA. In this experimental study, two cyanobacterial strains are repeatedly subjected to nitrogen starvation. The presence of BMAA in the cultures is monitored throughout the experiments, and the authors conclude that nitrogen starvation results in the production of BMAA.
Flaws in Experimental Setup
One weak point of this work is the flawed experimental design. Nitrogen is supplied as labelled ammonium in the experimental treatment and as unlabelled nitrate in the control treatment. As cyanobacteria differ in their response to ammonium and nitrate as nitrogen source [90], the type of nitrogen source should be similar between treatments. Furthermore, in a nitrogen starvation experiment, the proper control treatment would be nitrogen repletion rather than nitrogen starvation.
Suboptimal Analysis
Another problem with this study is the detection of BMAA. Samples were analysed by LC-MS/MS, either as a Q1 scan without collision induced dissociation or in multiple reaction monitoring mode with collision induced dissociation and detection of product ions. This means that the LC-MS/MS was used as a less selective LC-MS for most of the analyses.
Presentation of Raw Data
The presented data are either too little or too heavily processed for good interpretation. Examples of too little data processing are the figures in which a BMAA decrease or increase is shown. In these figures (Figures 1 and 3 in [76]), results are presented as LC-MS(/MS) peak areas instead of as cellular or biomass-related toxin content or concentrations, which are more commonly used (e.g., [37,91]). During the experiment, a fixed volume of sample was taken at each sampling event. It is not shown how the biomass concentration changed during the experiments, but as the experiment is performed in batch cultures, it is likely to have changed. Therefore, it is reasonable to assume that different amounts of cyanobacteria were present in the different samples. As only the intensity of the MS(/MS) signal is shown, no correction is made for these changes in biomass concentration. Figures 1 and 3, therefore, only reflect the total amount of BMAA present in the sample, and it cannot be derived whether the observed changes in these figures reflect changes in cellular BMAA concentrations or merely result from changes in biomass concentration during the experiment.
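A minimal sketch of the biomass correction this paragraph calls for, using invented numbers: dividing the quantified BMAA in each sample by the biomass sampled distinguishes a change in cellular content from a mere change in biomass.

```python
# Illustrative sketch (hypothetical values): the same raw peak areas can reflect a
# constant cellular BMAA content once corrected for the biomass sampled each time.

samples = [
    {"day": 0, "peak_area": 1_000, "biomass_mg_dw": 5},
    {"day": 7, "peak_area": 4_000, "biomass_mg_dw": 20},
]

response_factor = 100  # peak area per ng BMAA (hypothetical calibration)

for s in samples:
    bmaa_ng = s["peak_area"] / response_factor
    per_biomass = bmaa_ng / s["biomass_mg_dw"]     # ng BMAA per mg dry weight
    print(s["day"], round(per_biomass, 2))         # 2.0 on both days: no cellular change
```

Here the total BMAA signal quadruples while the cellular content stays constant, which is precisely the ambiguity left open when only peak areas are reported.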
Obscured Data Presentation
The changes in labelled amino acid abundance in the experimental cultures are too heavily processed to allow easy interpretation. The increases in labelled amino acids after nitrogen starvation are shown as ratios of the singly labelled to the unlabelled amino acids, which are subsequently normalized against control cultures. This use of ratios of ratios obscures the results. More importantly, however, the results do not represent a response to nitrogen starvation, as the control treatment had also been nitrogen starved. The results represent the differences in cyanobacterial response to the nitrogen sources used, and from the presented data it can again not be derived whether these differences are caused by changes in cellular composition and/or in biomass.
Incomplete Data Presentation
Not all essential data are presented. Data on biomass indicators and the nutritional status of the cyanobacteria are lacking. The table that shows the increase in labelled amino acids lacks data for some samples, but this is not explained. In addition, Figures 1 and 3 only show free unlabelled BMAA; the unlabelled protein associated fractions and both fractions of labelled BMAA are missing.
In Conclusion
This study suffers from flaws in the experimental design and a lack of data on cyanobacterial biomass, nutritional status and the presence or absence of different BMAA fractions. It was, for instance, not checked whether the cyanobacteria were really nitrogen starved, and the right control treatments were not included. Furthermore, the employed LC-MS/MS was mostly operated as an LC-MS without motivation. Too little data (e.g., chromatograms with product ions of standards and samples) were provided to show that BMAA was correctly identified. Taken together with the above-mentioned flaws and omissions, and given either the lack of data processing or the expression of data as ratios of ratios, this article's conclusions cannot be verified from the presented data.
Appendix information 7: Reporting quality of methods and results
Methods for BMAA analysis that have been applied to environmental samples have been reviewed. For each of these methods, Table A1 shows which method details and results were reported.
Appendix information 8: Criteria used for classification in Figure 4
Based on the use of selective and sensitive analytical methods, and on the reporting of BMAA identification and quantification, studies are classified into groups (Figure 4 in the main text). Table A2 shows the criteria used for this classification.
Table A2. Criteria used for the classification in Figure 4 of the main text. BMAA reported but no highly selective method used; 6: BMAA identification by highly selective method insufficiently reported. n.a.: not applicable.
Diverse Schemes of Cost Pooling for Carbon-Reduction Outsourcing in Low-Carbon Supply Chains
Among responses to governmental regulations for curbing carbon emissions, outsourcing carbon reduction to a specialized third party is an important means to satisfy a variety of carbon-emission restraints. In this situation, however, designing efficient contracts for emission reduction while retaining appropriate supply-chain profit is a substantial but challenging problem. We therefore refine this problem from practice and consider a low-carbon supply chain consisting of one manufacturer and one retailer to analyze under which conditions the system should outsource its carbon reduction efforts to an external expert firm, under the assumption that consumers with a sense of social responsibility prefer low-carbon products. In the supply chain with an embedded decarbonization expert firm, we examine the respective impacts of three cost-pooling schemes for emission reduction on supply chain performance. We find that the manufacturer-undertaking contract is the worst in terms of profit and carbon reduction level among the contracts studied, while the retailer-undertaking contract yields the best outcome in terms of profit and performs well in carbon reduction when the contractor is cost-efficient in carbon reduction; it is even better than the joint-undertaking contract in carbon reduction when the contractor is inefficient. The study shows that the diversity of contracts on outsourcing carbon reduction significantly impacts supply chain profitability, carbon reduction efficiency and the sustainability of operations.
Introduction
Academic research and real world practice indicate that the influence of climate change on consumers' behavior is far-reaching.Scientific studies show that consumers, especially those with consciousness of social responsibility, have low-carbon preferences.A Gallup report [1] shows that 80% of European respondents consider the environmental effects of products as a crucial factor in their purchasing decisions.Zhao et al. [2] indicated that attitudes significantly influence green purchasing behavior, according to the data collected in Qingdao (China).Regarding public willingness to pay (WTP) for the benefits of low-carbon farming, Kragt et al. [3] indicated that Australian respondents have a positive WTP for native vegetation.Mostafa [4] showed that consumers in Egypt are willing to pay seventy Egyptian pounds more for carbon-labeled products.
Understanding the preferences of consumers for low carbon products, in general firms commit themselves to voluntarily reduce carbon emissions.For example, UK top retailers, including Tesco and John-Lewis, have pledged to cut absolute carbon emissions from retail operations by 25% in 2020 [5].
Carbon emissions are caused by the actions of multiple parties, and thus a collective approach is required to tackle carbon reduction, particularly among firms in a supply chain.Siemens encourages its suppliers to contribute to a green supply chain by reducing energy consumption and greenhouse gas emissions [6].A total of seventy real estate companies in China, representing sales revenue of 163 billion US dollars in 2015, joined the "Green supply chain in real estate industry in China" program and engaged more than 2000 upstream suppliers of cement, steel and iron ( [7]) and a green index was launched to manage their suppliers ( [8]).In addition, the positive link between the environmental enhancement and competitiveness within supply chains in South Asia is identified [9].
However, the extant supply chain-related literature [10][11][12][13] considers that companies addressing carbon emissions often equate the business of carbon reduction with their regular business areas, such as R&D, manufacturing and marketing, matters that traditionally firms are skilled in handling, whereas in fact, the business of carbon reduction involves many specialized professional techniques such as industrial energy saving, gas separation, carbon capture, etc., in which neither the manufacturer nor the retailer may be an expert.As a result, only focusing on the original way of implementing carbon reduction projects has its limitations.Meanwhile, there now exist many professional carbon reduction firms that are known for their low cost, highly efficient and effective carbon reduction processes.For example, China already has more than 3000 companies focusing on providing excellent service of carbon reduction and industrial energy saving or Energy Performance Contracting (EPC) since 2013 [14].
Moreover, a mature business model for the carbon reduction (or energy saving) performance exists, wherein the contractor bears the corresponding investment and shares part of the returns from the investment with its customer.IBM reports that successful firms will locate the best balance of consolidating core business within the company and allocating non-core business [15].Hence, a rational manufacturer will distribute non-core businesses such as carbon reduction to external specialists.Nestlé outsourced its carbon reduction business to Quantis that is devoted to help organizations to move toward sustainability through the life cycle approach to cut the carbon footprint of a cup of Nespresso coffee by 20% in 2008 [16].Fashion retailer H&M and the retail giant Wal-Mart distributed the business of establishing and implementing their low carbon strategy and their vendors' participation program, respectively, to the firm Reset Carbon [17].One Siemens CEO in China said that the company adopted the "Energy Saving Performance Contracting" to implement its low carbon strategy [18].
Therefore, analyzing the consequences of incorporating a carbon reduction contractor within a supply chain is valuable.A contractor skilled in carbon reduction could increase the efficiency of carbon reduction as well as the profits for the supply chain.However, other (possibly detrimental) consequences of outsourcing to a contractor may occur, given the effect of multi-marginalization, thus, we will explore the conditions under which a manufacturer-retailer supply chain would choose an external contractor for carbon reduction.Furthermore, when the supply chain chooses the route of external contractor for carbon reduction, investigating whether both supply chain members share the carbon reduction service fee when the contractor reduces carbon emissions from the supply chain is important.However, cases in which the manufacturer or retailer dominates the contractor should also be investigated, considering the interests resulting from dominating the contractor.We then examine the effects on supply chain performance of leaving the manufacturer or the retailer to cooperatively or separately set the carbon reduction fee, given that paying the contractor means that it could induce the contractor's carbon reduction decision in its own interest.To sum up, we will investigate the following questions.(1) Should the carbon reduction contractor be added to the supply chain?(2) What are the effects of the three diverse payment contracts on supply chain performance?(3) What are the underlying operation mechanism and managerial implications?Within a supply chain consisting of a manufacturer and a retailer with low carbon preference of consumers, we study the condition in which a carbon reduction contractor could be added to the supply chain, and the effects of low carbon preference on the supply chain performance.With the contractor embedded in the supply chain, we present the "joint undertaking", "manufacturer undertaking" and "retailer undertaking" contracts, and explore the effects of these contracts on the contractor's decision and supply chain performance.
Our contributions to the literature lie in the explicit incorporation of a carbon reduction contractor into the supply chain and the investigation of the effects of different payment contracts on supply chain performance.We propose that when a contractor's efficiency of carbon reduction is two times more efficient than the supply chain itself, it makes sense to choose a contractor to reduce carbon emissions.The computation result shows that the manufacturer chooses to be a free-rider in the case of independent decision-making in the joint payment contract as it pays the contractor nothing.Compared with other contracts, the manufacturer payment contract is the worst about the profit and the carbon reduction level.Moreover, the joint payment contract does not seem "tactical", as it results in a low payment level to an efficient contractor while a high payment level to a less efficient one.By contrast, the retailer payment seems fairly "tactical", as it positively motivates the contractor.
The rest of the paper is organized as follows: in Section 2, we review the relevant literature.Section 3 presents the basic model and derives conditions for choosing an external contractor for carbon reduction.We analyze different payment contracts in Section 4 and explore the effect of different contracts on supply chain performance in Section 5. Section 6 provides our concluding remarks.
Literature Review
Although there is an increasing body of literature referring to sustainable operation management and green supply chains, we here follow closely two main streams of literature, i.e., carbon efficient supply chain and supply chain outsourcing, which are highly related to our study.
For a long time, scholars and policy makers have paid attention to the topic of carbon-efficient supply chains, as the threat of global warming has become a societal consensus. For a comprehensive review on modeling and analysis of sustainable operations management, one can refer to Gunasekaran et al. [19]. The literature modeling the low-carbon supply chain can be divided into the following threads. One strand focuses on optimal supply chain network design considering carbon emissions. Cruz and Wakolbinger [20] presented a multi-criteria decision-making framework including an objective of minimizing emissions in a multi-period supply chain. Elhedhli and Merrick [21] argue that considering the costs of carbon emissions may alter the optimum configuration of a supply chain. Similarly, Chaabane et al. [13] introduced an example of supply chain design considering carbon emissions from production and transportation. Cachon [22] studied the optimal configuration of retail outlets in supply chains to minimize operational and environmental costs incurred by carbon emissions.
The other thread concentrates on the analysis of operations and coordination mechanisms in low-carbon supply chains.Benjaafar et al. [10] introduced a series of simple models to incorporate carbon emissions into supply chain operations management.Du et al. [11] studied a supply chain with an emission-dependent manufacturer and an emission permit supplier (a forest farm) under a cap and trade policy.Caro et al. [23] have presented a simple but general model on joint production of GHG emissions in general supply chains.Chen et al. [24] and Hua et al. [25] use the EOQ model in analyzing how firms manage carbon footprints in inventory management under the carbon emission trading mechanism scenario.Lukas and Welling [26] demonstrated that economic and ecological efficiency continue to be mutually exclusive under uncertainty in emission allowance prices.The problem worsens when the carbon reduction project hinges upon investing in the whole supply chain.Xu et al. [27,28] analyzed the carbon reduction and production decisions of a supply chain under the cap and trade policy and show that both wholesale price and cost-sharing contracts can coordinate the supply chain.The above literature does not analyze the low carbon preference of consumers.In the next section, we review the literature focusing on voluntary carbon reduction problems in a supply chain.
There is also some pertinent literature examining low-carbon strategies in supply chains in the presence of consumer preference.Liu et al. [29] explored the impact of consumers' environmental awareness and their competition on supply chain members.Supply chain coordination considering consumers' low carbon preference has also drawn the interest of scholars.Du et al. [12] compared the respective effects of wholesale-price, revenue-sharing and quantity-discount contracts on the emission-concerned supply chain facing consumers' low-carbon preference.Similarly, Giri and Bardhan [30] studied the supply chain coordination with the wholesale-price and wholesale-price discount contracts considering consumers' environmental perception.Ji et al. [31] addressed the emission reduction strategy in retail-channel and dual-channel cases using Stackelberg game theory.
Although we also recognize the importance of consumers' low-carbon preference for supply chain operations, distinct from the existing literature, we separate the carbon reduction task from the traditional supply chain and focus on improving the efficiency of the carbon reduction contractor. A feature of this paper is that we consider the carbon-reduction contractor as an integral part of the extended supply chain, which shifts the carbon reduction function from the manufacturer and retailer to the contractor and thus changes the supply chain structure. We propose the conditions under which the contractor can be added to the supply chain and examine the effects of different payment contracts on supply chain performance.
Another stream of literature related to our work is about outsourcing in supply chains.Most relevant literature has focused on information asymmetry and market risk, and on how these issues affect the decision making between in-house production and the outsourcing.Regarding market risk, Liu and Nagurney [32] studied the effect of foreign exchange risk and competition intensity on supply chain companies involved in off-shore outsourcing activities.Yu et al. [33] evaluated the effects of supply disruption risks on the choice between the well-known single and dual sourcing methods in a two-stage supply chain.Given the information asymmetry in worker productivity, Hasija et al. [34] examined different combinations of contract features that enable firms to better manage contractors.Li et al. [35] addressed the transportation outsourcing and production issues of a supply chain under cap and trade and carbon tax policies, respectively.Kaya [36] investigated the effort and pricing decisions in a two-facility supply chain with outsourcing and in-house production.The author found that the power difference between the manufacturer and the supplier affects the effects of coordination contracts.Ding et al. [37,38] studied the motivation mechanism of coal power plants to turn to a environmental service contractor for reducing pollutant and carbon emissions.Specifically, Wang and He [39] performed a study on the impact of managing risk aversion in low-carbon supply chains on system performance and emission control in the presence of emission abatement outsourcing.
Kaya's [36] is the research most closely related to ours.It examines an R&D outsourcing problem with a linear motivation scheme under asymmetric information scenario.Specifically, they commit the contractor to leak its true information, whereas our work explores efficient payment contracts to improve supply chain performance.
Different from the abovementioned literature, we design different payment contracts including the joint, retailer and manufacturer undertaking contracts and examine the effect of payment contracts on supply chain performance, thereby exploring the underlying operation mechanism and managerial implications.We find that the joint payment contract, notwithstanding being viewed as the most reasonable contract, turns out to be the least efficient among the various contracts discussed.
Preliminaries and Basic Model
The term "low-carbon products" is used to identify goods that generate less carbon emissions through the whole life cycle (including manufacturing, distribution, retailing and consumption) than other similar products with relatively high carbon emission.Suppose that consumers with the sense of social responsibility have a low-carbon preference and thus contribute to the growth of market share for supply chains reducing carbon emissions.As is shown in Figure 1, the focused supply chain consists of one retailer (R) and one manufacturer (M), both of whom are risk-neutral and emit carbon dioxide.Besides, we will also assume that there exists a risk neutral contractor of carbon reduction C who has advantage of technology and cost in terms of carbon reduction.Assume the supply chain is already voluntarily reducing carbon emission in the low-carbon context or spurred by external forces like the pressure of Non-Governmental Organizations (NGO).For example, some companies, such as Nestlé and IKEA, have already implemented the low-carbon strategies in their supply chains.Hence our model does not involve the issue of deciding whether or not to reduce carbon emission, which is notwithstanding discussed in some existing literature.
Let p denote the retail price of a low carbon product, w the wholesale price, and c the marginal cost of product production for the manufacturer (we assume that the marginal sales cost of the retailer is negligible). In order that the manufacturer and retailer both have positive profits, we let p > w > c.
Considering the findings in the literature [1,40], we assume that the market demand for low carbon products is influenced by the "carbon emission reduction level" e. Suppose that the market demand is an additive function of the retail price p and the carbon reduction level e.
Specifically, the supply chain demand is defined as D(p, e) = α − βp + τe, where α is the initial market potential, τ is the level of low carbon preference (the sensitivity coefficient of the carbon reduction level) and β is the slope of the demand function. The linear demand function is commonly used in the literature of supply chain management and economics [41,42]. Assume that the investment in emission reduction is a disposable input and an increasing convex function of e, denoted by C(e). Considering the effect of diminishing returns for carbon reduction, we let a quadratic function of the reduction rate denote the carbon reduction investment, that is, C(e) = ke²/2 for 0 ≤ e < 1 and C(e) = +∞ for e ≥ 1, where k is the carbon reduction investment coefficient representing the cost factor. Note that the quadratic function used in this paper has also been considered by researchers like Du et al. [12], Dong et al. [43], Giri and Bardhan [30] and Savaskan and Van Wassenhove [44]. Using a quadratic function to denote costs that are progressively more difficult to achieve (i.e., carbon reduction requires more capital, technology and other resources per additional decrement of carbon emissions) is a common practice (we refer to Nordhaus [45], Harnisch et al. [46] and Yang et al. [47] for further literature sources using the quadratic functional form). Given that the contractor is responsible for the manufacturer's and retailer's carbon reduction business, k will not change with the payers. We list the notations used throughout the text in Appendix A.
No Emission-Reduction Outsourcing with the Centralized Supply Chain
This section determines the conditions in which a centralized supply chain might choose to amend its structure and possibly outsource its carbon reduction business. Therefore, as a benchmark, we first study the centralized supply chain in which a manufacturer and a retailer are concerned with producing products with low carbon emissions; the supply chain's objectives are to determine the carbon reduction level and the retail price. Let k_S denote the investment coefficient of carbon reduction within the centralized supply chain.
The profit function of the centralized supply chain can be expressed as π_S(p, e) = (p − c)(α − βp + τe) − k_S e²/2.
Lemma 1. The optimal solution for the centralized supply chain exists only when 2k_Sβ − τ² > 0 holds. It is easy to show that both the carbon reduction level and the profit decrease in the investment coefficient of carbon reduction k_S but increase in the consumer preference level; therefore, increasing the carbon reduction efficiency increases both the profit and the carbon reduction level. Considering that many firms do not have sufficient knowledge, experience, or resources to take on a carbon reduction project, and that improving their own carbon reduction technology is time-consuming and costly, outsourcing the carbon reduction business may be a good option. The low carbon preference of consumers significantly affects the profit. Thus, expanding the population of consumers with a low-carbon preference through advertising is necessary.
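To make the benchmark concrete, the following sketch derives the Lemma 1 quantities symbolically. It is not the authors' code; it simply assumes the linear demand D = α − βp + τe and the quadratic cost k_S e²/2 stated above, and the symbol names are ours.

```python
import sympy as sp

# Assumed model: demand D = alpha - beta*p + tau*e, carbon-reduction cost kS*e**2/2
p, e = sp.symbols('p e')
alpha, beta, tau, c, kS = sp.symbols('alpha beta tau c k_S', positive=True)

profit = (p - c) * (alpha - beta * p + tau * e) - kS * e**2 / 2

# Joint first-order conditions in p and e give the centralized optimum
sol = sp.solve([sp.diff(profit, p), sp.diff(profit, e)], [p, e], dict=True)[0]

p_star = sp.simplify(sol[p])             # c + kS*(alpha - c*beta)/(2*kS*beta - tau**2)
e_star = sp.simplify(sol[e])             # tau*(alpha - c*beta)/(2*kS*beta - tau**2)
pi_star = sp.simplify(profit.subs(sol))  # kS*(alpha - c*beta)**2/(2*(2*kS*beta - tau**2))

print(p_star, e_star, pi_star, sep='\n')
```

Under the condition 2k_Sβ − τ² > 0 the stationary point is the maximum, and the closed forms show directly that e* and π* fall in k_S and rise in τ, as argued in the text.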
Emission-Reduction Outsourcing Added to the Centralized Supply Chain
The centralized supply chain outsources its carbon reduction business to a contractor who is an expert in carbon reduction, reflected in its high efficiency (k < k_S); i.e., a carbon reduction contractor with a low investment coefficient is efficient.
The supply chain S, as a centralized agent, and the contractor C sign a "carbon-reduction performance contract" and agree that the contractor raises the carbon reduction to level e and bears the corresponding cost of carbon reduction ke²/2. The supply chain S enjoys the benefits of carbon reduction (the demand increase) and shares part of the profits caused by carbon reduction with the contractor, i.e., it pays ρe to the contractor based on the carbon reduction level e. (It is common practice to apply a linear scheme to stimulate the contractor to reduce carbon emissions; for example, the customer pays the contractor a constant payment per year for the next few years for this service. As our problem is of a static nature, we use a linear scheme to represent the cost of carbon reduction in a single period.)
The decision sequence is that the supply chain, acting as a centralized agent, first sets its payment level ρ and retail price p to maximize its profit. The contractor then determines the level of carbon reduction e.
Thus, the contractor's profit from the carbon emission reduction service is π_C = ρe − ke²/2, where the first and second terms are the revenue received from the centralized supply chain and the carbon reduction investment, respectively. Such a linear payment scheme is assumed for the tractability of our model; a similar payment scheme can be found in the existing literature (see Kaya and Özalp [48] and Kaya [36]). The problem regarding fairness is left to future research. Setting the first-order partial derivative of the contractor's profit with respect to e to zero, ∂π_C/∂e = ρ − ke = 0, yields e = ρ/k. This expression indicates that the carbon reduction level decreases in k and increases with ρ. Obviously, this agrees with common sense. When the contractor is paid more for the carbon reduction, e increases (i.e., the contractor should reduce carbon emissions further for optimal profit). In the same manner, when the investment coefficient of carbon reduction decreases, or in other words the contractor's technology for reducing carbon emissions improves, e will also increase (i.e., again, the contractor should reduce carbon emissions further for optimal profit).
The profit of the centralized supply chain comprising the manufacturer and retailer is π_S = (p − c)(α − βp + τe) − ρe, where the first term is the revenue of the supply chain from selling products and the second term is the cost of purchasing the carbon reduction service.
Lemma 2. In the carbon-reduction outsourcing case, the Stackelberg equilibrium (p*_s, e*_c, ρ*_s) for the contractor and the original supply chain exists only when 4kβ − τ² > 0 holds. Lemma 2 shows that the retail price p*_s is the sum of the manufacturer's marginal cost c and 2/τ times the payment level, i.e., p*_s = c + 2ρ*_S/τ. Result 1: Table 1 summarizes the results for the case of carbon reduction within a centralized supply chain and the case of outsourcing the carbon reduction.
Table 1. Results for the case of carbon reduction within a centralized supply chain and the case of outsourcing the carbon reduction (columns: Variable, Carbon Reduction within a Centralized Supply Chain, Outsourcing the Carbon Reduction).
Theorem 1. Only when the contractor's carbon reduction is at least twice as efficient as the supply chain's own does it make sense for the contractor to reduce carbon emissions, i.e., k < k_S/2; that is, the contractor's investment coefficient of carbon emission reduction must be less than half of that of the manufacturer. Otherwise, the manufacturer will reduce carbon emissions on its own instead of through the service of the contractor.
When the firm stops its business of emission reduction and outsources it to an external specialized firm, the firm's profit normally falls.To avoid a reduction in aggregate profit, the specialized contractor must have a higher capability of reducing carbon emission than that of a centralized supply chain.Thus, the contractor must be sufficiently efficient to be accepted as a member of the supply chain.Otherwise, the contractor will not be chosen, and the manufacturer will select to reduce carbon emissions by itself.
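The threshold in Theorem 1 can be checked with a short sketch: the contractor's best response is substituted into the supply chain's problem, and the resulting profit is compared with the centralized benchmark. Again, this is an illustration under the assumed demand and cost forms, not the paper's own derivation, and the variable names are ours.

```python
import sympy as sp

p, rho = sp.symbols('p rho')
alpha, beta, tau, c, k, kS = sp.symbols('alpha beta tau c k k_S', positive=True)

# Stage 2: contractor maximizes rho*e - k*e**2/2, so its best response is e = rho/k
e = rho / k

# Stage 1: the centralized supply chain sets p and rho anticipating e(rho)
profit_sc = (p - c) * (alpha - beta * p + tau * e) - rho * e
sol = sp.solve([sp.diff(profit_sc, p), sp.diff(profit_sc, rho)], [p, rho], dict=True)[0]
profit_out = sp.simplify(profit_sc.subs(sol))   # k*(alpha - c*beta)**2/(4*k*beta - tau**2)

# Centralized benchmark profit (see the previous sketch)
profit_cent = kS * (alpha - c * beta)**2 / (2 * (2 * kS * beta - tau**2))

# Outsourcing pays off iff the difference below is positive, which happens iff kS > 2k,
# i.e. k < kS/2 -- the condition of Theorem 1.
print(sp.factor(sp.simplify(profit_out - profit_cent)))
```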
Diverse Undertaking Contracts
We have shown that when k < k_S/2, outsourcing the carbon reduction business to a carbon reduction contractor is beneficial for the supply chain. Therefore, we assume that a carbon reduction contractor satisfying this efficiency requirement is introduced to the supply chain, resulting in a structure with the manufacturer, retailer and contractor. The extended supply chain is denoted by S. As carbon emissions are produced within both the manufacturer's and the retailer's processes, at first sight both should bear the contractor's carbon reduction service fee.
In this section, we compare three cost-pooling schemes, i.e., the joint undertaking contract, the manufacturer undertaking contract and the retailer undertaking contract. We first analyze the "joint undertaking contract". From Section 3.2, we know that the level of carbon reduction is directly affected by the undertaking level and by who pays. Different payers may set different undertaking levels to maximize their individual profits; thus, the influences of these different undertaking levels on supply chain performance may differ. For this reason, we also explore the effects when the payer is the manufacturer or the retailer (i.e., the "manufacturer undertaking contract" and the "retailer undertaking contract", with a contractor embedded in the supply chain). All cases below are assumed to satisfy the constraint k < k_S/2 unless otherwise specified, and we will not repeatedly state this. We let the superscript denote the type of undertaking contract and the subscript denote the supply chain member of interest at a given point.
Joint Undertaking Contract (J)
When the manufacturer and retailer bear the cost of carbon reduction, they pay ρ_M e and ρ_R e, respectively. Depending on whether the manufacturer and retailer cooperate in determining their undertaking levels, the joint undertaking contract can be divided into two cases (namely "cooperative" or "independent" decision-making). In the cooperative decision-making case, the manufacturer and retailer codetermine the sum of ρ_M and ρ_R, while in the independent decision-making case, each independently sets its own undertaking level.
We first investigate the cooperative decision-making case, wherein the manufacturer and retailer announce ρ_M and ρ_R collectively; the manufacturer then reveals the wholesale price to the retailer, and the retailer sets the retail price based on the information obtained about the wholesale price. Finally, the contractor determines the carbon reduction level.
The respective profits of the contractor, retailer and manufacturer are π_C = (ρ_M + ρ_R)e − ke²/2, π_R = (p − w)(α − βp + τe) − ρ_R e, and π_M = (w − c)(α − βp + τe) − ρ_M e.
Lemma 3. Under the joint undertaking contract, the Stackelberg equilibrium of the chain members exists only when the condition 16kβ − 3τ² > 0 holds. We do not determine the specific individual values of ρ_M and ρ_R (only their sum), because once the sum is set they depend mainly on the bargaining between the manufacturer and retailer, which deviates from our main line of reasoning.
Retailer Undertaking Contract (R) (Independent Decision Making)
We study the case in which the retailer and manufacturer independently determine their undertaking levels. The decision order is that the manufacturer first sets the wholesale price w and its undertaking level ρ_M. The retailer then determines the retail price p and its undertaking level ρ_R. After the manufacturer and retailer present their undertaking levels, the contractor sets the level of carbon reduction e for the supply chain.
The profits of the contractor, retailer and manufacturer are the same as those in Section 4.1. The primary difference between the cooperative and independent decision-making cases is the sequence in which the undertaking levels are set. We can obtain Lemma 4 through backward induction.
Lemma 4. Under the retailer undertaking contract, the Stackelberg equilibrium yields ρ_M^R = 0. This shows that the "independent decision-making" case degenerates into the "retailer undertaking contract" (R), in which the manufacturer voluntarily gives up the opportunity of guiding the contractor so as to maximize its own profit and expects to be a free-rider; thus, only the retailer bears the carbon reduction service fee. We use the superscript R to denote this case of independent decision making. Lemma 4 indicates that the manufacturer does not care about carbon reduction in the R contract and sets the wholesale price based only on α and β. Naturally, the manufacturer is a free-rider, and thus its wholesale price is unaffected by the carbon reduction business.
Manufacturer Undertaking Contract (M)
The manufacturer pays the contractor for its carbon reduction service and announces the wholesale price w and the undertaking level ρ M first to maximize its profit.After knowing the wholesale price and the undertaking level, the retailer decides on the retail price p.Finally, the contractor determines the level of carbon reduction e.
The profits of the contractor, manufacturer and retailer are, respectively, π_C = ρ_M e − ke²/2, π_M = (w − c)(α − βp + τe) − ρ_M e, and π_R = (p − w)(α − βp + τe).
Lemma 5. Under the manufacturer undertaking contract, the Stackelberg equilibrium exists provided the corresponding positivity condition 8kβ − τ² > 0 holds.
Result 2: Table 2 shows the optimal prices, the carbon reduction level, and the profits in the joint, retailer and manufacturer undertaking contracts.
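As an illustration of how the equilibria summarized in Table 2 can be obtained, the sketch below works through the manufacturer undertaking (M) contract by backward induction under the same assumed demand and cost forms; the J and R contracts follow the same pattern with a different order of moves. The closed forms in the comments are what this derivation yields, not values quoted from the paper.

```python
import sympy as sp

p, w, rhoM = sp.symbols('p w rho_M')
alpha, beta, tau, c, k = sp.symbols('alpha beta tau c k', positive=True)

# Stage 3: contractor chooses e to maximize rho_M*e - k*e**2/2, so e = rho_M/k
e = rhoM / k

# Stage 2: retailer chooses the retail price p given w and e
profit_R = (p - w) * (alpha - beta * p + tau * e)
p_resp = sp.solve(sp.diff(profit_R, p), p)[0]        # (alpha + beta*w + tau*e)/(2*beta)

# Stage 1: manufacturer chooses w and rho_M anticipating both responses
profit_M = (w - c) * (alpha - beta * p_resp + tau * e) - rhoM * e
sol = sp.solve([sp.diff(profit_M, w), sp.diff(profit_M, rhoM)], [w, rhoM], dict=True)[0]

w_star = sp.simplify(sol[w])         # c + 4*k*(alpha - c*beta)/(8*k*beta - tau**2)
e_star = sp.simplify(e.subs(sol))    # tau*(alpha - c*beta)/(8*k*beta - tau**2)
print(w_star, e_star, sep='\n')
```

The positivity condition 8kβ − τ² > 0 appears naturally in the denominators, consistent with the expressions used in Appendix A.3.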
Comparison and Analysis
In this section, we compare the equilibrium results among the three different contracts and present the underlying managerial implications. For ease of discussion, we use the subscripts R and M to denote the retailer and manufacturer, respectively. For example, π_{R+M}^R denotes the aggregate profit of the manufacturer and the retailer under the retailer undertaking contract.
When the investment coefficient is very high, the retail price is the highest under the J contract and the lowest under the R contract among the three contracts. This implies that when the contractor is less efficient, the retailer bears a high cost of carbon reduction, and thus a low retail price helps the retailer expand the market to relieve its carbon reduction burden. When k is intermediate, the retail price is the lowest in the M contract and again the highest in the J contract. When k is low, the retail price is also the lowest in the M contract, which is caused by the retailer not directly bearing the outsourcing cost, so the retailer can set a low retail price; it is the highest in the R contract, which stems from the fact that when the contractor is efficient, the carbon reduction burden of the retailer is relatively light, and thus the retailer sets a high retail price to maximize its profit.
Observation 2. For the payment and carbon reduction levels, when τ²/(4β) < k < 3τ²/(8β), we have ρ^M < ρ^J < ρ^R and e^M < e^J < e^R; and when k > 3τ²/(8β), we have ρ^M < ρ^R < ρ^J and e^M < e^R < e^J. (We refer to Appendix A.2 in the Appendix A for the proof.)
The payment level is the lowest in the M contract. In other words, the M contract is the least supportive of investment in reducing carbon emissions, as reflected by the payment level, which can be explained by the fact that the manufacturer is farther from the consumer market than the retailer. Therefore, the manufacturer is less acquainted with consumer preferences for the low carbon product. Hence, the M contract offers the lowest payment level to the contractor for the service of carbon emission reduction. This partly explains why many manufacturing companies (for example, Foxconn) are in a disadvantaged position in their current relationships with suppliers and retailers such as Apple.
When the contractor is efficient, the payment level is higher in the R contract than in the J contract. When the contractor is less efficient, the result is reversed. Thus, only when the investment coefficient is high does the J contract display its advantage of balancing the concerns of the manufacturer and retailer in facilitating the contractor's carbon reduction. Otherwise, the R contract has the advantage in facilitating carbon reduction.
Owing to the linear relationship between the carbon reduction and the payment levels, we could also obtain the quantitative relationships of the carbon reduction level under different payment contracts.Observation 3.For the wholesale price, we have w R < w M < w J .(We refer to Appendix A.3 in the Appendix A for the proof).
The wholesale price is the lowest in the R contract among the three contracts, which is consistent with our expectation. When the manufacturer bears the cost of carbon reduction paid to the contractor, this cost must be passed downstream to the retailer along the supply chain, which raises the wholesale price. In the R contract, the manufacturer does not bear the carbon reduction service fee; consequently, the wholesale price is lowest in the R contract.
The wholesale price is higher in the J contract than in the M contract, which may seem counterintuitive because the manufacturer shoulders only part of the carbon reduction cost in the J contract whereas it bears all of it in the M contract; the reason is that the payment level in the J contract is higher than that in the M contract.
Profits of Supply Chain Members
Observation 4. For the profit of the contractor, when k > 5τ²/(8β) we have π_C^M < π_C^R < π_C^J, and otherwise π_C^M < π_C^J < π_C^R. For the profit of the manufacturer, we have π_M^M < π_M^R. For the profit of the retailer, we have π_R^M < π_R^R. (We refer to Appendixes A.4-A.6 in the Appendix A for the proof.) In line with Observation 2, the profit of the contractor is lowest in the M contract owing to the lowest support from the manufacturer in that contract. A less efficient contractor earns the highest profit in the J contract and an efficient contractor earns the highest profit in the R contract, which is broadly consistent with the payment relationships among the three contracts. The difference lies in the demarcation point determining when the contractor counts as efficient: the requirement is stricter for the payment-level comparison (k < 3τ²/(8β)) than for the profit comparison here (k < 5τ²/(8β)). Therefore, when a contractor needs to choose between a GM-led supply chain and a Wal-Mart-led supply chain, selecting the latter may be sensible.
Considering that under the J contract the specific profit of the retailer or manufacturer depends on the payment level it takes, we simply compare the profits under the R and M contracts for the retailer and the manufacturer.
One might expect the retailer to earn a higher profit in the M contract and the manufacturer to earn a higher profit in the R contract, since in each case that party plays a free-rider role. However, Observation 4 shows that this intuition is only partly true (it only holds for the manufacturer). The profit of the manufacturer is higher in the R contract than in the M contract. As in the analysis of the wholesale price, the manufacturer bears no cost paid to the contractor in the R contract, whereas it bears all of it in the M contract. As a result, the manufacturer's profit is high in the R contract.
For the retailer, the profit is larger in the R contract than in the M contract, which suggests that the retailer's profit increment due to the added demand exceeds the contracting cost borne in the R contract. This shows the superiority of the R contract.
Observation 5. For the aggregate profit of the manufacturer and retailer, we have π_{R+M}^M < π_{R+M}^J < π_{R+M}^R, which are the sums of the manufacturer's and retailer's profits under the M, J and R scenarios, respectively. For the profit of the whole three-member supply chain, we have π_S^M < π_S^J < π_S^R. (Readers can refer to Appendixes A.7 and A.8 in the Appendix A for the proof.)
The aggregate profit in the M contract is the lowest among the three contracts, which results from the fact that in the M contract the manufacturer does not attach sufficient importance to consumers' preference for low carbon products. Therefore, the contractor has little incentive to cut carbon emissions because of the low payments from the manufacturer. The aggregate profit of the manufacturer and retailer is highest in the R contract rather than in the J contract. This is because the R contract is always "tactical": it pays the highest level only when the contractor is efficient. The J contract is not, as it offers the highest payment level when the contractor is inefficient. In this sense, the J contract does not make the "right" decision.
That the M contract yields the lowest supply chain profit follows from Observation 4, and Observation 5 also follows from Observation 4 when the contractor is highly efficient. When the contractor has low efficiency, the supply chain profit in the R contract is still higher than in the J contract: the contractor's profit gain in the J contract relative to the R contract is smaller than the corresponding loss in the aggregate manufacturer and retailer profit, i.e., π_C^J − π_C^R < π_{R+M}^R − π_{R+M}^J.
Concluding Remarks
Confronting the anxiety of global climate change around the world, we consider the supply chain operations by capturing consumers' low-carbon preference due to the sense of social responsibility and existence of carbon-reduction contractor in the market.We initially present the conditions where the supply chain should add a contractor to the supply chain, and conclude that only when the efficiency of carbon reduction contractor compensates the loss due to the multi-marginalization effect, does the supply chain benefit from extending itself by adding a contractor.In the extended supply chain, we design three different payment contracts including the joint, retailer and manufacturer undertaking contracts.The effects of the three payment contracts on supply chain performance are then analyzed.
We find that the profit of the extended supply chain is highest under the retailer undertaking contract and lowest under the manufacturer undertaking contract. The result reveals that the retailer payment contract presents a Pareto improvement compared with the manufacturer undertaking contract, because the retailer payment contract offers a higher profit for all three members of the extended supply chain, a lower wholesale price, and a higher carbon reduction level. This finding also conveys that the contracting payment decision should take the downstream of the supply chain into account. Comparing the retailer undertaking contract and the joint undertaking contract, the profits of the extended supply chain, the manufacturer and the retailer are higher in the retailer payment contract, while the contractor's profit, the payment level and the carbon reduction level in the retailer payment contract surpass those of the joint payment contract only when the carbon reduction technology of the contractor is efficient. In addition, the joint undertaking contract has an advantage in terms of carbon reduction, compared with the retailer undertaking contract, when the contractor is not very competent in carbon reduction.
This study can be extended in at least two directions for future research. First, we assumed that only one carbon reduction expert exists, and thus the supply chain with the manufacturer and retailer has only one choice if it needs the carbon reduction service of a contractor; allowing other contractors in the market would be more realistic. Second, we supposed that information is complete and that there is no moral hazard or adverse selection. An extension would allow supply chain members to hold their own private information.
Appendix A.3. The Wholesale Price. The wholesale price is higher in the J contract than in the R contract: w^J − w^R = w^J − (α + cβ)/(2β) = 3τ²(α − cβ)/(2β(16kβ − 3τ²)) > 0. Similarly, the wholesale price is higher in the J contract than in the M contract: w^J − w^M = 4kτ²(α − cβ)/((8kβ − τ²)(16kβ − 3τ²)) > 0. In summary, we have w^R < w^M < w^J.
Appendix A.4. The Contractor's Profit. Due to the requirement 4kβ − τ² > 0, the inequalities π_C^M − π_C^R < 0 and π_C^M − π_C^J < 0 hold. Therefore, the contractor's profit is higher in the R contract than in the M contract. Comparing the R and J contracts, we have π_C^R < π_C^J when k > 5τ²/(8β) and π_C^R > π_C^J when k < 5τ²/(8β). This shows that when k is high, the profit of the contractor in the J contract surpasses that in the R contract, while the result is reversed when k is low, which also corresponds to the payment level. Briefly, we have π_C^M < π_C^R < π_C^J when k > 5τ²/(8β), and π_C^M < π_C^J < π_C^R otherwise.
Table 2. The results in the three undertaking contracts.
|
v3-fos-license
|
2017-06-15T21:09:51.973Z
|
2014-06-17T00:00:00.000
|
3148346
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.g3journal.org/content/ggg/4/8/1539.full.pdf",
"pdf_hash": "61e1ae4ba36fab8364b0097d8921bcc0f7dfa60c",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3083",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "b69b534317ad25124ce791aecea3412073c3027b",
"year": 2014
}
|
pes2o/s2orc
|
Harnessing Natural Sequence Variation to Dissect Posttranscriptional Regulatory Networks in Yeast
Understanding how genomic variation influences phenotypic variation through the molecular networks of the cell is one of the central challenges of biology. Transcriptional regulation has received much attention, but equally important is the posttranscriptional regulation of mRNA stability. Here we applied a systems genetics approach to dissect posttranscriptional regulatory networks in the budding yeast Saccharomyces cerevisiae. Quantitative sequence-to-affinity models were built from high-throughput in vivo RNA binding protein (RBP) binding data for 15 yeast RBPs. Integration of these models with genome-wide mRNA expression data allowed us to estimate protein-level RBP regulatory activity for individual segregants from a genetic cross between two yeast strains. Treating these activities as a quantitative trait, we mapped trans-acting loci (activity quantitative trait loci, or aQTLs) that act via posttranscriptional regulation of transcript stability. We predicted and experimentally confirmed that a coding polymorphism at the IRA2 locus modulates Puf4p activity. Our results also indicate that Puf3p activity is modulated by distinct loci, depending on whether it acts via the 5′ or the 3′ untranslated region of its target mRNAs. Together, our results validate a general strategy for dissecting the connectivity between posttranscriptional regulators and their upstream signaling pathways.
KEYWORDS: RNA-binding proteins; cis-regulatory analysis; inference of protein-level regulatory activity; quantitative trait locus (QTL) mapping; Pumilio/FBF homology domain proteins (Puf3p, Puf4p)
The advent of high-throughput genotyping and gene expression profiling technologies has made it possible to treat messenger RNA levels as quantitative traits and map the cis- and trans-acting loci that drive their variation (Brem et al. 2002; Schadt et al. 2003; Smith and Kruglyak 2008; Lee et al. 2009). This has created new opportunities for studying genetic variation at the level of gene regulatory networks rather than individual genes. A common approach is to treat the mRNA expression levels as heritable traits and use them to identify expression quantitative trait loci (eQTL) hotspots that regulate the expression of a large number of genes. Detection of eQTL hotspots is then based on a clustering procedure that identifies loci with many significant eQTL linkages.
Integrative studies thus far have focused on regulation by DNA-binding transcription factors (Ye et al. 2009; Lee and Bussemaker 2010). However, posttranscriptional control of transcript stability by RNA-binding proteins (RBPs) is also critical for the regulation of mRNA abundance (Foat et al. 2005). An increasing number of studies report the involvement of posttranscriptional regulation by RBPs in human genetic disorders (Lukong et al. 2008; Cooper et al. 2009; Polymenidou et al. 2012; Yamazaki et al. 2012), and several studies have identified nucleotide motifs associated with posttranscriptional regulation by RBPs (Foat et al. 2005; Shalgi et al. 2005; Hogan et al. 2008; Riordan et al. 2011).
Even though many studies have shown that mRNA stability is often regulated through cis-regulatory elements in the 3′ untranslated region (UTR), examples also exist of stability regulation through the 5′ UTR of the transcripts, such as an upstream open reading frame (ORF) that can inhibit ribosomal scanning and promote mRNA decay (Vilela et al. 1999; Hatano et al. 2013). It has also been shown that binding by the heat shock protein HSP70 to a motif located in the 5′ UTR of the tumor suppressor gene SMAR1 stabilizes the transcript and leads to increased SMAR1 protein levels (Pavithra et al. 2010). Finally, the secondary structure of the 5′ UTR has been linked to mRNA stability (Cannons and Cannon 2002). Taken together, these studies show that it is important to include the 5′ UTR when searching for cis-regulatory elements that control mRNA stability and not just focus on the 3′ UTR.
In this article, we formulate and apply a general method for discovering genetic polymorphisms that account for differences in genome-wide mRNA abundance patterns between strains or individuals that reflect posttranscriptional regulation of mRNA stability. Because our approach uses the RNA sequence specificity of RBPs as prior information, we first systematically constructed sequence-to-affinity models for 15 RBPs. Our motif discovery procedure uses the MatrixREDUCE algorithm, which combines accurate biophysical modeling of protein–RNA interaction (Foat et al. 2005, 2006) with the use of high-throughput in vivo mRNA binding data (Riordan et al. 2011). As previously demonstrated, the resulting position-specific affinity matrices (PSAMs) can be used to infer changes in the protein-level regulatory activity of each RBP from the genome-wide pattern of changes in steady-state mRNA level (Foat et al. 2005).
For the systems genetics component of our study, we exploited parallel mRNA expression data and genotype data across a segregating population of yeast strains. The goal was to identify chromosomal loci that modulate the protein-level regulatory activity of a particular RBP. To this end, we first inferred RBP activities for each segregant from its genome-wide mRNA expression pattern. Next, we treated the activity of each particular RBP as a quantitative trait, and used linkage analysis to map genetic loci whose allelic variation has an effect on the RBP activity. We call these loci activity quantitative trait loci, or "aQTLs," by analogy with a study on DNA-binding transcription factors (Lee and Bussemaker 2010). Because we infer the RBP activities from the collective behavior of the targets of each particular RBP, our trait is far less noisy than the expression levels used as traits in eQTL analyses. Also because of the reduced number of marker/trait combinations that needs to be tested for significance compared with eQTL analyses, the statistical power to detect trans-acting genetic variation is greatly increased.
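The activity-inference and linkage steps described here can be pictured with a small sketch. The function and variable names below are hypothetical and the statistics are simplified (a plain two-group test per marker rather than a full linkage model); it is meant only to show the flow from expression data to aQTLs under the stated assumptions.

```python
import numpy as np
from scipy import stats

def infer_activity(log2_expr, affinity):
    """Regress one segregant's genome-wide log2 expression ratios on the PSAM
    affinity scores of each transcript; the t-value of the slope serves as a
    proxy for the RBP's protein-level regulatory activity in that segregant."""
    res = stats.linregress(affinity, log2_expr)
    return res.slope / res.stderr

def aqtl_scan(activities, genotypes):
    """Treat the inferred activities (one value per segregant) as a quantitative
    trait and test each marker (rows coded 0/1 across segregants) for linkage."""
    scores = []
    for marker in genotypes:
        t, p = stats.ttest_ind(activities[marker == 0], activities[marker == 1])
        scores.append(-np.log10(p))
    return np.array(scores)
```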
Our analysis yielded causal aQTL relationships for a number of RBPs, including Puf3p and Puf4p, members of the Pumilio/FBF (PUF) homology domain family. The regulatory activities of these factors typically change in opposite directions in response to changes in nutrient conditions, suggesting regulation by a single genetic locus. However, we found that the activities of these two PUF proteins are linked to distinct genetic loci. We mapped distinct aQTLs depending on whether Puf3p acts via binding to the 5′ or to the 3′ UTR of its target mRNAs. We also performed experiments to determine the effect of allelic variation at the IRA2 locus on Puf4p posttranscriptional activity. We found that the activity of Puf4p depends on the allele at the IRA2 locus in the BY but not the RM strain background. It is likely that other polymorphisms in RM make expression of Puf4p targets less sensitive to Puf4p activity.
Real-time (RT)-PCR
Yeast were grown under standard laboratory conditions to early log phase in yeast extract peptone dextrose (YPD) and harvested by centrifugation. Total RNA was extracted using the MasterPure Yeast RNA Purification Kit (Epicentre, Madison, WA) and 2.5 ng was used to generate cDNA with the iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA). 1/20th of this reaction mix was used as template for RT-PCR using 1× iQ SYBR Green Supermix (Bio-Rad, Hercules, CA). All reactions were carried out in triplicate using a StepOnePlus machine (Applied Biosystems, Foster City, CA) and the data were analyzed using AB software. Three technical replicates were carried out for each strain (see Figure S6).
Amplification efficiencies were determined for each set of primers using serial dilutions of genomic template from 8 to 0.25 ng/mL. Relative amounts of RRS1 were determined, using THI6 as a reference (Pfaffl 2001). Each cDNA synthesis reaction was carried out in parallel with a control sample lacking reverse transcriptase, so that the signal due to contaminating genomic DNA could be determined.
RBP data set
For our motif search, we analyzed genome-wide immunoaffinity purification data for 45 different RNA binding proteins (Hogan et al. 2008). In this study, bound mRNA molecules were isolated at mid-log phase from cells growing in YPD media. The mRNA was hybridized to a microarray. For each RBP, two to six experimental replicates were performed, for a total of 132 immunoprecipitation (IP) experiments.
Segregant mRNA expression and genotype data sets
For the aQTL analysis, we used genome-wide mRNA expression data for 108 haploid segregants from a genetic cross between two parental strains: BY4716 and RM11-1a (Smith and Kruglyak 2008). As differential expression values, we used the log 2 -ratios between segregants and a reference consisting of a mixture of the BY and RM strains. Genotype data for the same segregants at 2956 markers was obtained (Brem et al. 2002).
Preprocessing of RBP binding data For our motif analysis, we took log 2 -ratios between the microarray intensities for the immunoprecipitated sample and input sample, respectively, for each RBP. To reduce the effect of outliers, we applied a rankquantile transformation based on the standard normal distribution. For each RBP, x ¼ ðx 1 ; x 2 ; . . . ; x n Þ denotes the vector of binding log 2 -ratios across all genes, sorted in ascending order. We first ranked the data points (x) in each column (i.e. IP experiment). Let PrðX , xÞ denote the cumulative distribution function for a standard normal random variable X. As illustrated in Figure S1, we then defined x i as the i th quantile, PrðX , x iÞ ¼ rankðx i 2 1 2 Þ n and used it to replace the i th element in the vector x. This transformation reduces the effect of outliers.
Motif discovery
To model RBP-mRNA binding occupancy, we used a biophysical model similar to that presented by Foat et al. (2006). We assume that the free protein concentration is low relative to the dissociation constant for protein-RNA interaction (Ghaemmaghami et al. 2003; Miller et al. 2008; Zhu et al. 2009). The occupancy N of sequence S of the mRNA of gene g by an RBP f is then given by N_gf(S) ≈ [f] K_gf(S), where [f] is the free protein concentration and the total affinity K_gf of S is defined as the sum of the relative affinities over all sliding windows of length L_f. Here, L_f represents the length of the binding site.
The relative affinity of the window starting at position i is the product over positions j = 1, ..., L_f of the matrix elements w_f(j, b), so that K_gf(S) = Σ_i Π_j w_f(j, b_{i+j−1}), where b indicates the nucleotide identity of the base at coordinate i + j − 1 within sequence S. The aforementioned formula assumes that the contribution to the binding free energy at each position within the binding site is independent. The set of w's thus represents the PSAM of RBP f.
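The total-affinity computation can be sketched as follows; the PSAM is assumed to be an (L_f × 4) array of relative affinities, and the function name is ours.

```python
import numpy as np

BASE_INDEX = {'A': 0, 'C': 1, 'G': 2, 'U': 3}

def total_affinity(seq, psam):
    """K_gf(S): sum over all windows of length L_f of the product of the
    position-specific relative affinities w(j, b) for the bases in that window."""
    L = psam.shape[0]
    total = 0.0
    for i in range(len(seq) - L + 1):
        w = 1.0
        for j in range(L):
            w *= psam[j, BASE_INDEX[seq[i + j]]]
        total += w
    return total
```

Under the low-concentration assumption above, the predicted occupancy is then proportional to the free protein concentration times this total affinity.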
We assume that the relative abundance between the amount of mRNA bound to an RBP and the control sample is approximately proportional to the mRNA occupancy. We can then use the binding data to train the PSAMs (Foat et al. 2005). Our RBP motif discovery approach is shown in Figure S2. To detect the motifs, we used the MatrixREDUCE program from the REDUCE Suite package (bussemakerlab.org/software/REDUCE) to perform a genome-wide fit of a PSAM to the rank-quantile log2-ratios of RBP binding data. MatrixREDUCE fits a multiple linear regression model originally developed by Foat et al. (2005, 2006). We used an enhanced version of MatrixREDUCE that can infer PSAMs and explain the signal variation in multiple data sets simultaneously. The MatrixREDUCE algorithm consists of two steps: seed motif finding and PSAM optimization. The seed finding step seeks to identify, within the set of all possible nucleotide sequences of a specified length, the one whose occurrence best correlates with the binding signal. The motif size is allowed to vary from 1 to 10 nucleotides in our analysis. Once the optimal motif was identified, it was used as a seed for the optimization procedure. For an optimal motif of length L, a matrix of size 4 × L, representing each nucleotide A, C, G, and T/U at positions 1 to L, was constructed. At every column (i.e., position in the seed motif) the best nucleotide element was given a value equal to 1 and unacceptable nucleotides (i.e., the other three elements) were given a very small number close to zero. The optimization step aims to find the optimal weight matrix by minimizing a least-squares error function of the form Σ_{e,g} (Z_{f,e,g} − C_{f,e} − F_{f,e} K_{g,f})², where the indices f, e, and g label RBP, IP experiment replicate, and gene, respectively. Here, Z represents the rank-quantile transformed binding data for the training set. The training set was obtained by randomly selecting 50% of the data from each transformed IP experiment. The motif seed finding step and the subsequent optimization step of the PSAM are both performed on the training set. Once the optimization step converges for this PSAM, the residuals of Z are then used for the next seed finding and optimization iteration. The intercept C represents the genome-wide basal expression level when no preferred motif is present on the sequence. The slope F reflects a combination of the activity of the RBP under the media conditions in which binding was assayed and the efficiency of the protocol for the particular technical replicate. We split every column of the binding data randomly into equally sized training and test sets and ran MatrixREDUCE on the training sets of all experimental replicates of an RBP simultaneously (using command line option -mf). For every RBP, we searched for binding motifs on the whole mRNA, 5′ UTR, ORF and 3′ UTR sequences separately. For Idh1p, Nrd1p, Tdh3p, and Vts1p, we also ran the software without the -mf argument because for all of these factors one of the experimental duplicates was missing more than 40% of the data points. We obtained the Saccharomyces cerevisiae UTR sequences from RNA-seq data (Nagalakshmi et al. 2008). mRNA ORF sequences were downloaded from the Saccharomyces Genome Database (www.yeastgenome.org). For all RBPs, we searched for PSAMs of length 1-10 iteratively with a p-value cut-off of 0.001.
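For a single PSAM and a single IP experiment, the regression step behind the error function above reduces to fitting an intercept C and slope F by ordinary least squares; a minimal version (ours, not the REDUCE Suite code) is sketched below.

```python
import numpy as np

def fit_intercept_and_slope(Z, K):
    """Fit Z_g ~ C + F * K_g across genes for one IP experiment, where Z is the
    rank-quantile transformed binding signal and K the total affinity per gene.
    Returns (C, F, t_F), with t_F the t-value of the slope."""
    mask = np.isfinite(Z) & np.isfinite(K)
    X = np.column_stack([np.ones(mask.sum()), K[mask]])
    coef, _, _, _ = np.linalg.lstsq(X, Z[mask], rcond=None)
    resid = Z[mask] - X @ coef
    sigma2 = resid @ resid / (mask.sum() - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return coef[0], coef[1], coef[1] / np.sqrt(cov[1, 1])
```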
Computational validation of PSAMs
We calculated the affinity scores of the discovered PSAMs using the AffinityProfile software from the REDUCE Suite package. We then calculated the Pearson t-value and Spearman p-value for the correlation of the affinity scores to the test data set. We further tried to capture low-specificity flanking sequences for PSAMs that passed a validation step on the test set (i.e., the remaining 50% of each IP experiment). We extended the flanking sides by at most one nucleotide position at a time (i.e., added the column (1,1,1,1) to the flanks of the PSAM) and ran the OptimizePSAM software from the same package using the PSAM with added columns on the sides as seed. We continued adding columns to the sides of the PSAM until no nucleotide's weight at the added side flanks was less than 0.1. In the case of the Nrd1p PSAM optimization we neglected this criterion, where the matrix element for the G nucleotide at position 8 was equal to 1 × 10⁻⁷. Further flank addition and optimization of this PSAM resulted in optimization divergence after several rounds. At the end of each optimization cycle, we validated the newly extended PSAM by calculating the Pearson t-value and Spearman p-value on the test set. After this step, we ran OptimizePSAM using the full data on the PSAMs that passed the validation steps. We selected the final set of PSAMs by performing a specificity test. For each PSAM, we calculated the Pearson t-values for the correlation of each of the 132 IP experiments to the affinity score on the 5′ UTR, ORF, and 3′ UTR separately. We only accepted those RBP/region combinations for which the PSAM affinity score was exclusively correlated to the IP data from which it was derived. In the case of YLL032C, we used the complete mRNA sequence. For YLL032C, no statistically significant motif was obtained when MatrixREDUCE was run using the 5′ UTRs, ORFs, or 3′ UTRs separately. Only the search that was performed using the complete mRNA sequences detected a motif for YLL032C.
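As a concrete illustration of this hold-out validation, the minimal R sketch below computes the two statistics described above. The vectors affinity and z_test are hypothetical placeholders for the PSAM affinity scores and the transformed binding signal of the held-out genes; this is not the REDUCE Suite code.

affinity <- c(0.9, 0.1, 0.4, 0.05, 0.7, 0.2)   # hypothetical affinity scores
z_test   <- c(1.2, 0.1, 0.6, 0.0, 0.9, 0.3)    # hypothetical held-out binding signal

pearson  <- cor.test(affinity, z_test, method = "pearson")
spearman <- cor.test(affinity, z_test, method = "spearman")

pearson$statistic   # Pearson t-value used to score the PSAM
spearman$p.value    # Spearman p-value used as the acceptance criterion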
Functional assessment of novel motifs
We used two different approaches to functionally assess the discovered PSAMs: Gene Ontology (GO) enrichment analysis (Ashburner et al. 2000) and correlation to 173 stress conditions (Gasch et al. 2000). We used GO enrichment scoring analysis to detect the underlying regulatory program, cellular state, or cellular component for the novel motifs. For each GO category, we tested whether the affinity scores for a PSAM on ORF or UTR sequences were associated with a specific biological pathway. We applied the nonparametric Wilcoxon-Mann-Whitney test to determine whether the affinity scores of mRNA regions within a particular GO category have a different distribution than the affinity scores for all other mRNAs. We used an iterative procedure for removing the effect of redundant nested GO categories (Boorsma et al. 2008). We only considered GO categories with at least 10 genes. To correct for multiple testing, we performed a Bonferroni correction on the resulting p-values, accepting only categories with p-values smaller than 0.01/N, where N is the number of unique GO categories.
To perform the GO enrichment analysis, we used packages GO.db and org.Sc.sgd.db for Saccharomyces cerevisiae from the Bioconductor website within the R statistical programming environment (www. Bioconductor.org).
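A simplified sketch of this GO scoring step is given below. It assumes that affinity is a named numeric vector of PSAM affinity scores per gene and that go2genes is a list mapping GO category identifiers to gene names; both are hypothetical placeholders standing in for data that the paper draws from GO.db and org.Sc.sgd.db, and the sketch omits the iterative removal of nested categories.

go_wilcoxon <- function(affinity, go2genes, min_genes = 10, alpha = 0.01) {
  # keep only categories with at least min_genes scored genes
  go2genes <- Filter(function(g) sum(names(affinity) %in% g) >= min_genes, go2genes)
  pvals <- vapply(go2genes, function(genes) {
    in_cat <- names(affinity) %in% genes
    # compare the affinity distribution inside the category with all other genes
    wilcox.test(affinity[in_cat], affinity[!in_cat])$p.value
  }, numeric(1))
  # Bonferroni: accept only categories with p < alpha / N
  pvals[pvals < alpha / length(pvals)]
}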
To further validate novel motifs, we correlated the affinity scores of the 25 factors to expression data from 173 different stress conditions. The stress conditions include heat shock, exposure to oxidative or reductive chemical agents, nutrients or amino acid starvation, and changes in osmolarity. We performed least-squares multiple linear regression of the genome-wide mRNA expression levels of each condition to the affinity scores of all of the selected RBP/region combination and compared the t-values of the regression coefficients among different conditions.
Inferring segregant-specific RBP activities
From the RBP motif discovery analysis we obtained 25 independent RBP/region combinations. As in Lee and Bussemaker (2010), we used the affinity scores of the obtained PSAMs as predictors of differential mRNA expression (Foat et al. 2006; Bussemaker et al. 2007). The study by Lee and Bussemaker (2010) showed that the effect of trans-acting polymorphisms on mRNA expression via the activity of transcription factors is independent of the effect due to allelic variation in cis-regulatory sequences. Because this study focused on trans-acting genetic variation, and also to simplify the analysis, all affinity scores were calculated based on the transcript sequences of S288c, a strain isogenic to BY. We performed genome-wide multiple regression of each segregant's mRNA expression log2-ratios on the 25 RBP/region combinations to infer segregant-specific activity levels of the RBPs:
y_gs = Σ_f b_fs K_fg + ε_gs,

where y_gs represents the differential mRNA level of gene g for segregant s relative to the reference. Here, the regression coefficient b_fs represents the activity level of RBP f for segregant s, whereas K_fg represents the aggregate affinity of the 5′ UTR, ORF, 3′ UTR, or the complete mRNA sequence of gene g of the BY strain for the factor under consideration, as mentioned previously.
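A minimal R sketch of this inference step is shown below. It assumes that y is a genes × segregants matrix of log2 expression ratios and K a genes × RBP/region matrix of aggregate affinities; both names are hypothetical placeholders, and the one-regression-per-segregant structure is all the sketch is meant to convey.

infer_activities <- function(y, K) {
  # one multiple regression per segregant; the coefficients play the role of b_fs
  apply(y, 2, function(y_s) coef(lm(y_s ~ K))[-1])  # drop the intercept
}
# activities <- infer_activities(y, K)  # rows: RBP/region combinations, columns: segregants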
aQTL mapping

Significant aQTL regions were discovered by splitting the multiple regression coefficients between BY and RM at every marker and testing for the significance of the difference between the distributions of the two groups of coefficients, using the composite interval mapping (CIM) method for maximum resolution (Zeng 1994). CIM uses multiple regression on multiple markers to obtain a precise mapping of the QTL. We used the CIM implementation in the R/qtl package (Broman et al. 2003). The LOD score, an acronym for "logarithm of the odds ratio," was calculated to check for linkage. We calculated the LOD score to test the linkage of the inferred RBP activities at each locus. We performed 200 independent random permutations on the columns of the expression data (i.e., segregants), while preserving the genotype data, to obtain LOD score thresholds at the 1% false discovery rate (FDR) level. We obtained this threshold for each RBP/region combination separately.
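The following R sketch illustrates the linkage scan and the permutation-based threshold in simplified form. It uses single-marker regression as a stand-in for CIM (the paper itself uses the R/qtl implementation), and activity and geno are hypothetical placeholders for one RBP's inferred activities per segregant and a markers × segregants genotype matrix coded 0 = BY, 1 = RM.

lod_scan <- function(activity, geno) {
  n <- length(activity)
  apply(geno, 1, function(g) {
    rss0 <- sum((activity - mean(activity))^2)        # null model: no marker effect
    rss1 <- sum(resid(lm(activity ~ g))^2)            # single-marker regression
    (n / 2) * log10(rss0 / rss1)                      # LOD score at this marker
  })
}

perm_threshold <- function(activity, geno, n_perm = 200, level = 0.01) {
  # permute the trait over segregants while keeping the genotypes fixed,
  # and record the genome-wide maximum LOD of each permutation
  maxima <- replicate(n_perm, max(lod_scan(sample(activity), geno)))
  quantile(maxima, 1 - level)
}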
To confirm that the detected aQTL regions for the RBPs are modulated by trans-acting factors and not dominated by a single-gene eQTL, we repeated the analysis after eliminating three groups of genes: (i) genes that encode RBPs; (ii) genes fully or partly located within 10 kb of aQTL markers; and (iii) genes whose expression variation maps to a marker within 20 kb of aQTL markers and that score an affinity of at least 50% of the maximum for the RBP under consideration (see Figure S5). To find the latter group, we carried out eQTL analysis using the expression of each gene as a trait and calculated the LOD score for every marker using the CIM method. We combined these three groups of genes and removed them from the affinity and expression data sets for each RBP separately. This way, the activity calculation for each factor was not affected by the elimination of unrelated genes.
Protein–protein interaction data
Protein–protein interaction data for yeast were downloaded from BioGRID (thebiogrid.org) in April 2012. We used these data to detect any known genetic or physical interactions with the genes located in aQTL regions.
Validation of predicted locus–RBP associations
We used gene expression profiles for two mutant strains growing in glucose medium collected by Smith and Kruglyak (2008), in which the IRA2 alleles were swapped between the BY and RM strains. We label the strain carrying the RM allele of IRA2 in the BY background as (RM@IRA2) and the strain carrying the BY allele of IRA2 in the RM background as (BY@IRA2). The reference sample used for the gene expression measurements was pooled parental mRNA (BY and RM). To obtain the net effect of the IRA2 allele replacement on the genome-wide mRNA levels (y_g), we subtracted the mean log-ratio of the related background from that of each mutated strain (shown for the RM@IRA2 strain as an example):

y_g = log₂-ratio(RM@IRA2)_g − ⟨log₂-ratio(BY)⟩_g,

where ⟨·⟩ denotes the mean over the background profiles. We performed multiple regression between the aforementioned data vector and the affinity scores of the 25 RBP/region combinations. Similarly, we calculated the relative mRNA expression for the RM strain when IRA2 was replaced by the BY allele in the RM background and carried out the same multiple regression analysis. To capture the effect of the IRA2 allele swap between the two backgrounds, we subtracted the regression coefficients between the two cases for all 25 combinations. We then permuted the two y vectors over all genes 1000 times independently to calculate the statistical significance threshold at the 1% FDR level (|y| > 2.7).
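A minimal R sketch of this allele-replacement test is given below. It assumes that y_swap holds the background-corrected log2 expression change after the IRA2 allele swap and K the genes × 25 RBP/region affinity matrix; both names are hypothetical placeholders, and the permutation scheme is shown in simplified form for a single background.

fit   <- lm(y_swap ~ K)
t_obs <- summary(fit)$coefficients[-1, "t value"]   # observed t-values per RBP/region

# permutation null: reshuffle the response over genes and refit
t_null <- replicate(1000, {
  summary(lm(sample(y_swap) ~ K))$coefficients[-1, "t value"]
})
threshold <- quantile(abs(t_null), 0.99)            # 1% tail used as the cutoff
significant <- names(t_obs)[abs(t_obs) > threshold]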
Linear model analysis of quantitative RT-PCR data
For each of the three technical replicates of the eight strains, we calculated the difference in C_T values between RRS1 (test) and THI6 (control). Because of the inverse relation between mRNA level and C_T, we subtracted the test value from the control value. We performed a least-squares fit of the following linear model to estimate the effects of IRA2 and PUF4 on the mRNA level of RRS1, as well as the genetic interaction between IRA2 allelic status and PUF4 deletion:

y = β₀ + β_IRA2 x_IRA2 + β_PUF4 x_PUF4 + β_IRA2:PUF4 x_IRA2 x_PUF4 + ε,

where y denotes the normalized expression difference defined above. Here each independent variable takes the value x = 1 when the IRA2 allele is RM, or the PUF4 gene is deleted, respectively, and x = 0 otherwise. We fit the model separately for two subsets that were selected based on the background strain, BY or RM.
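The R sketch below shows how such a two-factor model with an interaction term can be fit. The data frame, its column names, and the numeric values are invented purely to make the example runnable; they are not the measurements reported in the paper.

d <- data.frame(
  expr     = c(0.1, 0.2, 0.1, -3.0, -2.9, -3.1, 1.9, 1.8, 2.0, -0.4, -0.5, -0.3),
  ira2_rm  = rep(c(0, 1, 0, 1), each = 3),   # 1 = RM allele of IRA2
  puf4_del = rep(c(0, 0, 1, 1), each = 3)    # 1 = PUF4 deleted
)
# main effects plus the IRA2 x PUF4 interaction, as in the model above
fit <- lm(expr ~ ira2_rm * puf4_del, data = d)
summary(fit)   # t-values and p-values for allele, deletion, and interaction terms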
Data
For motif discovery, we used RNA immunoaffinity purification data that included a total of 132 IP experiments for 45 RBPs from Hogan et al. (2008). For the aQTL search, we used parallel genotyping and genome-wide mRNA expression data (collected in rich-media conditions) for 108 segregants from a cross between two haploid yeast strains: BY4716 (BY) and RM11-1a (RM) (Smith and Kruglyak 2008).
RBP motif discovery
Our motif discovery procedure is summarized in Figure 1A and Figure S2. We used the MatrixREDUCE program from the REDUCE Suite software package, which takes as inputs the nucleotide sequences and RBP binding log 2 -ratios for all mRNAs. To reduce the effect of outliers, we applied a quantile-based transformation to the binding data (see the section Materials and Methods). To define RNA sequences, we used the annotation from (Nagalakshmi et al. 2008) and extracted 59 and 39 UTRs, ORFs, and complete mRNA nucleotide sequences. For every RBP, we performed a genome-wide motif search on complete mRNA transcripts, ORFs, 59 and 39 UTRs separately. We did this to allow for functional differences within each transcript. In some cases, we also observed that using the complete mRNA sequence hindered our ability to discover motifs. For instance, for Nrd1p and Puf2p, we could only detect statistically significant binding motifs using the 39 UTRs. After the training step, using a random sample of 50% of the data, we calculated the affinity scores using the derived position-specific affinity matrices (PSAMs; see the section Materials and Methods) and only selected those validated using the remaining 50% of the data. We further optimized these PSAMs by adding flanking sequence (up to one nucleotide on either side) to capture low-specificity bases not identified during the training step. A final optimization step using the complete data set yielded PSAMs for 20 of 45 RBPs (see Table S1 and Table S5).
Regulation of mRNA stability is typically carried out through protein interactions via the 3′ UTR (Grzybowska et al. 2001; Mignone et al. 2002; Shalgi et al. 2005); however, there are exceptions: yeast Khd1p represses FLO11 expression by binding to its coding region (Wolf et al. 2010) and Msl5p binds a specific motif near the intron–exon boundary during splicing (Berglund et al. 1997; Garrey et al. 2006). To accommodate binding at multiple locations along a transcript, we scored the correlation between the 132 IP experiments and the affinity of each mRNA region (i.e., 5′ UTR, ORF, and 3′ UTR) separately. We also required that each PSAM was most strongly correlated with the IP data of the factor from which it was derived. Five of the 20 PSAMs we discovered (Idh1p, Mrn1p, Puf1p, Rna15p, and Yra2p) did not pass this specificity test (Figure S4). Figure 2A shows the sequence logos for the final 15 PSAMs, whereas Figure 2B lists the 25 significant RBP/region combinations we detected.
There was an exception for Scp160p, for which the aggregated affinity across the ORFs was more correlated to the Bfr1p IP experiment (green dots). This was expected because Bfr1p associates with cytoplasmic mRNP complexes containing Scp160p (Lang et al. 2001). There is a large gap between the relevant IP experiments (red dots) and the rest of the IP experiments (blue dots) for the affinity of 3′ UTRs to Pub1p, Puf2p, Puf3p, Puf4p, and Puf5p, indicating that these PSAMs are highly specific to the binding data from which they were derived.
Refined PSAMs for RBPs of known sequence specificity

Of the 15 specific PSAMs we discovered, 12 are broadly consistent with motifs derived previously from the same binding data (Gerber et al. 2004; Hogan et al. 2008). Our motif for Gbp2p, GRNGNNGR (R is A/G), is predictive for binding in the ORF. Gbp2p is involved in mRNA export from the nucleus to the cytoplasm. The motif HGGUGW (H is A/C/U, W is A/U) previously reported for this protein (Riordan et al. 2011) is compatible with our finding. Khd1p is involved in the asymmetric localization of Ash1p in daughter cells, which is a transcription inhibitor of the mating type switch protein encoded by the HO gene. Khd1p binds to CNN repeats in coding regions of its targets in vitro (Hasegawa et al. 2008). A more recent study reported enrichment of YCAY (Y is C/U) in the mRNAs bound to Khd1p (Wolf et al. 2010). Msl5p is part of the splicing initiation complex and binds to the branch-point sequence UACUAAC (Berglund et al. 1997; Garrey et al. 2006), in agreement with the PSAM we identified for this protein. Nab2p is involved in poly(A) tail formation control and export of mRNA from the nucleus to the cytoplasm (Hector et al. 2002; Kelly et al. 2010). Consistent with our finding, the Nab2p PSAM is enriched with adenine (Kim Guisbert et al. 2005). The PSAM we found using Nrd1p binding data has a core motif CUUG. This protein is a subunit of the Nrd1p-Nab3p-Sen1p complex, which mediates the termination of small nucleolar RNAs (Vasiljeva et al. 2008). It has been reported that Nrd1p binds to GUAR and Nab3p recognizes UCUU or CUUG (Carroll et al. 2004; Lunde et al. 2011; Porrua et al. 2012). Because Nab3p and Nrd1p form a complex, it is not surprising that we identified the Nab3p motif when analyzing the Nrd1p binding data. Indeed, we found a highly similar motif when using Nab3p binding data (p-value < 10⁻¹⁶, Spearman rank correlation between aggregated 3′ UTR affinities). The motif we obtained for Pin4p looks similar to the motif reported by Hogan et al. (2008). In the case of Pub1p, a poly(U) binding protein that is essential for the stability of many mRNAs (Anderson et al. 1993; Matunis et al. 1993; Duttagupta et al. 2005; Hogan et al. 2008; Li et al. 2010), our motif is indeed a U-rich element. The YLL032C gene encodes an unannotated protein that may interact with ribosomal complexes (Fleischer et al. 2006). Our algorithm found an AUACC motif, as reported previously (Hogan et al. 2008).
PUF proteins
Among the RBPs for which we were able to identify a binding motif are four members of the PUF family. Not much is known about the physiological role of Puf2p. It interacts preferentially with mRNAs that encode membrane-associated proteins (Gerber et al. 2004). In a recent study, it was shown computationally and experimentally that Puf2p binds to a dual UAAU motif with a 3-nucleotide linker (Yosefzon et al. 2011). Our PSAM search algorithm captures the same motif. The Puf2p binding consensus motif is distinct from the UGUA-containing motifs bound by Puf3p, Puf4p, and Puf5p (Gerber et al. 2004; Foat et al. 2005; Miller et al. 2008). Puf3p binds nearly exclusively to mRNAs that encode mitochondrial proteins (Gerber et al. 2004) and is involved with mitochondrial localization of nuclear-encoded mRNAs (Saint-Georges et al. 2008). Puf3p enhances COX17 mRNA degradation by binding to a UGURNAUA motif in its 3′ UTR (Olivas and Parker 2000; Jackson et al. 2004). Puf4p is known to bind to a UGUAUAUUA motif in the 3′ UTR of the HO endonuclease mRNA and, together with Puf5p, negatively regulates it (Hook et al. 2007; Miller et al. 2008). Puf4p is also known to bind preferentially to mRNAs that encode ribosomal proteins (Gerber et al. 2004). The PSAMs we found for Puf3p, Puf4p and Puf5p are all in agreement with the motifs reported earlier.

Figure 1 Overview of our computational methodology. (A) Motif discovery. We used MatrixREDUCE on rank-quantile transformed binding data (training set) and mRNA sequences. We repeated this analysis by replacing the complete mRNA sequences by 5′ untranslated regions (UTRs), open reading frames (ORFs), and 3′ UTRs separately. We accepted the position-specific affinity matrices (PSAMs) only if they passed validation on the test set and the specificity test (exclusive correlation of affinity scores for an RNA-binding protein to its own binding data). (B) Activity quantitative trait loci (aQTL) method. Genome-wide affinity scores were calculated using the PSAM and sequences. The affinity scores were used to infer segregant-specific RNA-binding protein (RBP) activities. The activities were obtained by multiple linear regression of differential mRNA expression levels on the affinity scores. The regression coefficients represent the RBP activity levels for each segregant. For linkage analysis, the activities were treated as quantitative traits. For each factor we split the inferred activities of segregants at each marker based on the inherited allele (BY or RM) at that marker. We then tested whether the distribution of the activity levels between the two subsets is significantly different using composite interval mapping. Whenever the distribution of these inferred activity levels of an RBP depends on the genotype variation at a specific chromosomal marker, we obtain a high logarithm of the odds ratio score at that marker, indicating the presence of an aQTL (at the 1% false discovery rate level).
Novel binding specificities for Scp160p, Sik1p, and Tdh3p
Our method detected novel binding specificities for three RBPs for which previous motif finding attempts had failed. To corroborate these findings, we analyzed the correlation of PSAM-based affinity scores with differential mRNA expression data across 173 different stress conditions (Gasch et al. 2000). It was previously shown that this procedure allows us to quantitatively estimate changes in the protein-level regulatory activity of RNA-binding trans-acting factors (Foat et al.). We also performed GO analysis on the in vitro affinity scores using the Wilcoxon-Mann-Whitney test (see the section Materials and Methods).
Scp160p is an RBP involved in the mating response (Guo et al. 2003). It contains multiple heterogeneous nuclear ribonucleoprotein K-homology domains. Scp160p affinity for ORF sequences is highly anticorrelated with expression in YPD stationary phase relative to early log phase, YPD, nitrogen depletion, and heat shock conditions, and positively correlated for cold shock and hypo-osmotic shock conditions (see Table S2). GO analysis based on the mRNA affinity scores for Scp160p showed an association with the nitrogen compound metabolic process category (p-value < 10⁻⁸, Wilcoxon-Mann-Whitney test).
Sik1p (Nop56p) is a component of the box C/D snoRNP complexes that direct 2′-O-methylation of pre-rRNA during its maturation. Our detected motif for Sik1p is enriched in both 5′ UTRs and ORFs of mRNAs. We observed positive correlation of ORF affinity scores with differential expression during YPD stationary phase growth and after heat shock (see Table S2). It could be that Sik1p has a direct or indirect role in rRNA methylation regulation under heat shock, and that could explain why we observed positive correlation for the heat shock conditions. GO analysis showed significant association with ribosome and rRNA-related categories (p-value < 10⁻⁹).
Tdh3p encodes glyceraldehyde-3-phosphate dehydrogenase, which is required during gluconeogenesis and is essential for yeast cells to grow on noncarbohydrate sources such as ethanol and glycerol (McAlister and Holland 1985). The affinity score for Tdh3p correlates positively with expression changes after exposure to menadione, a synthetic nutritional compound, and negatively to sorbitol and nitrogen depletion. GO scoring analysis for this factor showed the categories "intrinsic to membrane," "thiolester hydrolase activity," "glucosyltransferase activity," and "glycerophospholipid metabolic process" to be significantly associated (p-value < 10⁻⁶ in all cases).
Dissecting genetic variation in RBP activity across segregants
It has been experimentally validated that by analyzing the mRNA differential expression levels of putative targets of a transcription factor, changes in the protein-level regulatory activity of that factor can be inferred (Boorsma et al. 2008). As for the activity levels, it has been shown that they vary among members of a population of an organism and can be treated as a quantitative trait for genetic linkage analysis to capture polymorphisms that modulate the activity of the transcription factors (Lee and Bussemaker 2010). Here we applied a similar approach to identify trans-acting loci controlling the activity of RBPs that influence mRNA expression levels through posttranscriptional control of their half-lives (Foat et al. 2005).
We combined segregant-specific genome-wide mRNA expression profiles with prior information about the posttranscriptional regula-tory network to infer differential RBP activity levels in each of 108 haploid segregants from a genetic cross between a lab (BY) and a wild (RM) strain (Smith and Kruglyak 2008). Figure 1B and Figure S3 illustrate the steps involved in this analysis. We used the 25 RBP/ region combinations from the motif discovery analysis to calculate the aggregate affinity scores ( Figure 2B). Multiple regression on affinities was then performed independently for each segregant mRNA expression profile, and the regression coefficients were interpreted as estimates of the corresponding RBP activities. We used the CIM method (Zeng 1994) to map aQTLs for each of the 25 RBP/region combinations. To account for multiple testing, we calculated LOD score thresholds corresponding to a 1% FDR by performing 200 permutations (see the section Materials and Methods). Table 1 summarizes the results of our analysis. We were able to map at least one aQTL for Khd1p, Msl5p, Pub1p, Puf2p, Puf3p, Puf4p, and the putative regulator YLL032C.
Segregation of trans-acting alleles decouples Puf3p and Puf4p activity
The binding specificities of Puf3p and Puf4p differ with respect to the length of the gap between the UGUA and AUA submotifs (cf. Figure 2A). As a consequence, they have distinct target sets, as measured in terms of the correlation in binding affinity across all transcripts (Figure 3A). The same observation holds across the larger set of RBPs: with few exceptions, they control independent sets of targets ( Figure 3B).
It has been previously noted that Puf3p and Puf4p activity levels respond oppositely when cells are exposed to different sugar sources (Foat et al. 2005). Consistently, when we analyzed the activity variation for both factors from genome-wide expression data across a variety of stress conditions (Gasch et al. 2000), we observed a marked negative correlation (Figure 4A; r = −0.67, p-value < 10⁻¹⁶). By contrast, inferred Puf3p and Puf4p activity levels do not correlate across the segregants from the BY-RM cross (Figure 4B; r = −0.001, p-value = 0.99). This suggests that their respective activities are modulated by distinct genetic loci. Thus, our systems genetics approach provides us with a unique opportunity to dissect the connectivity between the upstream TOR signaling pathway and these two factors.
Recovering MKT1 as an aQTL for Puf3p acting via the 3′ UTR

The LOD score profile shown in Figure 5 highlights the genomic locations at which allelic variation drives variation in Puf3p activity. When we used 3′ UTR sequences (Figure 5, C and F), our method recovered an aQTL on chromosome XIV. This locus was previously discovered computationally and experimentally (Lee et al. 2009). The authors suggested that the MKT1 gene at this locus regulates P-body abundance, which in turn regulates Puf3p target abundance; they also tested the effect of MKT1 deletion on Puf3p target mRNA expression in an RM background. The genome-wide mRNA expression profile of the MKT1Δ strain was used to demonstrate that Puf3p targets are significantly down-regulated. The Mkt1 protein contains two amino acid polymorphisms between the RM and BY strains: G30D and R453K (Lee et al. 2009).
Besides MKT1, we identified a second aQTL for Puf3p activity. This locus, which is marginally significant, contains the LAP4 gene on chromosome XI (Figure 5, C and E). It has been reported that Lap4p physically interacts with Puf3p (Breitkreutz et al. 2010) and contains four coding polymorphisms between RM and S288c, a strain isogenic to BY.
A distinct aQTL modulates Puf3p acting via the 5′ UTR

Puf3p is believed to interact with the 3′ UTRs of its targets (Olivas and Parker 2000; Gerber et al. 2004; Jackson et al. 2004). No evidence of functional interaction with the 5′ UTR has been reported to our knowledge. As mentioned previously, Puf3p activity is modulated by an aQTL at the MKT1 locus on chromosome XIV when acting through the 3′ UTRs of its targets. However, as described above, when we analyzed the binding data for Puf3p, we found that binding motif matches in the 5′ UTR were also predictive of transcript binding. Surprisingly, we found that the activity of Puf3p when acting through the 5′ UTR is modulated by a locus on chromosome II that is distinct from the MKT1 locus (Figure 5, A and D). This aQTL region contains POP7, a gene that is reported to have a positive genetic interaction with Puf3p (Wilmes et al. 2008). Sequence alignment between the RM and S288c strains revealed a coding polymorphism at amino acid position 58 of the Pop7p sequence: the histidine (H) in the RM strain is a glutamine (Q) in S288c. Pop7p is a subunit of both RNase MRP and nuclear RNase P; RNase MRP (mitochondrial RNA processing) cleaves pre-rRNA, whereas nuclear RNase P cleaves tRNA precursors to generate mature 5′ ends and facilitates turnover of nuclear RNAs (Chamberlain et al. 1998; Houser-Scott et al. 2002). This makes the H58Q polymorphism a prime candidate for experimental validation. The same aQTL region also contains two mitochondria-related genes, EHT1 and FZO1. The coding region of the former contains three coding and three noncoding polymorphisms, and that of the latter contains two coding and ten noncoding polymorphisms. Regardless of the identification of the precise causal single-nucleotide polymorphisms at this locus, our results suggest that Puf3p acts by distinct mechanisms depending on where it binds within the transcript, and responds to distinct upstream pathways.
Genetic modulation of Puf4p activity
The aQTL LOD score profile for Puf4p is shown in Figure 6. In this case, regulatory activity seems to be modulated by the same locus on chromosome XV regardless of whether Puf4p acts through the 5′ UTR, ORF, or 3′ UTR of its target mRNAs. Strikingly, however, the direction of the change in Puf4p activity between the BY and RM alleles at this locus takes opposite values depending on whether Puf4p acts via the 3′ or the 5′ UTR (Figure 6, D and F). eQTL analysis of the mRNA levels of the PUF4 gene revealed a linkage to the same locus on chromosome XV. This finding suggests that the detected difference in the activity of Puf4p between BY and RM is due to transcriptional and/or posttranscriptional variation between the two strains. One of the genes located in the aQTL region is IRA2, which encodes a GTPase-activating protein that negatively regulates Ras signaling and controls intracellular cAMP levels (Tanaka et al. 1990). Puf4p interacts with Tpk1p, the cAMP-dependent protein kinase catalytic subunit (Cannon and Tatchell 1987; Toda et al. 1987; Ptacek et al. 2005). Indeed, the genome-wide expression profiles associated with allele replacement from BY@IRA2 to RM@IRA2 (Smith and Kruglyak 2008) correlated significantly with the Puf4p affinity scored on the 3′ UTR (r = +0.10; t = +5.6; p-value = 2.03 × 10⁻⁸; see the section Materials and Methods for details).
The expression response to IRA2 allele replacement did not correlate with the affinity scored on the ORF or 5′ UTR. This finding suggests that a polymorphism that is genetically linked to but outside IRA2 is responsible for modulating Puf4p activity as it acts via the 5′ UTR. Possible causal genes are REX4 and BRX1, both of which have putative roles in pre-rRNA processing and ribosome assembly. Puf4p is known to interact with mRNAs encoding nucleolar rRNA-processing factors (van Hoof et al. 2000; Kaser et al. 2001; Eppens et al. 2002). The coding region of REX4 contains three coding single-nucleotide polymorphisms between the RM and S288c strains: the asparagine (N) at position 34, phenylalanine (F) at position 155, and lysine (K) at position 248 in RM are lysine (K), leucine (L), and arginine (R) in S288c, respectively. There is a single noncoding polymorphism at position 243 within the coding region of BRX1; the thymine in RM is a cytosine in S288c.
The activities of five other RBPs (Khd1p, Msl5p, Pub1p, Puf2p, and YLL032C) are also linked to the IRA2 locus. Using the IRA2 allele replacement data again to test these aQTL associations, we found that only the affinity of Puf2p via the 5′ UTR (r = −0.079; t = −4.4; p-value = 1.05 × 10⁻⁵) was significantly correlated with differential mRNA expression. Even though we found the linkage to the IRA2 locus and a significant correlation between the IRA2 allele replacement data and the affinity score of Puf2p, no evidence has been reported for a connection between them in the literature thus far.
Validation of detected loci using IRA2 allele replacement

To test our computational prediction that the activity of Puf4p is influenced by the allelic variation at the IRA2 locus, we used RT-PCR to monitor expression of RRS1, a representative target of Puf4p. To normalize our RRS1 measurements, we used THI6 as a nontarget control, based on the criteria that it did not show any notable predicted binding affinity for Puf4p according to our PSAM, was not enriched for Puf4p binding in the study by Hogan et al. (2008), and showed no significant expression difference after IRA2 allele replacement (Smith and Kruglyak 2008). We carried out three technical replicates for each strain (see Figure S6 and Table S3). When active, Puf4p destabilizes its target mRNAs through interaction mostly via the 3′ UTR of its targets (Hook et al. 2007; Miller and Olivas 2011). Indeed, we observed that the expression of RRS1 increases 1.8-fold when PUF4 is deleted in the BY background (Figure 7). The expression of RRS1 in the same BY background decreased by 3.1-fold when IRA2 was replaced with the RM allele, indicating that Puf4p was more active. This is consistent with the prediction by our aQTL analysis that Puf4p activity is modulated by the IRA2 locus (Figure 6, C and F). Since our aQTL analysis treats each locus as independent, we performed the same analysis in the RM background (Figure 7). Surprisingly, we did not observe any change in RRS1 expression either upon deletion of PUF4 or allele replacement for IRA2. This suggests that in the RM background RRS1 becomes insensitive to Puf4p activity.
To analyze the results from the quantitative RT-PCR, we performed a least-squares fit of a linear model to the difference in normalized mRNA expression level (see the section Materials and Methods). Performing the regression on the four strains with the BY background, we obtained an excellent fit (R² = 0.96), with a significant negative effect of allele replacement to RM@IRA2 (t-value = −9.2; p-value = 1.6 × 10⁻⁵), a significant positive effect of PUF4 deletion (t-value = +4.7; p-value = 1.6 × 10⁻³), and a significant positive interaction between these two predictors (t-value = +4.4; p-value = 2.4 × 10⁻³). The signs of the regression terms are all consistent with our aQTL-based findings and the known role of PUF4 as a destabilizer of mRNA transcripts (Hook et al. 2007; Miller and Olivas 2011). In addition, the positive sign of the interaction term indicates that once Puf4p is absent, the allelic identity of IRA2 does not have any significant effect on RRS1. The fit parameters of the regression on the four strains with the RM background were all insignificant. It could be that the effects are less severe on this particular target. Taken together, these results validate our aQTL prediction for IRA2-Puf4p, but also point to additional genetic complexity that remains to be elucidated.
DISCUSSION
We have presented a method for identifying trans-acting genetic modulators of gene expression, which uses mRNA expression and genotyping data from a segregating population. We used this method to detect aQTLs of RBPs. The activities are inferred from RBP binding preferences and the expression data. The inferred activity levels of the RBPs are treated as quantitative traits and were mapped to chromosomal markers using genotype data. Our method aims to identify posttranscriptional regulatory mechanisms underlying genetic variation in gene expression levels.

Figure 5 Activity quantitative trait loci (aQTL) profiles for Puf3p. Results for the trans-acting genetic modulators of Puf3p activity, mapped using our aQTL method. The significance thresholds at the 1% false discovery rate level are calculated using 200 independent permutations of the expression data among segregants (red horizontal lines). We obtained distinct aQTL profiles for Puf3p when using affinity scores on (A) the 5′ untranslated regions (UTRs), (B) open reading frames (ORFs), and (C) the 3′ UTRs. Significant aQTL peaks remained after filtering out the three groups of genes mentioned previously. We identified POP7 as a putative modulator of Puf3p activity levels when inferred from the 5′ UTRs for the locus on chromosome II (A). The corresponding split of the activity levels at this marker is shown in (D). We detected two possible modulators, LAP4 on chromosome XI and MKT1 on chromosome XIV, for Puf3p activity levels when inferred from the 3′ UTRs (C). (E) and (F) present the activity level splits at these two loci.
We applied our aQTL method to a data set for 108 segregants from a genetic cross between two yeast strains (Smith and Kruglyak 2008). RBP sequence specificities were obtained by our motif discovery approach. We calculated the affinity scores for the 25 RBP/region combinations and detected 12 locus–RBP linkages, of which only one was previously reported. We recovered the MKT1 locus on chromosome XIV as a putative modulator of Puf3p activity inferred from 3′ UTRs (Lee et al. 2009). Interestingly, we found different loci as modulators of Puf3p when using the 5′ and 3′ UTRs. We also predicted and experimentally validated IRA2 as a possible modulator of Puf4p activity when the 3′ UTR affinities were used to infer the activities. Allelic variation at the IRA2 locus has been shown to be an important determinant of phenotypic differences between the BY and RM strains (Smith and Kruglyak 2008; Chen et al. 2009; Litvin et al. 2009; Lee and Bussemaker 2010). Taken together, these results show that post-transcriptional regulation accounts for at least some of these differences.
Our motif discovery approach is based on biophysical modeling of the binding of RBPs to target RNAs. It detects potential regulatory elements within RNA sequences that are recognized by diverse RBPs. Our algorithm searches for binding sites in the form of position-specific affinity matrices (PSAMs). Most approaches either impose a threshold to filter RBP binding data or use gene expression data in combination with mRNA half-lives to identify stability motifs associated with RBPs. Measuring mRNA half-lives requires transcription arrest, which can interfere with the post-transcriptional control of mRNAs under study (Grigull et al. 2004). Hence, the interpretation and usage of mRNA half-lives should be performed cautiously. By contrast, our model is not based on defining a target set or mRNA half-lives.

Figure 6 Activity quantitative trait loci (aQTL) profiles for Puf4p. Results of the trans-acting genetic modulators of Puf4p activity levels mapped using our aQTL method. The significance thresholds at the 1% false discovery rate level were calculated using 200 independent permutations of the expression data (horizontal red lines). The peaks on chromosome XV remained after filtering out the three groups of genes mentioned in the text. (A–C) aQTL profiles for Puf4p activity inferred from 5′ untranslated region (UTR), open reading frame (ORF), and 3′ UTR affinity scores, respectively. Puf4p activity showed a significant linkage to a locus on chromosome XV irrespective of the mRNA region used for affinity calculation. This locus includes the IRA2 gene. The effect of allelic variation at the IRA2 locus on Puf4p activity changes direction depending on whether Puf4p acts via 5′ UTRs or 3′ UTRs (D, F).
The biophysical model that underlies our method assumes that binding of RBPs to mRNA transcripts occurs at nonsaturating concentrations. This indeed seems to be a reasonable assumption. For example, the total number of Puf3p and Puf4p proteins in haploid S288c cells was found to be 846 and 721 molecules, respectively, and therefore the total protein concentration for these factors is ~30 nM (Ghaemmaghami et al. 2003). The dissociation constant for the optimal binding sequence for Puf3p equals ~3 nM (Zhu et al. 2009) and that for Puf4p ~14 nM (Miller et al. 2008). Considering that the free protein concentration is likely to be much smaller than the total protein concentration, and that the dissociation constant increases for suboptimal binding sites, we believe that our assumption of lack of binding saturation is valid.
The PSAMs discovered for 12 RBPs agree with previously reported consensus motifs from other studies. In addition, we discovered three novel motifs for Scp160p, Sik1p, and Tdh3p. The functional validation results from GO enrichment analysis and condition-specific genome-wide mRNA expression data suggest that these novel motifs could be the binding sites for Scp160p, Sik1p and Tdh3p or their cofactors. Since we used binding data obtained using immunoaffinity purification, the pulled-down mRNA molecules could plausibly be bound indirectly by the RBP, in which case the motifs obtained would reflect the RNA binding specificity of the cofactor(s). Experimental follow-up will be required to rule out this possibility and further validate our new findings.
Taken together, our findings highlight the importance of posttranscriptional regulation that reflects in the mRNA stability by RBPs. Our approach is not yeast-specific and can be applied to other organisms.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2013-12-22T00:00:00.000
|
1802398
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://academic.oup.com/nar/article-pdf/42/5/3450/7698566/gkt1109.pdf",
"pdf_hash": "7c8d086b5a6add85dcecdce3e675b279766fb932",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:3084",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"sha1": "28f1d7643b4a4a5906355ca4efaa6519d6f0ffbe",
"year": 2013
}
|
pes2o/s2orc
|
Base-displaced intercalation of the 2-amino-3-methylimidazo[4,5-f]quinoline N2-dG adduct in the NarI DNA recognition sequence
2-Amino-3-methylimidazo[4,5-f]quinoline (IQ), a heterocyclic amine found in cooked meats, undergoes bioactivation to a nitrenium ion, which alkylates guanines at both the C8-dG and N2-dG positions. The conformation of a site-specific N2-dG-IQ adduct in an oligodeoxynucleotide duplex containing the iterated CG repeat restriction site of the NarI endonuclease has been determined. The IQ moiety intercalates, with the IQ H4a and CH3 protons facing the minor groove, and the IQ H7a, H8a and H9a protons facing the major groove. The adducted dG maintains the anti-conformation about the glycosyl bond. The complementary dC is extruded into the major groove. The duplex maintains its thermal stability, which is attributed to stacking between the IQ moiety and the 5′- and 3′-neighboring base pairs. This conformation is compared to that of the C8-dG-IQ adduct in the same sequence, which also formed a ‘base-displaced intercalated’ conformation. However, the C8-dG-IQ adduct adopted the syn conformation, placing the Watson-Crick edge of the modified dG into the major groove. In addition, the C8-dG-IQ adduct was oriented with the IQ CH3 group and H4a and H5a facing the major groove. These differences may lead to differential processing during DNA repair and replication.
We have synthesized phosphoramidite reagents of the C8- and N2-dG-IQ adducts in which the Buchwald-Hartwig palladium-catalyzed N-arylation was the key C-N bond-forming step (64-67). These adducts have been site-specifically incorporated into the NarI restriction sequence, 5′-d(CG1G2CX3CC)-3′, using automated solid-phase synthesis. The G3 position is a hot spot for two-base frameshift deletions in bacterial mutagenesis assays, while the G1 position is not (68-71). In addition, human DNA polymerase (hpol) η produces two-base deletions when replicating past the N2-dG-IQ adduct at position G3, in vitro (72). Thus, this sequence provides a platform for investigating sequence-specific conformational perturbation of DNA structure by IQ adducts, in relationship to their biological processing (68-71,73,74). Previously, we determined the conformation of the C8-dG-IQ adduct at the G3 position of the NarI sequence, which exhibited a base-displaced intercalated conformation (75).
Presently, we have determined the conformation of the N 2 -dG-IQ adduct in 5 0 -d(C 1 T 2 C 3 G 4 G 5 C 6 X 7 C 8 C 9 A 10 T 11 C 12 )-3 0 :5 0 -d(G 13 A 14 T 15 G 16 G 17 C 18 G 19 C 20 C 21 G 22 A 23 G 24 )-3 0 ; X = N 2 -dG-IQ adduct (Chart 1). This duplex contains the recognition sequence of the NarI restriction endonuclease, in which the G 3 nucleotide (X 3 in the NarI sequence and X 7 in this study) represents a hot spot for two-base deletions. The IQ moiety intercalates, with the IQ H4a and CH 3 protons facing the minor groove, and the IQ H7a, H8a and H9a protons facing the major groove. The adducted nucleotide maintains the anti-conformation about the glycosyl bond. The complementary dC is extruded into the major groove. Nevertheless, the duplex maintains its thermal stability. This is attributed, in part, to stacking between the IQ moiety and the 5 0 -and 3 0 -neighboring base pairs. The base-displaced intercalated conformation of the N 2 -dG-IQ adduct differs from that of the C8-dG-IQ adduct, and may also be compared with that of the N 2 -dG AAF adduct (76), providing insight as to the persistence of the N 2 -dG-IQ adduct (63) and its processing during replication and repair events.
Thermal melting experiments
UV melting temperatures were collected on a Cary 100 Bio UV spectrometer using 0.5 OD of duplex in 1 ml of solution containing 0.1 M NaCl, 10 mM NaH2PO4 and 0.05 mM Na2EDTA (pH 7.0). The temperature was increased from 25 to 75 °C at a rate of 1 °C per min.
NMR spectroscopy
The N2-dG-IQ modified and the unmodified duplexes were prepared at concentrations of 570 and 810 µM, respectively, and placed into 3 mm diameter micro NMR sample tubes (77). The samples were prepared in 0.1 M NaCl, 50 mM Na2EDTA and 10 mM NaH2PO4 (pH 7.0). To observe non-exchangeable protons, the samples were exchanged with D2O. 1H NMR spectra were recorded at 600 or 800 MHz. The spectra were collected at 15 °C; NOESY experiments were conducted at mixing times of 150, 200 and 250 ms with a relaxation delay of 1.8 s. Additional experiments were conducted with a longer relaxation delay to evaluate NOE distances arising from the adenine H2 protons, which typically exhibit longer T1 relaxation values. The data were collected with 512 points in the t1 dimension and 2048 points in the t2 dimension. Chemical shifts were referenced to water. For the observation of exchangeable protons, the samples were dissolved in 9:1 H2O:D2O. 1H NMR spectra were recorded at 600 or 800 MHz at 15 °C. The data were collected with 512 points in the t1 dimension and 2048 points in the t2 dimension. A mixing time of 250 ms was used. Water suppression was performed using the WATERGATE pulse sequence (78). The spectra were processed using the TOPSPIN software (Bruker Biospin Inc., Billerica, MA, USA).

Scheme 1. The bioactivation of IQ. Cytochrome P450-mediated N-hydroxylation followed by N-acetylation of the hydroxylamine and deacetylation forms an electrophilic aryl nitrenium ion. The aryl nitrenium ion alkylates guanine in DNA either via the IQ amine nitrogen to form the C8-dG-IQ adduct, or alternatively, alkylates DNA via the C5 position of the IQ ring to form the N2-dG-IQ adduct.
NMR experimental restraints
The spectral data were evaluated using the program SPARKY (79). The intensities of NOE cross peaks were measured by volume integrations. The bounds for overlapped peaks were optimized manually. Noise was assigned half the intensity of the weakest peak, and motion was assumed to be isotropic. Experimental intensities were combined with intensities obtained from complete relaxation matrix analysis (CORMA) of starting model to generate a hybrid intensity matrix (80,81). The intensities were converted to distances with the program MARDIGRAS (82), which refined the hybrid intensity matrix. Calculations were performed using 150, 200 and 250 ms mixing time data and 2, 3 and 4 ns isotropic correlation times. Evaluation of the resulting distance data allowed creation of upper and lower bound distance restraints that were used in restrained molecular dynamics (rMD) calculations.
Restrained molecular dynamics calculations
An unmodified B-DNA model (83) was used as a starting structure. The guanine at position G7 was replaced by the N2-dG-IQ adduct using the program INSIGHT II (Accelrys Inc., San Diego, CA, USA). Partial charges for the N2-dG-IQ adduct were calculated with the B3LYP/6-31G* basis set in GAUSSIAN (84). The starting structure was energy minimized for 1000 cycles. Simulated annealing protocols (85) used for the rMD calculations were conducted with the parm99 force field (86), using the program AMBER (87). Force constants of 32 kcal/mol/Å² were applied for distance and torsion angle restraints. The generalized Born model (88) was used for solvation. The salt concentration was 0.1 M. The molecule was coupled to the bath to control the temperature during simulated annealing (89). First, calculations were performed for 20 ps (20 000 steps), recording data every ps, by the following protocol: during steps 0-1000, the system was heated from 0 to 600 K with a coupling of 0.5 ps. During steps 1001-2000, the system was kept at 600 K. The system was then cooled from 600 to 100 K during steps 2001-18 000 with a coupling of 4 ps. Further cooling from 100 to 0 K occurred during steps 18 001-20 000 with a coupling of 1 ps. After initial cycles of refinement a longer 100 ps (100 000 steps) calculation was performed by the following protocol: during steps 0-5000, the system was heated from 0 to 600 K with a coupling of 0.5 ps. During steps 5001-10 000, the system was kept at 600 K. The system was cooled from 600 to 100 K during steps 10 001-90 000 with a coupling of 4 ps. Additional cooling from 100 to 0 K occurred during steps 90 001-100 000 with a coupling of 1 ps. Structure coordinates were saved after each cycle and were subjected to potential energy minimization. CORMA (80,81) was used to compare intensities calculated from these emergent structures with the distance restraints. Helicoidal analysis was performed using the CURVES+ web server (90,91).

Chart 1. The N2-dG-IQ adduct is positioned at X7, which corresponds to the G3 frameshift-prone position of the NarI sequence.
Oligodeoxynucleotide containing the N 2 -dG-IQ adduct
The N2-dG-IQ adduct was incorporated into 5′-d(CTCGGCXCCATC)-3′ using automated solid-phase synthesis (67). The N2-dG-IQ adduct was located at position X7, corresponding to position G3 in the NarI sequence. The modified oligodeoxynucleotide was purified by C18 reverse phase HPLC and characterized by MALDI-TOF mass spectrometry in negative ion mode [m/z 3777.7, calcd for (M − H) 3776.6]. Thermal melting (Tm) profiles of 0.5 A260 units of the IQ-modified duplex were monitored at 100 mM NaCl (1 ml volume) as a function of temperature by absorbance at 260 nm. An unmodified duplex was evaluated under the same conditions to provide a basis of comparison. The Tm of the modified duplex was 63 °C, within experimental error of the unmodified duplex (Supplementary Figure S1). Thus, the N2-dG-IQ adduct did not reduce the stability of this oligodeoxynucleotide. This result differed from our previous report, which had indicated that this adduct destabilized this duplex (67). Subsequent analysis of the previous sample by mass spectrometry revealed that the complement strand was not correct, accounting for the discrepancy. Table 1 lists the correct Tm values of the N2-dG-IQ adduct at the three positions of the NarI sequence.
NMR
The modified duplex yielded well-resolved NMR spectra with narrow line shapes for the non-exchangeable protons at 15 C. The best spectral quality for the exchangeable protons was obtained at 5 C.
Non-exchangeable DNA protons
The base aromatic and deoxyribose anomeric protons were assigned using established procedures ( Figure 1) (92,93). The intensity of the X 7 H8 to X 7 H1 0 NOE was not changed in the presence of the adduct, indicating minimal change in the conformation of the glycosyl torsion angle. In the complementary strand, the intensity of the NOE between C 18 H1 0 and G 19 H8 was weakened. The N 2 -dG-IQ adduct did not induce breaks in the sequential pattern of NOEs between the aromatic base protons and the anomeric protons. With the exception of the adduct site, the internucleotide NOEs were characteristic of a B-type duplex. The adenine H2 protons were assigned based upon NOEs to the thymine imino protons of the respective A:T base pairs. With the deoxyribose H1 0 assignments in hand, the remainder of the deoxyribose protons was assigned from a combination of NOESY and COSY data. The assignments of the non-exchangeable DNA protons are summarized in Supplementary Table S1.
Exchangeable DNA protons
The imino and amino proton regions of the NOESY spectrum are shown in Figure 2. The assignments were made using established methods (94). The N 2 -dG-IQ adduct perturbed Watson-Crick hydrogen bonding. At the X 7 :C 18 base pair, the X 7 imino proton resonance was broadened, probably due to an enhanced rate of exchange with water. The amino protons for C 18 were not detected. No NOE was observed between the X 7 and G 19 imino protons, perhaps due to the broadening of the X 7 imino proton. The chemical shifts of the X 7 and G 17 imino protons were almost isochronous. It was not possible to determine if a NOE between these two protons existed. All other base pairs were assigned, with the exception of the two terminal base pairs C 1 :G 24 and C 12 :G 13 . The imino protons from the terminal base pairs were exchange broadened. Overall, the data suggested that the duplex maintained Watson-Crick hydrogen bonding, with the exception of the modified base pair ( Figure 2B). The assignments of the exchangeable protons are summarized in Supplementary Table S2.
IQ protons
The IQ protons, consisting of the CH3 group, the H4a proton, and the H7a, H8a and H9a spin system, were assigned from a combination of COSY and NOESY data (Figure 3). The CH3 resonance was observed at 3.57 ppm. It displayed an intense NOE to the H4a proton, whose resonance was observed at 8.55 ppm. A 3J coupling between the H8a proton (δ 6.55 ppm) and the H9a proton (δ 7.65 ppm) was observed in the COSY spectrum. The H8a proton also exhibited an NOE to the H7a proton (δ 7.6 ppm). The 3J coupling between H8a and H7a exhibited weak intensity in the COSY spectrum. This was attributed to the presence of the nitrogen atom in the ring, which broadened the H7a resonance. This effect was also observed for the C8-dG-IQ adduct, for which the COSY cross-peak between H7a and H8a was only observed between 25 and 35 °C (95). The IQ amine proton was not assigned.

Table 1. Thermal melting temperatures (Tm measurements) of NarI duplexes containing the N2-dG-IQ adducts.
Chemical shift perturbations
The N 2 -dG-IQ adduct resulted in localized chemical shift perturbations, involving the modified base pair X 7 :C 18 and the neighboring C 6 :G 19 and C 8 :G 17 base pairs ( Figure 4). At the modified X 7 :C 18 base pair, the X 7 H8 resonance shifted 0.4 ppm downfield relative to the G 7 H8 resonance in the unmodified duplex. In contrast, the C 18 H6 and C 18 H1 0 resonances shifted 1 and 0.8 ppm downfield, respectively. At the 5 0 -neighbor C 6 :G 19 base pair, the C 6 H6 resonance shifted upfield by 0.2 ppm, whereas the C 6 H1 0 resonance shifted downfield by 0.4 ppm. The G 19 H8 and H1 0 resonances each shifted upfield by 0.4 ppm. At the 3 0 -neighbor C 8 :G 17 base pair, In the imino proton region of the spectrum, the X 7 and G 17 N1H imino resonances, at 11.57 and 11.59 ppm, respectively, exhibited upfield chemical shifts of >1 ppm from those of the unmodified duplex, at 13.24 and 13.16 ppm, respectively. The X 7 N 2 H amine resonance was observed at 9.5 ppm.
NOEs between IQ and DNA

The CH3, H4a, H7a and H8a protons of IQ exhibited NOEs to the C8, G17, C18 and G19 bases (Table 2). The pattern of NOEs involving H9a was difficult to establish due to resonance overlap with G17. The CH3 group showed medium-strength NOEs to X7 H1′ and C8 H6 and weak NOEs to C8 H1′ and C8 H5. It also showed an NOE to the G19 N1H imino proton. The H4a proton showed a strong NOE to X7 H1′, medium NOEs to C8 H5, the X7 N2H amine proton and the G19 N1H imino proton, and weak NOEs to X7 H2′ and X7 H2″. The H7a proton exhibited weak NOEs to G17 H1′ and the X7 N2H amine proton. Three NOEs from X7 to C18 were observed: a medium-strength NOE between H9a and C18 H1′, and medium-strength NOEs between H8a and C18 H2′ and H2″. The H8a proton showed a medium-strength NOE to G19 H3′ and weak NOEs to G17 H1′ and G19 H8. Some 30% of the NOEs from the IQ ring were to protons in the complementary strand, whereas another 46% were to other IQ protons and protons of the modified base. The remaining 24% of the NOEs were to neighboring bases in the modified strand.
Conformational Refinement
After the unmodified duplex was constructed using B-DNA coordinates (83), the guanine at position G 7 was replaced by the N 2 -dG-IQ adduct. The partial charges for the N 2 -dG-IQ adduct are provided in Supplementary Figure S2. Potential energy minimization provided an energy minimized starting duplex. A total of 329 distance restraints consisting of 127 inter-and 202 intranucleotide distances (Table 3) were obtained using the program MARDIGRAS (81,82), from 15 C NOESY data. Similar distance restraints were obtained if the data were collected at 150, 200 or 250 ms mixing times. These restraints included 16 DNA-IQ distances. A total of 49 Watson-Crick hydrogen-bonding restraints were applied for all of the base pairs except for the modified X 7 :C 18 base pair. An additional 100 phosphodiester backbone and 20 deoxyribose pseudorotation restraints for base pairs not proximal to the site of modification were obtained from canonical values derived from B-DNA (83), consistent with the spectroscopic data indicating that the duplex maintained a B-DNA like structure. A series of rMD calculations were performed using a simulated annealing protocol in which the generalized Born solvation model (88) was used, with a salt concentration of 0.1 M. The emergent structures were subjected to potential energy minimization before further analysis, which involved a 100 ps rMD calculation using the protocol described above, again followed by potential energy minimization.
The pairwise rmsd analysis of structures emergent from the rMD calculations was used to measure the precision of the structural refinement. Ten structures were chosen based on the lowest deviations from the experimental distance and dihedral restraints ( Figure 5). These exhibited an rmsd of 0.012 Å in distances and 2.5 in torsion angles (Table 3). There were 56 distance violations with a maximum penalty of 0.187 kcal/mol and a total distance penalty of 2.3 kcal/mol. There were 50 torsion angle violations with a maximum penalty of 0.177 kcal/ mol and a total torsion angle penalty of 2.8 kcal/mol. The maximum pairwise rmsd distances were 1.12 Å . These structures were averaged and subjected to potential energy minimization.
The accuracy of the refined structures was assessed by complete relaxation matrix analyses (80,81), which compared intensities calculated from the refined structures with the distance restraints (Figure 6). The sixth-root residual R1x value of the average structure was 8.4%, and the individual values for intra-nucleotide restraints (8.5%) and inter-nucleotide restraints (8.3%) were of similar magnitude, indicating agreement with the NOE data. Nucleotide G19 exhibited a greater R1x value of 17.1%, suggesting that it was not as well refined; this was attributed to several NOEs involving G19 being overlapped with other resonances. The structural statistics are summarized in Table 4.
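One common definition of the sixth-root residual used in complete relaxation matrix analysis is R1x = Σ|Io^(1/6) − Ic^(1/6)| / Σ Io^(1/6), where Io and Ic are observed and calculated NOE intensities; the exact weighting applied by the refinement software may differ in detail. A minimal sketch of that definition:

```python
def r1x(observed, calculated):
    """Sixth-root residual between matched observed and calculated NOE intensities."""
    num = sum(abs(o ** (1 / 6) - c ** (1 / 6)) for o, c in zip(observed, calculated))
    den = sum(o ** (1 / 6) for o in observed)
    return num / den

# Toy example with made-up intensities (not data from this study):
print(f"{100 * r1x([1.0, 0.5, 0.2], [0.9, 0.55, 0.25]):.1f}%")
```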
Conformation of the N2-dG-IQ Adduct
The modified nucleotide (X7) remained in the anti-conformation about the glycosyl bond and was displaced toward the major groove. The IQ ring was intercalated and oriented such that the H4a proton and the CH3 group faced into the minor groove, whereas the H7a, H8a and H9a protons faced into the major groove (Figure 7). The IQ ring was angled by ~15° with respect to the modified guanine, but otherwise remained largely in plane with the damaged base. The helix was unwound between C6 and X7, with a reduced helicoidal twist of 30°; this was partially compensated between X7 and C8, where the twist increased by 9°. At base pair X7:C18, the roll of the X7 purine decreased by 24°; this was compensated at base pair C8:G17, where the roll decreased by 12°. Consequently, the N2-dG-IQ adduct induced a bend of 10° in the duplex. The IQ ring exhibited stacking with the flanking base pairs (Figure 8); it was stacked between G17 and G19 of the complementary strand, i.e., between the C6:G19 and C8:G17 base pairs. The complementary nucleotide, C18, was extruded into the major groove and did not exhibit stacking with the neighboring base pairs. The base opening between X7 and C18 increased.

Figure 5. Superposition of ten potential energy minimized structures emergent from the rMD calculations of the N2-dG-IQ modified duplex, using distance restraints from the 250 ms NOESY data. The positions of the modified X7 nucleotide and the C18 nucleotide in the complementary strand are indicated. The maximum pairwise rmsd between these 10 structures was 1.12 Å.
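The overall bend quoted above can be pictured as the angle between local helix-axis vectors fitted to the segments flanking the lesion; in practice such parameters come from a dedicated helicoidal-analysis program. The sketch below uses made-up axis vectors purely to illustrate the geometry.

```python
import numpy as np

def bend_angle(axis_above, axis_below):
    """Angle in degrees between two local helix-axis vectors."""
    a = np.asarray(axis_above, dtype=float)
    b = np.asarray(axis_below, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Two made-up axis vectors differing by roughly 10 degrees:
tilted = [0.0, np.sin(np.radians(10.0)), np.cos(np.radians(10.0))]
print(bend_angle([0.0, 0.0, 1.0], tilted))  # ~10.0
```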
DISCUSSION
The N2-dG-IQ DNA adduct has been of interest following reports that it is more persistent than the C8-dG-IQ adduct in rodents and primates that were fed IQ in their diet (63). The synthesis of this adduct into oligodeoxynucleotides (67) has allowed the conformation of the N2-dG-IQ adduct at the G3 position of this sequence to be determined. This is a hot spot for two-base frameshift deletions in bacterial mutagenesis assays (68-71,73,74). In addition, human DNA polymerase (hpol) η produces two-base deletions when replicating past the N2-dG-IQ adduct at the reiterated G3 position of the NarI sequence in vitro (72).
Conformation of the N2-dG-IQ adduct
The IQ ring intercalates when the N2-dG-IQ adduct is positioned at the frameshift-prone G3 position of the NarI sequence (Figure 7). The strong NOE intensities of the IQ H4a and CH3 protons to the X7 and C8 H1′ protons (Table 1) indicate that these protons face into the minor groove and establish the conformation about the bond between N2-dG and C5 of the IQ moiety. In contrast, NOEs involving the H8a proton of the IQ ring are primarily to bases G17, C18 and G19 of the complementary strand (Table 1). The chemical shifts of the IQ H7a, H8a and H9a protons are observed between 6.5 and 8.0 ppm, 1.3-2.0 ppm upfield as compared to the N2-dG-IQ nucleoside. This is consistent with the intercalated conformation and stacking of the IQ ring below the 5′-neighboring G19 of the complementary strand and above the 3′-neighboring C8:G17 base pair (Figure 8). Chemical shift perturbations corroborate the NOE data (Figure 4). The IQ H4a proton resonance, observed at 9.6 ppm, is 0.4 ppm upfield from the resonance observed for the modified N2-dG-IQ nucleoside (67), consistent with its location below G19 and above C8 (Figure 8). The IQ moiety displaces the complementary C18 base from the duplex and flips it into the major groove. This is supported by smaller perturbations in chemical shifts for the H4a and CH3 protons as compared to the H7a, H8a and H9a aromatic protons of IQ. The displacement of the modified nucleotide X7 toward the major groove (Figure 8) is supported by the downfield chemical shift change of 0.4 ppm for the X7 H8 and H1′ protons of the modified base. The C8 H6 proton resonance also experiences a downfield shift of 0.3 ppm. The stacking interactions of the IQ ring with the flanking bases C8, G17 and G19 are reflected in the thermodynamic analysis of the adduct, in which the thermal melting temperature of 63°C is unchanged from that of the unmodified duplex.
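The qualitative reasoning above, in which strong NOEs place the H4a and CH3 protons close to the X7 and C8 sugar protons, rests on the steep distance dependence of the NOE. Under the isolated spin-pair approximation, a standard relationship (shown here as a general reminder rather than the specific calibration used in this study) is that cross-peak intensity scales as the inverse sixth power of the internuclear distance, so an unknown distance can be calibrated against a reference pair of known separation:

$$
I_{ij} \propto r_{ij}^{-6}
\qquad\Longrightarrow\qquad
r_{ij} \approx r_{\mathrm{ref}}\left(\frac{I_{\mathrm{ref}}}{I_{ij}}\right)^{1/6}.
$$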
Comparison to the N2-acetylaminofluorene-dG adduct
The other N2-dG arylamine adduct that has been subjected to conformational analysis, although not in the NarI sequence of interest herein, is that arising from N-acetylaminofluorene (AAF; Chart 1) (76). The N2-dG-AAF adduct conformation has also been examined using computational approaches (96). Zaliznyak et al. (76) have shown that the AAF moiety resides in the minor groove with its long axis directed toward the 5′-end of the modified strand. This shields the hydrophobic AAF ring from water. Similar to the N2-dG-IQ adduct, the modified nucleotide maintains the anti-conformation about the glycosyl bond. Notably, the N2-dG-AAF adduct increases the stability of the DNA, which has been attributed to a favorable entropic effect (76). The present data reveal that the base-displaced intercalated conformation of the N2-dG-IQ adduct at position G3 of the NarI sequence differs from that of the N2-dG-AAF adduct, suggesting that the conformations of N2-dG arylamine adducts vary rather than following a common motif. At the molecular level, the factors governing whether planar aromatic molecules such as AAF or IQ favor DNA groove binding versus intercalation are not well established, but may be influenced by both their electronic structures and their respective geometries (97). Replication bypass studies have revealed that the N2-dG-AAF adduct largely blocked DNA synthesis, but with some bypass and misincorporation of dATP opposite the lesion (98).
Comparison to the C8-dG-IQ adduct
When the C8-dG-IQ adduct was placed into the NarI sequence at the frameshift-prone G3 position, the IQ ring also intercalated into the duplex and the complementary C18 base was extruded into the major groove. The conformation of the C8-dG-IQ adduct was likewise characterized as base-displaced intercalated (75). Thus, at the G3 position within the NarI sequence, both the C8-dG-IQ and N2-dG-IQ adducts share a motif in which the IQ ring intercalates and C18 is extruded into the major groove. However, the two conformations are distinctive. Apart from the difference in the regiochemistry of alkylation (C8 versus N2; Scheme 1), a major difference between the C8-dG-IQ and N2-dG-IQ adducts is that the C8-dG-IQ-modified guanine adopts a syn conformation about the glycosyl bond, whereas the N2-dG-IQ-modified guanine maintains the anti-conformation (Figures 8 and 9). In addition, for the C8-dG-IQ adduct, rotation of the glycosyl bond into the syn conformation places the Watson–Crick hydrogen bonding edge of the modified dG into the major groove, exposing the X7 imino and amino protons to solvent. For the C8-dG-IQ adduct, the orientation of the IQ ring with respect to the base is opposite to that of the N2-dG-IQ adduct, such that the IQ CH3 group and the H4a and H5a protons face the major groove rather than the minor groove (75). The orientation of the C8-dG-IQ adduct in the duplex rotates the bulk of the IQ aromatic ring away from the flanking bases, resulting in a loss of base-stacking interactions, as shown in Figure 8. In comparison, the N2-dG-IQ adduct appears to have more favorable stacking interactions with G19. These differences may lead to differential processing during both DNA repair and DNA replication.
Structure-activity relationships
The N2-dG-IQ adduct is less efficiently removed from genomic DNA by nucleotide excision repair (63,99). The NER machinery is thought to recognize bulky DNA damage that destabilizes and distorts the duplex (100-103). It has been proposed that the thermal stabilization conferred by the N2-dG-AAF adduct hinders NER (76). We observe that the N2-dG-IQ adduct at position G3 within the NarI sequence does not destabilize the duplex (Table 1), correcting our original report (67). The Tm of the N2-dG-IQ-modified duplex is 63°C and does not differ significantly from that of the unmodified duplex. This is remarkable given that the intercalated IQ moiety disrupts Watson–Crick hydrogen bonding and that the complementary C18 base is displaced into the major groove. The stability of the N2-dG-IQ-modified duplex likely arises from favorable stacking between the IQ moiety and the neighboring base pairs (Figure 8). It is also notable that, unlike the N2-dG-IQ adduct, the C8-dG-IQ adduct, which does not stack as well with the neighboring bases at this position (Figure 8), thermally destabilizes the duplex, reducing the Tm by 4°C. Yeo et al. (104) examined AAF and AF C8-dG adducts within the NarI sequence and observed a correlation between the degree of destabilization induced by the lesions, binding affinities to the damage recognition protein XPC-RAD23B, and overall NER efficiencies. Likewise, Zaliznyak et al. (76) attributed the increased stability of the N2-dG-AAF adduct to its orientation within the minor groove and the entropy-favored release of waters from the duplex. Similar conclusions were reached by Cai et al. (105), who correlated thermodynamic stabilities and van der Waals interaction energies with repair efficiencies for stereoisomeric intercalated N6-dA PAH adducts. Their studies showed that intercalated adducts with fewer DNA structural distortions and increased van der Waals interactions with neighboring bases correlated with reduced repair efficiencies. The HCA PhIP adduct has been compared with the cis-B[a]P-N2-dG adduct in duplex DNA and in a nucleotide deletion duplex, and it was concluded that the local stabilization induced by these adducts governs the ability of the β-hairpins of NER proteins to recognize the damage (106). In summary, it seems plausible that the thermal stability of the N2-dG-IQ adduct may, in part, explain its persistence in rats and primates.
If not repaired, the N2-dG-IQ adduct is anticipated to be genotoxic. Indeed, IQ is an order of magnitude more mutagenic than aflatoxin B1 in Ames assays. The mutations occur primarily at G:C base pairs (20,21). The replication of the N2-dG-IQ adduct within the NarI sequence is influenced by the identity of the DNA polymerase. Because the damaged guanine remains in the anti-conformation about the glycosyl bond (Figures 7 and 8) while the intercalated IQ ring occupies the space normally available to the complementary base, one might anticipate that the N2-dG-IQ lesion should block Watson–Crick base pairing with incoming dNTPs during lesion bypass. Stover et al. (107) incorporated the N2-dG-IQ adduct at the G1 and G3 positions of the NarI sequence and examined replication of the oligodeoxynucleotides with Escherichia coli polymerase (pol) I (exonuclease-deficient Klenow fragment), exonuclease-deficient pol II, and the Sulfolobus solfataricus P2 DNA polymerase IV (Dpo4) in vitro. At the G3 position, the N2-dG-IQ adduct blocked the E. coli polymerases. Pol II exo− favored correct incorporation of dCTP over dGTP but was unable to extend either of these initial insertion products. In contrast, the Dpo4 polymerase bypassed the N2-dG-IQ adduct and produced an error-free product. The present studies do not necessarily predict the structure of the N2-dG-IQ adduct during translesion bypass. Consequently, it will be of interest to prepare complexes of bypass polymerases with N2-dG-IQ modified template:primers in an effort to determine how the N2-dG-IQ adduct is accommodated during lesion bypass and how polymerases, e.g. the Dpo4 polymerase (108), allow bypass of this lesion.
Bypass of the N2-dG-IQ adduct has been reported to depend upon its position in the NarI sequence. Choi et al. (72) have demonstrated that human DNA polymerase (hpol) η can extend primers beyond template N2-dG-IQ adducts. Pol η correctly inserts dCTP and incorrectly inserts dATP. Analyses of hpol η extension products reveal that a −2 bp deletion occurs with the N2-dG-IQ adduct at the G3 position. In contrast, at the G1 position replication past the N2-dG-IQ adduct results in error-free incorporation of dCTP, but further extension is inhibited and the polymerase stalls. Notably, hpol η does not yield −2 bp deletions with the C8-dG-IQ adduct located at position G3. While further studies will be necessary to probe the basis for these observations, it is of interest that a duplex in which the N2-dG-IQ adduct is placed opposite a 2-bp deletion is more stable than the fully complementary duplex, suggesting that the adduct may stabilize a 2-bp strand slippage intermediate (67). At the G1 position, the N2-dG-IQ adduct is bypassed and extended by the E. coli polymerases and the Dpo4 polymerase, and error-free product is observed. Thus, it will also be of interest to examine the structure of the N2-dG-IQ adduct when placed at position G1 of the NarI sequence.
Summary
Analysis of the N2-dG-IQ adduct placed at position G3 of the NarI sequence (68-71,73,74), where it has been observed to cause −2 bp deletions when bypassed by hpol η (72), reveals that it adopts a base-displaced intercalated conformation in which the H4a and CH3 protons of the IQ ring face the minor groove and the H7a, H8a and H9a protons face the major groove. The IQ ring is shielded from water and stacks with the 5′- and 3′-neighbor base pairs. Remarkably, despite this conformational perturbation, the N2-dG-IQ adduct does not destabilize the duplex, which may correlate with the observation that it is refractory to repair by NER (63,99). In addition, the IQ moiety disrupts the potential for Watson–Crick hydrogen bonding with incoming dNTPs, which perhaps explains why this lesion blocks DNA synthesis by many polymerases.
ACCESSION NUMBERS
The structural coordinates were deposited in the Protein Data Bank (www.rcsb.org). The PDB ID code for the N2-dG-IQ duplex is 2MAV.